
The Year 2020: Where Data meets Democracy (January)

In January 2020, the Rights Writers were asked to discuss an issue in the context of US political discourse (including public opinion, if desired): is any relevant legislation being debated? How are different branches of the US government engaged with your topic? Consider particularly the 2020 presidential race.

 

“US Capital,” by Patrick Thibodeau (photo of the US Capitol Building)
“Hackers,” by Thought Catalog (photo of an iPhone)

Looking back on the previous decade, we can see how the data and technology industry redefined the way governments, businesses, and societies operate. The year 2020 will be no exception to this pattern of unprecedented growth for the industry commonly referred to as “Big Tech” in the United States.

By 2017, American technology corporations’ comparative competitive strength and sophistication in domestic and global markets had made the United States the top-ranked nation for technological advancement through data. Since then, the industry has impacted millions of American households by revolutionizing domestic sectors including banking and finance, media and entertainment, healthcare, agriculture, and online retail, and it is projected to create over 6 million U.S. jobs in the next four years.

But while the market continues to reward the increasing efficiency and capital generation of American technology giants and their products, critics point out that the industry’s degree of power and lack of regulation are, in many ways, detrimental to the privacy and wellbeing of consumers. There is increasing pressure to ask what risks the growth of Big Tech poses to our rights as citizens, individuals, and humans. And despite the industry’s seemingly unwieldy power and the risks it creates for individuals globally, the United States has no federal law that controls how companies develop technology or how they collect and monetize web data.

As the humanitarian issues associated with Big Tech grow more complex, we must consider how technology corporations’ collection and use of data jeopardize the universally recognized human right to privacy, as well as related but uncodified rights such as the right to be forgotten. We must consider the police forces that have used artificial intelligence to disproportionately identify and accuse people of color, and the web algorithms that target consumers along demographic lines, to understand the technology’s impact on some of our nation’s most vulnerable communities.

Moreover, technology is developing rapidly, and the people developing it represent only a minute, highly specialized fraction of the population that actually consumes it. The resulting asymmetry of information between tech giants, the government, and consumers poses a serious problem for those tasked with developing the legislation needed to protect individual rights.

So we must ask ourselves: what is currently being done to make sure our rights are protected?

To date, international organizations including the United Nations and the International Electrotechnical Commission have offered extensive corporate social responsibility guidelines for the technology industry, tailored particularly to American technology giants that have shown their power to negatively impact vulnerable communities across the globe (see: Facebook’s tragedy in Myanmar or WhatsApp’s Aadhaar system breach in India). While these guidelines are informative and comprehensive, they have no legal standing to hold these powerful companies accountable for their actions or for the collateral social damage of their products or software.

Some cities in the United States, however, have implemented local laws in response to public discontent with technology corporations’ “unconstitutional” business conduct. In 2019, the city of San Francisco passed a mandate banning the use of facial recognition software by the police and other agencies following public outcry over the use of AI to identify people in public spaces without their consent, making it the first major American city to block mass use of a technological tool.

Select states have also attempted to pass bills addressing issues including consumer protection and privacy. Some of the most notable include the California Consumer Privacy Act of 2018, the Washington Privacy Act of 2019, and the South Carolina Insurance Data Security Model Law. Several of these policy efforts, however, stalled or were weakened under pressure exerted by lobbying groups connected to Facebook and Google that fought against restrictions on data collection.

Which leads me to my next question: how can the federal government work with Big Tech to effectively protect the rights of citizens across the nation?

According to recent criticism of the Trump Administration from a variety of foreign leaders, including those of the EU, it begins with immediate legal action to implement strict federal technology and data rules.

In the past year, America has experienced a crescendo of political activism surrounding data regulation and heightened media exposure of Big Tech scandals, which led the president to announce his administration’s intention to craft a proposal to protect web users’ privacy and to deflect blame that the “United States ha[d] enable[d] data mishaps” associated with major human rights abuses. Other 2019 federal advancements included the first law allowing consumers to opt out of automatic data collection and a new digital privacy bill that would work with the Federal Trade Commission to enforce consumer rights.

But despite all of this recent momentum for reform, the situation still gives me great pause.

Because if you consider how slowly federal legislation passes in the United States, how are policymakers and the government going to craft appropriate legislation while technology companies continue to harness data and develop new technologies at an exponential rate?

To help answer this question, we may start by turning toward the presidential race. Though Trump’s political initiative surrounding Big Tech remains largely reactive, some of the Democratic candidates have specified how they intend to deal with America’s expanding technology enterprise. Ranging from full structural reform to minimal engagement with private corporations, here’s what the top-ranking candidates have to say:

  • Joe Biden: calls for a moderate and comprehensive approach to developing data regulation policy.
  • Elizabeth Warren: calls for mass structural reform and strict regulation of large data corporations, and the breaking up of American technology giants including Facebook and Google.
  • Bernie Sanders: calls for strict regulation and the breaking up of large data corporations, and for ensuring a free and open internet.
  • Pete Buttigieg: calls for a “spectrum” of regulation on Big Tech.
  • Andrew Yang: indicates data regulation as a large component of his platform, claiming it as a right for American people to stay informed and be guaranteed their privacy.
  • Michael Bloomberg: says structural change of Big Tech is not the answer, but consumer protection is a priority.

Click the names of the remaining candidates to read about their platforms on data regulation and Big Tech: Amy Klobuchar, Tulsi Gabbard, Tom Steyer, Michael Bennet, Deval Patrick, John Delaney.

Here’s to a new year of protecting human rights through bridging the information and legislative gap between data and democracy.
