
Ten Years of Tech and the Rise of Data Governance (February)

In February 2020, the Rights Writers were asked to discuss how their topics have evolved over the past decade (2010 to the present): which issues have changed significantly during this period, and how those changes have shaped current approaches from governmental and non-governmental actors.

Machine Learning Studio, 2019. Source: Creative Commons

For most of Gen Z, it's hard, if not impossible, to recall what our lives and the world around us were like before the technological advances of the past decade.

So, let me paint a partial picture.

2010 marked the release of Instagram and Apple's first iPad. That same year, a team at Google unveiled the preliminary technology now used in self-driving cars. Flash forward to today, and the "Big Tech" industry has transformed our lives entirely, from revolutionizing global economic markets to reshaping whole societies. Its growing scope and influence, moreover, carry serious implications for human rights.

But despite this rapid advancement and expansion, the actors in control of the industry and the nature of its associated issues have largely remained the same since 2010. First, the largest tech companies in the world are still American-based and hold a disproportionately large share of power in the current technology revolution. Second, data regulation and technology governance still rank among the industry's top five most pressing concerns, with the risks of expanding data collection practices and artificial intelligence continuing to threaten the wellbeing of consumers worldwide. International news reports from the past decade suggest that the industry's biggest data breaches and human rights violations can be traced to a fundamental and unchanged problem of the sector: data regulation remains insufficient, underinformed, and largely reactionary.

Worse yet, the risks of ineffective regulation are magnified by the informational disconnect between technology companies and the legislative bodies responsible for developing protective law. Without a proper understanding of the technology itself, how are legislators to protect users from the industry's growing number of privacy breaches, its entanglement with hate speech, or the collateral damage of racial, ethnic, and religious biases embedded in the algorithms that affect us every day? How do we legally confront the oncoming challenges of individual privacy and data collection, which have already been shown to disproportionately marginalize minority groups and developing nations?

IBM Research, 2017. Source: Creative Commons

In addition to this informational divide, the relative rates at which technology and technology law have developed over the past decade also help explain the insufficient, reactionary nature of data governance. To be clear, this doesn't mean that governing bodies, including the United Nations and the European Union, haven't worked hard over the past decade to negotiate international policies and recommendations addressing Big Tech's impact on citizens across the globe. It means that the bureaucratic, tedious process of law-making has proven systematically too slow to ensure that risks are eliminated, or even properly managed (see previous blog posts for examples of human rights failures in Big Tech).

The past decade did show progress in data governance reform, culminating in the passage of one significant piece of international regulation: the General Data Protection Regulation (GDPR). This document, developed and approved by the European Parliament, the Council of the European Union, and the European Commission in 2016 and in force since May 2018, was the first of its kind to address issues ranging from personal data protection and cross-border data transfers to stronger sanctions on tech giants, extended territorial scope of regulation, and limits on demographic profiling. While written for European Union member states, the United Nations stresses the importance of collective compliance with the GDPR in the interest of protecting human rights worldwide.

But concerns remain about the nature, scope, and implementation of this regulatory advancement.

An initial problem is that the most comprehensive document governing data regulation today, the GDPR, is largely Euro-centric. As we know, American-based companies dominate the vast majority of the global technology market, which leaves open questions about how applicable these policies are to a US-centric technology industry and what that means for international cohesion.

Another problem is how to write legislation that eliminates the negative human rights impacts of data sharing, artificial intelligence, and technological products without stifling their benefits to society. While the industry has developed under growth-driven business models that undermine consumer ethics, the technology itself has shown a vast capacity to empower sectors including healthcare, education, social work, and philanthropy in the interest of human and civil rights. From artificial intelligence helping to find cures for rare diseases and detect mass wildfires, to tools that strengthen school systems and raise the quality of baseline education, the industry's potential is truly remarkable, so long as it rests in the hands of companies that, at a minimum, build consumer wellbeing into their business models.

But realistically, these socially conscious companies make up only a minuscule portion of the market, while the leading tech giants continue to disregard the wellbeing of consumers and their impact on vulnerable populations, despite GDPR recommendations.

The most recent example was discussed in an episode of the New York Times podcast "The Daily" titled "The End of Privacy as We Know It?" There, we learn of a new company, Clearview AI, with the potential to entirely revolutionize facial recognition technology. According to industry experts, public exposure of the tool "would be the end of a person being anonymous in public," threatening citizens' right to privacy and right to be forgotten. The owner and developer of the software, Hoan Ton-That, currently has no plan for how the company will manage the risks of such a powerful technology, and no federal or international legislation can stop him from selling it to the public if he decides to.

So, ultimately, data regulation faces the same problems of underpreparedness and lack of cohesion across industry, government, and nations that it faced ten years ago, only now with more powerful technology and more significant risks to human rights at stake. We are left to ask: when will the breakthrough in effective regulation come?
