A Table for Two, Please: The Importance of Humanizing the Technology Revolution (December)
In December 2019, the Rights Writers introduced themselves and their general topic – who are the key actors, what are their goals/incentives, and what are the main debates? (How does the topic relate to human rights specifically?)
Human rights and data & technology: whenever I introduce these topics together in conversation, friends and peers ask how I came to be interested in two seemingly separate fields.
“They’re more interrelated than you think,” I always say, explaining that data and technology not only advance economies and connect individuals to people, information, and industry like never before, but also wield a power and influence that raise questions about their human rights impact on some of society’s most vulnerable groups.
As a Public Policy major and a member of the Human Rights Certificate program, I have tailored my academic pursuits to studying the various American and international institutions that impact our rights as citizens. In a project-based data science course my sophomore year, I found myself considering how the hotly debated topics of data privacy and “big technology” were being discussed through the lens of human rights and social impact. As technology giants rapidly grew their capacities and user bases, I questioned what measures were being taken to protect users’ privacy and well-being. What mechanisms – whether legal, operational, or empirical – were in place to hold these companies, and the industry as a whole, accountable?
Today, the most broadly debated areas of concern for big technology include the use and power of social media platforms and data collection by governments and corporations. In artificial intelligence and machine learning, concerns about intrinsic bias in algorithms – bias with the capacity to discriminate along socioeconomic and racial lines – raise questions about the ramifications of this technology as an increasingly fundamental tool in society.
Accountability becomes an even more critical part of the conversation when one considers the philosophies of ingenuity and development under which the technology industry has operated for the past decade and a half.
“Move fast and break things,” Mark Zuckerberg famously said back in 2010, at the beginning of Facebook’s rise as a social media empire.
This quote, whose big block letters came to line the walls of the company’s Silicon Valley headquarters, served to inform internal business operations and design through the prioritization of rapid innovation and technological “inspiration” – themes that would dictate the broader culture of “Big Technology” for years to come. However, case studies around the world have demonstrated negative human rights impacts under these philosophies, showing how individuals, communities, and in some cases entire societies have become the collateral damage of these companies’ success in developing – and breaking – things rapidly.
National and international conflicts that may be traced to American technology companies indicate how this tunnel-vision approach largely fails to account for the safety and well-being of the societies in which the industry operates. In 2018 alone, American-based businesses were implicated in the following scandals, each with intense and far-reaching human impact:
- Facebook provided Cambridge Analytica — a data firm used by Donald Trump’s 2016 presidential campaign to target voters — with 87 million users’ personal information without proper consent.
- WhatsApp was identified as a “hotbed of misinformation,” with viral propaganda on the platform influencing political elections and instigating political and civilian violence.
- Facebook’s platform in Southeast Asian nations, including Myanmar, was used by military groups to promote hate speech and propaganda against the country’s Muslim minority, and was directly linked to civilian violence and deaths in the region.
- IBM, known for developing artificial intelligence, was found to have collected over one million photos of individuals online to train facial-recognition technology without proper consent.
These are only a few examples of large-scale issues that demonstrate a contemporary tension: technology’s power is ever-growing, while corporations and governments have insufficient guidance on how to regulate it in ways that protect the individual.
Critics of modern-day regulation, however, point not only to its general insufficiency, but also to the complexity and disjointedness of existing regulatory law. Currently, data privacy and corporate social responsibility law in the United States may vary by state and municipality. And with regard to American-made technology, there is no codified language dictating US technological engagement in markets across borders, and no systems through which countries can engage in conversations about advancing data ethics or consumer protection.
There have, however, been some political initiatives to offer protections amidst the technology industry’s exponentially rapid development. Though not codified, the International Bill of Human Rights and the principles set out in the International Labor Organization’s Declaration on Fundamental Principles and Rights at Work offer broad guidelines for corporations to understand what ethical standards should be implemented in developed and developing countries. With regard to current legal regulation, the European Union’s General Data Protection Regulation (GDPR) took effect in May 2018, outlining guidelines that require companies to grant users control over their personal data.
These guidelines and laws, however, have yet to see widespread success in implementation. A 2018 study by NTT Security reported that relevant technology companies raised concerns that they would not be compliant by the law’s start date, and that these companies’ key decision makers knew very little about the GDPR or how it would impact their businesses.
The general lack of understanding and disconnect among lawmakers and technology corporations points to one of the most critical issues behind the slow and cumbersome development of data and technology regulation: the phenomenon of asymmetric information. Just as these companies were unsure how to shift operations to embody the new “ethically correct” guidelines in the case of the GDPR, lawmakers across the globe do not understand the underlying technology and machine learning algorithms to the extent that software engineers do. This leads to another broad set of concerns over how to properly educate the relevant actors, and how to ensure that the people who have the relevant knowledge, and who are most impacted by big technology, have a seat at the table. But the first step is to understand where that table is, and how to get people to it.
As a Rights Writer with the Kenan Institute for Ethics’ Global Human Rights Scholars Program, I look forward to exploring this subject as a means of better understanding how to bridge the gap between government and big technology. Ultimately, the companies driving the current technology revolution have the power to be incredible advancers – or abusers – of human rights. In the spirit of promoting social justice on a global scale, we must begin to question and talk about the institutions that have presented themselves as threats to human rights and social wellbeing.