
A Starker Truth: America’s Digital Divide in a Time of Pandemic (April)

In April 2020, the Rights Writers were asked what perspectives have been left out of the major debates on their topic, and how including them would increase understanding or contribute to progress on the issue.

As I sit here in quarantine, I am taking a moment to reflect on the past semester of research and the gravity of my topic in the context of the current state of society. Each of my blog posts has explored the shifting and expanding conversation surrounding the risks to human rights associated with Big Tech. After a semester’s worth of research, I am taken aback by the industry’s undeniable power over global markets and governments and its ability to deepen inequalities across economies and social systems.

“No Internet,” shot by Marcelo Graciolli. Source: Flickr

But COVID-19 has brought an entirely new perspective to this conversation. In poor, rural communities across the globe, individuals wake up each day unable to log on for remote work or online school. What they lack is one critical asset in this pandemic: access to the internet.

In the United States, 10% of the total population still lacks access to the internet, with over 31% of rural residents and 44% of adults in households making under $30,000 lacking broadband access. Research from the Organization for Public Knowledge in America reveals that the digital divide disproportionately impacts people of color: while Black Americans account for 15% of the rural population, they make up over 27% of the rural population without access to the internet. So, within a debate that has historically centered on the risks of using and deploying these technologies, the novel coronavirus has created a feedback loop of deepening inequality and human rights threats for those excluded from them.

Whether connecting with loved ones, applying for government programs, or accessing employment and educational opportunities, households without these digital services are placed at a severe disadvantage and forced to make incredibly high-stakes decisions. For those in the non-essential service industry, remote work isn’t an option because their businesses lack the infrastructure to shift online. For individuals whose work has made the shift, working remotely is nearly impossible without the internet, and feasibility decreases the more rural the area. While these barriers aren’t new, the era of COVID-19 shows how these communities’ right to work is being compromised like never before.

It’s equally important to consider the increased risk of infection among this demographic. Studies show that low-income, rural populations disconnected from the internet are among the most likely to visit clinics in person, increasing the risk of viral spread, because they cannot access care online. With many individuals in this demographic losing their jobs, and therefore their health insurance, due to their inability to connect, we can only expect this likelihood to grow if no action is taken. Worse yet, this demographic represents a large portion of the essential workforce, meaning many of these individuals face a disproportionately high risk of contracting COVID-19 in the workplace and bringing it home to their families and communities. Rapidly, the pandemic is illustrating both the vulnerability and the necessity of low-wage workers, and how the internet becomes a question not only of economic and educational opportunity but of survival.

“Four Thousand and Ninety-Two,” shot by Phil Roeder, no edits made. Snaps from Hoover High School as staff distribute laptops to students to prepare for school going online for the rest of the year due to COVID-19; Des Moines Public Schools distributed 4,092 laptops to high school students. Source: Flickr

I consider my own privilege as I watch my Wi-Fi router blink steadily, reflecting on my ability to continue my academic pursuits thanks to my connectedness and the online resources provided by institutions such as Duke University and the Kenan Institute for Ethics. As with those no longer able to work, limited access to the internet during COVID-19 has proven destructive to children’s ability to participate in education. A 2015 Pew Research study revealed that school-age children are among the most affected by the lack of in-home internet access, with over 35% of these households lacking broadband service. In this time of emergency, schools must move faster than ever to substitute in-school resources with online instruction, electronic libraries, streaming videos, and other online tutorials. Many are quickly realizing that they cannot provide the same online education experience to every student: under mandated shelter-in-place orders, children can no longer rely on the imperfect workarounds they have used until now, such as finishing homework on the Wi-Fi in fast-food parking lots. The compromise of the right to education that distance learning has imposed on so many starkly exposes the continued disparities in the American educational system.

Between the right to work and the right to education, policymakers are being asked to look critically at the importance of expanding broadband access in the fight to reduce nationwide disparities along racial, socioeconomic, and geographic lines. Human rights activists must also consider how governments in other nations may use access to the internet as a mechanism to control populations and restrict liberties and freedoms. Furthermore, as more previously disconnected people come online, we must ask how these communities might find themselves at greater risk from data privacy violations and artificial intelligence biases.

Ultimately, there is still so much uncertainty regarding COVID-19, just as there is surrounding the growth of the internet and Big Tech. In just the past five months of blogging, we saw American politicians take the stand in presidential debates, proclaiming the need to regulate and provide increased oversight of tech giants in the name of protecting democracy. We explored how well-researched reporting on data privacy and artificial intelligence, the industry’s two most contested fields, could be used to close the threatening information gap between those developing the technology and the consumers who use and are affected by it. We looked at large-scale corporate scandals that taught the importance of considering the well-being of vulnerable communities, such as developing or war-torn countries, in technology business models. Now, we are seeing how a global emergency can simultaneously bring to light some of the most insidious and damaging systemic inequalities and inspire a heightened collective sense of care for one another, all through the lens of the internet and Big Tech.

So as COVID-19 continues to spotlight systemic inequalities in new ways, I am curious to see how societies will respond. Does this increased exposure present a promising opportunity for governments and corporations to take action, or is it more fitting to forecast that governments will develop amnesia about these human rights issues in the eventual return to normalcy?

For now, no one can know. In the meantime, I will continue to sit here, questioning our institutions and counting my blessings.

Decoding the Mixed Messages of Technology Journalism (March)

In March 2020, the Rights Writers were asked what role the media has played in covering their topic, what positive and negative effects the media has had, and what role it ought to play.

Kevin Moloney/Fortune Brainstorm Tech, “Fortune Brainstorm Tech 2016”. Source: Flickr

“Global Deception.”

“What about Privacy? Security?”

“Beware of Hackers.”

These phrases appear within the first two pages of headlines from a quick search of “The Technology Industry” on Google News. And while the haiku above is certainly not representative of the many nuances of public opinion and global journalism on the topic of Big Tech, it does suggest two things: (1) there is a generally negative portrayal of the privilege and power of the global technology industry, and (2) the content and sourcing of these articles reveal that this type of journalism is largely America-centric.

It is no secret that the technology industry, with its enormous and unwieldy market power and social influence, has faced immense scrutiny in the media for some time now. In just the past five years, major news outlets including the New York Times and NPR have published entire series criticizing technology giants for privacy scandals and for ignoring consumer well-being. Even articles announcing new products end with familiar phrases like “If this is possible, then what next?” that intentionally communicate a sense of unease about technology’s rate of change. Hence, the vast scope of news reporting under the umbrella of “technology journalism,” as it is now called, may be characterized by a common theme that weaves these stories together in the mind of an informed citizen: that individuals, corporations, and governments should be wary of the risks associated with Big Tech.

Recently, political bodies, human rights groups, and other agencies have become more involved in communicating the risks of Big Tech to the public. From stripping individuals of their right to remain anonymous to deploying biased algorithms that may shape the future of policing, criminal justice, and national security around the world, they argue that citizens should be informed of a diverse set of threats beyond the mass media’s shallow coverage of data privacy and the industry’s other “hot topics.” So when a journalist decides to cover one of these more in-depth subjects, they give well-researched organizations a platform to explain the specific threats Big Tech poses to individual human rights. This kind of reporting, backed by data-driven analysis, is beneficial in many ways: it connects the dots between the technology industry and human impact while helping to overcome the problematic asymmetry of knowledge between the consumer and the creator of the technology.

But technology journalism still has a long way to go.

As I previously mentioned, this type of journalism is largely centered on the industry and its impact in the United States. And while this may seem justified given the geographic placement of Silicon Valley and the fact that 9 of the 10 largest technology companies are American-owned, it is important to recognize that news reports on the technology’s negative impacts abroad, such as the case of Facebook in Myanmar and other technology companies abusing human rights in developing countries, remain few and far between.

The media, in other words, largely underreports the social impact of emerging technologies on developing and vulnerable communities outside of the United States. This reporting gap reflects both the mainstream media’s general failure to cover third-world nations and the technology industry’s disproportionate control over the stories and narratives released to the public, both of which have immense implications for human rights. A study conducted by a number of relief aid organizations revealed the importance of international news coverage in raising attention to humanitarian crises and securing government and corporate aid. Hence, we may see how the media’s lack of coverage can contribute to the phenomenon of “forgotten humanitarian crises” in some of the world’s most vulnerable communities, and how technology journalism more generally can stifle the advancement of, or even damage, human rights globally.

Anthony Quintano, “F8 2019 Stock Photos”. Source: Flickr

The media also suffers from an oversaturation of stories covering the technology industry. As a result, many readers feel averse to investing their time in yet another article warning that citizens’ privacy is being stolen from them. Furthermore, with so many media outlets echoing similar information, readers may begin to feel disillusioned, unsure of what information is accurate and which risks of technology are really worth their concern. Certainly, I was initially unsure where to click after my Google News search yielded over a million results in just 0.28 seconds.

The problem of disillusionment and information fatigue is exacerbated by the fact that many technology companies criticized in the media maintain their own news platforms and blogs, or are covered by dedicated outlets, which often rebuke the critical claims made by politicians or government organizations; examples include the Apple-focused 9to5Mac, Microsoft’s news engine, and Tesla’s blog. Many of these sites flood their articles with extraneous information and advertisements that present the company and its products in a positive light, with the intention of transforming readers from concerned citizens into curious consumers. Researchers also point to the role of Big Tech’s social media platforms in promoting disinformation that confuses consumers about the risks of the industry. It is this confusion, combined with information overload, that leads to a compassion fatigue that further deters people from learning critical information about their rights, even when an article by an accredited source is published.

Ultimately, the question becomes how to monitor and streamline media coverage of Big Tech so that it presents the information necessary to close the gap between consumer and company, and better represents the global impact of emerging technology by bringing third-world nations and vulnerable communities into the conversation. If this is not addressed, the global community faces not only the risks of emerging technologies but also damage to human rights caused by the media itself. So let’s start reporting like our rights depend on it.

Ten Years of Tech and the Rise of Data Governance (February)

In February 2020, the Rights Writers were asked to discuss how their topic has evolved over the past decade (2010 to the present), which issues have changed significantly during this period, and how these recent changes have affected current approaches from governmental and non-governmental actors.

Machine Learning Studio, 2019. Source: Creative Commons

For most of Gen Z, it’s hard, if not impossible, to recall what our lives and the world we lived in were like before the technological advances of the past decade.

So, let me paint a partial picture.

2010 marked the release of Instagram and Apple’s first iPad. A team at Google pitched the preliminary technology now used in self-driving cars. Flash forward to today, and the “Big Tech” industry has entirely transformed our lives, from revolutionizing global economic markets to reshaping entire societies. Its increasing scope and degree of influence have had serious implications for human rights.

But despite this rapid advancement and expansion, the actors in control of the industry and the nature of its attendant issues have largely remained the same since 2010. First, the largest tech companies in the world are still American-based and hold a disproportionately large share of power in the current technology revolution. Second, data regulation and technology governance still rank among the industry’s five most pressing concerns, with the risks of advancing data collection practices and artificial intelligence continuing to threaten the wellbeing of the global consumer. International news reports from the past decade suggest that the industry’s biggest data breaches and human rights violations may be explained by a fundamental and unchanged problem in the sector: data regulation is insufficient, underinformed, and largely reactionary.

Worse yet, the risks of ineffective regulation magnify when one considers the informational disconnect between technology companies and the judicial bodies responsible for developing protective legislation. Without a proper understanding of the technology itself, how are legislators to protect users from the industry’s increasing privacy breaches, engagement with hate speech, or the collateral damage of the racial, ethnic, and religious biases present in the algorithms that impact us every day? How do we legally combat the oncoming challenges of individual privacy and data collection that have already been shown to disproportionately marginalize minority groups and third-world countries?

IBM Research, 2017. Source: Creative Commons

In addition to the informational divide, the relative rates at which technology and technology law have developed over the past decade may also explain the insufficient and reactionary nature of data governance. To be clear, this doesn’t mean that governing bodies, including the United Nations and the European Union, haven’t worked hard over the past decade to negotiate international policies and recommendations addressing Big Tech’s impact on citizens across the globe. It just means that the bureaucratic and tedious process of law-making has proven systematically too slow to ensure that risks are eliminated, or even properly managed (see previous blog posts for examples of human rights failures in Big Tech).

The past decade did show progress in data governance reform, culminating in the passage of one significant piece of international regulation: the General Data Protection Regulation (GDPR). This document, developed and approved by the European Parliament, the Council of the European Union, and the European Commission in 2016, was the first of its kind to address issues including personal data protection, increased sanctions on tech giants, an extended territorial scope of regulation, cross-border data transfers, and limits on demographic profiling. While initially created for European Union countries, the United Nations stresses the importance of collective compliance with the GDPR in the interest of protecting human rights worldwide.

But there is still concern with regard to the nature, scope, and implementation of this regulatory advancement.

An initial problem is that the most comprehensive document governing data regulation today, the GDPR, is largely Euro-centric. But as we know, American-based companies dominate the vast majority of the global technology market, which raises questions about the applicability of these policies to a US-centric technology industry and their implications for international cohesion.

Another problem is how to write legislation that eliminates the negative human rights impacts of data sharing, artificial intelligence, and technological products without stifling their benefits to society. For while the industry has developed under growth-driven business schemes that undermine consumer ethics, the technology itself shows a vast ability to empower sectors including healthcare, education, social work, and philanthropy in the interest of human and civil rights. From artificial intelligence assisting in finding cures for rare diseases and detecting mass wildfires, to empowering school systems and raising the quality of baseline education, the impact of the industry is truly remarkable, so long as it is in the hands of companies that, at the very minimum, include consumer well-being in their business models.

But realistically, these socially conscious companies make up only a minuscule portion of the market, while the leading tech giants continue to disregard the well-being of the consumer and their impact on vulnerable populations, despite GDPR recommendations.

The most recent example of this was discussed in “The End of Privacy as We Know It?”, an episode of the New York Times podcast “The Daily.” There, we learn of a new company called Clearview AI that has the potential to entirely revolutionize facial recognition technology. According to industry experts, public release of the tool “would be the end of a person being anonymous in public,” posing risks to citizens’ right to privacy and right to be forgotten. The owner and developer of the software, Hoan Ton-That, currently has no plan for managing the risks of such a powerful technology, and no federal or international legislation can stop him from selling it to the public if he decides to.

So, ultimately, data regulation faces the same issues of underpreparedness and lack of cohesion across industry, government, and nation that it did ten years ago, only now with more powerful technology and more significant risks to human rights at stake. We are left to ask: when will the breakthrough in effective regulation come?

The Year 2020: Where Data Meets Democracy (January)

In January 2020, the Rights Writers were asked to discuss their issue in the context of US political discourse (including public opinion, if desired): Is any relevant legislation being debated? How are different branches of the US government engaged with the topic, particularly in the 2020 presidential race?

 

“US Capital,” by Patrick Thibodeau
“Hackers,” by Thought Catalog

Looking back on the previous decade, we can see how the data and technology industry redefined the way governments, businesses, and societies operate. The year 2020 will be no exception to the pattern of unprecedented growth for the industry commonly referred to as “Big Tech” in the United States.

By 2017, American technology corporations’ competitive strength and sophistication in domestic and global markets had made the United States the top-ranked nation for technological advancement through data. Since then, the industry has impacted millions of American households by revolutionizing domestic sectors including banking and finance, media and entertainment, healthcare, agriculture, and online retail, and it is projected to create over 6 million U.S. jobs in the next four years.

But while the economy continues to reward the increasing efficiency and capital generation of American technology giants and their products, critics point out that the industry’s degree of power and lack of regulation are, in many ways, detrimental to the privacy and wellbeing of its consumers. There is increasing pressure to ask at what cost, to our rights as citizens, individuals, and humans, the growth of Big Tech comes. And despite the industry’s seemingly unwieldy power and the associated risks for individuals globally, the United States has no federal law that controls companies’ development of technology or how they collect and monetize web data.

As the humanitarian issues associated with Big Tech grow more complex, we must consider how technology corporations’ collection and use of data jeopardize the universally recognized human right to privacy, as well as related but uncodified rights such as the right to be forgotten. We must consider the police forces that have used artificial intelligence to disproportionately identify and accuse people of color, and the web algorithms that target consumers along demographic lines, to understand the industry’s impact on some of our nation’s most vulnerable communities.

Moreover, technology is developing rapidly, and those who build it represent only a minute, highly specialized fraction of the population that actually consumes it. The resulting asymmetry of information between tech giants, the government, and consumers poses a serious problem for those tasked with developing the legislation needed to protect individual rights.

So we must ask ourselves: what is currently being done to make sure our rights are protected?

To date, international organizations including the United Nations and the International Electrotechnical Commission have offered extensive corporate social responsibility guidelines for the technology industry, tailored particularly to American technology giants that have shown their power to harm vulnerable communities across the globe (see: Facebook’s tragedy in Myanmar or WhatsApp’s Aadhaar system breach in India). While these guidelines are informative and comprehensive, they have no legal standing to hold these powerful companies accountable for their actions or for the collateral social damage of their products and software.

Some cities in the United States, however, have implemented local laws in response to public discontent with technology corporations’ “unconstitutional” business conduct. In 2019, San Francisco banned the use of facial recognition software by the police and other agencies following public outcry over the use of AI to identify people in public spaces without their consent, making it the first major American city to block mass use of a technological tool.

Select states have also attempted to pass bills addressing consumer protection and privacy, most notably the California Consumer Privacy Act of 2018, the Washington Privacy Act of 2019, and the South Carolina Insurance Data Security Model Law. Several of these efforts, however, were weakened or collapsed under pressure from lobbying groups connected to Facebook and Google that fought restrictions on data collection.

Which leads me to my next question: how can the federal government work with Big Tech to effectively protect the rights of citizens across the nation?

According to recent criticism of the Trump Administration from a variety of foreign leaders, including in the EU, it begins with immediate legal action to implement strict federal technology and data rules.

In the past year, America has experienced a crescendo of political activism surrounding data regulation and heightened media exposure of Big Tech scandals, which led the president to announce his administration’s intention to craft a proposal to protect web users’ privacy and to deflect blame that the “United States ha[d] enable[d] data mishaps” associated with major human rights abuses. Other 2019 federal advancements included the drafting of the first law that would allow consumers to opt out of automatic data collection and the creation of a new digital privacy bill that would work with the Federal Trade Commission to enforce consumer rights.

But despite all of this recent momentum for reform, something still gives me great pause.

Because if you consider how slow the passage of federal legislation is in the United States, how are policymakers and government going to create appropriate legislation if technology companies continue to harness data and develop technologies at an ever-accelerating rate?

To help answer this question, we may start by turning to the presidential race. Though Trump’s political initiative surrounding Big Tech remains largely reactive, some of the Democratic candidates have specified how they intend to deal with America’s expanding technology enterprise. Ranging from full structural reform to minimal engagement with private corporations, here’s what the top-ranking candidates have to say:

  • Joe Biden: calls for a moderate and comprehensive approach to developing data regulation policy.
  • Elizabeth Warren: calls for mass structural reform, strict regulation of large data corporations, and the breakup of American technology giants including Facebook and Google.
  • Bernie Sanders: calls for strict regulation and the breakup of large data corporations, and for ensuring a free and open internet.
  • Pete Buttigieg: calls for a “spectrum” of regulation on Big Tech.
  • Andrew Yang: makes data regulation a large component of his platform, claiming it as a right of the American people to stay informed and be guaranteed their privacy.
  • Michael Bloomberg: says structural change of Big Tech is not the answer, but consumer protection is a priority.

Click the names of the remaining candidates to read about their platforms on data regulation and Big Tech: Amy Klobuchar, Tulsi Gabbard, Tom Steyer, Michael Bennet, Deval Patrick, John Delaney.

Here’s to a new year of protecting human rights through bridging the information and legislative gap between data and democracy.

A Table for Two, Please: The Importance of Humanizing the Technology Revolution (December)

In December 2019, the Rights Writers introduced themselves and their general topic – who are the key actors, what are their goals/incentives, and what are the main debates? (How does the topic relate to human rights specifically?)

When human rights and data & technology come up together in conversation, friends and peers ask how I came to be interested in two seemingly separate fields.

“They’re more interrelated than you think,” I always say, explaining how data and technology not only advance economies and connect individuals to people, information, and industry like never before, but also wield power and influence that raise questions about their human rights impact on some of society’s most vulnerable groups.

As a Public Policy major in the Human Rights Certificate program, I have tailored my academic pursuits to studying the various American and international institutions that impact our rights as citizens. In a project-based data science course my sophomore year, I found myself considering how the hotly debated topics of data privacy and “big technology” were being discussed through the lens of human rights and social impact. As technology giants rapidly grew their capacities and user bases, I questioned what measures were being taken to protect users’ privacy and well-being. What mechanisms, whether legal, operational, or empirical, were in place to hold these companies, and the industry as a whole, accountable?

Social media logos on pills. Source: https://commons.wikimedia.org/wiki/File:Social_Media_Addiction.jpg

Today, the most broadly debated areas of concern for big technology include the use and power of social media platforms and data collection by governments and corporations. With regard to artificial intelligence and machine learning, concerns about intrinsic biases in their algorithms, which have the capacity to discriminate along socioeconomic and racial lines, raise questions about the ramifications of this technology as an increasingly fundamental tool in society.

Accountability becomes an even more critical component of the conversation when one considers the philosophies of ingenuity and development under which the technology industry has operated for the past decade and a half.

“Move fast and break things,” Mark Zuckerberg famously said back in 2010, at the beginning of Facebook’s rise as a social media empire.

This quote, whose big block letters came to line the walls of the company’s Silicon Valley headquarters, informed internal business operations and design by prioritizing rapid innovation and technological “inspiration,” themes that would dictate the broader culture of Big Tech for years to come. However, case studies around the world have demonstrated negative human rights impacts under these philosophies, showing how individuals, communities, and in some cases entire societies have become the collateral damage of these companies’ success in developing, and breaking, things rapidly.

National and international conflicts traceable to American technology companies indicate how this tunnel-vision approach largely fails to account for the safety and well-being of the societies in which the industry operates. In 2018 alone, American-based businesses were implicated in the following scandals, each with intense and far-reaching human impact:

  • Facebook provided Cambridge Analytica, a data firm used by President Donald Trump’s 2016 campaign to target voters, with 87 million users’ personal information without proper consent.
  • WhatsApp was identified as a “hotbed of misinformation” for its role in influencing political elections and instigating political and civilian violence through propaganda and censorship.
  • Facebook’s platform in Southeast Asian nations, including Myanmar, was used by insurgent military groups to promote hate speech and propaganda against the country’s Muslim minority, and was directly linked to civilian violence and deaths in the region.
  • IBM, known for developing artificial intelligence, was found to have collected over 1 million photos of individuals online to train facial-recognition technology without proper consent.

These are only a few examples of large-scale issues that demonstrate a contemporary dichotomy: technology’s ever-growing power on one side, and corporations’ and governments’ insufficient guidance on how to regulate it to protect the individual on the other.

Critics of modern-day regulation, however, point not only to its general insufficiency but also to the complexity and disjointedness of existing regulatory law. Currently, data privacy and corporate social responsibility law in the United States varies by state and municipality. And with regard to American-made technology, there is no codified language dictating US technological engagement in markets across borders, and no systems through which countries can engage in conversations on how to advance data ethics or consumer protection.

There have been, however, some political initiatives to offer protections amidst the technology industry’s rapid development. Though not codified as law, the International Bill of Human Rights and the principles set out in the International Labour Organization’s Declaration on Fundamental Principles and Rights at Work offer broad guidelines for corporations on what ethical standards should be implemented in developed and developing countries. As for current legal regulation, the European Union’s General Data Protection Regulation (GDPR) took effect as part of the body’s official code in May 2018, outlining guidelines that require companies to grant users control over their personal data.

These guidelines and laws, however, have yet to see widespread success in implementation. A 2018 study by NTT Security reported that many affected technology companies did not expect to be compliant by the date the law took effect, and that their key decision makers knew very little about the GDPR or how it would impact their businesses.

The general lack of understanding and disconnect among lawmakers and technology corporations points to one of the most critical issues behind the slow and cumbersome development of data and technology regulation: the phenomenon of asymmetric information. Just as companies were unsure how to shift operations to embody the new “ethically correct” guidelines of the GDPR, lawmakers across the globe don’t understand the underlying technology and machine learning algorithms to the extent that software engineers do. This raises another broad set of concerns over how to properly educate the relevant actors, so that the people who hold the relevant knowledge and the people most impacted by big technology both have a seat at the table. But the first step is to understand where that table is, and how to get people to it.

As a Rights Writer with the Kenan Institute for Ethics’ Global Human Rights Scholars Program, I look forward to exploring this subject as a means of better understanding how to bridge the gap between government and Big Tech. Ultimately, the companies driving the current technology revolution have the power to serve as incredible advancers, or abusers, of human rights, and in the spirit of promoting social justice on a global scale, we must begin to question and talk about the institutions that have presented themselves as threats to human rights and social wellbeing.