
“Tech Ethics/Corporate Ethics” Dinner Roundtable Conversation

The digital economy is increasingly introducing technologically derived threats to security: threats to data privacy and information security; hacking and data breaches; cyberattacks; and new forms of cyberwarfare and information warfare, such as disinformation campaigns by domestic and foreign entities. Some experts have called for greater oversight of tech companies and more robust general data privacy laws and data protection regulation. Other experts have called for more cooperative regulatory relationships between the public and private sectors. The promotion of corporate ethical norms and practices has been considered critical to the success of self-regulation models within the tech industry.


Roundtable Conversation

Technically Right at the Kenan Institute for Ethics is pleased to host a Dinner Roundtable on the topic of “Tech Ethics/Corporate Ethics” at 5:30 pm on Monday, November 11, in the Ahmadieh Family Conference Room (West Duke Building, room 101), located in the Kenan Institute for Ethics on East Campus. The event will be cosponsored by the Future of Privacy Forum and the Duke Law and Technology Review. Members of the Duke and Durham community are welcome to join a dinner conversation that will be facilitated by Margaret Hu, Kenan Institute for Ethics, with opening comments and questions framed by David Hoffman, Director of Security Policy and Global Privacy, Intel Corporation; and Jules Polonetsky, CEO of the Future of Privacy Forum.

Please RSVP to Jeremy Buotte <jeremy.buotte@duke.edu>. Parking information and a downloadable parking map (PDF) are available below.

BIOS

Jules Polonetsky serves as CEO of the Future of Privacy Forum, a Washington, D.C.-based non-profit organization that serves as a catalyst for privacy leadership and scholarship, advancing principled data practices in support of emerging technologies. FPF is supported by the chief privacy officers of more than 130 leading companies, several foundations, and an advisory board composed of the country’s leading academics and advocates. FPF’s current projects focus on Big Data, Mobile, Location, Apps, the Internet of Things, Wearables, De-Identification, Connected Cars and Student Privacy. Jules’s previous roles have included serving as Chief Privacy Officer at AOL and, before that, at DoubleClick; as Consumer Affairs Commissioner for New York City; as an elected New York State Legislator; as a congressional staffer; and as an attorney.

 

David Hoffman is Director of Security Policy and Global Privacy Officer at Intel Corporation, in which capacity he oversees Intel’s privacy compliance activities, legal support for privacy and security, and external privacy and security policy engagements.
Mr. Hoffman serves on the Department of Homeland Security’s Data Privacy and Integrity Advisory Committee and the Board of Directors of the National Cyber Security Alliance. Mr. Hoffman has also served on the US Federal Trade Commission’s Online Access and Security Committee, the Center for Strategic and International Studies Cyber Security Commission, the Steering Committee for BBBOnline, the TRUSTe Board of Directors, and the Board of the International Association of Privacy Professionals. Mr. Hoffman has a JD from Duke University School of Law, where he was a member of the Duke Law Journal. Mr. Hoffman also received an AB from Hamilton College.

 



Technically Right advances ethical tech policy and innovation through interdisciplinary research, coursework for undergraduates and graduate students, and convenings of scholars and practitioners.

DC Public Library Presents: Privacy 101

Ever want to browse the black market online? Or are you just interested in keeping your browsing history private from everyone? Then the DC Public Library’s 10-day series on government transparency and personal privacy is the place for you.

The series, cleverly titled “Orwellian America,” brings together a variety of documentary screenings, live readings, and workshops – all intended to inform the general public about their privacy (or lack thereof) in today’s digital age. Notable events include a seminar about accessing public government information, a live marathon reading of George Orwell’s 1984, and a lesson on using the Tor browser to protect your online privacy. All in all, it appears to be a thought-provoking program, particularly given the hacking and tracking we hear so much about on the news these days.

While the program’s content may not be entirely groundbreaking for a public library – a quick search reveals that the Denver Public Library holds a similar workshop – what does surprise me is the location. Right down the road from Congress and a few miles from the NSA, the DC library will teach people how to use a browser known mostly for the anonymity it provides and for its use in buying illegal goods.

Some people may criticize the program for teaching bad actors how to hide their crimes online, and others may criticize it for fear-mongering and fostering an unhealthy distrust of our government.* However, I think the most interesting issue this program raises is this: do we consider the Internet to be a public or a private space? What should it be, and how should we expect to be treated within it?

In many ways, I believe that a large majority of people (myself included) treat their online access like a private terminal to outside information – the equivalent of being inside your own home and looking outside at interesting things, with the occasional “post” equivalent to inviting others inside to see a poster hanging on the wall. In this analogy, deleting information from your Facebook profile or Twitter feed seems like it should be permanent, equal to taking down that hanging poster so that no one can see it anymore. Unfortunately, we know that the Internet is written in pen, not pencil, and that the digital trail can sometimes never be erased.

What we also know through whistleblowers and leakers is that the U.S. government has been secretly compiling these digital trails, going as far as to collect metadata not only for our Internet activity but also for our phone calls. If it already seemed unsettling for other people to be taking pictures and recording all of the posters we hang, then it is surely even more unsettling for our government to be doing the same without letting us know.

If the Internet is a private space, then it seems like all of this watching and recording is an invasion of our agreed privacy. But what if the Internet is a public space? What inherent level of privacy should we expect, and does the level of surveillance depend on what the government does with the information?

Frustratingly, it seems impossible to determine whether the U.S. government’s surveillance produces a net good or net bad. To do so would require comparing things like the lives saved from the thwarting of terrorist attacks with things like the lives crippled by false positives and a general lack of privacy (which I am assuming is a positive attribute that most people want). More frustratingly, maybe we should expect an inherent level of privacy no matter what, just like the way we expect public bathrooms to be free from surveillance cameras. Even then, we have the tough task of determining what (if anything) is the online equivalent to walking into a toilet stall.

The DC Public Library’s “Orwellian America” reminds me that I know little about my online self and that I know even less about how to form expectations for online privacy in the grand scheme of things. Here is this public institution teaching people to stay more private from public surveillance on channels of questionable privacy in a network with ambiguous public/private expectations. Confusing. In any case, the public library seems like an appropriate place to start doing some learning.

* Given that everything included in the program is already free and publicly available, you could easily argue that the library won’t be teaching you anything you couldn’t already do in, say, a library. You could also argue that the public currently has an unhealthy trust in our government’s respect for our online privacy.

Google and Internet Freedom Part II (It Could Be Worse)

Yes, Google currently holds the power to regulate speech through YouTube.  And yes, Google shapes the way they control speech around the American ideal of free speech.  Their policy is designed to give Google a very limited role in regulation.  In fact, one could argue that since they follow other governments’ laws, other nations actually serve as the checks and balances on this company.  Whether they should have this power is irrelevant, because it already lies in their hands.  What is worrisome is how a government or a company decides to exercise its power over speech.

Recently, the video The Innocence of Muslims was tied to the violence occurring in Libya and other countries in the Middle East and North Africa, as Grace posted about earlier this week.  YouTube hosted the video but decided to take it down in Egypt and Libya even though they had already determined that it did not violate their terms of service.  Why did Google decide to depart from its normal way of regulating YouTube?  They issued a statement saying these were extenuating circumstances.  In this case, the fact that violence was tied specifically to this video shows that Google tried to make the situation better with the options available to them.  Other countries, including the U.S., requested that Google remove the video from YouTube and were denied.  Many of the countries that made this request had no violence tied to the video.  Moreover, Google rarely complies with such requests, so any acquiescence would have been unusual.  If Google had complied, its role in regulation would have grown, which Google evidently wants to prevent.

Other videos showing acts of violence, like the video of the former U.S. Ambassador to Libya moments before his death, have also not been taken down.  You may wonder whether this, too, qualifies as an extenuating circumstance, but that video has not incited violence, nor is it hate speech.  Taking down videos like this could make Google more susceptible to the numerous removal requests it receives.  While Google legally does have the right to take down any video, people are equally free to use Google’s services or not.  I feel that Google has chosen to give the power back to the people as much as possible by not interfering with what is posted on YouTube.  Having the video on YouTube doesn’t force anyone to watch it.  Google leaves regulation up to the current laws of a nation and the choices of its people.

Google’s business and moral interests are in alignment:  they largely do not want to control speech.  They have mostly taken a hands-off approach to regulation that reflects the free-speech norms of the country the company originated in.  There have been incidents where Google played moral police in subtle ways, as in the case of the Ashley Madison website.  Google removed this website – which helps facilitate extramarital affairs – from autocomplete, making it more difficult to find unless you know what you’re looking for, and blocked its ads in the Google Content network.  Google had no right to block the site’s ads or bury the website, and should have followed its own rule of taking a smaller role in regulating speech.  Look at it this way: if Google took a more active stance on regulation, everyone would be subject to the beliefs of the people in charge.  If they were homophobic, chances are all of the videos concerning homosexuality would be removed.  If they were religious, anything that violated their beliefs could be removed.  If they hated violence, perhaps the Call of Duty commercials would no longer exist on YouTube.  Wouldn’t you rather they took a hands-off approach to regulation except in extenuating circumstances like The Innocence of Muslims video?

 

Google and Internet Freedom Part I (The Plight of the Modern Day Big Brother)

Google is by no means “Big Brother,” but it certainly has been making some big calls recently with regard to its decision to keep the controversial video, “The Innocence of Muslims,” on YouTube.

http://www.youtube.com/watch?v=MAiOEV0v2RM

Despite requests from the governments of the United States, Bangladesh, and Russia, Google has kept the video on its main site and blocked it only in India and Indonesia, where it violates local law.  To justify its decision, Google asserts that the video does not violate its terms of service or constitute hate speech because it is directed against Islam, not against Muslims as a group.

This recent controversy brings to light grave ethical and political implications.

Should Google be the only party to have jurisdiction over YouTube?  What does freedom of speech and press look like in a realm that transcends national, religious, and geopolitical boundaries?

Google’s recent actions are problematic in 3 ways:

  1. In an effort to preserve free speech, Google premises its defense on imposing a blanket principle that other countries and cultures may not subscribe to.  Satire of Islam may not qualify as hate speech in the United States, but it certainly does figure into the definition that many countries, such as Bangladesh, espouse.  (For different standards of hate speech around the world, see: http://en.wikipedia.org/wiki/Hate_speech). By refusing to take down the video, Google is forcing these countries’ hands in banning YouTube altogether – which is what Bangladesh has done, and what Russia is considering.
  2. By refusing to assume a “Big Brother” role, Google is ironically becoming “Meta-Big Brother.”  Although protests have erupted in more than twenty countries, Google has only temporarily blocked the video in Egypt and Libya.  In response to U.S. requests to take the video down in other protest-ridden nations, Google has responded that it will do so if those situations become exigent.  This raises the question: since when did Google become the main arbiter of geopolitics?  Given that Google removes videos that violate local copyright law, it should accede to local standards for hate speech as well.  With regard to sensitive videos such as “The Innocence of Muslims,” Google can be “hands-off” by allowing governments to make the final call.
  3. Finally, Google needs more restrictions on permissible video content beyond its terms of service and its prohibition on hate speech.  Although “The Innocence of Muslims” may not hit close to home for many Americans, the video of the former U.S. Ambassador to Libya, Christopher Stevens, certainly does.  While it is certainly within the purview of Google’s policies to allow the footage of Ambassador Stevens’s brutal treatment to be shown, is it ethical to allow it in light of the recent tragedy?

Google needs to recognize that the line between inaction and action is a dubious one.  Although it wants to be as unobtrusive as possible, the plight of the Modern Day Big Brother is that it has no choice but to involve itself in governing the internet realm.  Whether it chooses to keep the video up or to take it down is setting an unmistakable precedent.  Given that Google has already conceded that free speech needs to be reined in under certain circumstances, it should take the first step in further defining its place in the YouTube community.

*Not everyone agrees with my views. In fact, Kristian will be posting a rejoinder on Wednesday. Stay tuned!

Bioterrorism 1, U.S. Censorship 0?

Media censorship is always a contentious issue, but recently, the battleground has moved to scientific research.

According to an Economist article, “Influenza and its Complications,” the U.S. National Science Advisory Board for Biosecurity (NSABB) asked the world’s two leading scientific journals, Science and Nature, to censor research on the H5N1 flu virus.

Ron Fouchier of the Erasmus Medical Centre in Rotterdam and Yoshihiro Kawaoka of the University of Wisconsin-Madison had been working on a strain of avian flu that can be transmitted person-to-person and were on the verge of publishing their results. Fearing that the details of their work might be used as a bioterrorism blueprint, the NSABB asked for a moratorium on the publication of this work.


The Man Who Cried Radiation

The boy who cried “wolf!” met an unfortunate end.  Last week, the man who cried “radiation!” did too.

According to a recent Reuters article, a Chinese man in Zhejiang province, Chen, was jailed for 10 days and fined 500 yuan for spreading online rumors that Japanese radiation had contaminated Chinese waters.  Chen posted a note via an online message board urging his family members and friends to stockpile salt, avoid seafood, and spread the message.

Censorship and individual liberties are clearly the defining issues in this case; however, the more interesting question is whether posting “RADIATION” on the internet is the same as screaming “FIRE” in a crowded theatre.  Is one more morally “okay” than the other?
