
Unplugging Watson

Last week, something amazing happened: a computer took on two human beings in a game of Jeopardy!–and won. Here’s basically what happened: IBM built a supercomputer, paired with powerful algorithms, that could interpret a question asked in ordinary grammatical syntax, sift through a huge collection of articles and books (drawn from sources like Wikipedia), and finally arrive at an answer to that question. Pretty amazing.
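
To make that pipeline a little more concrete, here is a minimal sketch in Python. It is emphatically not IBM’s DeepQA architecture–just a hypothetical toy that retrieves candidate answers from a tiny made-up corpus and scores them with a crude word-overlap “confidence,” the way Watson (at vastly greater scale and sophistication) weighs evidence for its answers.

```python
# A toy question-answering sketch (hypothetical, not IBM's DeepQA):
# tokenize the question, retrieve the best-matching passage from a tiny
# made-up corpus, and report a crude word-overlap "confidence" score.

from collections import Counter

# Hypothetical mini-corpus standing in for the articles and books Watson searched.
CORPUS = {
    "Toronto": "Toronto is the capital of the province of Ontario, in Canada.",
    "Chicago": "Chicago is a major city in the United States, on Lake Michigan.",
    "Monte Carlo": "Monte Carlo is a district of Monaco famous for its casino.",
}

def tokenize(text: str) -> Counter:
    """Lowercase bag-of-words representation of a string."""
    return Counter(word.strip(".,?!\"'").lower() for word in text.split())

def answer(question: str) -> tuple[str, float]:
    """Return the best candidate answer and a rough confidence in [0, 1]."""
    q_words = tokenize(question)
    best, best_score = "", 0.0
    for candidate, passage in CORPUS.items():
        # Confidence = fraction of question words that also appear in the passage.
        overlap = sum((q_words & tokenize(passage)).values())
        score = overlap / max(sum(q_words.values()), 1)
        if score > best_score:
            best, best_score = candidate, score
    return best, best_score

if __name__ == "__main__":
    guess, confidence = answer("Which U.S. city sits on Lake Michigan?")
    print(f"Answer: {guess} (confidence {confidence:.2f})")  # -> Chicago
```

The real system layers far more sophisticated language analysis and evidence scoring on top of this retrieve-and-rank idea; the toy above is only meant to show the shape of the problem.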

Watson, IBM’s Jeopardy! computer, doesn’t even look like a computer. Watson has a “face,” consisting of a screen that displays a constantly changing pattern based on Watson’s confidence when answering questions. Watson doesn’t require a human to run–the computer reads the same questions that are provided to the human players, and responds using a text-to-speech system. Watson is arguably more humanlike than any other computer ever created.


In 1950, Alan Turing proposed a test to determine whether or not a computer demonstrated artificial intelligence. Essentially, it revolved around conversation: a human judge chats, in text, with both a computer and another person, without knowing which is which. If the judge cannot reliably tell the computer from the person, the computer is considered artificially intelligent.
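
For the curious, here is how that protocol looks when written down as code–a deliberately silly sketch of my own, with hypothetical stand-in players and a naive judge, not anything Turing actually specified.

```python
# A toy sketch of the Turing-test protocol described above. The two players
# and the judge's rule are hypothetical stand-ins, not real chatbots or a
# real evaluation procedure.

import random

def human_player(question: str) -> str:
    """Hypothetical human participant."""
    return f"Honestly, I'd have to think about '{question}' for a minute."

def machine_player(question: str) -> str:
    """Hypothetical machine participant with stilted, literal replies."""
    return f"QUERY RECEIVED: {question} RESPONSE UNAVAILABLE."

def imitation_game(questions: list[str]) -> bool:
    """Return True if the machine 'passes', i.e. the judge guesses wrong."""
    # Hide the identities behind shuffled labels; the judge must not know who is who.
    players = [human_player, machine_player]
    random.shuffle(players)
    labels = {"A": players[0], "B": players[1]}

    # A (very) naive judge: accuse whichever player's replies look mechanical.
    def seems_mechanical(reply: str) -> bool:
        return "QUERY" in reply or reply.isupper()

    guess = "A" if any(seems_mechanical(labels["A"](q)) for q in questions) else "B"
    actually_machine = "A" if labels["A"] is machine_player else "B"
    return guess != actually_machine

if __name__ == "__main__":
    verdict = imitation_game(["What did you have for breakfast?", "Do you dream?"])
    print("Machine passed!" if verdict else "Judge spotted the machine.")
```

In this toy version the machine never fools the judge–which, as the examples below suggest, is roughly where Watson stands: remarkable, but still easy to spot.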

Watson is by no means a human, nor does it demonstrate true artificial intelligence. Some of Watson’s answers during the Jeopardy! tournament made that clear. For example, given the clue “Gambler Charles Wells is believed to have inspired the song ‘The Man Who’ did this ‘at Monte Carlo,’” Watson responded “Song?” (the correct answer: “broke the bank”). Another time, Watson stated that Toronto was in the United States (it’s not–it’s in Canada).

So Watson isn’t human–but it occupies a place that definitely blurs the line between humans and computers. Which, naturally, raises many questions about how we view and treat Watson (and perhaps the computers that come after it). Can we simply “unplug” Watson? We wouldn’t “unplug” a human, so can we kill a computer that has humanlike traits? Can we make decisions for Watson? If true artificial intelligence ever emerges, how do our answers to these questions change?

Watson can’t make decisions on its own, and thus arguably doesn’t have true “intelligence”–or does it? It could be argued that a young child can’t make decisions on its own either, yet we still treat children as fully fledged humans. So what about Watson? Is it human enough?

One point raised by critics is that Watson will only show its true abilities when it can accomplish a task that benefits society–perhaps in a field such as healthcare or education, rather than a mere Jeopardy! game. But this raises a dilemma: do we judge artificial intelligence by its utility to humans, or do we judge the intelligence of the machine itself, regardless of the function it performs? And how should that same question be applied when we look at other people and animals?

The answers to many of these questions will never be 100% certain, but as we move into an age where artificial intelligence seems like a very real possibility, these are questions that our generation and those that follow will have to confront. And with them, we will once again have to reevaluate our view of the world and how we treat other people–and especially animals. Watson’s television entertainment days are probably over, but its impact on artificial intelligence will continue to shape society, and with that, the way we evaluate the ethics of our actions.