Unplugging Watson

February 24, 2011

Last week, something amazing happened: a computer took on two human beings in a game of Jeopardy!–and won. Here’s what happened, in a nutshell: IBM built a supercomputer, paired with powerful algorithms, that could interpret a question phrased in ordinary grammatical English, sift through a large collection of articles and books (drawn from sources like Wikipedia), and finally arrive at an answer to that question. Pretty amazing.
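It’s worth pausing on what that kind of pipeline looks like in miniature. The sketch below is a deliberately tiny, hypothetical Python illustration (keyword retrieval over a handful of documents plus a crude confidence score); it is nothing like the scale or sophistication of IBM’s actual DeepQA system.

```python
# Toy question-answering sketch: retrieve the best-matching passage by
# keyword overlap and report a crude confidence score. An illustration
# only; this is not how IBM's DeepQA actually works.

documents = {
    "Monte Carlo": "Charles Wells broke the bank at Monte Carlo in 1891.",
    "Toronto": "Toronto is the largest city in Canada.",
    "Chicago": "O'Hare and Midway are Chicago's two major airports.",
}

def answer(question: str):
    q_words = set(question.lower().replace("?", "").split())
    scores = {}
    for title, text in documents.items():
        overlap = q_words & set(text.lower().rstrip(".").split())
        scores[title] = len(overlap) / len(q_words)
    best = max(scores, key=scores.get)
    return best, scores[best]

topic, confidence = answer("Who broke the bank at Monte Carlo?")
# Like Watson, a real system would only "buzz in" above a confidence threshold.
print(f"Best guess: {topic} (confidence {confidence:.2f})")
```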

Watson, IBM’s Jeopardy! computer, doesn’t even look like a computer. Watson has a “face,” consisting of a screen that displays a constantly changing pattern based on Watson’s confidence when answering questions. Watson doesn’t require a human to run–the computer reads the same questions that are provided to the human players, and responds using a text-to-speech system. Watson is arguably more humanlike than any other computer ever created.

Photo Credit: Vaxomatic via Flickr

In 1950, Alan Turing proposed a test (now known as the Turing test) to determine whether or not a computer demonstrated artificial intelligence. Essentially, it revolved around having a human judge hold conversations with both a computer and a person without knowing which was which. When the judge could not distinguish the computer from the person, the computer was considered artificially intelligent.
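As a rough, hypothetical sketch of that setup (not Turing’s own formulation, just a way to make the protocol concrete), the test can be framed as a guessing game: the judge questions two hidden respondents and must decide which one is the machine.

```python
import random

# Hypothetical sketch of the Turing test as a guessing game: a judge
# questions two hidden respondents (one human, one machine) and guesses
# which is which. A machine "passes" when the judge does no better than chance.

def run_trial(human_reply, machine_reply, judge_guess, questions):
    machine_is_a = random.random() < 0.5          # hide the machine behind "A" or "B"
    transcript = []
    for q in questions:
        reply_a = machine_reply(q) if machine_is_a else human_reply(q)
        reply_b = human_reply(q) if machine_is_a else machine_reply(q)
        transcript.append((q, reply_a, reply_b))
    guess = judge_guess(transcript)               # judge names "A" or "B" as the machine
    return guess == ("A" if machine_is_a else "B")

# Trivial stand-ins for the three participants:
caught = run_trial(
    human_reply=lambda q: "Hmm, let me think about that.",
    machine_reply=lambda q: "Hmm, let me think about that.",
    judge_guess=lambda transcript: random.choice(["A", "B"]),
    questions=["Do you ever get tired?", "What does rain smell like?"],
)
print("Judge identified the machine:", caught)
```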

Watson is by no means a human, nor does it demonstrate true artificial intelligence. Some of Watson’s answers during the Jeopardy! tournament made that clear. For example, given the question “Gambler Charles Wells is believed to have inspired the song ‘The Man Who’ did this ‘at Monte Carlo,’” Watson responded “Song?” (correct answer: broke the bank). Another time, Watson stated that Toronto was in the United States (it’s not–it’s in Canada).

So Watson isn’t human–but it occupies a place that definitely blurs the line between humans and computers. Which, naturally, raises many questions about how we view and treat Watson (and perhaps computers in the future). Can we simply “unplug” Watson? We wouldn’t “unplug” a human, so can we kill a computer that has humanlike traits? Can we make decisions for Watson? If true artificial intelligence ever emerges, how do our answers to these questions change?

Watson can’t make decisions on its own, and thus doesn’t really have true “intelligence”–or does it? It could be argued that a young child can’t make decisions on its own either, yet we still treat children as fully fledged humans. What about Watson? Is it “human” enough?

One of the points raised by critics is that Watson will only show its true abilities when it can accomplish a task that benefits society–perhaps in a field such as healthcare or education, as opposed to a mere Jeopardy! game. But this brings up a dilemma: do we judge artificial intelligence based on its utility for humans, or do we judge the intelligence of the machine itself, regardless of the function it performs? And how should this question be treated when we look at other people and at animals?

The answers to many of these questions will never be 100% certain, but as we move into an age where the possibility of artificial intelligence seems very real, these are questions that our generation, and those that follow, will have to confront. And with these questions, we will once again have to reevaluate our view of the world and how we treat other people and, especially, animals. Watson’s television entertainment days are probably over, but its impact on artificial intelligence will continue to shape society, and with that, the way we evaluate the ethics of our actions.

  2 Responses to “Unplugging Watson”

  1. The main difference between Watson and human beings is that Watson is just a bunch of algorithms cleverly lumped together to give it the ability to mimic human traits. Unlike a young child, who grows over time, Watson does not have the ability to grow its intelligence naturally (with no external modifications, Watson will, in ten years’ time, have the same capabilities it has today…or so I think). Does it believe (if it has this ability at all) in anything beyond what it reads, or, even more interesting, does it learn from its mistakes? Suppose Watson were your local doctor and it gave you the wrong prescription: would its algorithms realize that it made a mistake and correct it the next time, or would it keep making the same mistakes over and over until an engineer fixes the bug? If Watson were developed to a stage where it could learn on its own and make its own decisions, could we punish it by putting it in prison, or unplugging it for a while, whenever it overrides its algorithmic specifications and breaks the law? Or would we view the owners of Watson as the ones who broke the law? Or would IBM be the culprit? If raw data used to determine salary payments were input by a clerk into an Excel sheet and the Excel program gave the wrong salary figures, do the people receiving the wrong salaries sue the clerk, the company, or Microsoft?

    • I completely agree that Watson does not actually learn from its mistakes (as far as I am aware), and that as of right now, its algorithms must be tweaked when it makes mistakes. I definitely feel that Watson should not be treated as a human because, as you describe, it lacks the fundamentals that make humans intelligent. This brings up two questions, though. First, it is likely that in the future, computers like Watson will have evolving algorithms that adjust to their environment and learn from their mistakes. Some programs already do this, such as those used by Netflix for movie recommendations (a toy sketch of this kind of feedback-driven learning appears after this reply). What happens when these different sets of algorithms begin to be paired together? How do we treat computers then? Second, there is the question of humans who do not satisfy the requirements for intelligence that you have listed. A person with certain learning disorders or neurological conditions may not be able to learn, but does that mean we do not treat them like humans?

      As for punishments and law breaking, most computers (even those running artificial intelligence software) simply execute their programming and cannot “break away” and ignore the commands of human users, the kind of behavior Asimov’s fictional laws of robotics were imagined to prevent. However, your question is a valid one, and one that would have to be examined. In my opinion, the owners of the computer would be responsible for its actions. In your Excel example, the company would be the one sued by the workers, since Microsoft has simply provided a tool for the company’s use and isn’t responsible for how that tool is used. A hammer company isn’t responsible for those times during spring break when your friend hits you with a hammer. The clerk may be partially at fault, but as an agent of the company, it would most likely be the company that gets sued, not the clerk–though the clerk might lose his or her job.
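As a purely hypothetical illustration of the kind of “evolving algorithm” mentioned in the reply above (not Netflix’s actual system), a program that “learns from its mistakes” needs little more than a way to nudge its predictions whenever feedback shows it was wrong:

```python
# Hypothetical sketch of an algorithm that "learns from its mistakes":
# a single predicted rating is nudged toward each observed rating.
# An illustration only; not Netflix's actual recommendation system.

def update(predicted, observed, learning_rate=0.1):
    error = observed - predicted          # how wrong the last prediction was
    return predicted + learning_rate * error

prediction = 3.0                          # initial guess for a user's rating
for actual_rating in [5, 5, 4, 5]:        # feedback the user provides over time
    prediction = update(prediction, actual_rating)
    print(f"adjusted prediction: {prediction:.2f}")
```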
