<?xml version="1.0" encoding="UTF-8"?><rss version="2.0"
	xmlns:content="http://purl.org/rss/1.0/modules/content/"
	xmlns:dc="http://purl.org/dc/elements/1.1/"
	xmlns:atom="http://www.w3.org/2005/Atom"
	xmlns:sy="http://purl.org/rss/1.0/modules/syndication/"
		>
<channel>
	<title>Comments on: Unplugging Watson</title>
	<atom:link href="http://kenan.ethics.duke.edu/teamkenan/unplugging-watson/feed/" rel="self" type="application/rss+xml" />
	<link>http://kenan.ethics.duke.edu/teamkenan/unplugging-watson/</link>
	<description></description>
	<lastBuildDate>Sun, 10 Feb 2013 23:18:02 +0000</lastBuildDate>
	<sy:updatePeriod>hourly</sy:updatePeriod>
	<sy:updateFrequency>1</sy:updateFrequency>
	<generator>http://wordpress.org/?v=3.4.1</generator>
	<item>
		<title>By: Nihir</title>
		<link>http://kenan.ethics.duke.edu/teamkenan/unplugging-watson/#comment-25</link>
		<dc:creator>Nihir</dc:creator>
		<pubDate>Thu, 03 Mar 2011 21:44:51 +0000</pubDate>
		<guid isPermaLink="false">http://devilsdilemma.wordpress.com/?p=336#comment-25</guid>
		<description>I completely agree that Watson does not actually learn from its mistakes (as far as I am aware), and that as of right now, its algorithms must be tweaked when it makes mistakes. I definitely feel that Watson should not be treated as a human, because as you describe, it lacks the fundamentals that make humans intelligent. This brings up two questions, though. First, it is plausible that in the future, computers like Watson will have evolving algorithms that adjust to their environment and learn from their mistakes. Some programs do this right now, such as those used by Netflix for movie recommendations. What happens when these different sets of algorithms begin to be paired together? How do we treat computers then? Second, there is the question of humans who do not satisfy the requirements for intelligence that you have listed. A person with certain learning disorders or neurological conditions may not be able to learn, but does that mean that we do not treat them like humans?</description>

As for punishments and law-breaking, most computers (even those running Artificial Intelligence software) follow Asimov&#039;s laws, which prevent the computer from &quot;breaking away&quot; and ignoring the commands of human users. However, your question is a valid one, and one that would have to be examined. In my opinion, the owners of the computer would be responsible for the actions of the computer. For example, with your example of Excel, the company would be the group sued by the workers, as Microsoft has simply provided a tool for use by the company and isn&#039;t responsible for the use of that tool. A hammer company isn&#039;t responsible for those times during spring break when your friend hits you with a hammer. Also, the clerk may be partially at fault, but as an agent of the company, it would most likely be the company that would be sued, and not the clerk, though the clerk might lose his or her job.</description>
		<content:encoded><![CDATA[<p>I completely agree that Watson does not actually learn from its mistakes (as far as I am aware), and that as of right now, its algorithms must be tweaked when it makes mistakes. I definitely feel that Watson should not be treated as a human, because as you describe, it lacks the fundamentals that make humans intelligent. This brings up two questions, though. First, it is plausible that in the future, computers like Watson will have evolving algorithms that adjust to their environment and learn from their mistakes. Some programs do this right now, such as those used by Netflix for movie recommendations. What happens when these different sets of algorithms begin to be paired together? How do we treat computers then? Second, there is the question of humans who do not satisfy the requirements for intelligence that you have listed. A person with certain learning disorders or neurological conditions may not be able to learn, but does that mean that we do not treat them like humans?</p>
<p>As for punishments and law-breaking, most computers (even those running Artificial Intelligence software) follow Asimov&#8217;s laws, which prevent the computer from &#8220;breaking away&#8221; and ignoring the commands of human users. However, your question is a valid one, and one that would have to be examined. In my opinion, the owners of the computer would be responsible for the actions of the computer. For example, with your example of Excel, the company would be the group sued by the workers, as Microsoft has simply provided a tool for use by the company and isn&#8217;t responsible for the use of that tool. A hammer company isn&#8217;t responsible for those times during spring break when your friend hits you with a hammer. Also, the clerk may be partially at fault, but as an agent of the company, it would most likely be the company that would be sued, and not the clerk, though the clerk might lose his or her job.</p>
]]></content:encoded>
	</item>
	<item>
		<title>By: Leonard Ng'eno</title>
		<link>http://kenan.ethics.duke.edu/teamkenan/unplugging-watson/#comment-24</link>
		<dc:creator>Leonard Ng'eno</dc:creator>
		<pubDate>Sat, 26 Feb 2011 07:07:49 +0000</pubDate>
		<guid isPermaLink="false">http://devilsdilemma.wordpress.com/?p=336#comment-24</guid>
		<description>The main difference between Watson and human beings is that Watson is just a bunch of algorithms cleverly lumped together to give it the ability to mimic human traits. Unlike a young child who grows over time, Watson does not have the ability to grow its intelligence naturally (with no external modifications, Watson will, in ten years&#039; time, have the same capabilities it has today...or so I think). Does it believe (if it has this ability at all) in anything save what it reads, or even more interesting, does it learn from its mistakes? Suppose Watson were your local doctor and it gave you the wrong prescription: would its algorithms realize that it made a mistake and correct it the next time, or would it keep making the same mistakes over and over until an engineer fixes the bug? If Watson were to be developed to a stage where it is capable of learning on its own and making its own decisions, could we possibly punish it by putting it in prison or unplugging it for a while, whenever it overrides its algorithmic specifications and breaks the law? Or will we view the owners of Watson as the ones who broke the law? Or will IBM be the culprit? If raw data used to determine salary payments were input by a clerk into an Excel sheet and the Excel program gave the wrong salary figures, do the people receiving the wrong salaries sue the clerk, the company, or Microsoft?</description>
		<content:encoded><![CDATA[<p>The main difference between Watson and human beings is that Watson is just a bunch of algorithms cleverly lumped together to give it the ability to mimic human traits. Unlike a young child who grows over time, Watson does not have the ability to grow its intelligence naturally (with no external modifications, Watson will, in ten years&#8217; time, have the same capabilities it has today&#8230;or so I think). Does it believe (if it has this ability at all) in anything save what it reads, or even more interesting, does it learn from its mistakes? Suppose Watson were your local doctor and it gave you the wrong prescription: would its algorithms realize that it made a mistake and correct it the next time, or would it keep making the same mistakes over and over until an engineer fixes the bug? If Watson were to be developed to a stage where it is capable of learning on its own and making its own decisions, could we possibly punish it by putting it in prison or unplugging it for a while, whenever it overrides its algorithmic specifications and breaks the law? Or will we view the owners of Watson as the ones who broke the law? Or will IBM be the culprit? If raw data used to determine salary payments were input by a clerk into an Excel sheet and the Excel program gave the wrong salary figures, do the people receiving the wrong salaries sue the clerk, the company, or Microsoft?</p>
]]></content:encoded>
	</item>
</channel>
</rss>