<node id="299651">
  <nid>299651</nid>
  <type>external_news</type>
  <uid>
    <user id="27255"><![CDATA[27255]]></user>
  </uid>
  <created>1401196828</created>
  <changed>1475893628</changed>
  <title><![CDATA[The Military Wants To Teach Robots Right From Wrong]]></title>
  <body><![CDATA[<p>Ronald Arkin, an AI expert from Georgia Tech and author of the book <a href="http://www.amazon.com/Governing-Lethal-Behavior-Autonomous-Robots-ebook/dp/B008I9YG9G/ref=sr_1_1?s=books&amp;ie=UTF8&amp;qid=1399927574&amp;sr=1-1&amp;keywords=Ronald+Arkins"><em>Governing Lethal Behavior in Autonomous Robots</em></a>, is a proponent of giving machines a moral compass. “It is not my belief that an unmanned system will be able to be perfectly ethical in the battlefield, but I am convinced that they can perform more ethically than human soldiers are capable of,” Arkin wrote in a <a href="http://www.cc.gatech.edu/ai/robot-lab/online-publications/formalizationv35.pdf">2007 research paper (PDF)</a>. Part of the reason for that, he said, is that robots are capable of following rules of engagement to the letter, whereas humans are more inconsistent.</p>]]></body>
  <field_article_url>
    <item>
      <url><![CDATA[http://www.theatlantic.com/technology/archive/2014/05/the-military-wants-to-teach-robots-right-from-wrong/370855/]]></url>
      <title><![CDATA[]]></title>
    </item>
  </field_article_url>
  <field_publication>
    <item>
      <value><![CDATA[The Atlantic]]></value>
    </item>
  </field_publication>
  <field_dateline>
    <item>
      <value>2014-05-14</value>
      <timezone></timezone>
    </item>
  </field_dateline>
  <field_media></field_media>
  <og_groups>
    <item>142761</item>
  </og_groups>
  <og_groups_both>
    <item><![CDATA[IRIM]]></item>
  </og_groups_both>
  <field_userdata><![CDATA[]]></field_userdata>
</node>
