<node id="602011">
  <nid>602011</nid>
  <type>news</type>
  <uid>
    <user id="33939"><![CDATA[33939]]></user>
  </uid>
  <created>1517957454</created>
  <changed>1517957454</changed>
  <title><![CDATA[Georgia Tech Artificial Intelligence Research Includes Collaborative Approaches with Humans, Automating Content, and More]]></title>
  <body><![CDATA[<p>Georgia Tech&rsquo;s latest artificial intelligence research, presented Feb. 2-7&nbsp;at the&nbsp;<a href="https://aaai.org/Conferences/AAAI-18/">AAAI Conference on Artificial Intelligence</a>&nbsp;in New Orleans, demonstrates some of the many approaches to developing capabilities for the next generation of autonomous machines.&nbsp;</p>

<p>Four faculty from the Schools of Interactive Computing and Computational Science and Engineering had research accepted into the program. They include Interactive Computing&rsquo;s&nbsp;<strong>Dhruv Batra</strong>,&nbsp;<strong>Ashok Goel</strong>&nbsp;and&nbsp;<strong>Mark Riedl</strong>, and CSE&rsquo;s&nbsp;<strong>Le Song</strong>.&nbsp;</p>

<p><strong>Invited talks at the conference include:&nbsp;</strong></p>

<ul>
	<li>Ashok Goel - &ldquo;Jill Watson, Family, and Friends: Experiments in Building Automated Teaching Assistants&rdquo; (also a panelist on &ldquo;Next Big Steps in AI for Education&rdquo;)</li>
	<li>Dhruv Batra - Emerging Topics Program in &ldquo;Human-AI Collaboration&rdquo;</li>
	<li>Charles Isbell - &ldquo;How Machines Learn Best from Humans&rdquo;</li>
</ul>

<h3><strong>Building for Creativity</strong></h3>

<p>Among the accepted Georgia Tech research is work on deep neural networks that teach AI agents how to write and construct narratives with a human collaborator, allowing stories to be generated in new ways.</p>

<p>Researchers have come up with a method to simplify sentences into &ldquo;events,&rdquo; akin to an elementary school grammar lesson. Understanding the subject, verb, and other constituent parts of a sentence makes it easier for the computer to generate a reasonable next event in a story. The AI&rsquo;s generated event is then translated back into a human-readable sentence.</p>
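<p>The event idea can be sketched as a small tuple of a sentence&rsquo;s constituent parts. In this minimal sketch the field names and the naive word-split extraction are illustrative assumptions, not the authors&rsquo; actual pipeline, which involves full parsing and generalization:</p>

```python
from collections import namedtuple

# Illustrative 4-slot event representation: who did what, to what, how.
Event = namedtuple("Event", ["subject", "verb", "obj", "modifier"])

def sentence_to_event(sentence):
    # Naive split for toy subject-verb-object(-modifier) sentences; a real
    # system would use dependency parsing, stemming, and generalization.
    words = sentence.split()
    words += [None] * (4 - len(words))  # pad any missing slots
    return Event(*words[:4])

print(sentence_to_event("dragon kidnaps princess"))
```

<p>Reducing each sentence to a fixed-size event like this shrinks the space of possible &ldquo;next steps&rdquo; the model must choose among, which is what makes generating a plausible next event tractable.</p>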

<p>&ldquo;We can use these methods in an AI that goes back and forth with someone, co-creating a brand new story in real-time,&rdquo; says Lara Martin, Ph.D. candidate in Human-Centered Computing and lead researcher. &ldquo;More importantly, this system will be able to continue a story about any topic, which is crucial for improvisation.&rdquo;</p>


<p>Mark Riedl, director of the Entertainment Intelligence Lab and co-author on the paper, has developed many systems to advance AI creativity as a domain that can spur growth in the field.&nbsp;</p>

<p>&ldquo;As human-AI interaction becomes more common, it becomes more important for AIs to be able to engage in open-world improvisational storytelling,&rdquo; he says. &ldquo;This is because it enables AIs to communicate with humans in a natural way without sacrificing the human&rsquo;s perception of agency.&rdquo;</p>

<h3><strong>Creating Context for Visual Media</strong></h3>

<p>Another Georgia Tech innovation is a method to create captions for images from any digital file, on- or offline. The research team studied current machine learning models for automatic image captioning and found that their output was often limited to boring, generic descriptions. Their approach, Diverse Beam Search, is an algorithm that tries to capture the richness of language by generating a diverse set of descriptions that humans generally prefer.</p>
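<p>The core idea of Diverse Beam Search is to split the beam into groups and penalize later groups for repeating tokens that earlier groups just chose, pushing each group toward a different caption. The sketch below is a simplification under assumed names and a toy scoring function; the paper&rsquo;s actual diversity term and decoding model are more involved:</p>

```python
def diverse_beam_search(step_logprob, vocab, num_groups=2, beam_per_group=1,
                        steps=3, diversity_strength=5.0):
    # One beam per group; each beam item is (token_sequence, total_logprob).
    groups = [[((), 0.0)] for _ in range(num_groups)]
    for _ in range(steps):
        chosen_this_step = []          # tokens earlier groups just picked
        for g in range(num_groups):
            candidates = []
            for seq, lp in groups[g]:
                for tok in vocab:
                    score = lp + step_logprob(seq, tok)
                    # Diversity penalty: discourage tokens already chosen
                    # by earlier groups at this time step.
                    score -= diversity_strength * chosen_this_step.count(tok)
                    candidates.append((seq + (tok,), score))
            candidates.sort(key=lambda c: c[1], reverse=True)
            groups[g] = candidates[:beam_per_group]
            chosen_this_step.extend(s[-1] for s, _ in groups[g])
    return [seq for grp in groups for seq, _ in grp]

# Toy "model": token "a" is always most likely, so ordinary beam search
# would return "a a" from every beam; the penalty steers group 2 elsewhere.
toy = lambda seq, tok: -0.1 if tok == "a" else -1.0
print(diverse_beam_search(toy, ["a", "b"], steps=2))
# -> [('a', 'a'), ('b', 'b')]
```

<p>Without the penalty, every group would collapse onto the same highest-probability caption, which is exactly the &ldquo;generic description&rdquo; failure mode the method targets.</p>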

<p>&ldquo;We categorized images based on their complexity and observed that on &lsquo;complex&rsquo; scenes, say, a view of a kitchen with multiple objects,&nbsp;our method indeed resulted in significant improvements in captions,&rdquo; says Ashwin Vijayakumar, Ph.D. student in Computer Science and lead author.</p>

<p>Simpler images were tougher for the AI system: the internet&rsquo;s many cat closeups can only be described in so many ways, according to Vijayakumar.</p>

<p>Pictures can be uploaded to the system and tested in real time at&nbsp;<a href="http://dbs.cloudcv.org/">http://dbs.cloudcv.org/</a>.</p>


<h3><strong>AAAI 2018 Conference&nbsp;</strong></h3>

<p><em>PAPERS</em></p>

<p><strong>Diverse Beam Search for Improved Description of Complex Scenes</strong></p>

<p><em>Ashwin Vijayakumar, Michael Cogswell, Ramprasaath Selvaraju, Qing Sun, Stefan Lee, David Crandall, Dhruv Batra</em></p>

<p><strong>The Structural Affinity Method for Solving the Raven&#39;s Progressive Matrices Test for Intelligence</strong></p>

<p><em>Snejana Shegheva, Ashok Goel</em></p>

<p><strong>Event Representations for Automated Story Generation with Deep Neural Nets</strong></p>

<p><em>Lara Martin, Prithviraj Ammanabrolu, Xinyu Wang, William Hancock, Shruti Singh, Brent Harrison, Mark Riedl</em></p>

<p><strong>Deep Semi-Random Features for Nonlinear Function Approximation</strong></p>

<p><em>Kenji Kawaguchi, Bo Xie, Le Song</em></p>

<p><strong>Learning Conditional Generative Models for Temporal Point Processes</strong></p>

<p><em>Shuai Xiao, Hongteng Xu, Junchi Yan, Mehrdad Farajtabar, Xiaokang Yang, Le Song, Hongyuan Zha</em></p>

<p><strong>Variational Reasoning for Question Answering with Knowledge Graph</strong></p>

<p><em>Yuyu Zhang, Hanjun Dai, Zornitsa Kozareva, Alexander Smola, Le Song</em></p>

<p><em>WORKSHOPS</em></p>

<p><strong>Knowledge Extraction from Games</strong></p>

<p><em>Matthew Guzdial (committee)</em></p>

<p><em>COMMITTEES</em></p>

<p>Computational Sustainability Co-chair - Bistra Dilkina</p>

<h3><strong>AAAI/ACM Conference on Artificial Intelligence, Ethics, and Society</strong></h3>

<p><em>PAPERS</em></p>

<p><strong>Jill Watson Doesn&rsquo;t Care if You&rsquo;re Pregnant: Grounding AI Ethics in Empirical Studies</strong></p>

<p><em>Bobbie Eicher, Lalith Polepeddi and Ashok Goel</em></p>

<p><em>COMMITTEES</em></p>

<p>Student Track, AI and Law Program Chair -&nbsp;Deven Desai</p>
]]></body>
  <field_subtitle>
    <item>
      <value><![CDATA[]]></value>
    </item>
  </field_subtitle>
  <field_dateline>
    <item>
      <value>2018-02-06T00:00:00-05:00</value>
      <timezone><![CDATA[America/New_York]]></timezone>
    </item>
  </field_dateline>
  <field_summary_sentence>
    <item>
      <value><![CDATA[Four faculty from the Schools of Interactive Computing and Computational Science and Engineering had research accepted into the program. They include Interactive Computing’s Dhruv Batra, Ashok Goel and Mark Riedl, and CSE’s Le Song.]]></value>
    </item>
  </field_summary_sentence>
  <field_summary>
    <item>
      <value><![CDATA[]]></value>
    </item>
  </field_summary>
  <field_media>
          <item>
        <nid>
          <node id="602010">
            <nid>602010</nid>
            <type>image</type>
            <title><![CDATA[AAAI 2018 logo]]></title>
            <body><![CDATA[]]></body>
                          <field_image>
                <item>
                  <fid>229445</fid>
                  <filename><![CDATA[Screen Shot 2018-02-06 at 5.44.50 PM.png]]></filename>
                  <filepath><![CDATA[/sites/default/files/images/Screen%20Shot%202018-02-06%20at%205.44.50%20PM.png]]></filepath>
                  <file_full_path><![CDATA[http://tlwarc.hg.gatech.edu//sites/default/files/images/Screen%20Shot%202018-02-06%20at%205.44.50%20PM.png]]></file_full_path>
                  <filemime>image/png</filemime>
                  <image_740><![CDATA[]]></image_740>
                  <image_alt><![CDATA[AAAI 2018 logo]]></image_alt>
                </item>
              </field_image>
            
                      </node>
        </nid>
      </item>
      </field_media>
  <field_contact_email>
    <item>
      <email><![CDATA[]]></email>
    </item>
  </field_contact_email>
  <field_location>
    <item>
      <value><![CDATA[]]></value>
    </item>
  </field_location>
  <field_contact>
    <item>
      <value><![CDATA[<p>David Mitchell</p>

<p>Communications Officer</p>

<p>david.mitchell@cc.gatech.edu</p>
]]></value>
    </item>
  </field_contact>
  <field_sidebar>
    <item>
      <value><![CDATA[]]></value>
    </item>
  </field_sidebar>
  <field_boilerplate>
    <item>
      <nid><![CDATA[]]></nid>
    </item>
  </field_boilerplate>
  <!--  TO DO: correct to not conflate categories and news room topics  -->
  <field_categories>
      </field_categories>
  <core_research_areas>
          <term tid="39501"><![CDATA[People and Technology]]></term>
      </core_research_areas>
  <field_news_room_topics>
      </field_news_room_topics>
  <links_related>
          <link>
      <url>https://aaai.org/Conferences/AAAI-18/</url>
      <title></title>
      </link>
          <link>
      <url>https://www.ic.gatech.edu/content/artificial-intelligence-machine-learning</url>
      <title></title>
      </link>
      </links_related>
  <files>
      </files>
  <og_groups>
          <item>47223</item>
          <item>50876</item>
      </og_groups>
  <og_groups_both>
          <item><![CDATA[College of Computing]]></item>
          <item><![CDATA[School of Interactive Computing]]></item>
      </og_groups_both>
  <field_keywords>
          <item>
        <tid>98401</tid>
        <value><![CDATA[AAAI]]></value>
      </item>
          <item>
        <tid>177034</tid>
        <value><![CDATA[AAAI 2018]]></value>
      </item>
          <item>
        <tid>2556</tid>
        <value><![CDATA[artificial intelligence]]></value>
      </item>
          <item>
        <tid>177035</tid>
        <value><![CDATA[Thirty-second AAAI Conference on Artificial Intelligence]]></value>
      </item>
          <item>
        <tid>173615</tid>
        <value><![CDATA[dhruv batra]]></value>
      </item>
          <item>
        <tid>127171</tid>
        <value><![CDATA[Le Song]]></value>
      </item>
          <item>
        <tid>112431</tid>
        <value><![CDATA[ashok goel]]></value>
      </item>
          <item>
        <tid>66281</tid>
        <value><![CDATA[Mark Riedl]]></value>
      </item>
          <item>
        <tid>166848</tid>
        <value><![CDATA[School of Interactive Computing]]></value>
      </item>
          <item>
        <tid>654</tid>
        <value><![CDATA[College of Computing]]></value>
      </item>
      </field_keywords>
  <field_userdata>
      <![CDATA[]]>
  </field_userdata>
</node>
