<node id="670910">
  <nid>670910</nid>
  <type>event</type>
  <uid>
    <user id="27707"><![CDATA[27707]]></user>
  </uid>
  <created>1699303012</created>
  <changed>1699303012</changed>
  <title><![CDATA[PhD Defense by Nathaniel Moore Glaser]]></title>
<body><![CDATA[<p>Dear faculty members and fellow students,</p>

<p>&nbsp;</p>

<p>You are cordially invited to attend my dissertation defense on Friday, November 17th.</p>

<p>&nbsp;</p>

<p><strong>Title:&nbsp;</strong>Collaborative Perception and Planning for Multi-View and Multi-Robot Systems</p>

<p>&nbsp;</p>

<p><strong>Date:&nbsp;</strong>Friday, November 17th, 2023</p>

<p><strong>Time:&nbsp;</strong>1:00 PM - 3:00 PM EST</p>

<p><strong>Location:&nbsp;</strong>Coda C1315 ("Grant Park") or <a href="https://gatech.zoom.us/j/9381543689" target="_blank" title="https://gatech.zoom.us/j/9381543689">this Zoom link</a></p>

<p>&nbsp;</p>

<p><strong>Nathaniel Moore Glaser</strong></p>

<p>Robotics PhD Candidate</p>

<p>School of Electrical and Computer Engineering</p>

<p>Georgia Institute of Technology</p>

<p>&nbsp;</p>

<p><strong>Committee:</strong></p>

<p>Dr. Zsolt Kira (Advisor) - School of Interactive Computing, Georgia Institute of Technology</p>

<p>Dr. James Hays - School of Interactive Computing, Georgia Institute of Technology</p>

<p>Dr. Patricio Vela - School of Electrical and Computer Engineering, Georgia Institute of Technology</p>

<p>Dr. Pratap Tokekar - Department of Computer Science, University of Maryland</p>

<p>Dr. Milutin Pajovic - Senior Research Scientist, Analog Devices</p>

<p>&nbsp;</p>

<p><strong>Abstract:</strong></p>

<p>The field of robotics has historically focused on <em>egocentric, single-agent</em> systems. However, robots in such systems are susceptible to single points of failure. For instance, a single sensor failure or adverse environmental condition can render an isolated robot "blind". On the other hand, robots in <em>multi-agent</em> systems have the opportunity to overcome potentially dangerous blind spots via communication and collaboration with their peers. In this dissertation, we address these communication-critical settings with <strong>Collaborative Perception and Planning for Multi-View and Multi-Robot Systems</strong>.</p>

<p>&nbsp;</p>

<p>First, we develop several learned communication and spatial registration schemes for <strong><em>immediate collaboration</em></strong>. These schemes allow us to efficiently communicate and align visual observations between moving agents for <em>a single instant in time</em>. We demonstrate improved egocentric semantic segmentation accuracy for a swarm of obstruction-prone aerial quadrotors. Second, we extend our methods to <strong><em>short-term collaboration</em></strong>, where robots compress short sequences of observations into easily communicable representations, such as maps and costmaps. In addition to conserving bandwidth, our methods (1) reduce task duration for multi-robot mapping and (2) improve safety for planning, especially in self-driving settings where egocentric vision has potentially fatal blind spots. Third, we stress-test our methods on <strong><em>large-scale, long-term collaboration</em></strong>. In the setting of production-scale robotic farming, we show how collaborative perception is capable of handling <em>large</em> numbers of <em>heterogeneous</em> robots by corresponding <em>ambiguous</em> data over <em>long</em> stretches of time. We hope that our work on collaborative perception will help transition single-agent systems to robust and efficient multi-agent capabilities.</p>

<p>&nbsp;</p>
]]></body>
  <field_summary_sentence>
    <item>
      <value><![CDATA[Collaborative Perception and Planning for Multi-View and Multi-Robot Systems]]></value>
    </item>
  </field_summary_sentence>
  <field_summary>
    <item>
      <value><![CDATA[<p>Collaborative Perception and Planning for Multi-View and Multi-Robot Systems</p>
]]></value>
    </item>
  </field_summary>
  <field_time>
    <item>
      <value><![CDATA[2023-11-17T13:00:00-05:00]]></value>
      <value2><![CDATA[2023-11-17T15:00:00-05:00]]></value2>
      <rrule><![CDATA[]]></rrule>
      <timezone><![CDATA[America/New_York]]></timezone>
    </item>
  </field_time>
  <field_fee>
    <item>
      <value><![CDATA[]]></value>
    </item>
  </field_fee>
  <field_extras>
      </field_extras>
  <field_audience>
          <item>
        <value><![CDATA[Public]]></value>
      </item>
      </field_audience>
  <field_media>
      </field_media>
  <field_contact>
    <item>
      <value><![CDATA[]]></value>
    </item>
  </field_contact>
  <field_location>
    <item>
      <value><![CDATA[Coda C1315 ("Grant Park") or Zoom: https://gatech.zoom.us/j/9381543689]]></value>
    </item>
  </field_location>
  <field_sidebar>
    <item>
      <value><![CDATA[]]></value>
    </item>
  </field_sidebar>
  <field_phone>
    <item>
      <value><![CDATA[]]></value>
    </item>
  </field_phone>
  <field_url>
    <item>
      <url><![CDATA[]]></url>
      <title><![CDATA[]]></title>
            <attributes><![CDATA[]]></attributes>
    </item>
  </field_url>
  <field_email>
    <item>
      <email><![CDATA[]]></email>
    </item>
  </field_email>
  <field_boilerplate>
    <item>
      <nid><![CDATA[]]></nid>
    </item>
  </field_boilerplate>
  <links_related>
      </links_related>
  <files>
      </files>
  <og_groups>
          <item>221981</item>
      </og_groups>
  <og_groups_both>
          <item><![CDATA[Graduate Studies]]></item>
      </og_groups_both>
  <field_categories>
          <item>
        <tid>1788</tid>
        <value><![CDATA[Other/Miscellaneous]]></value>
      </item>
      </field_categories>
  <field_keywords>
          <item>
        <tid>100811</tid>
        <value><![CDATA[PhD Defense]]></value>
      </item>
      </field_keywords>
  <userdata><![CDATA[]]></userdata>
</node>
