{"670910":{"#nid":"670910","#data":{"type":"event","title":"PhD Defense by Nathaniel Moore Glaser","body":[{"value":"\u003Cp\u003EDear faculty members and fellow students,\u003C\/p\u003E\r\n\r\n\u003Cp\u003EYou are cordially invited to attend my dissertation defense on Friday, November 17th.\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u003Cstrong\u003ETitle:\u003C\/strong\u003E Collaborative Perception and Planning for Multi-View and Multi-Robot Systems\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u003Cstrong\u003EDate:\u003C\/strong\u003E Friday, November 17th, 2023\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u003Cstrong\u003ETime:\u003C\/strong\u003E 1:00 PM - 3:00 PM EST\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u003Cstrong\u003ELocation:\u003C\/strong\u003E Coda C1315 (\u0022Grant Park\u0022) or \u003Ca href=\u0022https:\/\/gatech.zoom.us\/j\/9381543689\u0022 target=\u0022_blank\u0022 title=\u0022https:\/\/gatech.zoom.us\/j\/9381543689\u0022\u003Ethis zoom link\u003C\/a\u003E\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u003Cstrong\u003ENathaniel Moore Glaser\u003C\/strong\u003E\u003C\/p\u003E\r\n\r\n\u003Cp\u003ERobotics PhD Candidate\u003C\/p\u003E\r\n\r\n\u003Cp\u003ESchool of Electrical and Computer Engineering\u003C\/p\u003E\r\n\r\n\u003Cp\u003EGeorgia Institute of Technology\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u003Cstrong\u003ECommittee:\u003C\/strong\u003E\u003C\/p\u003E\r\n\r\n\u003Cp\u003EDr. Zsolt Kira (Advisor) - School of Interactive Computing, Georgia Institute of Technology\u003C\/p\u003E\r\n\r\n\u003Cp\u003EDr. James Hays - School of Interactive Computing, Georgia Institute of Technology\u003C\/p\u003E\r\n\r\n\u003Cp\u003EDr. Patricio Vela - School of Electrical and Computer Engineering, Georgia Institute of Technology\u003C\/p\u003E\r\n\r\n\u003Cp\u003EDr. Pratap Tokekar - Department of Computer Science, University of Maryland\u003C\/p\u003E\r\n\r\n\u003Cp\u003EDr. Milutin Pajovic - Senior Research Scientist, Analog Devices\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u003Cstrong\u003EAbstract:\u003C\/strong\u003E\u003C\/p\u003E\r\n\r\n\u003Cp\u003EThe field of robotics has historically focused on \u003Cem\u003Eegocentric, single-agent\u003C\/em\u003E systems. However, robots in such systems are susceptible to single points of failure. For instance, a single sensor failure or adverse environmental condition can render an isolated robot \u0022blind\u0022. On the other hand, robots in \u003Cem\u003Emulti-agent\u003C\/em\u003E systems can overcome potentially dangerous blind spots through communication and collaboration with their peers. In this dissertation, we address these communication-critical settings with \u003Cstrong\u003ECollaborative Perception and Planning for Multi-View and Multi-Robot Systems\u003C\/strong\u003E.\u003C\/p\u003E\r\n\r\n\u003Cp\u003EFirst, we develop several learned communication and spatial registration schemes for \u003Cstrong\u003E\u003Cem\u003Eimmediate collaboration\u003C\/em\u003E\u003C\/strong\u003E. These schemes allow us to efficiently communicate and align visual observations between moving agents at \u003Cem\u003Ea single instant in time\u003C\/em\u003E. We demonstrate improved egocentric semantic segmentation accuracy for a swarm of obstruction-prone aerial quadrotors. Second, we extend our methods to \u003Cstrong\u003E\u003Cem\u003Eshort-term collaboration\u003C\/em\u003E\u003C\/strong\u003E, where robots compress short sequences of observations into easily communicable representations, such as maps and costmaps. In addition to conserving bandwidth, our methods (1) reduce task duration for multi-robot mapping and (2) improve safety for planning, especially in self-driving settings where egocentric vision has potentially fatal blind spots. Third, we stress-test our methods on \u003Cstrong\u003E\u003Cem\u003Elarge-scale, long-term collaboration\u003C\/em\u003E\u003C\/strong\u003E. In the setting of production-scale robotic farming, we show how collaborative perception can handle \u003Cem\u003Elarge\u003C\/em\u003E numbers of \u003Cem\u003Eheterogeneous\u003C\/em\u003E robots by corresponding \u003Cem\u003Eambiguous\u003C\/em\u003E data over \u003Cem\u003Elong\u003C\/em\u003E stretches of time. We hope that our work on collaborative perception will help transition single-agent systems to robust and efficient multi-agent capabilities.\u003C\/p\u003E\r\n","summary":"","format":"limited_html"}],"field_subtitle":"","field_summary":[{"value":"\u003Cp\u003ECollaborative Perception and Planning for Multi-View and Multi-Robot Systems\u003C\/p\u003E\r\n","format":"limited_html"}],"field_summary_sentence":[{"value":"Collaborative Perception and Planning for Multi-View and Multi-Robot Systems"}],"uid":"27707","created_gmt":"2023-11-06 20:36:52","changed_gmt":"2023-11-06 20:36:52","author":"Tatianna Richardson","boilerplate_text":"","field_publication":"","field_article_url":"","field_event_time":{"event_time_start":"2023-11-17T13:00:00-05:00","event_time_end":"2023-11-17T15:00:00-05:00","event_time_end_last":"2023-11-17T15:00:00-05:00","gmt_time_start":"2023-11-17 18:00:00","gmt_time_end":"2023-11-17 20:00:00","gmt_time_end_last":"2023-11-17 20:00:00","rrule":null,"timezone":"America\/New_York"},"location":"Coda C1315 (\u0022Grant Park\u0022) or this zoom link","extras":[],"groups":[{"id":"221981","name":"Graduate Studies"}],"categories":[],"keywords":[{"id":"100811","name":"Phd 
Defense"}],"core_research_areas":[],"news_room_topics":[],"event_categories":[{"id":"1788","name":"Other\/Miscellaneous"}],"invited_audience":[{"id":"78771","name":"Public"}],"affiliations":[],"classification":[],"areas_of_expertise":[],"news_and_recent_appearances":[],"phone":[],"contact":[],"email":[],"slides":[],"orientation":[],"userdata":""}}}