{"671269":{"#nid":"671269","#data":{"type":"event","title":"PhD Defense by Chia-Wen Kuo","body":[{"value":"\u003Cp\u003E\u003Cspan\u003E\u003Cspan\u003E\u003Cspan\u003EYou are cordially invited to attend my dissertation defense on Wednesday, November 29th.\u003C\/span\u003E\u003C\/span\u003E\u003C\/span\u003E\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u003Cspan\u003E\u003Cspan\u003E\u0026nbsp;\u003C\/span\u003E\u003C\/span\u003E\u003C\/p\u003E\r\n\r\n\u003Cul\u003E\r\n\t\u003Cli\u003E\u003Cspan\u003E\u003Cspan\u003E\u003Cspan\u003E\u003Cspan\u003E\u003Cstrong\u003ETitle\u003C\/strong\u003E: Knowledge-Augmented Vision-and-Language Assistant\u003C\/span\u003E\u003C\/span\u003E\u003C\/span\u003E\u003C\/span\u003E\u003C\/li\u003E\r\n\t\u003Cli\u003E\u003Cspan\u003E\u003Cspan\u003E\u003Cspan\u003E\u003Cspan\u003E\u003Cstrong\u003EDate\u003C\/strong\u003E: Wednesday, November 29th, 2023\u003C\/span\u003E\u003C\/span\u003E\u003C\/span\u003E\u003C\/span\u003E\u003C\/li\u003E\r\n\t\u003Cli\u003E\u003Cspan\u003E\u003Cspan\u003E\u003Cspan\u003E\u003Cspan\u003E\u003Cstrong\u003ETime\u003C\/strong\u003E: 11:00 AM - 12:30 PM PST\u003C\/span\u003E\u003C\/span\u003E\u003C\/span\u003E\u003C\/span\u003E\u003C\/li\u003E\r\n\t\u003Cli\u003E\u003Cspan\u003E\u003Cspan\u003E\u003Cspan\u003E\u003Cspan\u003ELocation:\u0026nbsp;\u003Ca href=\u0022https:\/\/gatech.zoom.us\/j\/4326036450\u0022 title=\u0022https:\/\/gatech.zoom.us\/j\/4326036450\u0022\u003E\u003Cspan\u003Ethis zoom link\u003C\/span\u003E\u003C\/a\u003E\u003C\/span\u003E\u003C\/span\u003E\u003C\/span\u003E\u003C\/span\u003E\u003C\/li\u003E\r\n\u003C\/ul\u003E\r\n\r\n\u003Cp\u003E\u003Cspan\u003E\u003Cspan\u003E\u0026nbsp;\u003C\/span\u003E\u003C\/span\u003E\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u003Cspan\u003E\u003Cspan\u003E\u003Cstrong\u003E\u003Cspan\u003EChia-Wen Kuo\u003C\/span\u003E\u003C\/strong\u003E\u003C\/span\u003E\u003C\/span\u003E\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u003Cspan\u003E\u003Cspan\u003E\u003Cspan\u003ERobotics 
PhD Candidate\u003C\/span\u003E\u003C\/span\u003E\u003C\/span\u003E\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u003Cspan\u003E\u003Cspan\u003E\u003Cspan\u003ESchool of Electrical and Computer Engineering\u003C\/span\u003E\u003C\/span\u003E\u003C\/span\u003E\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u003Cspan\u003E\u003Cspan\u003E\u003Cspan\u003EGeorgia Institute of Technology\u003C\/span\u003E\u003C\/span\u003E\u003C\/span\u003E\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u003Cspan\u003E\u003Cspan\u003E\u0026nbsp;\u003C\/span\u003E\u003C\/span\u003E\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u003Cspan\u003E\u003Cspan\u003E\u003Cstrong\u003E\u003Cspan\u003ECommittee\u003C\/span\u003E\u003C\/strong\u003E\u003Cspan\u003E:\u003C\/span\u003E\u003C\/span\u003E\u003C\/span\u003E\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u003Cspan\u003E\u003Cspan\u003E\u003Cspan\u003EDr. Zsolt Kira (Advisor) - School of Interactive Computing, Georgia Institute of Technology\u003C\/span\u003E\u003C\/span\u003E\u003C\/span\u003E\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u003Cspan\u003E\u003Cspan\u003E\u003Cspan\u003EDr. Chao Zhang - School of Computational Science and Engineering, Georgia Institute of Technology\u003C\/span\u003E\u003C\/span\u003E\u003C\/span\u003E\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u003Cspan\u003E\u003Cspan\u003E\u003Cspan\u003EDr. Chunyuan Li - Principal Research Scientist, Microsoft Research\u003C\/span\u003E\u003C\/span\u003E\u003C\/span\u003E\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u003Cspan\u003E\u003Cspan\u003E\u003Cspan\u003EDr. Judy Hoffman - School of Interactive Computing, Georgia Institute of Technology\u003C\/span\u003E\u003C\/span\u003E\u003C\/span\u003E\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u003Cspan\u003E\u003Cspan\u003E\u003Cspan\u003EDr. 
Larry Heck - School of Electrical and Computer Engineering, Georgia Institute of Technology\u003C\/span\u003E\u003C\/span\u003E\u003C\/span\u003E\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u003Cspan\u003E\u003Cspan\u003E\u0026nbsp;\u003C\/span\u003E\u003C\/span\u003E\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u003Cspan\u003E\u003Cspan\u003E\u003Cstrong\u003E\u003Cspan\u003EAbstract\u003C\/span\u003E\u003C\/strong\u003E\u003Cspan\u003E:\u003C\/span\u003E\u003C\/span\u003E\u003C\/span\u003E\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u003Cspan\u003E\u003Cspan\u003E\u003Cspan\u003EThe fusion of vision and language (VL) in artificial intelligence represents a crucial advancement in the creation of truly intelligent systems, echoing a fundamental aspect of human cognition: the ability to see and articulate the world. This integration has transformative potential across various sectors, notably enhancing human interaction with technology. However, developing effective VL models is challenging due to knowledge that is often incomplete or missing in both the vision and language components. This limitation impacts the models\u0027 ability to accurately describe visual content and answer complex, real-world questions. My research, presented in a series of three works, addresses these challenges. The first work, Xmodal-Ctx, introduces external knowledge into VL models to overcome their contextual limitations. The second, HAAV, expands this by integrating a diverse array of knowledge sources, enhancing the models\u0027 understanding of visual content. The final work, K-Aug, scales these concepts to larger, more complex multimodal models, addressing the integration and application of high-quality knowledge sources. 
This structured approach aims to bridge the knowledge gaps in VL models, thereby enhancing their overall interpretative and descriptive capabilities in a context-rich and linguistically coherent manner.\u003C\/span\u003E\u003C\/span\u003E\u003C\/span\u003E\u003C\/p\u003E\r\n\r\n\u003Cp\u003E\u0026nbsp;\u003C\/p\u003E\r\n","summary":"","format":"limited_html"}],"field_subtitle":"","field_summary":[{"value":"\u003Cp\u003E\u003Cspan\u003E\u003Cspan\u003E\u003Cspan\u003EKnowledge-Augmented Vision-and-Language Assistant\u003C\/span\u003E\u003C\/span\u003E\u003C\/span\u003E\u003C\/p\u003E\r\n","format":"limited_html"}],"field_summary_sentence":[{"value":"Knowledge-Augmented Vision-and-Language Assistant"}],"uid":"27707","created_gmt":"2023-11-27 21:32:25","changed_gmt":"2023-11-27 21:32:25","author":"Tatianna Richardson","boilerplate_text":"","field_publication":"","field_article_url":"","field_event_time":{"event_time_start":"2023-11-29T11:00:00-05:00","event_time_end":"2023-11-29T12:30:00-05:00","event_time_end_last":"2023-11-29T12:30:00-05:00","gmt_time_start":"2023-11-29 16:00:00","gmt_time_end":"2023-11-29 17:30:00","gmt_time_end_last":"2023-11-29 17:30:00","rrule":null,"timezone":"America\/New_York"},"location":"Virtual","extras":[],"groups":[{"id":"221981","name":"Graduate Studies"}],"categories":[],"keywords":[{"id":"100811","name":"Phd Defense"}],"core_research_areas":[],"news_room_topics":[],"event_categories":[{"id":"1788","name":"Other\/Miscellaneous"}],"invited_audience":[{"id":"78771","name":"Public"}],"affiliations":[],"classification":[],"areas_of_expertise":[],"news_and_recent_appearances":[],"phone":[],"contact":[],"email":[],"slides":[],"orientation":[],"userdata":""}}}