<node id="671029">
  <nid>671029</nid>
  <type>event</type>
  <uid>
    <user id="27707"><![CDATA[27707]]></user>
  </uid>
  <created>1699887899</created>
  <changed>1699887899</changed>
  <title><![CDATA[PhD Defense by Yen-Cheng Liu]]></title>
  <body><![CDATA[<p><strong>Title: </strong>Efficient Visual Learning for Scene Understanding</p>

<p>&nbsp;</p>

<p><strong>Date: </strong>Tuesday, November 21, 2023</p>

<p><strong>Time: </strong>12:00 - 1:30 pm EST / 9:00 - 10:30 am PST</p>

<p><strong>Location:</strong> <a href="https://gatech.zoom.us/j/7745230525">https://gatech.zoom.us/j/7745230525</a></p>

<p>&nbsp;</p>

<p><strong>Yen-Cheng Liu</strong></p>

<p>Machine Learning PhD Candidate</p>

<p>School of Electrical and Computer Engineering</p>

<p>Georgia Institute of Technology</p>

<p>&nbsp;</p>

<p><strong>Committee</strong></p>

<ol>
	<li>Dr. Zsolt Kira (Advisor), School of Interactive Computing, Georgia Tech</li>
	<li>Dr. Judy Hoffman, School of Interactive Computing, Georgia Tech</li>
	<li>Dr. Larry Heck, School of Electrical and Computer Engineering and School of Interactive Computing, Georgia Tech</li>
	<li>Dr. Mark Davenport, School of Electrical and Computer Engineering, Georgia Tech</li>
	<li>Dr. Diyi Yang, Computer Science Department, Stanford University</li>
</ol>

<p>&nbsp;</p>

<p><strong>Abstract</strong></p>

<p>Significant advancements in scene understanding have been driven by deep neural networks. These learning-based frameworks improve performance through extensive training datasets and large numbers of trainable parameters. However, this reliance limits scalability and demands substantial computational and financial resources. This dissertation investigates two aspects of efficient visual learning for scene understanding: label-efficient learning and parameter-efficient learning. To reduce label supervision in instance-level scene understanding tasks, we develop a series of semi-supervised learning frameworks that improve label efficiency across diverse detector architectures and unconstrained data settings. To reduce parameter usage in multi-task training, we re-evaluate parameter-efficient methods from NLP for scene understanding and then propose a more parameter-efficient method for vision architectures. These advancements demonstrate the practicality and adaptability of efficient learning frameworks in diverse, resource-constrained environments.</p>
]]></body>
  <field_summary_sentence>
    <item>
      <value><![CDATA[Efficient Visual Learning for Scene Understanding]]></value>
    </item>
  </field_summary_sentence>
  <field_summary>
    <item>
      <value><![CDATA[<p>Efficient Visual Learning for Scene Understanding</p>
]]></value>
    </item>
  </field_summary>
  <field_time>
    <item>
      <value><![CDATA[2023-11-21T12:00:00-05:00]]></value>
      <value2><![CDATA[2023-11-21T13:30:00-05:00]]></value2>
      <rrule><![CDATA[]]></rrule>
      <timezone><![CDATA[America/New_York]]></timezone>
    </item>
  </field_time>
  <field_fee>
    <item>
      <value><![CDATA[]]></value>
    </item>
  </field_fee>
  <field_extras>
      </field_extras>
  <field_audience>
          <item>
        <value><![CDATA[Public]]></value>
      </item>
      </field_audience>
  <field_media>
      </field_media>
  <field_contact>
    <item>
      <value><![CDATA[]]></value>
    </item>
  </field_contact>
  <field_location>
    <item>
      <value><![CDATA[ZOOM]]></value>
    </item>
  </field_location>
  <field_sidebar>
    <item>
      <value><![CDATA[]]></value>
    </item>
  </field_sidebar>
  <field_phone>
    <item>
      <value><![CDATA[]]></value>
    </item>
  </field_phone>
  <field_url>
    <item>
      <url><![CDATA[]]></url>
      <title><![CDATA[]]></title>
            <attributes><![CDATA[]]></attributes>
    </item>
  </field_url>
  <field_email>
    <item>
      <email><![CDATA[]]></email>
    </item>
  </field_email>
  <field_boilerplate>
    <item>
      <nid><![CDATA[]]></nid>
    </item>
  </field_boilerplate>
  <links_related>
      </links_related>
  <files>
      </files>
  <og_groups>
          <item>221981</item>
      </og_groups>
  <og_groups_both>
          <item><![CDATA[Graduate Studies]]></item>
      </og_groups_both>
  <field_categories>
          <item>
        <tid>1788</tid>
        <value><![CDATA[Other/Miscellaneous]]></value>
      </item>
      </field_categories>
  <field_keywords>
          <item>
        <tid>100811</tid>
        <value><![CDATA[PhD Defense]]></value>
      </item>
      </field_keywords>
  <userdata><![CDATA[]]></userdata>
</node>
