<node id="667494">
  <nid>667494</nid>
  <type>event</type>
  <uid>
    <user id="28475"><![CDATA[28475]]></user>
  </uid>
  <created>1682358940</created>
  <changed>1682447153</changed>
  <title><![CDATA[Ph.D. Dissertation Defense - Daehyun Kim]]></title>
  <body><![CDATA[<p><strong>Title:</strong> <em>Processing in memory architecture for neural networks and on-chip learning acceleration</em></p>

<p><strong>Committee:</strong></p>

<p>Dr. Saibal Mukhopadhyay, ECE, Chair, Advisor</p>

<p>Dr. Shimeng Yu, ECE</p>

<p>Dr. Tushar Krishna, ECE</p>

<p>Dr. Visvesh Sathe, ECE</p>

<p>Dr. Satish Kumar, ME</p>
]]></body>
  <field_summary_sentence>
    <item>
      <value><![CDATA[Processing in memory architecture for neural networks and on-chip learning acceleration]]></value>
    </item>
  </field_summary_sentence>
  <field_summary>
    <item>
      <value><![CDATA[<p>Today, numerous devices such as smart cameras, smart speakers, and wearables leverage machine learning. However, the neural networks they employ can be computationally expensive and demand substantial memory capacity. These devices also benefit from personalized neural networks for greater accuracy, but power, performance, area, and memory constraints make it challenging for edge devices to train such networks independently. Most devices therefore connect to a server for neural network training, which raises issues of security, data privacy, reliability, and compatibility. As edge devices become more prevalent, addressing these challenges is critical for the continued growth of machine learning applications across many fields.</p>

<p>The objective of this research is to show that a processing-in-memory (PIM) architecture with on-chip learning can accelerate machine learning applications with high performance and computational efficiency. By allowing computation to occur within the memory itself rather than in a traditional Von Neumann architecture, PIM reduces data movement and energy consumption while improving overall performance. On-chip learning, in turn, enables machine learning models to be trained locally on the device, mitigating the security, data privacy, reliability, and compatibility issues that stem from server-side training. By combining PIM architecture with on-chip learning, this research aims to demonstrate a viable solution for efficient and secure machine learning on resource-constrained devices.</p>

<p>This thesis explores the potential of PIM architecture with on-chip learning to meet the growing demand for efficient neural network hardware. By developing prototype chips and demonstrating their capabilities, this work contributes to the advancement of neural network hardware design and provides valuable insights for future research in this area.</p>
]]></value>
    </item>
  </field_summary>
  <field_time>
    <item>
      <value><![CDATA[2023-04-27T09:00:00-04:00]]></value>
      <value2><![CDATA[2023-04-27T11:00:00-04:00]]></value2>
      <rrule><![CDATA[]]></rrule>
      <timezone><![CDATA[America/New_York]]></timezone>
    </item>
  </field_time>
  <field_fee>
    <item>
      <value><![CDATA[]]></value>
    </item>
  </field_fee>
  <field_extras>
      </field_extras>
  <field_audience>
          <item>
        <value><![CDATA[Public]]></value>
      </item>
      </field_audience>
  <field_media>
      </field_media>
  <field_contact>
    <item>
      <value><![CDATA[]]></value>
    </item>
  </field_contact>
  <field_location>
    <item>
      <value><![CDATA[Online]]></value>
    </item>
  </field_location>
  <field_sidebar>
    <item>
      <value><![CDATA[]]></value>
    </item>
  </field_sidebar>
  <field_phone>
    <item>
      <value><![CDATA[]]></value>
    </item>
  </field_phone>
  <field_url>
    <item>
      <url><![CDATA[]]></url>
      <title><![CDATA[]]></title>
            <attributes><![CDATA[]]></attributes>
    </item>
  </field_url>
  <field_email>
    <item>
      <email><![CDATA[]]></email>
    </item>
  </field_email>
  <field_boilerplate>
    <item>
      <nid><![CDATA[]]></nid>
    </item>
  </field_boilerplate>
  <links_related>
          <item>
        <url>https://teams.microsoft.com/l/meetup-join/19%3ameeting_M2U2NTExNDYtNzRmZC00YWUwLWJmN2QtNzQ0ZjU4OWJhZWQ2%40thread.v2/0?context=%7b%22Tid%22%3a%22482198bb-ae7b-4b25-8b7a-6d7f32faa083%22%2c%22Oid%22%3a%22426a92f0-977e-49fa-8078-3ccaebfa6e9c%22%7d</url>
        <link_title><![CDATA[Microsoft Teams Meeting link]]></link_title>
      </item>
      </links_related>
  <files>
      </files>
  <og_groups>
          <item>434381</item>
      </og_groups>
  <og_groups_both>
          <item><![CDATA[ECE Ph.D. Dissertation Defenses]]></item>
      </og_groups_both>
  <field_categories>
          <item>
        <tid>1788</tid>
        <value><![CDATA[Other/Miscellaneous]]></value>
      </item>
      </field_categories>
  <field_keywords>
          <item>
        <tid>192484</tid>
        <value><![CDATA[PhD Defense, graduate students]]></value>
      </item>
      </field_keywords>
  <userdata><![CDATA[]]></userdata>
</node>
