<node id="669222">
  <nid>669222</nid>
  <type>event</type>
  <uid>
    <user id="27707"><![CDATA[27707]]></user>
  </uid>
  <created>1693256656</created>
  <changed>1695057982</changed>
  <title><![CDATA[PhD Proposal by Upol Ehsan]]></title>
  <body><![CDATA[<p><strong>Title:</strong> Human-centered Explainable AI</p>

<p><strong>Date:</strong> September 21, 2023</p>

<p><strong>Time:</strong> 4:00pm – 6:00pm EDT</p>

<p><strong>Location (fully virtual):</strong> <a href="https://us02web.zoom.us/j/89017607101?pwd=aGxtTEZjM0dna0VlTEpZcUtoWVpodz09" title="https://us02web.zoom.us/j/89017607101?pwd=aGxtTEZjM0dna0VlTEpZcUtoWVpodz09">Zoom Link</a> | Meeting ID: 890 1760 7101 | Passcode: 860583</p>

<p><strong>Upol Ehsan</strong></p>

<p>PhD Student in Computer Science</p>

<p>School of Interactive Computing</p>

<p>Georgia Institute of Technology</p>

<p><strong>Committee:</strong></p>

<p>Dr. Mark O. Riedl (advisor) – School of Interactive Computing, Georgia Institute of Technology</p>

<p>Dr. Munmun De Choudhury – School of Interactive Computing, Georgia Institute of Technology</p>

<p>Dr. Sashank Varma – School of Interactive Computing and School of Psychology, Georgia Institute of Technology</p>

<p>Dr. Q. Vera Liao – Microsoft Research</p>

<p>Dr. Michael Muller – IBM Research</p>

<p><strong>Summary:</strong></p>

<p>If AI systems are going to inform consequential decisions such as deciding whether you should get a loan or receive an organ transplant, they must be explainable to <em>everyone</em>, not just software engineers. Despite commendable technical progress in “opening” the black box of AI, the prevailing algorithm-centered Explainable AI (XAI) view overlooks a vital insight: <em>who</em> opens the black box matters just as much as opening it. As a result of this blind spot, many popular XAI interventions have been ineffective and even harmful in real-world settings.</p>

<p>To address this blind spot, my dissertation introduces and operationalizes <strong><em>Human-centered XAI</em></strong><em> (HCXAI)</em>, a holistic, sociotechnical, and human-centered paradigm of AI explainability.</p>

<p><strong>Thesis statement:</strong> With a focus on non-AI experts, this dissertation demonstrates how Human-centered XAI:</p>

<ol>
	<li>expands the design space of XAI by broadening the domain of non-algorithmic factors that augment AI explainability;</li>
	<li>enriches our knowledge of the importance of “who” the humans are in XAI design; and</li>
	<li>enables resourceful ways to do Responsible AI by providing proactive mitigation strategies through participatory methods.</li>
</ol>

<p>It contributes 1) <em>conceptually:</em> new concepts such as Social Transparency that showcase <em>how</em> to encode socio-organizational context to augment explainability without changing the internal model; 2) <em>methodologically:</em> human-centered evaluation of XAI, actionable frameworks, and participatory methods to co-design XAI systems; 3) <em>technically:</em> computational techniques and design artifacts; and 4) <em>empirically:</em> findings such as how one’s AI background impacts one’s interpretation of AI explanations, perceptions of AI among real users, and how AI explanations can negatively impact users despite our best intentions.</p>


<p>The proposed work takes a participatory approach to extend Social Transparency into Radiation Oncology, a high-stakes and complex domain. The goal is to extend Social Transparency conceptually and practically while gaining a deeper understanding of the XAI needs of Radiation Oncologists to inform the HCXAI design of future systems.</p>


<p>The dissertation expands the XAI discourse from an algorithm-centered perspective to a human-centered one. It takes a foundational step towards creating a future where <em>anyone, regardless of their background, can interact with AI systems in an explainable, accountable, and dignified manner.</em></p>

]]></body>
  <field_summary_sentence>
    <item>
      <value><![CDATA[Human-centered Explainable AI ]]></value>
    </item>
  </field_summary_sentence>
  <field_summary>
    <item>
      <value><![CDATA[<p>Human-centered Explainable AI</p>]]></value>
    </item>
  </field_summary>
  <field_time>
    <item>
      <value><![CDATA[2023-09-21T16:00:00-04:00]]></value>
      <value2><![CDATA[2023-09-21T18:00:00-04:00]]></value2>
      <rrule><![CDATA[]]></rrule>
      <timezone><![CDATA[America/New_York]]></timezone>
    </item>
  </field_time>
  <field_fee>
    <item>
      <value><![CDATA[]]></value>
    </item>
  </field_fee>
  <field_extras>
      </field_extras>
  <field_audience>
          <item>
        <value><![CDATA[Public]]></value>
      </item>
      </field_audience>
  <field_media>
      </field_media>
  <field_contact>
    <item>
      <value><![CDATA[]]></value>
    </item>
  </field_contact>
  <field_location>
    <item>
      <value><![CDATA[REMOTE]]></value>
    </item>
  </field_location>
  <field_sidebar>
    <item>
      <value><![CDATA[]]></value>
    </item>
  </field_sidebar>
  <field_phone>
    <item>
      <value><![CDATA[]]></value>
    </item>
  </field_phone>
  <field_url>
    <item>
      <url><![CDATA[]]></url>
      <title><![CDATA[]]></title>
            <attributes><![CDATA[]]></attributes>
    </item>
  </field_url>
  <field_email>
    <item>
      <email><![CDATA[]]></email>
    </item>
  </field_email>
  <field_boilerplate>
    <item>
      <nid><![CDATA[]]></nid>
    </item>
  </field_boilerplate>
  <links_related>
      </links_related>
  <files>
      </files>
  <og_groups>
          <item>221981</item>
      </og_groups>
  <og_groups_both>
          <item><![CDATA[Graduate Studies]]></item>
      </og_groups_both>
  <field_categories>
          <item>
        <tid>1788</tid>
        <value><![CDATA[Other/Miscellaneous]]></value>
      </item>
      </field_categories>
  <field_keywords>
          <item>
        <tid>102851</tid>
        <value><![CDATA[Phd proposal]]></value>
      </item>
      </field_keywords>
  <userdata><![CDATA[]]></userdata>
</node>
