Designing Usable Explainable AI for Human–AI Disagreement and Trust
PhD project of Laura Spillner
When humans work together with AI, successful cooperation during joint decision making hinges on many factors, foremost among them mutual understanding and appropriate levels of trust. Recent developments in AI are largely built on deep neural networks, whose inner workings are not generally comprehensible to humans. The research field of explainable AI is concerned with making the output of such “black box” AI models more transparent. One goal is to make it easier for humans to notice errors or biased decisions by the AI and to judge more accurately whether a given output, prediction, or action by the AI is correct.
The question of whether one should trust AI output arises in many domains: In AI-assisted decision making, humans have to review automated decisions suggested by AI models and decide whether to go along with the suggested decision or overturn it. When everyday activities are supported by AI agents (which can range from chatbots and voice assistants to embodied robots), users must decide which tasks to delegate to the AI and when to trust its actions. And when people converse with chatbots to obtain information and advice, they must judge to what extent they can trust the generated output and when they should question its veracity.
In all of these cases, making explanations for AI output accessible to end users is not just a technical question but also a question of usability: how information is presented, how interaction is designed, how the user's existing knowledge is taken into account, and (in the case of natural language interaction) the language itself all shape our understanding and resulting behavior. This thesis addresses the problem of how to design usable explainable AI interaction, specifically in situations where human and AI disagree. How can we, on the one hand, model what knowledge and beliefs the user and the AI are likely to share and, on the other hand, understand where disagreement is likely to originate? How can the AI communicate its level of (un)certainty in its output? And what factors influence whether users trust the AI enough to change their mind based on its advice?
Contact
Prof. Dr. Andreas Hepp
ZeMKI, Center for Media, Communication and Information Research
University of Bremen
Phone: +49 421 218-67620
Assistant (Mrs. Schober): +49 421 218-67603
E-mail: andreas.hepp@uni-bremen.de