Modelling Trust in Human-AI Interaction

Doctoral Consortium

Abstract (2021)
Author(s)

S. Mehrotra (TU Delft - Interactive Intelligence)

Research Group
Interactive Intelligence
Publication Year
2021
Language
English
Pages (from-to)
1826-1828
Reuse Rights

Other than for strictly personal use, it is not permitted to download, forward or distribute the text or part of it, without the consent of the author(s) and/or copyright holder(s), unless the work is under an open content license such as Creative Commons.

Abstract

Trust is an important element of any interaction, but it is especially important when we interact with a piece of technology that does not think as we do. AI systems therefore need to understand how humans trust them, and what can be done to promote appropriate trust. The aim of this research is to study trust through both a formal and a social lens. We will work on formal models of trust, with a focus on the social nature of trust, in order to represent how humans trust AI. We will then employ methods from human-computer interaction research to study whether these models work in practice, and what is ultimately necessary for systems to elicit appropriate levels of trust from their users. The context of this research is AI agents that interact with their users to offer personal support.
