Communicating Trust Beliefs and Decisions in Human-AI Teams
T. Şahin (TU Delft - Electrical Engineering, Mathematics and Computer Science)
Myrthe Lotte Tielman – Mentor (TU Delft - Interactive Intelligence)
C. Centeio Jorge – Mentor (TU Delft - Interactive Intelligence)
Ujwal Gadiraju – Graduation committee member (TU Delft - Web Information Systems)
Other than for strictly personal use, it is not permitted to download, forward or distribute the text or part of it, without the consent of the author(s) and/or copyright holder(s), unless the work is under an open content license such as Creative Commons.
Abstract
As technological capabilities progress, there is a growing imperative to enhance collaborative dynamics within human-agent teams. Artificial agents and humans possess capabilities that compensate for each other's limitations. This paper examines the effect of communicating, through real-time textual explanations, the artificial agent's mental model of its trust in the human teammate. An experiment (n = 40) was conducted to assess the impact of these explanations of the artificial agent's trust beliefs. Participants collaborated with an artificial agent on a search and rescue mission in a 2D grid world, in which they had to rescue six victims within a 10-minute time frame. The results show that real-time textual explanations positively affected natural trust and satisfaction compared to the baseline condition.