Communicating trust-based beliefs and decisions in human-AI teams using visual summaries of explanations
S. Marossi (TU Delft - Electrical Engineering, Mathematics and Computer Science)
C. Centeio Jorge – Mentor (TU Delft - Interactive Intelligence)
Myrthe Lotte Tielman – Mentor (TU Delft - Interactive Intelligence)
Ujwal Gadiraju – Graduation committee member (TU Delft - Web Information Systems)
Abstract
Human-agent teams (HATs) are becoming increasingly prevalent, making mutual trust between humans and machines essential. This trust takes two forms: artificial trust (agents trusting humans) and natural trust (humans trusting agents), and both must be fostered for effective teamwork. It is hypothesized that effectively communicating artificial trust helps develop natural trust and overall satisfaction. A visual summary of explanations is proposed as a suitable communication method: summaries support in-depth information processing, while visual representations are quick to interpret. This paper examines how communicating the agent’s trust beliefs through a visual summary of explanations affects the human teammate’s natural trust and overall satisfaction within HATs. An experiment (n=40) was conducted to study this effect. Participants collaborated with an artificial agent on an urban search and rescue task in a simulated 2D grid-world environment. Results show that including a visual summary increases the human teammate’s trust in the agent as well as their overall satisfaction. The paper emphasizes the need for further longitudinal research to measure the long-term effectiveness of communicating artificial trust.