How Should Your Artificial Teammate Tell You How Much It Trusts You?

Conference Paper (2025)
Author(s)

C. Centeio Jorge (TU Delft - Interactive Intelligence)

Elena Dumitrescu (Student TU Delft)

C.M. Jonker (TU Delft - Interactive Intelligence)

Razvan Loghin (Student TU Delft)

Sahar Marossi (Student TU Delft)

Elena Uleia (Student TU Delft)

M.L. Tielman (TU Delft - Interactive Intelligence)

Research Group
Interactive Intelligence
DOI (related publication)
https://doi.org/10.1145/3717511.3747086
Publication Year
2025
Language
English
ISBN (print)
979-8-4007-1508-2
ISBN (electronic)
979-8-4007-1508-2
Reuse Rights

Other than for strictly personal use, it is not permitted to download, forward or distribute the text or part of it, without the consent of the author(s) and/or copyright holder(s), unless the work is under an open content license such as Creative Commons.

Abstract

Mutual trust between humans and interactive artificial agents is crucial for effective human-agent teamwork. This involves not only the human appropriately trusting the artificial teammate, but also the artificial teammate assessing the human's trustworthiness for different tasks (i.e., artificial trust in human partners). The literature indicates that transparency and explainability are generally beneficial for human-agent collaboration. However, communicating artificial trust may affect human trust and satisfaction, which in turn shape team dynamics. To study these effects, we developed an artificial trust model and implemented five distinct communication approaches that varied in modality (visual/graphical and/or text), level (communication and/or explanation), and timing (real-time or occasional). We evaluated the effects of these communication styles through a user study (N=120) in a 2D grid-world Search and Rescue scenario. Our results show that all of our artificial trust explanations improved human trust and satisfaction, whereas merely communicating trust graphically did not. These results are bound to the specific scenario and context in which the study was run and require further exploration. As such, this work presents a first step towards understanding the consequences of communicating and explaining to human teammates their assessed trustworthiness.
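The abstract does not specify the internal form of the artificial trust model or the exact five conditions, only the three dimensions along which they vary. Purely as a minimal illustrative sketch (the Beta-Bernoulli update, the task names, and all identifiers below are assumptions, not the paper's method), an agent could track per-task trust in its human partner from observed task outcomes and vary how that trust is communicated along those dimensions:

```python
from dataclasses import dataclass, field
from enum import Enum


# The three condition dimensions named in the abstract.
class Modality(Enum):
    GRAPHICAL = "graphical"          # e.g., a trust gauge in the interface
    TEXT = "text"                    # a natural-language message
    GRAPHICAL_AND_TEXT = "both"


class Level(Enum):
    COMMUNICATION = "communication"  # trust value only
    EXPLANATION = "explanation"      # trust value plus its grounds


class Timing(Enum):
    REAL_TIME = "real-time"
    OCCASIONAL = "occasional"


@dataclass
class ArtificialTrust:
    """Hypothetical per-task trust estimate in a human partner.

    A Beta-Bernoulli posterior mean over observed task outcomes;
    the paper's actual model may differ.
    """
    successes: dict[str, int] = field(default_factory=dict)
    failures: dict[str, int] = field(default_factory=dict)

    def update(self, task: str, success: bool) -> None:
        # Record one observed outcome for this task type.
        bucket = self.successes if success else self.failures
        bucket[task] = bucket.get(task, 0) + 1

    def trust(self, task: str) -> float:
        # Posterior mean under a uniform Beta(1, 1) prior.
        s = self.successes.get(task, 0)
        f = self.failures.get(task, 0)
        return (s + 1) / (s + f + 2)

    def message(self, task: str, level: Level) -> str:
        # "Communication" reports the value; "explanation" adds the evidence.
        t = self.trust(task)
        if level is Level.COMMUNICATION:
            return f"My trust in you for '{task}' is {t:.2f}."
        s = self.successes.get(task, 0)
        f = self.failures.get(task, 0)
        return (f"My trust in you for '{task}' is {t:.2f}, "
                f"based on the {s} successes and {f} failures I observed.")


# One condition might pair text explanations with real-time timing.
model = ArtificialTrust()
model.update("locate_victim", success=True)
model.update("locate_victim", success=False)
print(model.message("locate_victim", Level.EXPLANATION))
```

In this sketch the "explanation" level differs from plain "communication" only in exposing the evidence behind the estimate, which mirrors the contrast the study's conditions manipulate.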