The Impact of Explanations of Artificial Trust on Human-Agent Teamwork

Master Thesis (2024)
Author(s)

Z. Guan (TU Delft - Electrical Engineering, Mathematics and Computer Science)

Contributor(s)

M.L. Tielman – Mentor (TU Delft - Interactive Intelligence)

M.A. Neerincx – Graduation committee member (TU Delft - Interactive Intelligence)

Ujwal Gadiraju – Graduation committee member (TU Delft - Web Information Systems)

Ruben S. Verhagen – Graduation committee member (TU Delft - Interactive Intelligence)

Faculty
Electrical Engineering, Mathematics and Computer Science
Publication Year
2024
Language
English
Graduation Date
17-12-2024
Awarding Institution
Delft University of Technology
Programme
Computer Science | Artificial Intelligence
Reuse Rights

Other than for strictly personal use, it is not permitted to download, forward or distribute the text or part of it, without the consent of the author(s) and/or copyright holder(s), unless the work is under an open content license such as Creative Commons.

Abstract

Human-agent teamwork (HAT) is becoming increasingly prevalent in fields such as search and rescue (SAR), where effective collaboration between humans and artificial agents is crucial. Previous studies have shown that trust plays a pivotal role in the success of HATs, influencing decision-making, communication, and potentially overall team performance.

This research investigates how agent-provided explanations of the agent's trust in humans (artificial trust), and the corresponding changes in its behavior, affect human trust in the agent and satisfaction with the explanations during a simulated SAR task. Two types of explanations were explored: Trust-Explained (TE) explanations, where the agent explains its trust level and trust-based decisions, and Trust-Unexplained (TU) explanations, which solely describe the agent's behavior without reference to trust dynamics. In addition, this research investigates the correlation between human trust and explanation satisfaction, and, finally, whether differences in the provided explanations lead to differences in team performance and artificial trust.

The study involved 40 participants divided into two groups: an experimental group (the trust-enhanced explanation group) receiving TE explanations and a control group (the non-trust explanation group) receiving TU explanations. Participants' trust in the agent, their satisfaction with the explanations, team performance, and artificial trust were measured and analyzed. Contrary to initial expectations, no statistically significant differences in explanation satisfaction or human trust in the agent were found between the two groups. However, a strong positive correlation was observed between participants' satisfaction with the explanations and their trust in the agent, indicating that explanation quality plays a crucial role in the development of human trust. Furthermore, no significant differences in team performance were detected, suggesting that trust explanations may not directly influence task outcomes. In the analysis of artificial trust, the agent in the trust-enhanced explanation group exhibited more conservative adjustments in trust levels than the agent in the non-trust explanation group. This conservative approach may have led players in the trust-enhanced explanation group to adopt a more cautious or deliberate decision-making process, potentially prioritizing the comprehension of explanations over the optimization of task performance.

Future research could delve deeper into the influence of trust explanations on user behavior, more complex HAT task environments, the relationship between artificial trust and user behavior, dynamic and adaptive explanations, and the causal relationship between explanation satisfaction and human trust in the agent, in order to further understand how trust can be fostered in HAT.

Files

My_thesis_version_6_27_.pdf
(pdf | 4.91 Mb)
License info not available