Human-agent teamwork (HAT) is becoming increasingly prevalent in fields such as search and rescue (SAR), where effective collaboration between humans and artificial agents is crucial. Previous studies have shown that trust plays a pivotal role in the success of HATs, influencing decision-making, communication, and potentially overall team performance.
This research investigates how agent-provided explanations about the agent's trust in humans (artificial trust), and the corresponding changes in its behavior, affect human trust in the agent and satisfaction with the explanations during a simulated SAR task. Two types of explanations were explored: Trust-Explained (TE) explanations, in which the agent explains its trust level and trust-based decisions, and Trust-Unexplained (TU) explanations, which solely describe the agent's behavior without reference to trust dynamics. In addition, this research investigates the correlation between human trust and explanation satisfaction, and finally, whether differences in the provided explanations lead to differences in team performance and artificial trust.
The study involved 40 participants divided into two groups: an experimental group (the trust-enhanced explanation group) receiving TE explanations and a control group (the non-trust explanation group) receiving TU explanations. Participants' trust in the agent, their satisfaction with the explanations, team performance, and artificial trust were measured and analyzed. Contrary to initial expectations, no statistically significant differences in explanation satisfaction or human trust in the agent were found between the two groups. However, a strong positive correlation was observed between participants' satisfaction with the explanations and their trust in the agent, indicating that explanation quality plays a crucial role in the development of human trust. Furthermore, no significant differences in team performance were detected, suggesting that trust explanations may not directly influence task outcomes. In the analysis of artificial trust, the agent in the trust-enhanced explanation group exhibited more conservative adjustments in trust levels than the agent in the non-trust explanation group. This conservative approach may have led participants in the trust-enhanced explanation group to adopt a more cautious or deliberate decision-making process, potentially prioritizing comprehension of the explanations over optimization of task performance.
Future research could delve deeper into the influence of trust explanations on user behavior, more complex HAT task environments, the relationship between artificial trust and user behavior, dynamic and adaptive explanations, and the causal relationship between explanation satisfaction and human trust in the agent, to further understand how trust can be fostered in HATs.