The impact of expressing emotion within explainable AI in human-agent teamwork

Abstract

With the increasing development of artificial intelligence (AI), there are growing opportunities for humans and agents to collaborate as a team. In Human-Agent Teamwork (HAT) settings, collaboration requires communication, and an agent's display of emotion can affect how human teammates communicate and work with it. This study investigated the impact of an explainable agent expressing emotion within its explanations in a teamwork setting. We examined how integrating an emotional component into an agent's explanations influences trust in the agent, humans' perceptions of the agent's anthropomorphism, animacy, and likeability, and overall team performance when collaborating with the agent. To this end, a pre-study was conducted using a focus-group meeting to identify which emotions are relevant to display in a simulated Search and Rescue (SAR) task and how these emotions can be incorporated into Explainable AI (XAI). Next, we conducted a between-subjects controlled experiment to study the effects of emotional components in explanations. Participants were divided into experimental and control groups and collaborated with an agent that either displayed emotion or did not. They carried out a SAR task in which they worked together with the agent to rescue victims. Our results confirmed that an agent displaying emotions increases perceived likeability, animacy, and anthropomorphism. Of these three, likeability and animacy are positively associated with trust, whereas an increase in anthropomorphism is associated with a decrease in trust. From the results, we could not conclude that team performance is directly affected by including emotion in explanations. However, the results showed that emotion increases the number of messages sent from the human to the agent, and this increase in communication led to higher team performance.