Exploring the Effect of Automation Failure on the Human’s Trustworthiness in Human-Agent Teamwork

Abstract

Collaboration in teams composed of both humans and automations is interdependent by nature, which demands calibrated trust among all team members. To build suitable autonomous teammates, we need to study how trust and trustworthiness function in such teams. In particular, automations occasionally fail to do their job, which leads to a decrease in the human’s trust. However, prior research has reported contradictory findings on the effects of such a reduction of trust on the human’s trustworthiness, i.e. the human characteristics that make them more or less reliable to the automation. This study therefore investigates how automation failure in a human-automation teamwork scenario affects the human’s trust in the automation as well as the human’s trustworthiness towards the automation. We present a between-subjects controlled experiment in which participants perform a simulated task in a 2D grid-world, collaborating with an automation in a “moving-out” scenario. During the experiment, we measure the participants’ trust and trustworthiness regarding the automation both subjectively and objectively. Our results show that automation failure negatively affects the human’s trustworthiness, as well as their trust in and liking of the automation. Understanding the effects of automation failure on trust and trustworthiness can contribute to a better understanding of the nature and dynamics of trust in these teams, help foresee undesirable consequences, and improve human-automation teamwork.