Exploring the Effect of Automation Failure on the Human’s Trustworthiness in Human-Agent Teamwork

Master Thesis (2022)
Author(s)

N.H. Bouman (TU Delft - Electrical Engineering, Mathematics and Computer Science)

Contributor(s)

M.L. Tielman – Mentor (TU Delft - Interactive Intelligence)

Carolina Centeio Jorge – Mentor (TU Delft - Interactive Intelligence)

C.M. Jonker – Graduation committee member (TU Delft - Interactive Intelligence)

Jie Yang – Graduation committee member (TU Delft - External organisation)

Faculty
Electrical Engineering, Mathematics and Computer Science
Publication Year
2022
Language
English
Copyright
© 2022 Nikki Bouman
Graduation Date
07-11-2022
Awarding Institution
Delft University of Technology
Programme
Computer Science and Engineering
Reuse Rights

Other than for strictly personal use, it is not permitted to download, forward or distribute the text or part of it, without the consent of the author(s) and/or copyright holder(s), unless the work is under an open content license such as Creative Commons.

Abstract

Collaboration in teams composed of both humans and automations is interdependent by nature, which demands calibrated trust among all team members. To build suitable autonomous teammates, we need to study how trust and trustworthiness function in such teams. In particular, automations occasionally fail to do their job, which leads to a decrease in the human’s trust. However, prior research has reported contradictory findings about the effects of such a reduction in trust on the human’s trustworthiness, i.e., the human’s characteristics that make them more or less reliable to the automation. This study therefore investigates how automation failure in a human-automation teamwork scenario affects the human’s trust in the automation and the human’s trustworthiness towards the automation. We present a between-subjects controlled experiment in which participants perform a simulated task in a 2D grid-world, collaborating with an automation in a “moving-out” scenario. During the experiment, we measure the participants’ trust and trustworthiness regarding the automation both subjectively and objectively. Our results show that automation failure negatively affects the human’s trustworthiness, as well as their trust in and liking of the automation. Understanding the effects of automation failure on trust and trustworthiness can contribute to a better grasp of the nature and dynamics of trust in these teams, help foresee undesirable consequences, and improve human-automation teamwork.
