The effects of an agent asking for help on a human's trustworthiness

Bachelor Thesis (2022)
Author(s)

C.A. Obame Obiang (TU Delft - Electrical Engineering, Mathematics and Computer Science)

Contributor(s)

C. Centeio Jorge – Mentor (TU Delft - Interactive Intelligence)

Myrthe L. Tielman – Mentor (TU Delft - Interactive Intelligence)

N. Tomen – Graduation committee member (TU Delft - Pattern Recognition and Bioinformatics)

Faculty
Electrical Engineering, Mathematics and Computer Science
Copyright
© 2022 Christopher Obame Obiang
Publication Year
2022
Language
English
Graduation Date
23-06-2022
Awarding Institution
Delft University of Technology
Project
CSE3000 Research Project
Programme
Computer Science and Engineering
Reuse Rights

Other than for strictly personal use, it is not permitted to download, forward or distribute the text or part of it, without the consent of the author(s) and/or copyright holder(s), unless the work is under an open content license such as Creative Commons.

Abstract

AI systems can complete certain tasks with greater precision and speed than humans, which has led to an increase in their use. These systems are often teamed with humans in order to take advantage of the unique abilities of both the AI and the human. However, to make this cooperation as efficient as possible, there needs to be mutual trust between humans and AIs. While much research has addressed human trust in artificial agents, little work has examined the trust that an artificial agent has toward its human partner. Given that a human must appear trustworthy to an artificial agent in order for that agent to trust them, and that asking for and offering help are important parts of a collaboration, the following research question was formulated: How does an artificial agent asking a human for advice or help affect that human's trustworthiness? To answer this question, an experiment was conducted using an urban search and rescue game built with the MATRX software. In this game, participants had to collaborate with a robot partner to find and rescue 8 victims and deliver them to the correct drop zone. The participants were divided into a control group, which worked alongside a basic rescue robot, and an experimental group, which had a help-seeker robot as a partner. The help-seeker robot differed from the basic robot in its ability to ask the participant for advice, such as which room it should search for victims to rescue. Following the experiment, no significant results were found indicating that the help-seeker agent's behaviour had either a positive or a negative effect on human trustworthiness.

Files

Final_Paper.pdf
(pdf | 0.722 MB)
License info not available