Assisting agent’s effect on the trustworthiness of their human teammate

Bachelor Thesis (2022)
Author(s)

A. Delia (TU Delft - Electrical Engineering, Mathematics and Computer Science)

Contributor(s)

Myrthe L. Tielman – Mentor (TU Delft - Interactive Intelligence)

C. Centeio Jorge – Mentor (TU Delft - Interactive Intelligence)

N. Tomen – Graduation committee member (TU Delft - Pattern Recognition and Bioinformatics)

Faculty
Electrical Engineering, Mathematics and Computer Science
Copyright
© 2022 Alto Delia
Publication Year
2022
Language
English
Graduation Date
23-06-2022
Awarding Institution
Delft University of Technology
Project
CSE3000 Research Project
Programme
Computer Science and Engineering
Reuse Rights

Other than for strictly personal use, it is not permitted to download, forward or distribute the text or part of it, without the consent of the author(s) and/or copyright holder(s), unless the work is under an open content license such as Creative Commons.

Abstract

Collaborative AI (CAI) is a fast-growing field of study. Cooperation in teams composed of humans and artificial intelligence needs to be principled and founded on reciprocal trust. Modelling the trustworthiness of humans is a difficult task because of the ambiguity of its definition as well as the effect of teamwork dynamics. This research defines and measures human trustworthiness within the context of human-AI collaboration and tests what role the artificial agent's action of offering help plays in it. The experiment is conducted in the MATRX framework, in which a human agent and an AI agent collaborate to search for and rescue victims within the environment. The ABI trust model is used to determine the sub-components that define trustworthiness, which are ability, benevolence and integrity. Trustworthiness is measured in two ways: through objective measures, which represent the AI agent's assessment of the human's trustworthiness, and through subjective measures, which represent the human's assessment of their own trustworthiness. The results show that help offered by the AI agent improves the ability and benevolence of the human agent in the objective metrics, but not integrity. The subjective results show no statistically significant change. The research concludes that the trustworthiness as perceived by the AI agent is indeed improved, but provides no evidence of the same from the perception of the human agent.

Files

Research_Paper_Final_5.pdf
(pdf | 0.66 Mb)
License info not available