The Impact of Tailoring Agent Explanations According to Human Performance on Human-AI Teamwork

Bachelor Thesis (2022)
Author(s)

C. Parlar (TU Delft - Electrical Engineering, Mathematics and Computer Science)

Contributor(s)

M.L. Tielman – Mentor (TU Delft - Interactive Intelligence)

R.S. Verhagen – Mentor (TU Delft - Interactive Intelligence)

A. Nadeem – Graduation committee member (TU Delft - Cyber Security)

Faculty
Electrical Engineering, Mathematics and Computer Science
Copyright
© 2022 Can Parlar
Publication Year
2022
Language
English
Graduation Date
22-06-2022
Awarding Institution
Delft University of Technology
Project
CSE3000 Research Project
Programme
Computer Science and Engineering
Reuse Rights

Other than for strictly personal use, it is not permitted to download, forward or distribute the text or part of it, without the consent of the author(s) and/or copyright holder(s), unless the work is under an open content license such as Creative Commons.

Abstract

Nowadays, artificial intelligence (AI) systems are embedded in more aspects of our lives than ever before. Autonomous AI systems (agents) aid people in mundane daily tasks and even outperform humans in several cases. However, agents still depend on humans in unexpected circumstances. Thus, the main goal for these agents has shifted from becoming independent systems to becoming interdependent systems that collaborate with humans. This collaboration is far from perfect and could be improved in several respects. Communication is crucial for effective collaboration, and a key aspect of communication is explainability. This paper studies the impact of tailoring explanations to human performance in a well-defined collaborative human-agent teaming (HAT) urban search-and-rescue (USAR) task environment. A controlled between-subjects experiment was conducted with two different agent implementations, testing the hypothesis that when an agent provides explanations tailored to human performance, collaborative performance, trust towards the agent, and the human's individual satisfaction would increase. The results confirmed this for explanation satisfaction, but not for the trust and performance metrics; in fact, tailoring resulted in decreased collaborative performance. The research contributes to the bigger picture of how tailoring explanations to various factors affects overall collaborative performance and the systematic realisation of HAT.
