How do adaptive explanations that become more abstract over time influence human supervision over and trust in the robot?

Bachelor Thesis (2024)
Author(s)

E. Ibanez (TU Delft - Electrical Engineering, Mathematics and Computer Science)

Contributor(s)

R.S. Verhagen – Mentor (TU Delft - Interactive Intelligence)

M.L. Tielman – Mentor (TU Delft - Interactive Intelligence)

Faculty
Electrical Engineering, Mathematics and Computer Science
Publication Year
2024
Language
English
Graduation Date
30-06-2024
Awarding Institution
Delft University of Technology
Project
CSE3000 Research Project
Programme
Computer Science and Engineering
Reuse Rights

Other than for strictly personal use, it is not permitted to download, forward or distribute the text or part of it, without the consent of the author(s) and/or copyright holder(s), unless the work is under an open content license such as Creative Commons.

Abstract

As human-agent collaboration grows increasingly prevalent, it is crucial to understand and enhance the interaction between humans and AI systems. Explainable AI is fundamental to this interaction, as it enables agents to convey the information humans need for decision-making. This paper investigates how adaptive explanations affect human supervision of and trust in robotic systems. The study involved 40 participants and compared baseline (non-adaptive) explanations with adaptive explanations that become more abstract over time. The results showed no significant difference between the two types of explanations; making explanations more abstract did not necessarily improve human supervision or increase trust in the robot.

Files

CSE3000_Final_Paper.pdf
(PDF | 0.997 MB)
License info not available