How do adaptive explanations that become more abstract over time influence human supervision over and trust in the robot?
E. Ibanez (TU Delft - Electrical Engineering, Mathematics and Computer Science)
R.S. Verhagen – Mentor (TU Delft - Interactive Intelligence)
M.L. Tielman – Mentor (TU Delft - Interactive Intelligence)
Abstract
As human-agent collaboration becomes increasingly prevalent, it is crucial to understand and enhance the interaction between humans and AI systems. Explainable AI is fundamental to this interaction, as it concerns how agents convey essential information to humans for decision-making. This paper investigates how adaptive explanations that become more abstract over time affect human supervision of, and trust in, robotic systems. In a study with 40 participants, baseline (non-adaptive) explanations were compared with adaptive explanations. The results showed no significant difference between the two types of explanations: making explanations more abstract over time did not necessarily improve human supervision or increase trust in the robot.