Influence of Global Explanations on Human Supervision and Trust in Agent

Explainable AI for human supervision over firefighting robots

Bachelor Thesis (2024)
Author(s)

D.V. Pandeva (TU Delft - Electrical Engineering, Mathematics and Computer Science)

Contributor(s)

R.S. Verhagen – Mentor (TU Delft - Interactive Intelligence)

M.L. Tielman – Mentor (TU Delft - Interactive Intelligence)

Faculty
Electrical Engineering, Mathematics and Computer Science
Publication Year
2024
Language
English
Graduation Date
27-06-2024
Awarding Institution
Delft University of Technology
Project
['CSE3000 Research Project']
Programme
['Computer Science and Engineering']
Reuse Rights

Other than for strictly personal use, it is not permitted to download, forward or distribute the text or part of it, without the consent of the author(s) and/or copyright holder(s), unless the work is under an open content license such as Creative Commons.

Abstract

With the rise of AI presence in various contexts and spheres of life, ensuring effective
human-AI collaboration, especially in critical domains, is of utmost importance.
Explanations given by an AI agent can be of great assistance for this purpose. This study
investigates the impact of global explanations, which explain the agent's general
allocation rules, on human supervision and trust in AI within the critical domain of a
firefighting scenario, where a human and an AI agent must collaborate to save victims.
To this end, a user study involving 40 participants was performed, comparing a baseline
scenario with a global explanation scenario and measuring the participants' trust in
the AI and their explanation satisfaction. The results indicated no significant
differences between the two scenarios; in fact, both achieved similarly satisfactory
outcomes. This suggests that global explanations are comparably effective in enhancing
human-AI collaboration. The insights of this study underscore the need for further
exploration of the contextual factors influencing the impact of global explanations and
contribute to the design of better human-AI teaming systems in dynamic and ethically
sensitive environments.
