Influence of Global Explanations on Human Supervision and Trust in Agent
Explainable AI for human supervision over firefighting robots
D.V. Pandeva (TU Delft - Electrical Engineering, Mathematics and Computer Science)
R.S. Verhagen – Mentor (TU Delft - Interactive Intelligence)
M.L. Tielman – Mentor (TU Delft - Interactive Intelligence)
Abstract
With the growing presence of AI across many contexts and spheres of life, ensuring effective
human-AI collaboration, especially in critical domains, is of utmost importance.
Explanations given by an AI agent can be of great assistance for this purpose. This study
investigates the impact of global explanations, which describe the agent's general task
allocation rules, on human supervision and trust in AI within the critical domain of a
firefighting scenario, where a human and an AI agent must collaborate to save victims.
To this end, a user study involving 40 participants was performed. The study compared a
baseline scenario with a global explanation scenario, measuring the participants' trust
in the AI and their satisfaction with the explanations. The results indicated no
significant differences between the two conditions; both achieved similarly satisfactory
outcomes, suggesting that global explanations are comparably effective in supporting
human-AI collaboration. These insights underscore the need for further exploration of
the contextual factors that influence the impact of global explanations and contribute
to designing better human-AI teaming systems in dynamic and ethically sensitive
environments.