With the rise of AI presence in various contexts and spheres of life, ensuring effective
human-AI collaboration, especially in critical domains, is of utmost importance.
Explanations given by an AI agent can be of great assistance for this purpose. This study
investigates the impact of global explanations, which explain general allocation rules, on
human supervision and trust in AI within the critical domain of a firefighting scenario,
where a human and an AI agent must collaborate to save victims. To this end, a user
study involving 40 participants was performed. The study compared a baseline scenario
with a global explanation scenario, and the participants' trust in the AI and their
explanation satisfaction were measured. The results indicated no significant differences
between the two conditions; both achieved similarly satisfactory outcomes.
This suggests that global explanations offer comparable effectiveness in enhancing
human-AI collaboration. The insights of this study underscore the need for further
exploration of contextual factors that influence the impact of global explanations and
contribute to the design of better human-AI teaming systems in dynamic and ethically
sensitive environments.