Explainable AI for Human Supervision over Firefighting Robots

How Do Textual and Visual Explanations Affect Human Supervision and Trust in the Robot?

Bachelor Thesis (2024)
Author(s)

B.C. Pietroianu (TU Delft - Electrical Engineering, Mathematics and Computer Science)

Contributor(s)

Myrthe Tielman – Mentor (TU Delft - Interactive Intelligence)

Ruben S. Verhagen – Mentor (TU Delft - Interactive Intelligence)

Faculty
Electrical Engineering, Mathematics and Computer Science
Publication Year
2024
Language
English
Graduation Date
27-06-2024
Awarding Institution
Delft University of Technology
Project
CSE3000 Research Project
Programme
Computer Science and Engineering
Reuse Rights

Other than for strictly personal use, it is not permitted to download, forward or distribute the text or part of it without the consent of the author(s) and/or copyright holder(s), unless the work is under an open content license such as Creative Commons.

Abstract

As artificially intelligent agents become integrated into various sectors, it becomes
necessary to analyse their capacity to make moral decisions and the influence of human
supervision on their performance. This study investigates the impact of textual feature
explanations on human supervision of, and trust in, a semi-autonomous firefighting
robot named Brutus, which operates in a morally complex environment. Grounded in
the field of Explainable AI (XAI), which seeks to render AI decisions transparent, this
research compares the effectiveness of textual and visual explanations in conveying
situational sensitivity during a simulated rescue operation. In an experimental setup
using the MATRX software to simulate a burning office building, participants' trust
and understanding were assessed based on their interaction with Brutus using either
textual or visual explanations. This study contributes to the broader discourse on
AI ethics and the optimisation of human-agent teaming in high-stakes scenarios. The
findings suggest that textual explanations can enhance human supervision and trust,
fostering greater engagement and satisfaction compared to visual explanations.
