Using Human Workload to Adjust Agent Explanations in Human-agent Teamwork
Z. LEI (TU Delft - Electrical Engineering, Mathematics and Computer Science)
Ruben S. Verhagen – Mentor (TU Delft - Interactive Intelligence)
Myrthe L. Tielman – Mentor (TU Delft - Interactive Intelligence)
A. Nadeem – Graduation committee member (TU Delft - Cyber Security)
Abstract
Artificial intelligence systems assist humans in an increasing number of tasks. However, the lack of explainability of such systems can degrade team performance, as humans may not cooperate with or trust systems whose black-box algorithms are opaque to them. This research attempts to improve the explainability of artificial intelligence systems by proposing a framework that models human workload as a single value and tailors agent explanations to this value. Such explanations can convey the agent's confidence, the reasons behind its decisions, and counterfactual information to support its suggestions, and they are adjusted according to the agent's knowledge of the human. Results show that adjusted explanations can improve participants' subjective trust in the agent and lead participants to accept more of its suggestions, while no effect on collaboration fluency or team performance was found.
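
As a minimal illustrative sketch (not taken from the thesis), the core idea of tailoring an explanation to a scalar workload value could look as follows in Python; the thresholds, component names, and example usage are hypothetical assumptions, not the framework's actual implementation.

    from dataclasses import dataclass
    from typing import Optional

    @dataclass
    class Explanation:
        """An agent suggestion plus optional explanatory components."""
        suggestion: str
        confidence: Optional[float] = None    # agent's confidence in the suggestion
        causes: Optional[str] = None          # reasons behind the decision
        counterfactual: Optional[str] = None  # what would have changed the decision

    def tailor_explanation(suggestion: str, confidence: float,
                           causes: str, counterfactual: str,
                           workload: float) -> Explanation:
        """Drop explanation components as the workload value (in [0, 1]) rises,
        so a busier human receives a shorter, less demanding explanation.
        The 0.3/0.7 thresholds are illustrative only."""
        if workload > 0.7:    # high workload: bare suggestion
            return Explanation(suggestion)
        if workload > 0.3:    # medium workload: add confidence
            return Explanation(suggestion, confidence=confidence)
        # low workload: full explanation with causes and counterfactual
        return Explanation(suggestion, confidence=confidence,
                           causes=causes, counterfactual=counterfactual)

Under this sketch, an agent estimating a high workload would send only the suggestion, while a low estimate would yield the full explanation with confidence, causes, and a counterfactual.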