Using Human Workload to Adjust Agent Explanations in Human-agent Teamwork

Bachelor Thesis (2022)
Author(s)

Z. LEI (TU Delft - Electrical Engineering, Mathematics and Computer Science)

Contributor(s)

Ruben S. Verhagen – Mentor (TU Delft - Interactive Intelligence)

Myrthe L. Tielman – Mentor (TU Delft - Interactive Intelligence)

A. Nadeem – Graduation committee member (TU Delft - Cyber Security)

Faculty
Electrical Engineering, Mathematics and Computer Science
Copyright
© 2022 Zhiqiang LEI
Publication Year
2022
Language
English
Graduation Date
22-06-2022
Awarding Institution
Delft University of Technology
Project
CSE3000 Research Project
Programme
Computer Science and Engineering
Reuse Rights

Other than for strictly personal use, it is not permitted to download, forward or distribute the text or part of it, without the consent of the author(s) and/or copyright holder(s), unless the work is under an open content license such as Creative Commons.

Abstract

Artificial intelligence systems assist humans in an ever-growing number of situations. However, such systems' lack of explainability can lead to poor teamwork performance, as humans may not cooperate with or trust systems whose black-box algorithms are opaque to them. This research attempts to improve the explainability of artificial intelligence systems by proposing a framework that models human workload as a single value and tailors agent explanations to that value. These explanations can convey the agent's confidence, the reasons behind its decisions, and counterfactual information supporting its suggestions, and they are adjusted according to the agent's knowledge of the human. Results show that adjusted explanations improved participants' subjective trust in the agent and led participants to accept more of its suggestions, while no effect on collaboration fluency or teamwork performance was found.

Files

Final_paper_Zhqiang_Lei.pdf
(pdf | 0.68 MB)
License info not available