Using Human Workload to Adjust Agent Explanations in Human-agent Teamwork


Abstract

Artificial intelligence systems assist humans in an increasing number of tasks. However, such systems' lack of explainability can degrade team performance, as humans may not cooperate with or trust systems whose black-box algorithms are opaque to them. This research aims to improve the explainability of artificial intelligence systems by proposing a framework that models human workload as a single value and tailors the agent's explanations to this value. These explanations can convey the agent's confidence, the reasons behind its decisions, and counterfactual information to support its suggestions, and they are adjusted according to the agent's knowledge of the human teammate. Results show that adjusted explanations improved participants' subjective trust in the agent and led participants to accept more of its suggestions, while no effect on collaboration fluency or teamwork performance was found.
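
As a purely illustrative sketch (not code from the thesis), tailoring an explanation to a scalar workload estimate might look like the following. The `Explanation` fields (confidence, cause, counterfactual), the `tailor_explanation` function, and the workload thresholds are hypothetical names and values chosen for this example.

```python
from dataclasses import dataclass
from typing import Optional

# Hypothetical explanation container: the abstract describes explanations that
# can include the agent's confidence, the cause of its decision, and a
# counterfactual supporting its suggestion.
@dataclass
class Explanation:
    suggestion: str
    confidence: Optional[float] = None    # agent's confidence in the suggestion
    cause: Optional[str] = None           # why the agent made this decision
    counterfactual: Optional[str] = None  # what would change the suggestion


def tailor_explanation(workload: float, suggestion: str, confidence: float,
                       cause: str, counterfactual: str) -> Explanation:
    """Select explanation components from a scalar human-workload estimate.

    Assumption (not stated in the abstract): higher workload -> shorter
    explanation, so the human is not overloaded with detail; lower workload ->
    full detail. The thresholds 0.7 and 0.3 are arbitrary for this sketch.
    """
    if workload > 0.7:
        # High workload: give only the suggestion and the agent's confidence.
        return Explanation(suggestion, confidence=confidence)
    if workload > 0.3:
        # Moderate workload: also give the cause of the decision.
        return Explanation(suggestion, confidence=confidence, cause=cause)
    # Low workload: full explanation, including the counterfactual part.
    return Explanation(suggestion, confidence=confidence, cause=cause,
                       counterfactual=counterfactual)


if __name__ == "__main__":
    # Example call with made-up task content.
    print(tailor_explanation(0.8, "Search room B first", 0.9,
                             "Smoke was detected near room A",
                             "If room A were clear, it would be searched first"))
```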