Tailoring User-Aware Agent Explanations to Properly Align Human Trust

Abstract

Aligning human trust with an agent's trustworthiness is an essential element of collaboration in Human-Agent Teaming (HAT). Misaligned trust can lead to sub-optimal use of the agent. Trust can be influenced by providing explanations that clarify the agent's actions. However, research often treats explanations as static, so they cannot be adjusted to the situation in real time. In this research, we study the effectiveness of an agent that models human trust and tailors its explanations to influence it. We do so by modifying an existing HAT environment and setting up an experiment comparing a trust-aware agent with a baseline agent. Human trust is estimated from the number of suggestions the human ignores. When the model estimates low trust, the agent uses more explanation types in its communication; when trust is high, it uses fewer explanation types to save time. However, the results indicate no difference between the baseline and trust-aware agents, rejecting the hypothesis. A potential cause for this rejection is either a flaw in the agents' design or information overload.
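
The core mechanism described above can be summarised as a simple heuristic: estimate trust from ignored suggestions and scale the number of explanation types accordingly. The sketch below illustrates that idea only; the class name, the trust-update rule, the thresholds, and the explanation-type labels are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch of the trust-tailoring idea described in the abstract.
# All names, thresholds, and update rules are assumptions for illustration.

class TrustModel:
    """Estimates human trust from how many agent suggestions are ignored."""

    def __init__(self, initial_trust: float = 1.0, penalty: float = 0.1):
        self.trust = initial_trust   # 1.0 = full trust, 0.0 = no trust
        self.penalty = penalty       # trust lost per ignored suggestion

    def record_suggestion(self, was_followed: bool) -> None:
        # Ignored suggestions lower the trust estimate; followed suggestions
        # recover it slightly (an assumption added for symmetry).
        if was_followed:
            self.trust = min(1.0, self.trust + self.penalty / 2)
        else:
            self.trust = max(0.0, self.trust - self.penalty)

    def explanation_types(self) -> list[str]:
        # Low estimated trust -> richer explanations; high trust -> fewer,
        # to save communication time, as described in the abstract.
        all_types = ["what", "why", "confidence"]  # hypothetical type names
        if self.trust < 0.4:
            return all_types
        if self.trust < 0.7:
            return all_types[:2]
        return all_types[:1]


# Example: two ignored suggestions push the agent toward fuller explanations.
model = TrustModel()
model.record_suggestion(was_followed=False)
model.record_suggestion(was_followed=False)
print(model.trust, model.explanation_types())
```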