Mutual trust between humans and interactive artificial agents is crucial for effective human-agent teamwork. This involves not only the human appropriately trusting the artificial teammate, but also the artificial teammate assessing the human’s trustworthiness for different tasks (i.e., artificial trust in human partners). The literature indicates that transparency and explainability are generally beneficial for human-agent collaboration. However, communicating artificial trust may affect human trust and satisfaction, which in turn impact team dynamics. To study these effects, we developed an artificial trust model and implemented five distinct communication approaches that varied in modality (visual/graphical and/or text), level (communication and/or explanation), and timing (real-time or occasional). We evaluated the effects of the different communication styles through a user study (N=120) in a 2D grid-world Search and Rescue scenario. Our results show that all artificial trust explanations improved human trust and satisfaction, whereas merely communicating trust graphically did not. These results are bound to the specific scenario and context of this study and require further exploration. As such, this work presents a first step towards understanding the consequences of communicating and explaining to a human teammate their assessed trustworthiness.
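As a rough illustration of the experimental design space described above, the minimal Python sketch below enumerates five hypothetical communication conditions along the three stated dimensions (modality, level, timing). The specific combinations are assumptions for illustration only; the abstract does not enumerate the actual conditions used in the study.

```python
from dataclasses import dataclass

# Hypothetical enumeration of the five communication approaches.
# The dimensions (modality, level, timing) come from the abstract;
# the concrete combinations below are illustrative assumptions.

@dataclass(frozen=True)
class CommunicationCondition:
    name: str
    modality: str   # "graphical", "text", or "graphical+text"
    level: str      # "communication" or "communication+explanation"
    timing: str     # "real-time" or "occasional"

CONDITIONS = [
    CommunicationCondition("graphical-only", "graphical", "communication", "real-time"),
    CommunicationCondition("graphical+text", "graphical+text", "communication", "real-time"),
    CommunicationCondition("real-time explanation", "graphical+text", "communication+explanation", "real-time"),
    CommunicationCondition("occasional explanation", "text", "communication+explanation", "occasional"),
    CommunicationCondition("combined explanation", "graphical+text", "communication+explanation", "occasional"),
]

if __name__ == "__main__":
    for c in CONDITIONS:
        print(f"{c.name}: modality={c.modality}, level={c.level}, timing={c.timing}")
```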