Artificial Trust in Mutually Adaptive Human-Machine Teams
C. Centeio Jorge (TU Delft - Interactive Intelligence)
Ewart Jan de Visser (US Air Force (AFRL/EOARD), George Mason University)
M.L. Tielman (TU Delft - Interactive Intelligence)
C.M. Jonker (TU Delft - Interactive Intelligence)
Lionel P. Robert (University of Michigan)
Abstract
As machines become more autonomous, so does their capacity to learn from and adapt to humans in collaborative scenarios. In particular, machines can use artificial trust (AT) to make decisions such as task and role allocation or selection. However, both the outcomes of such decisions and the way they are communicated can affect the human's trust, which in turn shapes how the human collaborates. With the goal of maintaining appropriate mutual trust between human and machine, we reflect on the requirements for equipping an artificial teammate with an AT-based decision-making model. Furthermore, we propose a user study to investigate the role of task-based willingness (e.g., a human's preferences regarding tasks) and its communication in AT-based decision-making.