Moral Decision Making in Human-Agent Teams

Human Control and the Role of Explanations

Journal Article (2021)
Author(s)

Jasper van der Waa (TU Delft - Interactive Intelligence, TU Delft - BUS/TNO STAFF)

Sabine Verdult (Training and Performance Innovations)

Karel van den Bosch (DIANA FEA)

Jurriaan van Diggelen (DIANA FEA)

Tjalling Haije (DIANA FEA)

Birgit van der Stigchel (Radboud Universiteit Nijmegen, DIANA FEA)

Ioana Cocu (DIANA FEA)

Research Group
Interactive Intelligence
Copyright
© 2021 J.S. van der Waa, Sabine Verdult, Karel van den Bosch, Jurriaan van Diggelen, Tjalling Haije, Birgit van der Stigchel, Ioana Cocu
DOI related publication
https://doi.org/10.3389/frobt.2021.640647
Publication Year
2021
Language
English
Volume number
8
Reuse Rights

Other than for strictly personal use, it is not permitted to download, forward or distribute the text or part of it, without the consent of the author(s) and/or copyright holder(s), unless the work is under an open content license such as Creative Commons.

Abstract

With the progress of Artificial Intelligence, intelligent agents are increasingly being deployed in tasks for which ethical guidelines and moral values apply. As artificial agents have no legal standing, humans must be held accountable if actions do not comply, which implies that humans need to exercise control. This is often labeled Meaningful Human Control (MHC). In this paper, achieving MHC is addressed as a design problem that defines the collaboration between humans and agents. We propose three possible team designs (Team Design Patterns), varying in the level of autonomy on the agent's part. The team designs include explanations given by the agent to clarify its reasoning and decision-making. The designs were implemented in a simulation of a medical triage task, to be executed by a domain expert and an artificial agent. The triage task simulates making decisions under time pressure, with too few resources available to comply with all medical guidelines at all times, and hence involves moral choices. Domain experts (i.e., health care professionals) participated in the study. The goals were, first, to assess the ecological relevance of the simulation; second, to explore the control the human has over the agent to warrant morally compliant behavior in each proposed team design; and third, to evaluate the role of agent explanations in the human's understanding of the agent's reasoning. Results showed that the experts overall found the task a believable simulation of what might occur in reality. Domain experts experienced control over the team's moral compliance when consequences were quickly noticeable. When consequences instead emerged much later, the experts experienced less control and felt less responsible. Possibly due to the time pressure implemented in the task, or to overtrust in the agent, the experts did not use the explanations much during the task; when asked afterwards, however, they considered them useful. It is concluded that a team design should emphasize and support the human in developing a sense of responsibility for the agent's behavior and for the team's decisions. The design should include explanations that fit the assigned team roles as well as the human's cognitive state.
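For concreteness, the following is a minimal sketch (in Python) of how team designs varying in agent autonomy could be encoded as configuration data. The pattern names, autonomy levels, and explanation timings below are illustrative assumptions only; they are not the actual Team Design Patterns specified in the paper.

    from dataclasses import dataclass
    from enum import Enum


    class AutonomyLevel(Enum):
        """Who makes the final call in the human-agent team (hypothetical taxonomy)."""
        HUMAN_DECIDES = 1   # agent only advises; human makes every decision
        HUMAN_VETO = 2      # agent proposes a decision; human can override it
        AGENT_DECIDES = 3   # agent acts on its own; human supervises afterwards


    @dataclass
    class TeamDesignPattern:
        """A team design: an autonomy level plus when the agent explains itself."""
        name: str
        autonomy: AutonomyLevel
        explanation_timing: str  # e.g., "before decision" or "on request"


    # Three hypothetical patterns, ordered by increasing agent autonomy.
    PATTERNS = [
        TeamDesignPattern("advisor", AutonomyLevel.HUMAN_DECIDES, "before decision"),
        TeamDesignPattern("veto", AutonomyLevel.HUMAN_VETO, "before decision"),
        TeamDesignPattern("delegate", AutonomyLevel.AGENT_DECIDES, "on request"),
    ]

    for p in PATTERNS:
        print(f"{p.name}: autonomy={p.autonomy.name}, explanations {p.explanation_timing}")

In a scheme like this, moving a task from one pattern to another changes who carries the final decision, and therefore where control over, and accountability for, the team's moral compliance rests.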