Agent Allocation of Moral Decisions in Human-Agent Teams

Raise Human Involvement and Explain Potential Consequences

Conference Paper (2025)
Author(s)

R.S. Verhagen (TU Delft - Interactive Intelligence)

Mark Neerincx (TNO - Soesterberg location, TU Delft - Interactive Intelligence)

M.L. Tielman (TU Delft - Interactive Intelligence)

Research Group
Interactive Intelligence
DOI
https://doi.org/10.1145/3715275.3732157
Publication Year
2025
Language
English
Pages (from-to)
2302-2317
ISBN (electronic)
979-8-4007-1482-5
Reuse Rights

Other than for strictly personal use, it is not permitted to download, forward or distribute the text or part of it, without the consent of the author(s) and/or copyright holder(s), unless the work is under an open content license such as Creative Commons.

Abstract

Humans and artificial intelligence agents increasingly collaborate in morally sensitive situations such as firefighting. These agents can often perform tasks with minimal human control, challenging accountability and responsibility. Combining higher agent autonomy levels with meaningful human control can address such challenges. For example, agents can allocate decisions to themselves in less morally sensitive situations and to humans in more sensitive ones. However, how to responsibly and effectively design and implement agents for this dynamic task allocation remains unclear, with their autonomy level and the explanations they provide being crucial considerations. Therefore, we conducted experiments in simulated firefighting environments where participants (n = 72) collaborated with a more autonomous and a less autonomous artificial moral agent. When allocating decision-making, these agents provided either no additional information, feature contributions, or potential consequences. Our results show that moral trust, agreement, and meaningful human control are higher when the agent is less autonomous. Furthermore, people disagree and reallocate decisions to themselves more when the agents explain potential consequences, especially when moral sensitivity is higher. Overall, our findings highlight that people prefer more involvement over higher agent autonomy and take on greater moral responsibility when agents explain potential consequences. These actionable insights are crucial for designing transparent artificial moral agents that enhance human moral awareness and responsibility. Ultimately, this supports the responsible implementation of dynamic task allocation in practice and enhances human-agent collaboration in morally sensitive situations.
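
To make the dynamic task allocation idea concrete, the sketch below illustrates one possible way an agent could allocate a decision based on estimated moral sensitivity and attach one of the three explanation styles studied (none, feature contributions, potential consequences). This is a minimal illustrative example only; the function and parameter names, the threshold rule, and the explanation texts are assumptions for clarity, not the implementation used in the paper.

```python
# Illustrative sketch only: names, the threshold rule, and the explanation
# texts are assumptions, not the paper's actual implementation.

from dataclasses import dataclass
from typing import Literal

ExplanationStyle = Literal["none", "feature_contributions", "potential_consequences"]


@dataclass
class AllocationDecision:
    allocated_to: Literal["agent", "human"]
    explanation: str


def allocate_decision(moral_sensitivity: float,
                      style: ExplanationStyle,
                      threshold: float = 0.5) -> AllocationDecision:
    """Allocate a decision to the agent when estimated moral sensitivity is low,
    and to the human when it is high, optionally attaching an explanation."""
    allocated_to = "human" if moral_sensitivity >= threshold else "agent"

    if style == "none":
        explanation = ""
    elif style == "feature_contributions":
        explanation = (f"Allocation based on situational features; "
                       f"estimated moral sensitivity = {moral_sensitivity:.2f}.")
    else:  # "potential_consequences"
        explanation = ("Potential consequences: whoever decides bears responsibility "
                       "for outcomes affecting victims and firefighters.")

    return AllocationDecision(allocated_to, explanation)


# Example: a highly sensitive situation is allocated to the human,
# together with a consequence-oriented explanation.
print(allocate_decision(0.8, "potential_consequences"))
```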