Artificial Trust as a Tool in Human-AI Teams

Conference Paper (2022)
Author(s)

C. Centeio Jorge (TU Delft - Interactive Intelligence)

M.L. Tielman (TU Delft - Interactive Intelligence)

C.M. Jonker (Universiteit Leiden, TU Delft - Interactive Intelligence)

Research Group
Interactive Intelligence
Copyright
© 2022 C. Centeio Jorge, M.L. Tielman, C.M. Jonker
Publication Year
2022
Language
English
Pages (from-to)
1155-1157
ISBN (print)
978-1-5386-8554-9
Reuse Rights

Other than for strictly personal use, it is not permitted to download, forward or distribute the text or part of it, without the consent of the author(s) and/or copyright holder(s), unless the work is under an open content license such as Creative Commons.

Abstract

Mutual trust is considered a required coordinating mechanism for achieving effective teamwork in human teams. However, implementing such mechanisms in teams composed of both humans and AI (human-AI teams) remains a challenge, even though such teams are becoming increasingly prevalent. Agents in these teams should not only be trustworthy and promote appropriate trust from their human teammates, but also know when to trust a human teammate to perform a certain task. In this project, we study trust as a tool for artificial agents to achieve better teamwork. In particular, we want to build mental models of humans so that agents can assess human trustworthiness in the context of human-AI teamwork, taking into account characteristics of the human teammates, the task, and the environment.

Files

3523760.3523956_1_.pdf
(pdf | 0.251 Mb)
Embargo expired on 07-09-2022
License info not available