Title: Assessing artificial trust in human-agent teams: A conceptual model
Authors: Centeio Jorge, C. (TU Delft Interactive Intelligence); Tielman, M.L. (TU Delft Interactive Intelligence); Jonker, C.M. (TU Delft Interactive Intelligence; Universiteit Leiden)
Date: 2022
Abstract: As intelligent agents become humans' teammates, not only do humans need to trust intelligent agents, but an intelligent agent should also be able to form artificial trust, i.e., a belief regarding the human's trustworthiness. We see artificial trust as the beliefs of competence and willingness, and we study which internal factors (krypta) of the human may play a role when assessing artificial trust. Furthermore, we investigate which observable measures (manifesta) an agent may take into account as cues for the human teammate's krypta. This paper proposes a conceptual model of artificial trust for a specific task during human-agent teamwork. Based on literature and a preliminary user study, the model proposes observable measures related to human trustworthiness (ability, benevolence, integrity) and strategy (perceived cost and benefit) as predictors for willingness and competence.
Subject: trustworthiness; artificial trust; intelligent agents; human-agent collaboration; human-agent interaction; trust; trust metrics; human-agent teaming; human-agent teamwork
To reference this document use: http://resolver.tudelft.nl/uuid:929223fe-27f9-4540-8a7b-f919ee9d8343
DOI: https://doi.org/10.1145/3514197.3549696
ISBN: 978-1-4503-9248-8
Source: IVA 2022 - Proceedings of the 22nd ACM International Conference on Intelligent Virtual Agents
Part of collection: Institutional Repository
Document type: conference paper
Rights: © 2022 C. Centeio Jorge, M.L. Tielman, C.M. Jonker
Files: 3514197.3549696.pdf (757.04 KB)