Searched for: +
(1 - 9 of 9)
Centeio Jorge, C. (author), Tielman, M.L. (author), Jonker, C.M. (author)
Mutual trust is considered a required coordinating mechanism for achieving effective teamwork in human teams. However, it is still a challenge to implement such mechanisms in teams composed of both humans and AI (human-AI teams), even though such teams are becoming increasingly prevalent. Agents in such teams should not only be trustworthy and...
conference paper 2022
Centeio Jorge, C. (author), Mehrotra, S. (author), Tielman, M.L. (author), Jonker, C.M. (author)
In human-agent teams, how one teammate trusts another teammate should correspond to the latter's actual trustworthiness, creating what we would call appropriate mutual trust. Although this sounds obvious, the notion of appropriate mutual trust for human-agent teamwork lacks a formal definition. In this article, we propose a formalization which...
conference paper 2021
Mehrotra, S. (author), Jonker, C.M. (author), Tielman, M.L. (author)
As AI systems are increasingly involved in decision making, it also becomes important that they elicit appropriate levels of trust from their users. To achieve this, it is first important to understand which factors influence trust in AI. We identify that a research gap exists regarding the role of personal values in trust in AI. Therefore, this...
conference paper 2021
Chen, P.Y. (author), Tielman, M.L. (author), Heylen, Dirk K.J. (author), Jonker, C.M. (author), van Riemsdijk, M.B. (author)
For personal assistive technologies to effectively support users, they need a user model that records information about the user, such as their goals, values, and context. Knowledge-based techniques can model the relationships between these concepts, enabling the support agent to act in accordance with the user's values. However, user models...
conference paper 2023
Centeio Jorge, C. (author), Tielman, M.L. (author), Jonker, C.M. (author)
As intelligent agents become humans' teammates, not only do humans need to trust intelligent agents, but an intelligent agent should also be able to form artificial trust, i.e. a belief regarding the human's trustworthiness. We see artificial trust as the beliefs of competence and willingness, and we study which internal factors (krypta) of...
conference paper 2022
Tielman, M.L. (author), Jonker, C.M. (author), van Riemsdijk, M.B. (author)
Personal technologies such as electronic partners (e-partners) play an increasing role in our daily lives and can make an important difference by supporting us in various ways. However, when they offer this support, it is important that they do so with an understanding of our choices and what is important to us. To allow an e-partner to...
conference paper 2019
Kola, I. (author), Jonker, C.M. (author), Tielman, M.L. (author), van Riemsdijk, M.B. (author)
Support agents are increasingly investigated as a way of assisting people in carrying out daily tasks. Support agents should be flexible in adapting their support to what their user needs. Research suggests that the situation someone is in affects their behaviour; however, its effect has not been incorporated in the decision making of support...
conference paper 2020
Tielman, M.L. (author), Jonker, C.M. (author), van Riemsdijk, M.B. (author)
conference paper 2018
Kola, I. (author), Tielman, M.L. (author), Jonker, C.M. (author), van Riemsdijk, M.B. (author)
Personal assistant agents have been developed to help people in their daily lives with tasks such as agenda management. In order to provide better support, they should not only model the user’s internal aspects, but also their social situation. Current research on social context tackles this by modelling the social aspects of a situation from...
conference paper 2021