Searched for: collection:ir
(1 - 16 of 16)
document
Centeio Jorge, C. (author), Jonker, C.M. (author), Tielman, M.L. (author)
In teams composed of humans, we use trust in others to make decisions, such as what to do next, who to help and who to ask for help. When a team member is artificial, they should also be able to assess whether a human teammate is trustworthy for a certain task. We see trustworthiness as the combination of (1) whether someone will do a task and ...
journal article 2024
document
Mehrotra, S. (author), Centeio Jorge, C. (author), Jonker, C.M. (author), Tielman, M.L. (author)
Appropriate trust is an important component of the interaction between people and AI systems, in that ‘inappropriate’ trust can cause disuse, misuse or abuse of AI. To foster appropriate trust in AI, we need to understand how AI systems can elicit appropriate levels of trust from their users. Out of the aspects that influence trust, this paper...
journal article 2024
document
Centeio Jorge, C. (author), van Zoelen, E.M. (author), Verhagen, R.S. (author), Mehrotra, S. (author), Jonker, C.M. (author), Tielman, M.L. (author)
As human-machine teams become more common, we need to ensure mutual trust between humans and machines. More important than having trust, we need all teammates to trust each other appropriately. This means that they should not overtrust or undertrust each other, avoiding risks and inefficiencies, respectively. We usually think of...
book chapter 2024
document
Mehrotra, S. (author), Centeio Jorge, C. (author), Jonker, C.M. (author), Tielman, M.L. (author)
Establishing an appropriate level of trust between people and AI systems is crucial to avoid the misuse, disuse, or abuse of AI. Understanding how AI systems can generate appropriate levels of trust among users is necessary to achieve this goal. This study focuses on the impact of displaying integrity, which is one of the factors that influence...
poster 2023
document
Centeio Jorge, C. (author), Bouman, Nikki H. (author), Jonker, C.M. (author), Tielman, M.L. (author)
Introduction: Collaboration in teams composed of both humans and automation has an interdependent nature, which demands calibrated trust among all the team members. To build suitable autonomous teammates, we need to study how trust and trustworthiness function in such teams. In particular, automation occasionally fails to do its job, which...
journal article 2023
document
Centeio Jorge, C. (author), Jonker, C.M. (author), Tielman, M.L. (author)
Human-AI teams count on both humans and artificial agents working together. In human-human teams, we use trust to make decisions. Similarly, our work explores how an AI can use trust (in human teammates) to make decisions while ensuring the team’s goal and mitigating risks for the humans involved. We present the several steps...
journal article 2023
document
Chen, P.Y. (author), Tielman, M.L. (author), Heylen, Dirk K.J. (author), Jonker, C.M. (author), van Riemsdijk, M.B. (author)
For personal assistive technologies to effectively support users, they need a user model that records information about the user, such as their goals, values, and context. Knowledge-based techniques can model the relationships between these concepts, enabling the support agent to act in accordance with the user's values. However, user models...
conference paper 2023
document
Centeio Jorge, C. (author), Tielman, M.L. (author), Jonker, C.M. (author)
Mutual trust is considered a required coordinating mechanism for achieving effective teamwork in human teams. However, it is still a challenge to implement such mechanisms in teams composed of both humans and AI (human-AI teams), even though these are becoming increasingly prevalent. Agents in such teams should not only be trustworthy and...
conference paper 2022
document
Berka, Jakub (author), Balata, Jan (author), Jonker, C.M. (author), Mikovec, Zdenek (author), van Riemsdijk, M. Birna (author), Tielman, M.L. (author)
Disabled people can benefit greatly from assistive digital technologies. However, this increased human-machine symbiosis makes it important that systems are personalized and transparent to users. Existing work often uses data-oriented approaches, but these lack transparency and make it hard to influence the system’s behavior. In...
journal article 2022
document
Centeio Jorge, C. (author), Tielman, M.L. (author), Jonker, C.M. (author)
As intelligent agents are becoming humans' teammates, not only do humans need to trust intelligent agents, but an intelligent agent should also be able to form artificial trust, i.e. a belief regarding a human's trustworthiness. We see artificial trust as the beliefs of competence and willingness, and we study which internal factors (krypta) of...
conference paper 2022
document
Mehrotra, S. (author), Jonker, C.M. (author), Tielman, M.L. (author)
As AI systems are increasingly involved in decision making, it also becomes important that they elicit appropriate levels of trust from their users. To achieve this, it is first important to understand which factors influence trust in AI. We identify that a research gap exists regarding the role of personal values in trust in AI. Therefore, this...
conference paper 2021
document
Centeio Jorge, C. (author), Mehrotra, S. (author), Tielman, M.L. (author), Jonker, C.M. (author)
In human-agent teams, how one teammate trusts another teammate should correspond to the latter's actual trustworthiness, creating what we would call appropriate mutual trust. Although this sounds obvious, the notion of appropriate mutual trust for human-agent teamwork lacks a formal definition. In this article, we propose a formalization which...
conference paper 2021
document
Kola, I. (author), Tielman, M.L. (author), Jonker, C.M. (author), van Riemsdijk, M.B. (author)
Personal assistant agents have been developed to help people in their daily lives with tasks such as agenda management. In order to provide better support, they should not only model the user’s internal aspects, but also their social situation. Current research on social context tackles this by modelling the social aspects of a situation from...
conference paper 2021
document
Kola, I. (author), Jonker, C.M. (author), Tielman, M.L. (author), van Riemsdijk, M.B. (author)
Support agents are increasingly investigated as a way of assisting people in carrying out daily tasks. Support agents should be flexible in adapting their support to what their user needs. Research suggests that the situation someone is in affects their behaviour; however, its effect has not been incorporated in the decision making of support...
conference paper 2020
document
Tielman, M.L. (author), Jonker, C.M. (author), van Riemsdijk, M.B. (author)
Personal technologies such as electronic partners (e-partners) play an increasing role in our daily lives, and can make an important difference by supporting us in various ways. However, when they offer this support, it is important that they do so with an understanding of our choices and what is important to us. To allow an e-partner to...
conference paper 2019
document
Tielman, M.L. (author), Jonker, C.M. (author), van Riemsdijk, M.B. (author)
conference paper 2018