M.L. Tielman
58 records found
Mutual trust between humans and interactive artificial agents is crucial for effective human-agent teamwork. This involves not only the human appropriately trusting the artificial teammate, but also the artificial teammate assessing the human’s trustworthiness for different tasks
...
Effective support from personal assistive technologies relies on accurate user models that capture user values, preferences, and context. Knowledge-based techniques model these relationships, enabling support agents to align their actions with user values. However, understanding
...
"even explanations will not help in trusting [this] fundamentally biased system"
A Predictive Policing Case-Study
In today's society, where Artificial Intelligence (AI) has gained a vital role, concerns regarding users' trust have garnered significant attention. The use of AI systems in high-risk domains has often led users to either under-trust them, potentially causing inadequate reliance o
...
Agent Allocation of Moral Decisions in Human-Agent Teams
Raise Human Involvement and Explain Potential Consequences
Humans and artificial intelligence agents increasingly collaborate in morally sensitive situations such as firefighting. These agents can often perform tasks with minimal human control, challenging accountability and responsibility. Combining higher agent autonomy levels with mea
...
Advancing Human-Machine Teaming
Definitions, Challenges, Future Directions
Humans and intelligent machines increasingly collaborate on complex tasks, although significant challenges remain before machines can function as effective teammates. The human-machine teaming research community attempts to address these challenges by developing and testing metho
...
Social AI for a Healthier Lifestyle
Four Competencies to Manage and Prevent Chronic Diseases
Lifestyle-related diseases like type 2 diabetes mellitus (T2DM) and chronic obstructive pulmonary disease (COPD) have a major impact on society, calling for comprehensive disease management support. While AI technology has advanced for diagnosis and disease detection, its impleme
...
As intelligent systems become more integrated into people’s daily life, systems designed to facilitate lifestyle and behavior change for health and well-being have also become more common. Previous work has identified challenges in the development and deployment of such AI-based
...
Appropriate Trust in Artificial Intelligence (AI) systems has rapidly become an important area of focus for both researchers and practitioners. Various approaches have been used to achieve it, such as confidence scores, explanations, trustworthiness cues, or uncertainty communica
...
This paper explores the potential of conversational intermediary AI (CIAI) between patients and healthcare providers, focusing specifically on promoting healthier lifestyles for Type 2 diabetes. CIAI aims to address the constraint of limited healthcare provider time by acting as
...
Appropriate trust is an important component of the interaction between people and AI systems, in that "inappropriate" trust can cause disuse, misuse, or abuse of AI. To foster appropriate trust in AI, we need to understand how AI systems can elicit appropriate levels of trust from
...
As human-machine teams become more common, we need to ensure mutual trust between humans and machines. More important than having trust, we need all teammates to trust each other appropriately. This means that they should not overtrust or undertrust each other, avoidin
...
Explainable AI for All
A Roadmap for Inclusive XAI for people with Cognitive Disabilities
Artificial intelligence (AI) is increasingly prevalent in our daily lives, setting specific requirements for responsible development and deployment: The AI should be explainable and inclusive. Despite substantial research and development investment in explainable AI, there is a l
...
Appropriate trust, trust which aligns with system trustworthiness, in Artificial Intelligence (AI) systems has become an important area of research. However, there remains debate in the community about how to design for appropriate trust. This debate is a result of the complex na
...
As machines' autonomy increases, their capacity to learn and adapt to humans in collaborative scenarios increases too. In particular, machines can use artificial trust (AT) to make decisions, such as task and role allocation/selection. However, the outcome of such decisions and t
...
Introduction: Humans and robots are increasingly collaborating on complex tasks such as firefighting. As robots are becoming more autonomous, collaboration in human-robot teams should be combined with meaningful human control. Variable autonomy approaches can ensure meaningful hu
...
Agent-based training systems can enhance people's social skills. Developing these systems effectively requires a comprehensive architecture that outlines their components and relationships. Such an architecture can pinpoint areas for improvement and future directions. This paper pre
...
Interdependence and trust analysis (ITA)
A framework for human–machine team design
As machines' autonomy increases, the possibilities for collaboration between a human and a machine also increase. In particular, tasks may be performed with varying levels of interdependence, i.e. from independent to joint actions. The feasibility of each type of interdependence
...
Child helplines offer a safe and private space for children to share their thoughts and feelings with volunteers. However, training these volunteers to help can be both expensive and time-consuming. In this demo, we present Lilobot, a conversational agent designed to train volunt
...