
M.L. Tielman

57 records found

Effective support from personal assistive technologies relies on accurate user models that capture user values, preferences, and context. Knowledge-based techniques model these relationships, enabling support agents to align their actions with user values. However, understanding ...
As intelligent systems become more integrated into people's daily lives, systems designed to facilitate lifestyle and behavior change for health and well-being have also become more common. Previous work has identified challenges in the development and deployment of such AI-based ...

Advancing Human-Machine Teaming

Definitions, Challenges, Future Directions

Humans and intelligent machines increasingly collaborate on complex tasks, although significant challenges remain before machines can function as effective teammates. The human-machine teaming research community attempts to address these challenges by developing and testing metho ...

Agent Allocation of Moral Decisions in Human-Agent Teams

Raise Human Involvement and Explain Potential Consequences

Humans and artificial intelligence agents increasingly collaborate in morally sensitive situations such as firefighting. These agents can often perform tasks with minimal human control, challenging accountability and responsibility. Combining higher agent autonomy levels with mea ...

Social AI for a Healthier Lifestyle

Four Competencies to Manage and Prevent Chronic Diseases

Lifestyle-related diseases like type 2 diabetes mellitus (T2DM) and chronic obstructive pulmonary disease (COPD) have a major impact on society, calling for comprehensive disease management support. While AI technology has advanced for diagnosis and disease detection, its impleme ...
Mutual trust between humans and interactive artificial agents is crucial for effective human-agent teamwork. This involves not only the human appropriately trusting the artificial teammate, but also the artificial teammate assessing the human’s trustworthiness for different tasks ...

Interdependence and trust analysis (ITA)

A framework for human–machine team design

As machines' autonomy increases, the possibilities for collaboration between a human and a machine also increase. In particular, tasks may be performed with varying levels of interdependence, i.e., from independent to joint actions. The feasibility of each type of interdependence ...
Appropriate trust, trust which aligns with system trustworthiness, in Artificial Intelligence (AI) systems has become an important area of research. However, there remains debate in the community about how to design for appropriate trust. This debate is a result of the complex na ...
Agent-based training systems can enhance people's social skills. The effective development of these systems needs a comprehensive architecture that outlines their components and relationships. Such an architecture can pinpoint improvement areas and future outlooks. This paper pre ...
Child helplines offer a safe and private space for children to share their thoughts and feelings with volunteers. However, training these volunteers to help can be both expensive and time-consuming. In this demo, we present Lilobot, a conversational agent designed to train volunt ...
Introduction: Humans and robots are increasingly collaborating on complex tasks such as firefighting. As robots are becoming more autonomous, collaboration in human-robot teams should be combined with meaningful human control. Variable autonomy approaches can ensure meaningful hu ...
This paper explores the potential of conversational intermediary AI (CIAI) between patients and healthcare providers, focusing specifically on promoting healthier lifestyles for Type 2 diabetes. CIAI aims to address the constraint of limited healthcare provider time by acting as ...
As machines' autonomy increases, their capacity to learn and adapt to humans in collaborative scenarios increases too. In particular, machines can use artificial trust (AT) to make decisions, such as task and role allocation/selection. However, the outcome of such decisions and t ...
Appropriate trust is an important component of the interaction between people and AI systems, in that "inappropriate" trust can cause disuse, misuse, or abuse of AI. To foster appropriate trust in AI, we need to understand how AI systems can elicit appropriate levels of trust from ...
In human-machine teams, the strengths and weaknesses of both team members result in dependencies, opportunities, and requirements to collaborate. Managing these interdependence relationships is crucial for teamwork, as it is argued that they facilitate accurate trust calibration. ...
As human-machine teams become more common, we need to ensure mutual trust between humans and machines. More important than having trust, we need all teammates to trust each other appropriately. This means that they should not overtrust or undertrust each other, avoidin ...
Appropriate Trust in Artificial Intelligence (AI) systems has rapidly become an important area of focus for both researchers and practitioners. Various approaches have been used to achieve it, such as confidence scores, explanations, trustworthiness cues, or uncertainty communica ...
In teams composed of humans, we use trust in others to make decisions, such as what to do next, who to help and who to ask for help. When a team member is artificial, they should also be able to assess whether a human teammate is trustworthy for a certain task. We see trustworthi ...