Transparent and Explainable Agents for Human-Agent Teaming

Doctoral Thesis (2026)
Author(s)

R.S. Verhagen (TU Delft - Interactive Intelligence)

Contributor(s)

M.A. Neerincx – Promotor (TU Delft - Interactive Intelligence)

M.L. Tielman – Copromotor (TU Delft - Interactive Intelligence)

Research Group
Interactive Intelligence
Publication Year
2026
Language
English
Defense Date
02-04-2026
Awarding Institution
Delft University of Technology
ISBN (electronic)
978-94-6518-278-0
Reuse Rights

Other than for strictly personal use, it is not permitted to download, forward or distribute the text or part of it, without the consent of the author(s) and/or copyright holder(s), unless the work is under an open content license such as Creative Commons.

Abstract

Human-agent teaming in high-stakes domains is already contributing positively to society, yet these AI agents are often still tools directly controlled by humans. Becoming teammates requires agents to be more autonomous and interdependent, two factors that determine what humans need to know about agents. Agent transparency and explanations can provide this necessary knowledge for effective and responsible collaboration. However, we lack an understanding of what agents should disclose and clarify across interdependencies and autonomy levels. Accordingly, this thesis examines how to design transparent and explainable agents that foster effective and responsible human-agent teaming.

We first develop a conceptual framework that distinguishes agent transparency (disclosing information) from explainability (clarifying that information) and relates these concepts to interpretability and understandability, resolving common ambiguities. Using simulation environments, we then demonstrate that interdependence influences how transparency and explanations affect human-agent teaming processes, underscoring its importance in studies on transparent and explainable agents. Next, we examine the trust calibration process across interdependencies and find initial evidence that interdependence relationships influence trust calibration in human-agent teams, suggesting that engaging in joint actions facilitates more accurate trust calibration.

To support responsible human-agent teaming, we develop an evaluation method for meaningful human control based on expert knowledge, operationalizing traceability through objective and subjective indicators and eliciting reasons underlying outcomes. We apply this method to study agent autonomy and explanations in morally sensitive situations. The findings suggest that people prefer more involvement over greater agent autonomy and that they take on greater moral responsibility when agents explain potential consequences. These insights are crucial for designing agents that enhance human moral awareness and human-agent teaming in morally sensitive situations.

Translating these insights to practice, we design TEAMS (Transparent and Explainable Autonomy for Mapping and Searching), a human-robot collaboration system for firefighting that moves beyond teleoperation by proposing and explaining intermediate navigation destinations while autonomously navigating towards them. The system is grounded in expert firefighting knowledge and can address the challenge of camera-based teleoperation in low-visibility conditions. We highlight the importance of training, iterative human-centered refinements, and software optimization to further enhance the system.

Finally, we synthesize a research agenda with taxonomies and guidelines, team design patterns, modular testbeds, and study templates to advance the field. Taken together, this thesis offers a path from concept to practice: a conceptual framework, studies in simulation environments, an evaluation method for meaningful human control, and TEAMS in a practically grounded setting, complemented by a research agenda. By doing so, this thesis supports the design of transparent and explainable AI agents that foster effective and responsible human-agent teaming.
