Comparing Trust Development in Human and Robot Collaboration

Master Thesis (2026)
Author(s)

C.T. Guo (TU Delft - Electrical Engineering, Mathematics and Computer Science)

Contributor(s)

M.L. Tielman – Mentor (TU Delft - Interactive Intelligence)

M.A. Neerincx – Mentor (TU Delft - Interactive Intelligence)

A. Anand – Mentor (TU Delft - Web Information Systems)

Faculty
Electrical Engineering, Mathematics and Computer Science
Publication Year
2026
Language
English
Graduation Date
24-02-2026
Awarding Institution
Delft University of Technology
Programme
Computer Science
Reuse Rights

Other than for strictly personal use, it is not permitted to download, forward or distribute the text or part of it, without the consent of the author(s) and/or copyright holder(s), unless the work is under an open content license such as Creative Commons.

Abstract

As robots increasingly transition from automated tools to collaborative teammates, trust becomes a central requirement for effective human–robot collaboration. While prior research has examined trust in human–robot interaction, little is known about how trust dynamically unfolds across its full trajectory of formation, violation, and repair, particularly in comparison to human–human collaboration and within physically co-present settings. This thesis investigates how a teammate's identity (human versus robot) shapes the development of interpersonal trust over time.

A controlled laboratory study was conducted in which participants collaborated with either a human confederate or an anthropomorphic robot teammate on a cooperative building task requiring high interdependence. Trust was assessed across three phases: initial collaboration (trust formation), a competence-based mistake (trust violation), and a subsequent repair attempt involving an apology, explanation, and promise (trust recovery). Trust was measured using questionnaires capturing trusting beliefs and trusting intentions. Data were analyzed using a Bayesian multilevel modeling approach to account for repeated measures and individual differences.

The results show that participants initially reported lower trust toward the robot than toward the human teammate. Contrary to expectations based on the perfect automation schema, trust declined more sharply following a mistake by the human than by the robot. During the recovery phase, trust rebounded in both conditions. Trust toward the robot recovered to its initial level, while trust toward the human did not fully return to baseline.

Analyses across trust dimensions further revealed that benevolence perceptions toward the robot improved over time, narrowing the initial gap between human and robot teammates. Competence perceptions showed similar violation and recovery patterns across conditions. In contrast, trusting intentions showed a more uneven pattern: although willingness to rely on the robot seemingly returned to its own baseline during recovery, the human–robot difference widened again at the final measurement point (\(t_3\)), suggesting that reliance remained more sensitive to teammate identity even as other trust dimensions converged.

Overall, this study demonstrates that trust toward human and robot teammates follows similar formation, violation, and recovery phases, but differs in how changes are anchored to initial expectations and distributed across trust dimensions. Specifically, participants began with lower trust in the robot, yet a human teammate’s mistake produced a sharper drop and less complete return to baseline than a comparable robot mistake. While trust toward the robot increased relative to its own baseline, particularly through benevolence, willingness to rely remained more differentiated by teammate identity. These findings show that aggregated trust scores can mask dimension-specific dynamics and that recovery in trust beliefs does not necessarily translate into equivalent recovery in trusting intentions. Practically, this suggests that designing for effective human–robot teamwork requires addressing not only how robots regain positive evaluations after errors, but also how to support users’ willingness to rely on them in interdependent tasks.

Files

Msc_Thesis_Ching_Guo.pdf
(pdf | 20.2 Mb)
License info not available