As robots and virtual agents are increasingly envisioned as long-term companions rather than mere tools, it becomes essential to ensure that human–robot relationships are grounded in appropriate forms of trust. This study investigates how the cognitive and affective dimensions of trust develop differently over time in social human–robot interaction. We conducted a 2 (social attitude: social, baseline) × 3 (time: t1, t2, t3) mixed-design user study using a novel, card-based conversational task designed to encourage trust formation. Results show that while cognitive trust remained stable over time, affective trust increased gradually across repeated interactions. Moreover, social cues enhanced both cognitive and affective trust. These findings provide empirical support for the theoretical distinction between cognitive and affective trust, offering new evidence that affective trust develops more slowly, consistent with theories of interpersonal trust.