Human trustworthiness when collaborating with a friendly agent

Final paper


Abstract

As technology advances, automated systems become more autonomous, which leads to greater interdependence between machines and humans. Much research has been done on trust between humans and on human trust in machines. An interesting question that remains is how the behavior of an agent influences human trustworthiness in a human-agent collaborative setting. The research presented in this paper contributes to the understanding of this area. It investigates a specific behavioral trait using the following hypothesis: friendly behavior of an agent improves human trustworthiness. Here, trustworthiness is broken up into the constructs ability, benevolence, and integrity.

An experiment was conducted using a collaborative Search and Rescue game. The following behaviors of the participants were measured:
- Ability: speed and effectiveness;
- Benevolence: communication, willingness to help, agreeableness to advice, responsiveness;
- Integrity: truthfulness.
Furthermore, a Likert scale was used to measure the participants' own perception of their trustworthiness. The experiment was conducted with 20 participants in the control group, where the agent spoke in a neutral manner, and 20 in the experimental group, where the agent instilled empathy, stimulated collaboration, encouraged the participants, and was affectionate.

The research showed a significant improvement in the experimental group only for communication and willingness to help. This gives some indication that a friendly agent only slightly improves the trustworthiness of a human. However, the research has some limitations that might also explain the lack of significant results for the remaining measures. Firstly, it is unclear to what extent the measures truly captured the constructs of trustworthiness. Secondly, to create a friendly agent, theories from organizational and social psychology were used, which are mostly focused on human-human relationships rather than human-agent relationships. Finally, some confounding variables may have had an impact, such as lag in the game and participants not properly reading the agent's messages.