The effects of an agent asking for help on a human's trustworthiness
Abstract
AI systems can complete tasks with greater precision and speed than humans, which has led to an increase in their use. These systems are often paired with humans in order to take advantage of the unique abilities of both the AI and the human. However, for this cooperation to be as efficient as possible, there must be mutual trust between humans and AIs. While much research has addressed human trust in AI, little work has examined the trust that an artificial agent has toward its human partner. Given that a human must appear trustworthy to an artificial agent in order for that agent to trust them, and that asking for and offering help are important parts of a collaboration, the following research question was formulated: How does an artificial agent asking a human for advice or help affect that human's trustworthiness? To answer this question, an experiment was conducted using an urban search and rescue game built with the MATRX software. In this game, participants had to collaborate with a robot partner to find and rescue eight victims and deliver them to the correct drop zone. The participants were divided into a control group, which worked alongside a basic rescue robot, and an experimental group, which was partnered with a help-seeker robot. The help-seeker robot differed from the basic robot in its ability to ask the participant for advice, such as which room it should search for victims to rescue. The experiment yielded no significant results indicating that the help-seeker agent's behaviour had a positive or negative effect on human trustworthiness.