In multi-member human-agent teams, communication and shared mental models within the team are essential for good teamwork and team performance. In some ways these mediating processes are even more important than in human-only teams, because today's artificial agents lack many of the innate social behaviours that humans naturally possess. Research into human-agent teams has allowed designers of such teams to anticipate complex interactions such as trust violation and repair scenarios. In this study, a human-agent-agent team undertakes a search and rescue mission with the human in a leading role, one agent free-roaming and the other under the human's direct control. Approximately one-third of the way through the mission, the free-roaming autonomous agent initiates actions independently of human approval, thereby undermining operator trust. As a trust repair strategy, the agent employs a promise to do better and a novel authority change: it lowers its level of automation and presents the option of restricting cooperation with the other agent.
We conducted the experiment with thirty participants divided into two groups with differing trust repair strategies (promise only vs. promise with authority change) and measured trust perception at three time points.
Results show no significant difference in trust between the two trust repair strategies. We did find a positive correlation between task load and trust recovery for the authority-change strategy. Thematic analysis further revealed that the shared mental model and communication richness were dissonant with what participants expected, which is in line with literature on the complexity of triadic teams.