Towards Trust in Human-AI Teams

Abstract

Human-AI teams require trust to operate efficiently and to solve tasks such as search and rescue. Trustworthiness is commonly measured using the ABI model: Ability, Benevolence, and Integrity. This paper examines the effect a conflicting agent has on human trustworthiness. The hypothesis we test is: "human trustworthiness will decrease when paired with a conflicting AI". We conduct an experiment with a control group paired with a baseline agent and an experimental group paired with the conflicting agent. Using the ABI concepts, we model human trustworthiness across both groups with in-game observations (objective) and a questionnaire (subjective). Comparing the results of both groups, we find that the conflicting agent does not decrease objective trustworthiness; the questionnaires, however, show that subjective human benevolence and integrity are negatively affected when participants are paired with the conflicting agent.
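
As an illustrative sketch only, the between-group comparison described above could be carried out along the following lines. The variable names, placeholder ratings, and choice of a Mann-Whitney U test are assumptions for illustration, not the authors' data or analysis code.

```python
# Hypothetical sketch of comparing subjective ABI questionnaire scores
# between the control and experimental groups. The ratings below are
# placeholder values, not data from the study.
import numpy as np
from scipy.stats import mannwhitneyu

# Subjective benevolence ratings (e.g., on a 1-7 Likert scale) per participant.
control_group = np.array([5.2, 6.0, 5.5, 6.3, 5.8, 6.1])       # baseline agent
experimental_group = np.array([4.1, 3.8, 4.5, 4.0, 3.6, 4.3])  # conflicting agent

# One-sided test of the hypothesis that subjective trustworthiness is lower
# for participants paired with the conflicting agent.
stat, p_value = mannwhitneyu(experimental_group, control_group, alternative="less")
print(f"U = {stat:.1f}, p = {p_value:.4f}")
```

A non-parametric test such as the one sketched here is a common choice for Likert-style questionnaire data, though the study itself may have used a different procedure.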