Assisting agent’s effect on the trustworthiness of their human teammate


Abstract

Collaborative AI (CAI) is a fast-growing field of study. Cooperation in teams composed of humans and artificial intelligence needs to be principled and founded on reciprocal trust. Modelling the trustworthiness of humans is difficult because of the ambiguity of its definition and the effects of teamwork dynamics. This research defines and measures human trustworthiness in the context of human-AI collaboration and tests what role the AI agent's action of offering help plays in it. The experiment is conducted in the MATRX framework, in which a human agent and an AI agent collaborate to search for and rescue victims in the environment. The ABI trust model is used to determine the sub-components that define trustworthiness: ability, benevolence and integrity. Trustworthiness is measured in two ways: through objective measures, which represent the AI agent's estimate of the human's trustworthiness, and through subjective measures, which represent the human's estimate of their own trustworthiness. The results show that help offered by the AI agent improves the human agent's ability and benevolence on the objective metrics, but not their integrity. The subjective results show no statistically significant change. The research concludes that the trustworthiness perceived by the AI agent is indeed improved, but finds no evidence of the same from the perspective of the human agent.
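The abstract itself contains no code, but the objective measurement it describes can be illustrated with a minimal sketch. The snippet below assumes the AI agent maintains normalized scores in [0, 1] for each ABI sub-component and, purely for illustration, combines them with a weighted average; the class, field, and weight names are hypothetical and are not taken from the MATRX experiment, which reports the sub-components separately.

```python
from dataclasses import dataclass

@dataclass
class ABITrustworthiness:
    """Hypothetical container for the AI agent's objective estimate of its
    human teammate's trustworthiness, split into the ABI sub-components."""
    ability: float      # task competence, normalized to [0, 1]
    benevolence: float  # willingness to act in the team's interest, [0, 1]
    integrity: float    # consistency between stated intent and actions, [0, 1]

    def aggregate(self, weights=(1 / 3, 1 / 3, 1 / 3)) -> float:
        """Combine the sub-components into a single score via a weighted
        average (an illustrative choice; the study measures the components
        individually rather than as one aggregate)."""
        w_a, w_b, w_i = weights
        total = w_a + w_b + w_i
        return (w_a * self.ability
                + w_b * self.benevolence
                + w_i * self.integrity) / total

# Example: hypothetical scores observed after the AI agent offered help.
estimate = ABITrustworthiness(ability=0.8, benevolence=0.7, integrity=0.5)
print(f"Aggregate trustworthiness: {estimate.aggregate():.2f}")
```

Keeping the three sub-components as separate fields, rather than storing only an aggregate, mirrors the paper's finding that help affected ability and benevolence but not integrity.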