Explaining Robot Behaviour

Beliefs, Desires, and Emotions in Explanations of Robot Action

Abstract

Social humanoid robots are complex intelligent systems that in the near future will operate in domains such as healthcare and education. Transparency about what a robot intends during interaction is important: it helps users trust the robot and increases their motivation for, e.g., behaviour change (health) or learning (education). Trust and motivation for treatment are of particular importance in these consequential domains, i.e., domains where the consequences of misuse of the system are significant. For example, rejecting treatment can have a negative impact on a user's health. Transparency can be enhanced by having the robot explain its behaviour to its users (i.e., by providing self-explanations). Self-explanations help users assess to what extent they should trust the decisions or actions of the system.