Explaining Robot Behaviour

Beliefs, Desires, and Emotions in Explanations of Robot Action

Doctoral Thesis (2020)
Author

Frank Kaptein (TU Delft - Interactive Intelligence)

Research Group
Interactive Intelligence
Copyright
© 2020 F.C.A. Kaptein
Publication Year
2020
Language
English
ISBN (print)
978-94-6423-040-6
Reuse Rights

Other than for strictly personal use, it is not permitted to download, forward, or distribute the text or any part of it without the consent of the author(s) and/or copyright holder(s), unless the work is under an open content licence such as Creative Commons.

Abstract

Social humanoid robots are complex intelligent systems that in the near future will operate in domains such as healthcare and education. Transparency about what a robot intends during interaction is important: it helps users trust the robot and increases their motivation for, e.g., behaviour change (health) or learning (education). Trust and motivation for treatment are of particular importance in these consequential domains, i.e., domains where the consequences of misuse of the system are significant. For example, rejecting treatment can have a negative impact on the user's health. Transparency can be enhanced by having the robot explain its behaviour to its users (i.e., by providing self-explanations). Self-explanations help the user assess to what extent he or she should trust the decision or action of the system.
