Evaluating Cognitive and Affective Intelligent Agent Explanations in a Long-Term Health-Support Application for Children with Type 1 Diabetes

Conference Paper (2019)
Authors

Frank Kaptein (TU Delft - Interactive Intelligence)

Joost Broekens (Universiteit Leiden)

Koen V. Hindriks (Vrije Universiteit Amsterdam)

Mark Neerincx (TU Delft - Interactive Intelligence, TNO)

Research Group
Interactive Intelligence
To reference this document use:
https://doi.org/10.1109/ACII.2019.8925526
Publication Year
2019
Language
English
Pages (from-to)
304-310
ISBN (electronic)
9781728138886

Abstract

Explanation of actions is important for the transparency of, and trust in, the decisions of smart systems. Literature suggests that emotions and emotion words, in addition to beliefs and goals, are used in human explanations of behaviour. Furthermore, research in e-health support systems and human-robot interaction stresses the need for studying long-term interaction with users. However, state-of-the-art explainable artificial intelligence for intelligent agents focuses mainly on explaining an agent's behaviour based on its underlying beliefs and goals in short-term experiments. In this paper, we report on a long-term experiment in which we tested the effect of cognitive explanations, affective explanations, and the absence of explanations on children's motivation to use an e-health support system. Children (aged 6-14) with type 1 diabetes mellitus interacted with a virtual robot as part of the e-health system over a period of 2.5-3 months, alternating between the three conditions. Agent behaviours that were explained to the children included why 1) the agent asks a certain quiz question; 2) the agent provides a specific tip (a short instruction) about diabetes; or 3) the agent provides a task suggestion, e.g., to play a quiz or to watch a video about diabetes. Motivation was measured by counting how often children followed the agent's suggestions, how often they continued to play the quiz or asked for an additional tip, and how often they requested an explanation from the system. Surprisingly, children followed task suggestions more often when no explanation was given, while no other explanation effects appeared. To our knowledge, this is the first long-term study to report empirical evidence of an agent explanation effect, challenging future studies to uncover the underlying mechanism.

Metadata only record. There are no files for this record.