Psychological, economic, and ethical factors in human feedback for a chatbot-based smoking cessation intervention

Journal Article (2025)
Author(s)

N. Albers (TU Delft - Interactive Intelligence)

FS Melo (Instituto Superior Técnico (IST), Universidade de Lisboa)

Mark Neerincx (TU Delft - Interactive Intelligence)

Olya Kudina (TU Delft - Ethics & Philosophy of Technology)

Willem-Paul Brinkman (TU Delft - Interactive Intelligence)

Research Group
Interactive Intelligence
DOI related publication
https://doi.org/10.1038/s41746-025-01701-3
Publication Year
2025
Language
English
Issue number
1
Volume number
8
Reuse Rights

Other than for strictly personal use, it is not permitted to download, forward or distribute the text or part of it, without the consent of the author(s) and/or copyright holder(s), unless the work is under an open content license such as Creative Commons.

Abstract

Integrating human support with chatbot-based behavior change interventions raises three challenges: (1) attuning the support to an individual’s state (e.g., motivation) for enhanced engagement, (2) limiting the use of the human resources involved for enhanced efficiency, and (3) optimizing outcomes on ethical aspects (e.g., fairness). We therefore conducted a study in which 679 smokers and vapers had a 20% chance of receiving human feedback between five chatbot sessions. We find that having received feedback increases retention and effort spent on preparatory activities. However, analyzing a reinforcement learning (RL) model fitted to the data shows that there are also states where not providing feedback is better. Even this “standard” benefit-maximizing RL model is value-laden: it prioritizes not only people who would benefit most, but also those who are already doing well and want feedback. We show how four other ethical principles can be incorporated to favor other smoker subgroups, yet interdependencies between these principles exist.
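
To make the abstract's core idea concrete, the sketch below shows one way a tabular RL policy could decide, per user state, whether to allocate scarce human feedback, and how an ethics-inspired reward term can shift that policy toward a particular subgroup. This is not the authors' implementation: the states, transition probabilities, rewards, feedback cost, and the `weight_low_state` bonus are all hypothetical placeholders, not values estimated from the study data.

```python
# Minimal sketch (assumed setup, not the published model): a tabular MDP
# where the action is whether to send human feedback before the next session.
import numpy as np

N_STATES = 3          # e.g., low / medium / high engagement (assumed discretization)
ACTIONS = (0, 1)      # 0 = no human feedback, 1 = human feedback
GAMMA = 0.85          # discount factor (assumed)

# Hypothetical transition model P[a][s, s'] and reward proxy R[a][s]
P = {
    0: np.array([[0.7, 0.2, 0.1],
                 [0.3, 0.5, 0.2],
                 [0.1, 0.3, 0.6]]),
    1: np.array([[0.4, 0.4, 0.2],
                 [0.2, 0.5, 0.3],
                 [0.1, 0.2, 0.7]]),
}
R = {
    0: np.array([0.0, 0.3, 0.6]),   # e.g., effort spent on preparatory activities
    1: np.array([0.2, 0.5, 0.7]),
}
FEEDBACK_COST = 0.15                # penalizes use of scarce human resources

def greedy_policy(weight_low_state=0.0, tol=1e-8):
    """Value iteration; weight_low_state adds an ethics-inspired bonus for
    giving feedback to users in the lowest-engagement state (illustrative)."""
    V = np.zeros(N_STATES)
    while True:
        Q = np.empty((N_STATES, len(ACTIONS)))
        for a in ACTIONS:
            r = R[a] - (FEEDBACK_COST if a == 1 else 0.0)
            if a == 1:
                # Reweight the reward to favor a chosen subgroup
                r = r + weight_low_state * np.array([1.0, 0.0, 0.0])
            Q[:, a] = r + GAMMA * P[a] @ V
        V_new = Q.max(axis=1)
        if np.max(np.abs(V_new - V)) < tol:
            return Q.argmax(axis=1)
        V = V_new

print("benefit-maximizing policy:", greedy_policy())
print("policy with equity weight:", greedy_policy(weight_low_state=0.3))
```

Under these made-up numbers, the benefit-maximizing policy may already withhold feedback in some states (mirroring the finding that not providing feedback is sometimes better), while increasing the equity weight can flip the decision for the low-engagement state, illustrating how adding an ethical principle favors a different subgroup.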