Computational persuasion technologies, explainability, and ethical-legal implications

A systematic literature review

Review (2025)
Author(s)

Davide Calvaresi (University of Applied Sciences and Arts Western Switzerland)

Rachele Carli (Université du Luxembourg)

Simona Tiribelli (Università di Macerata)

Berk Buzcu (University of Applied Sciences and Arts Western Switzerland, Özyeğin University)

Reyhan Aydogan (Özyeğin University, TU Delft - Interactive Intelligence)

Andrea Di Vincenzo (University of Applied Sciences and Arts Western Switzerland)

Yazan Mualla (CIAD-LAB)

Michael Schumacher (University of Applied Sciences and Arts Western Switzerland)

Jean Paul Calbimonte (University of Applied Sciences and Arts Western Switzerland, Sense Innovation and Research Center)

DOI
https://doi.org/10.1016/j.chbr.2024.100577 (final published version)
Publication Year
2025
Language
English
Volume number
17
Article number
100577
Reuse Rights

Other than for strictly personal use, it is not permitted to download, forward or distribute the text or part of it, without the consent of the author(s) and/or copyright holder(s), unless the work is under an open content license such as Creative Commons.

Abstract

This paper conducts a systematic literature review (SLR) to evaluate the effectiveness of computational persuasion technology (CPT) in the eHealth domain. Over the past fifteen years, CPT has been applied in a variety of scenarios, from promoting healthy diets to supporting chronic disease management. As intelligent systems and Web-based applications have proliferated, the ethical and legal nuances of these technologies have become increasingly significant. The review follows a structured methodology, assessing 92 primary studies through sixteen research questions covering demographics, application scenarios, user requirements, objectives, functionalities, technologies, advantages, limitations, proposed solutions, ethical and legal implications, and the role of explainable AI (XAI). The findings indicate that while CPT holds promise for inducing behavioral change, many prototypes remain untested at scale (60% of the surveyed studies were developed only at a conceptual level), and long-term effectiveness is still uncertain (36% report attaining their goals, but none focuses on long-term assessment). The study highlights the need for more comparative analyses of persuasion models and for tailored approaches that meet diverse user needs. Ethical and legal concerns, such as patient consent, data privacy, and the potential for user manipulation, are under-explored and require deeper investigation. The paper recommends a bottom-up regulatory approach to create more effective and flexible ethical and legal guidelines for CPT applications.