LLM-Based Evaluation Methodology of Explanation Strategies

Conference Paper (2026)
Author(s)

Ege Soyarar (Özyeğin University)

Reyhan Aydogan (TU Delft - Interactive Intelligence, Özyeğin University)

Berk Buzcu (University of Applied Sciences and Arts Western Switzerland)

Davide Calvaresi (University of Applied Sciences and Arts Western Switzerland)

Research Group
Interactive Intelligence
DOI (related publication)
https://doi.org/10.1007/978-3-032-01399-6_6
Publication Year
2026
Language
English
Bibliographical Note
Green Open Access added to TU Delft Institutional Repository as part of the Taverne amendment. More information about this copyright law amendment can be found at https://www.openaccess.nl. Otherwise, as indicated in the copyright section, the publisher is the copyright holder of this work and the author uses Dutch legislation to make this work public.
Pages (from-to)
85-103
Publisher
Springer
ISBN (print)
978-3-032-01398-9
Reuse Rights

Other than for strictly personal use, it is not permitted to download, forward or distribute the text or part of it, without the consent of the author(s) and/or copyright holder(s), unless the work is under an open content license such as Creative Commons.

Abstract

As data privacy regulations such as the EU AI Act and the EU Data Act become increasingly stringent, processing real user data for AI models like movie recommendation systems has grown more challenging. Moreover, the human-centric data collection and evaluation of Explainable AI (XAI) systems are often costly and time-consuming, making such evaluations hard to sustain. Hence, this study adopts the Synthetic Behavior Generation (SBG) approach, leveraging large language models (LLMs) to evaluate AI explanations while ensuring compliance with regulations and providing a cost-effective alternative to collecting human feedback. To assess the quality of these explanations, we utilize three different LLMs, each fed synthetically generated user behaviors so that they evaluate the AI system's explanations as if they were real users. The evaluation focuses on key criteria such as convincingness, clarity, accuracy, and impact on decision-making, facilitating a thorough assessment of explanation effectiveness. The results indicate that LLMs can deliver structured and consistent evaluations based on the provided synthetic user behavior.
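
The full protocol is given in the paper itself; purely as an illustration of the LLM-as-judge pattern the abstract describes, the sketch below scores one explanation against the four criteria from the perspective of a synthetic user. Everything in it, including the function names, the prompt wording, the JSON reply format, and the 1-5 scale, is a hypothetical assumption, not the authors' implementation.

```python
# Hypothetical sketch of an LLM-as-judge rating an explanation as a
# synthetic user would; names, prompt, and scale are illustrative only.
import json
from typing import Callable

CRITERIA = ["convincingness", "clarity", "accuracy", "decision_impact"]

def judge_explanation(llm: Callable[[str], str],
                      user_profile: str,
                      recommendation: str,
                      explanation: str) -> dict:
    """Ask one LLM judge, role-playing the synthetic user, to rate an
    explanation on each criterion from 1 (worst) to 5 (best)."""
    prompt = (
        "You are the following movie-recommendation user:\n"
        f"{user_profile}\n\n"
        f"Recommended movie: {recommendation}\n"
        f"Explanation given: {explanation}\n\n"
        "Acting as this user, rate the explanation from 1 to 5 on each of: "
        f"{', '.join(CRITERIA)}. Reply with JSON only, e.g. "
        '{"convincingness": 3, "clarity": 4, "accuracy": 5, "decision_impact": 2}.'
    )
    scores = json.loads(llm(prompt))
    return {c: int(scores[c]) for c in CRITERIA}

# Demo with a stub standing in for a real model; the study would call
# each of its three LLMs here and compare their score profiles.
if __name__ == "__main__":
    stub = lambda _p: ('{"convincingness": 4, "clarity": 5, '
                       '"accuracy": 4, "decision_impact": 3}')
    profile = "Synthetic user: watches sci-fi weekly, dislikes horror."
    print(judge_explanation(
        stub, profile, "Arrival (2016)",
        "Recommended because you rated Interstellar 5/5."))
```

Averaging such per-criterion scores across judges and across many synthetic profiles is one way to obtain the kind of structured, consistent evaluations the abstract reports.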

Files

978-3-032-01399-6_6.pdf
(pdf | 1.37 MB)
Taverne

File under embargo until 13-04-2026