Natural Language Counterfactual Explanations in Financial Text Classification

Conference Paper (2025)
Author(s)

Karol Dobiczek (Student, TU Delft)

P. Altmeyer (TU Delft - Multimedia Computing)

C.C.S. Liem (TU Delft - Multimedia Computing)

Research Group
Multimedia Computing
Publication Year
2025
Language
English
Pages (from-to)
958–972
ISBN (electronic)
979-8-89176-261-9
Reuse Rights

Other than for strictly personal use, it is not permitted to download, forward or distribute the text or part of it, without the consent of the author(s) and/or copyright holder(s), unless the work is under an open content license such as Creative Commons.

Abstract

The use of large language model (LLM) classifiers in finance and other high-stakes domains calls for a high level of trustworthiness and explainability. We focus on counterfactual explanations (CE), a form of explainable AI that explains a model’s output by proposing an alternative to the original input that changes the classification. We use three types of CE generators for LLM classifiers and assess the quality of their explanations on a recent dataset consisting of central bank communications. We compare the generators using a selection of quantitative and qualitative metrics. Our findings suggest that non-expert and expert evaluators prefer CE methods that apply minimal changes; however, the methods we analyze might not handle the domain-specific vocabulary well enough to generate plausible explanations. We discuss shortcomings in the choice of evaluation metrics in the literature on text CE generators and propose refined definitions of the fluency and plausibility qualitative metrics.
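To illustrate the core idea of a counterfactual explanation for text classification, the sketch below shows a minimal toy example: find the smallest edit to the input that flips the classifier's label. The classifier, cue-word lists, and substitution table are illustrative assumptions for this sketch only, not the generators evaluated in the paper.

```python
# Minimal sketch of a textual counterfactual explanation (CE).
# The toy classifier and substitution list below are assumptions
# for illustration; they are not the paper's methods.

HAWKISH = {"raise", "tighten", "inflation", "hike"}
DOVISH = {"cut", "ease", "stimulus", "lower"}

def classify(text: str) -> str:
    """Toy central-bank-tone classifier: counts cue words."""
    tokens = text.lower().split()
    score = sum(t in HAWKISH for t in tokens) - sum(t in DOVISH for t in tokens)
    return "hawkish" if score > 0 else "dovish"

# Hand-picked antonym substitutions a CE generator might propose.
SUBSTITUTIONS = {"raise": "cut", "tighten": "ease", "hike": "lower"}

def counterfactual(text: str) -> str | None:
    """Return a minimally edited text whose predicted label differs."""
    original_label = classify(text)
    tokens = text.split()
    for i, tok in enumerate(tokens):
        swap = SUBSTITUTIONS.get(tok.lower())
        if swap is None:
            continue
        candidate = " ".join(tokens[:i] + [swap] + tokens[i + 1:])
        if classify(candidate) != original_label:
            return candidate  # a one-word edit flips the classification
    return None

text = "The committee will raise rates to fight inflation"
print(classify(text))        # hawkish
print(counterfactual(text))  # The committee will cut rates to fight inflation
```

In this toy setting, a single word swap ("raise" to "cut") changes the predicted label, mirroring the minimal-change property that, per the abstract, both expert and non-expert evaluators preferred.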

Files

2025.gem-1.75.pdf
(pdf | 0.414 MB)
- Embargo expired on 20-02-2025
License info not available