How do different explanation presentation strategies of feature and data attribution techniques affect non-expert understanding?

Explaining Deep Learning models for Fact-Checking

Bachelor Thesis (2023)
Author(s)

S. Singh (TU Delft - Electrical Engineering, Mathematics and Computer Science)

Contributor(s)

Avishek Anand – Mentor (TU Delft - Web Information Systems)

L. Lyu – Mentor (TU Delft - Web Information Systems)

L. Corti – Mentor (TU Delft - Web Information Systems)

Marco Loog – Graduation committee member (TU Delft - Pattern Recognition and Bioinformatics)

Faculty
Electrical Engineering, Mathematics and Computer Science
Publication Year
2023
Language
English
Graduation Date
03-02-2023
Awarding Institution
Delft University of Technology
Project
CSE3000 Research Project
Programme
Computer Science and Engineering
Reuse Rights

Other than for strictly personal use, it is not permitted to download, forward or distribute the text or part of it, without the consent of the author(s) and/or copyright holder(s), unless the work is under an open content license such as Creative Commons.

Abstract

The goal of this paper is to examine how different presentation strategies of Explainable Artificial Intelligence (XAI) explanation methods for textual data affect non-expert understanding in the context of fact-checking. The importance of understanding the decision of an Artificial Intelligence (AI) in human-AI interaction and the need for effective explanation methods to improve trust in AI models are highlighted. The study focuses on three explanation methods: the interpretable-by-design model ExPred, the feature attribution technique LIME, and the instance attribution method k-NN. Two presentation strategies were compared for each method, and participants were presented with a set of claims and asked to indicate their understanding of, and level of agreement with, the AI's classification. The main hypothesis is that participants will appreciate all available context and detail, as long as it is presented in a structured way, and will find visual representations of data easier to understand than textual ones. Results from the study indicate that participants prefer explanations that are simple and structured, and that visual presentations are less effective, especially when a user interacts with this type of data for the first time. Additionally, it was found that better formatting leads to a better-calibrated understanding of the explanation. The results of this study provide valuable insight into how best to present XAI explanations to non-experts in order to enhance their understanding and reduce the deployment risk associated with Natural Language Processing (NLP) models for automated fact-checking. The study's code, data, and Figma templates are publicly available for reproducibility.
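For readers unfamiliar with the feature attribution output referenced in the abstract, the sketch below illustrates the kind of token-weight explanation that LIME produces for a claim, i.e. the raw material whose presentation strategies the study compares. It is not part of the thesis artifacts; it assumes the lime package and a toy scikit-learn claim classifier trained on a few hypothetical examples.

```python
# Minimal sketch (not the thesis's actual code): a LIME feature attribution
# for a toy claim classifier, showing the (token, weight) pairs that a
# presentation strategy would then have to render for non-experts.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
from lime.lime_text import LimeTextExplainer

# Hypothetical toy training data standing in for a fact-checking corpus.
claims = [
    "The Eiffel Tower is located in Paris",
    "The Eiffel Tower is located in Berlin",
    "Water boils at 100 degrees Celsius at sea level",
    "Water boils at 50 degrees Celsius at sea level",
]
labels = [1, 0, 1, 0]  # 1 = SUPPORTS, 0 = REFUTES

pipeline = make_pipeline(TfidfVectorizer(), LogisticRegression())
pipeline.fit(claims, labels)

explainer = LimeTextExplainer(class_names=["REFUTES", "SUPPORTS"])
explanation = explainer.explain_instance(
    "The Eiffel Tower is located in Paris",
    pipeline.predict_proba,
    num_features=5,
)
# Each pair is (token, weight); positive weights push toward SUPPORTS.
print(explanation.as_list())
```

Whether such pairs are shown as highlighted words in the claim, as a bar chart, or as a plain ranked list is exactly the kind of presentation choice the study evaluates with non-expert participants.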

Files

BEPShivani.pdf
(pdf | 1.64 Mb)
License info not available