How Does Predictive Uncertainty Quantification Correlate with the Plausibility of Counterfactual Explanations

Bachelor Thesis (2024)
Author(s)

D. Nikolov (TU Delft - Electrical Engineering, Mathematics and Computer Science)

Contributor(s)

Patrick Altmeyer – Mentor (TU Delft - Multimedia Computing)

C.C.S. Liem – Mentor (TU Delft - Multimedia Computing)

B. Dudzik – Graduation committee member (TU Delft - Pattern Recognition and Bioinformatics)

Faculty
Electrical Engineering, Mathematics and Computer Science
Publication Year
2024
Language
English
Graduation Date
27-06-2024
Awarding Institution
Delft University of Technology
Project
CSE3000 Research Project
Programme
Computer Science and Engineering
Reuse Rights

Other than for strictly personal use, it is not permitted to download, forward or distribute the text or part of it, without the consent of the author(s) and/or copyright holder(s), unless the work is under an open content license such as Creative Commons.

Abstract

Counterfactual explanations aim to explain the decisions of opaque machine learning models and can be applied to algorithmic recourse, which is concerned with helping individuals in the real world overturn undesirable algorithmic decisions. However, not all generated counterfactuals are equally faithful to the model, nor equally plausible. Predictive uncertainty quantification, on the other hand, measures the degree of certainty a model has in its predictions. Previous work has shown that predictive uncertainty can be utilised to generate more plausible counterfactual explanations. This work investigates this further by using multiple models that innately support uncertainty quantification and comparing the counterfactual explanations they produce to those produced by their ordinary counterparts. Predictive uncertainty tends to enhance the plausibility of the counterfactuals on visual datasets. Furthermore, our findings indicate that predictive uncertainty correlates positively with plausibility. This correlation has important implications for both research and real-world applications, as it suggests that integrating uncertainty quantification into model development can improve the quality and trustworthiness of algorithmic explanations.
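To illustrate the general idea of uncertainty-aware counterfactual search (not the thesis's specific method), the sketch below runs a gradient search on the input of a toy linear classifier. It minimises a cross-entropy term that flips the prediction plus a penalty on predictive entropy, used here as a simple uncertainty proxy, so the counterfactual lands in a region where the model is confident. All names (`w`, `b`, `lam`, `counterfactual`) and the weighting are illustrative assumptions.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Hypothetical pre-trained linear classifier (weights assumed, not from the thesis).
w = np.array([2.0, -1.0])
b = 0.5

def predict_proba(x):
    """Probability of class 1 under the toy logistic model."""
    return sigmoid(x @ w + b)

def counterfactual(x0, target=1, lam=0.1, lr=0.05, steps=500):
    """Gradient search for a counterfactual: flip the prediction to `target`
    while penalising predictive entropy so the model stays confident."""
    x = x0.copy()
    eps = 1e-12
    for _ in range(steps):
        p = predict_proba(x)
        # Gradient of cross-entropy w.r.t. x, pushing p toward the target class.
        grad_ce = (p - target) * w
        # Gradient of predictive entropy H(p) w.r.t. x via the chain rule:
        # dH/dp = log((1 - p) / p), dp/dx = p (1 - p) w.
        grad_H = np.log((1 - p + eps) / (p + eps)) * p * (1 - p) * w
        x -= lr * (grad_ce + lam * grad_H)
    return x

x0 = np.array([-1.0, 1.0])   # factual input, classified as class 0
x_cf = counterfactual(x0)    # counterfactual, confidently classified as class 1
```

With a small weight `lam`, the cross-entropy term dominates until the decision boundary is crossed; afterwards the entropy penalty keeps pushing the point deeper into the target class, which is the intuition behind using uncertainty to improve plausibility.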
