Explainable AI via SHAP

Bachelor Thesis (2024)
Author(s)

M.E. Pietersma (TU Delft - Electrical Engineering, Mathematics and Computer Science)

Contributor(s)

G. Jongbloed – Mentor (TU Delft - Statistics)

N.V. Budko – Graduation committee member (TU Delft - Numerical Analysis)

Faculty
Electrical Engineering, Mathematics and Computer Science
More Info
Publication Year
2024
Language
English
Graduation Date
20-08-2024
Awarding Institution
Delft University of Technology
Programme
Applied Mathematics
Reuse Rights

Other than for strictly personal use, it is not permitted to download, forward or distribute the text or part of it, without the consent of the author(s) and/or copyright holder(s), unless the work is under an open content license such as Creative Commons.

Abstract

As machine learning algorithms become increasingly complex, the need for transparent and interpretable models grows more critical. Shapley values, a local explanation method derived from cooperative game theory, describe feature attribution in machine learning models: the contribution of each feature to a single prediction. In this thesis the Shapley value formula is detailed, along with its properties. Three different ways to approximate Shapley values are then introduced: a Monte Carlo method based on the value function, a Monte Carlo method based on feature permutations, and Kernel SHAP. Implementation of these methods on different datasets reveals that, while all three produce nearly identical Shapley values, Kernel SHAP is significantly faster. This research contributes to the field by demonstrating the advantages of integrating Shapley values into machine learning workflows to enhance model transparency and trustworthiness.
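For illustration, below is a minimal sketch of the permutation-based Monte Carlo estimator mentioned in the abstract; the function and argument names (`shapley_permutation_mc`, `predict`, `background`) are hypothetical and not taken from the thesis.

```python
import numpy as np

def shapley_permutation_mc(predict, x, background, n_samples=500, rng=None):
    """Estimate Shapley values for one instance x by Monte Carlo sampling
    over random feature permutations.

    predict    : callable mapping an (n, d) array to (n,) model outputs
    x          : (d,) instance to explain
    background : (m, d) reference data used to "remove" features
    """
    rng = np.random.default_rng(rng)
    d = x.shape[0]
    phi = np.zeros(d)

    for _ in range(n_samples):
        perm = rng.permutation(d)                       # random feature ordering
        z = background[rng.integers(len(background))]   # random reference row
        for pos, j in enumerate(perm):
            # Features up to and including j in the ordering come from x,
            # the remaining features come from the reference row z.
            with_j = z.copy()
            with_j[perm[:pos + 1]] = x[perm[:pos + 1]]
            without_j = with_j.copy()
            without_j[j] = z[j]
            # Marginal contribution of feature j for this permutation/reference.
            phi[j] += predict(with_j[None, :])[0] - predict(without_j[None, :])[0]

    return phi / n_samples
```

The Kernel SHAP approach compared in the thesis is available, for example, through the `shap` Python package (`shap.KernelExplainer`), which fits a weighted linear model over sampled feature coalitions instead of averaging over permutations.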
