LEAFAGE

Example-based and Feature importance-based Explanations for Black-box ML models


Abstract

Explainable Artificial Intelligence (XAI) is an emerging research field that addresses the lack of transparency of AI systems by providing human-understandable explanations of the underlying Machine Learning models. This work presents a new explanation extraction method called LEAFAGE. Explanations are provided both in terms of feature importance and of similar classification examples; the latter is a well-known strategy for problem solving and justification in the social sciences. LEAFAGE exploits the fact that the reasoning behind a single decision/prediction for a single data point is generally simpler to understand than the complete model; it produces explanations by generating simpler yet locally accurate approximations of the original model. LEAFAGE performs better overall than the current state of the art in terms of fidelity of the model approximation, in particular when Machine Learning models with non-linear decision boundaries are analysed. LEAFAGE was also evaluated in terms of its usefulness to the user, an aspect still largely overlooked in the scientific literature. The results show interesting and partly counter-intuitive findings, such as that providing no explanation is sometimes better than providing certain kinds of explanation.
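The sketch below illustrates the general idea of a local surrogate explanation as described in the abstract: probe the black-box model around a single instance, fit a simple interpretable model there, and read off feature importances, then retrieve similar training examples with the same prediction. This is not LEAFAGE's actual implementation; the dataset, the Gaussian perturbation scale, the proximity kernel, and all variable names are illustrative assumptions.

```python
# Minimal sketch of a local-surrogate explanation (feature importance +
# similar examples), in the spirit of the method described in the abstract.
# NOT the authors' code; all modelling choices below are assumptions.
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import Ridge

X, y = load_breast_cancer(return_X_y=True)
black_box = RandomForestClassifier(random_state=0).fit(X, y)

x0 = X[0]                        # instance whose prediction we explain
rng = np.random.default_rng(0)
scale = X.std(axis=0)

# Probe the model's behaviour in a local region around x0.
Z = x0 + rng.normal(scale=scale * 0.3, size=(500, X.shape[1]))
pz = black_box.predict_proba(Z)[:, 1]

# Weight perturbed samples by proximity to x0 and fit a simple,
# locally accurate approximation of the original model.
w = np.exp(-np.linalg.norm((Z - x0) / scale, axis=1) ** 2)
surrogate = Ridge(alpha=1.0).fit(Z, pz, sample_weight=w)

# Feature-importance part of the explanation: surrogate coefficients.
top = np.argsort(np.abs(surrogate.coef_))[::-1][:5]
print("most influential features:", top, surrogate.coef_[top])

# Example-based part: nearest training instances with the same prediction.
pred0 = black_box.predict(x0.reshape(1, -1))[0]
idx = np.where(black_box.predict(X) == pred0)[0]
d = np.linalg.norm((X[idx] - x0) / scale, axis=1)
print("most similar examples (training indices):", idx[np.argsort(d)[:3]])
```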

Files