A Survey On Convolutional Neural Network Explainability Methods

Student Report (2019)
Authors

N.H. Bouman (TU Delft - Electrical Engineering, Mathematics and Computer Science)

V.A. Jaggi (TU Delft - Electrical Engineering, Mathematics and Computer Science)

M. Khattat (TU Delft - Electrical Engineering, Mathematics and Computer Science)

N. Salami (TU Delft - Electrical Engineering, Mathematics and Computer Science)

V.G.A. Wernet (TU Delft - Electrical Engineering, Mathematics and Computer Science)

W.R. Zonneveld (TU Delft - Electrical Engineering, Mathematics and Computer Science)

Supervisors

DMJ Tax (TU Delft - Pattern Recognition and Bioinformatics)

Faculty
Electrical Engineering, Mathematics and Computer Science
Copyright
© 2019 Nikki Bouman, Vanisha Jaggi, Mostafa Khattat, Nima Salami, Victor Wernet, Wouter Zonneveld
Publication Year
2019
Language
English
Graduation Date
30-10-2019
Awarding Institution
Delft University of Technology
Project
Bachelor Seminar
Programme
Computer Science and Engineering
Faculty
Electrical Engineering, Mathematics and Computer Science
Reuse Rights

Other than for strictly personal use, it is not permitted to download, forward or distribute the text or part of it, without the consent of the author(s) and/or copyright holder(s), unless the work is under an open content license such as Creative Commons.

Abstract

Artificial Intelligence (AI) is increasingly affecting people’s lives and is even employed in fields where human lives depend on its decisions. However, these algorithms lack transparency: it is unclear how they arrive at their outcomes. If, for instance, an AI system is trained to classify images from provided examples (e.g. an image of a cow in a meadow), it can focus on the wrong part of the image: instead of the foreground (the cow), it may rely on the background (the meadow) and consequently produce a wrong prediction (e.g. a horse instead of a cow). Revealing such behaviour requires an explanation. For this reason, a variety of methods, called explainability methods, have been created to explain the reasoning behind these algorithms. In this paper, six local explainability methods are discussed and compared. These methods were chosen because they are among the most prominently used approaches for explaining Convolutional Neural Networks (CNNs). By comparing methods with analogous characteristics, this paper shows which methods outperform others in terms of performance, and their advantages and limitations are discussed. The comparison shows that Local Interpretable Model-agnostic Explanations, Layer-wise Relevance Propagation and Gradient-weighted Class Activation Mapping perform better than Sensitivity Analysis, Deep Taylor Decomposition and Deconvolutional Network, respectively.
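To illustrate what such a local explanation looks like in practice, the sketch below computes a Gradient-weighted Class Activation Mapping (Grad-CAM) heatmap for a single image, highlighting which regions the network relied on (e.g. cow versus meadow). This is a minimal sketch, not the report's code; it assumes a standard pretrained ResNet-18 from torchvision and a placeholder input tensor.

```python
# Minimal Grad-CAM sketch (illustrative only, not the report's code).
import torch
import torch.nn.functional as F
from torchvision import models

model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT).eval()
activations, gradients = {}, {}

def save_activation(module, inputs, output):
    # Keep the feature maps of the last convolutional block and register a
    # tensor hook that stores their gradient during the backward pass.
    activations["maps"] = output
    output.register_hook(lambda grad: gradients.update(maps=grad))

model.layer4.register_forward_hook(save_activation)

image = torch.randn(1, 3, 224, 224)          # placeholder for a preprocessed image
scores = model(image)
target = scores.argmax(dim=1).item()         # explain the predicted class
scores[0, target].backward()

# Grad-CAM: weight each feature map by its spatially averaged gradient,
# sum the weighted maps, apply ReLU, and upsample to the input resolution.
weights = gradients["maps"].mean(dim=(2, 3), keepdim=True)
cam = F.relu((weights * activations["maps"].detach()).sum(dim=1, keepdim=True))
cam = F.interpolate(cam, size=image.shape[2:], mode="bilinear", align_corners=False)
cam = (cam - cam.min()) / (cam.max() - cam.min() + 1e-8)   # normalize to [0, 1]
# `cam` is a heatmap over the input; high values mark regions driving the prediction.
```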
