Interpretable Deep Visual Place Recognition

Master Thesis (2018)
Author(s)

X. Shi (TU Delft - Electrical Engineering, Mathematics and Computer Science)

Contributor(s)

Jan van Gemert – Mentor

Seyran Khademi – Mentor

Faculty
Electrical Engineering, Mathematics and Computer Science
Copyright
© 2018 Xiangwei Shi
Publication Year
2018
Language
English
Graduation Date
31-08-2018
Awarding Institution
Delft University of Technology
Reuse Rights

Other than for strictly personal use, it is not permitted to download, forward or distribute the text or part of it, without the consent of the author(s) and/or copyright holder(s), unless the work is under an open content license such as Creative Commons.

Abstract

We propose a framework for interpreting deep convolutional models for visual place classification. Given a deep place classification model, our method produces visual explanations and saliency maps that reveal how the model understands images. To evaluate interpretability, the t-SNE algorithm is used to map and visualize these latent visual explanations. Moreover, we use pre-trained semantic segmentation networks to label all objects appearing in the visual explanations produced by our discriminative models. This work has two main goals: first, to investigate whether different models produce consistent visual explanations; second, to investigate, in an unsupervised manner, whether the visual explanations are meaningful and interpretable. We find that varying the CNN architecture leads to variations in the discriminative visual explanations, but that these visual explanations remain interpretable.
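
As a concrete illustration of the pipeline described above, the following is a minimal Python sketch: a Grad-CAM-style saliency map computed from a pre-trained CNN, followed by a t-SNE embedding of the pooled convolutional features. The abstract does not fix these implementation details, so the backbone (ResNet-18), the Grad-CAM formulation, and the function names here are illustrative assumptions, not the author's code.

# Minimal sketch of the pipeline in the abstract: Grad-CAM-style saliency
# from a pre-trained CNN, then t-SNE on pooled convolutional features.
# ResNet-18 and the Grad-CAM weighting are illustrative assumptions.
import torch
import torch.nn.functional as F
from torchvision import models
from sklearn.manifold import TSNE

model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT).eval()

activations, gradients = {}, {}

def fwd_hook(module, inp, out):
    activations["feat"] = out        # last conv feature map, shape (N, C, H, W)

def bwd_hook(module, grad_in, grad_out):
    gradients["feat"] = grad_out[0]  # gradient of the class score w.r.t. the feature map

model.layer4.register_forward_hook(fwd_hook)
model.layer4.register_full_backward_hook(bwd_hook)

def saliency_map(image_batch):
    """Grad-CAM heatmap for the top-1 predicted class of each image."""
    logits = model(image_batch)
    top_class = logits.argmax(dim=1)
    score = logits.gather(1, top_class.unsqueeze(1)).sum()
    model.zero_grad()
    score.backward()
    weights = gradients["feat"].mean(dim=(2, 3), keepdim=True)  # GAP over spatial dims
    cam = F.relu((weights * activations["feat"]).sum(dim=1))    # (N, H, W)
    cam = cam / (cam.amax(dim=(1, 2), keepdim=True) + 1e-8)     # normalise to [0, 1]
    return cam

# t-SNE embedding of pooled descriptors, to inspect how the model clusters
# the visual explanations (the random batch stands in for real place images).
with torch.no_grad():
    images = torch.randn(64, 3, 224, 224)
    _ = model(images)
    feats = activations["feat"].mean(dim=(2, 3)).numpy()        # (N, C) descriptors
embedding = TSNE(n_components=2, perplexity=10).fit_transform(feats)

Plotting the 2-D embedding, coloured by predicted place class, is one way to check in an unsupervised manner whether the explanations form meaningful clusters.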
