What do You Mean? Interpreting Image Classification with Crowdsourced Concept Extraction and Analysis

Conference Paper (2021)
Author(s)

Agathe Balayn (TU Delft - Web Information Systems)

Panagiotis Soilis

Christoph Lofi (TU Delft - Web Information Systems)

Jie Yang (TU Delft - Web Information Systems)

Alessandro Bozzon (TU Delft - Human-Centred Artificial Intelligence)

Research Group
Web Information Systems
DOI
https://doi.org/10.1145/3442381.3450069
Publication Year
2021
Language
English
Pages (from-to)
1937-1948
ISBN (print)
978-1-4503-8312-7
ISBN (electronic)
978-1-4503-8312-7
Reuse Rights

Other than for strictly personal use, it is not permitted to download, forward or distribute the text or part of it, without the consent of the author(s) and/or copyright holder(s), unless the work is under an open content license such as Creative Commons.

Abstract

Global interpretability is a vital requirement for image classification applications. Existing interpretability methods mainly explain model behavior by identifying salient image patches, which require manual effort from users to make sense of, and which typically do not support model validation with questions that investigate multiple visual concepts. In this paper, we introduce a scalable human-in-the-loop approach for global interpretability. Salient image areas identified by local interpretability methods are annotated with semantic concepts, which are then aggregated into a tabular representation of images to facilitate automatic statistical analysis of model behavior. We show that this approach answers interpretability needs for both model validation and exploration, and provides semantically more diverse, informative, and relevant explanations, while still allowing for scalable and cost-efficient execution.
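The core idea described above (concept annotations on salient patches, aggregated into a table, then analyzed statistically) can be illustrated with a minimal sketch. All data, image names, and concept labels below are hypothetical; this is not the paper's implementation, only an illustration of the aggregation step:

```python
from collections import Counter

# Hypothetical crowdsourced annotations: for each image, the semantic
# concepts that workers assigned to its salient patches.
annotations = {
    "img1": ["wheel", "headlight", "road"],
    "img2": ["wheel", "road"],
    "img3": ["wing", "sky"],
}
# Hypothetical model predictions for the same images.
predictions = {"img1": "car", "img2": "car", "img3": "plane"}

# Aggregate into a tabular (image x concept) presence representation.
concepts = sorted({c for cs in annotations.values() for c in cs})
table = {
    img: {c: int(c in cs) for c in concepts}
    for img, cs in annotations.items()
}

# A simple statistical question over the table: how often does each
# concept appear among images the model assigns to a given class?
class_counts = {}
for img, cls in predictions.items():
    class_counts.setdefault(cls, Counter()).update(
        c for c, present in table[img].items() if present
    )

print(class_counts["car"]["wheel"])  # "wheel" occurs in both "car" images
```

A table like this supports the kind of multi-concept validation queries the abstract mentions, since concept co-occurrence with predicted classes reduces to counting over rows.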