How can Explainability Methods be Used to Support Bug Identification in Computer Vision Models?

Conference Paper (2022)
Author(s)

A.M.A. Balayn (TU Delft - Web Information Systems)

N. Rikalo (TU Delft - Human-Centred Artificial Intelligence)

C. Lofi (TU Delft - Web Information Systems)

J. Yang (TU Delft - Web Information Systems)

A. Bozzon (TU Delft - Human-Centred Artificial Intelligence, TU Delft - Web Information Systems)

Research Group
Web Information Systems
Copyright
© 2022 A.M.A. Balayn, N. Rikalo, C. Lofi, J. Yang, A. Bozzon
DOI
https://doi.org/10.1145/3491102.3517474
Publication Year
2022
Language
English
ISBN (electronic)
978-1-4503-9157-3
Reuse Rights

Other than for strictly personal use, it is not permitted to download, forward, or distribute the text or any part of it without the consent of the author(s) and/or copyright holder(s), unless the work is under an open content license such as Creative Commons.

Abstract

Deep learning models for image classification suffer from dangerous issues that are often discovered only after deployment. The process of identifying the bugs that cause these issues remains limited and understudied. In particular, explainability methods are often presented as obvious tools for bug identification, yet current practice lacks an understanding of which kinds of explanations best support the different steps of the bug identification process, and of how practitioners could interact with those explanations. Through a formative study and an iterative co-creation process, we build an interactive design probe providing various potentially relevant explainability functionalities, integrated into interfaces that allow for flexible workflows. Using the probe, we conduct 18 user studies with a diverse set of machine learning practitioners. Two-thirds of the practitioners engage in successful bug identification. They use multiple types of explanations, e.g., visual and textual ones, through non-standardized sequences of interactions including queries and exploration. Our results highlight the need for interactive, guiding interfaces with diverse explanations, and shed light on future research directions.