Balayn, A.M.A., Rikalo, N., Lofi, C., Yang, J., Bozzon, A.
Deep learning models for image classification suffer from dangerous issues that are often discovered only after deployment. The process of identifying the bugs that cause these issues remains limited and understudied. In particular, explainability methods are often presented as obvious tools for bug identification. Yet, current practice lacks an understanding...
Conference paper, 2022
Balayn, A.M.A., Soilis, P., Lofi, C., Yang, J., Bozzon, A.
Global interpretability is a vital requirement for image classification applications. Existing interpretability methods mainly explain model behavior by identifying salient image patches, which require manual effort from users to make sense of, and which also do not typically support model validation with questions that investigate multiple...
Conference paper, 2021