Search results (1 - 5 of 5)
Sharifi Noorian, S. (author), Qiu, S. (author), Sayin, Burcu (author), Balayn, A.M.A. (author), Gadiraju, Ujwal (author), Yang, J. (author), Bozzon, A. (author)
High-quality data plays a vital role in developing reliable image classification models. Despite this, what makes an image difficult to classify remains an unstudied topic. This paper provides a first-of-its-kind, model-agnostic characterization of image atypicality based on human understanding. We consider the setting of image classification...
conference paper 2023
Balayn, A.M.A. (author), Rikalo, N. (author), Yang, J. (author), Bozzon, A. (author)
Handling failures in computer vision systems that rely on deep learning models remains a challenge. While an increasing number of methods for bug identification and correction have been proposed, little is known about how practitioners actually search for failures in these models. We perform an empirical study to understand the goals and needs of...
conference paper 2023
Balayn, A.M.A. (author), Rikalo, N. (author), Lofi, C. (author), Yang, J. (author), Bozzon, A. (author)
Deep learning models for image classification suffer from dangerous issues that are often discovered only after deployment. The process of identifying the bugs that cause these issues remains limited and understudied. In particular, explainability methods are often presented as obvious tools for bug identification. Yet current practice lacks an understanding...
conference paper 2022
Balayn, A.M.A. (author), Yang, J. (author), Szlávik, Zoltán (author), Bozzon, A. (author)
The automatic detection of conflictual language (harmful, aggressive, abusive, and offensive language) is essential for providing a healthy conversation environment on the Web. To design and develop detection systems capable of achieving satisfactory performance, a thorough understanding of the nature and properties of the targeted type...
journal article 2021
Balayn, A.M.A. (author), Soilis, P. (author), Lofi, C. (author), Yang, J. (author), Bozzon, A. (author)
Global interpretability is a vital requirement for image classification applications. Existing interpretability methods mainly explain model behavior by identifying salient image patches, which require manual effort from users to make sense of and do not typically support model validation with questions that investigate multiple...
conference paper 2021