Searched for: +
(1 - 8 of 8)
Sharifi Noorian, S. (author), Qiu, S. (author), Sayin, Burcu (author), Balayn, A.M.A. (author), Gadiraju, Ujwal (author), Yang, J. (author), Bozzon, A. (author)
High-quality data plays a vital role in developing reliable image classification models. Despite that, what makes an image difficult to classify remains an unstudied topic. This paper provides a first-of-its-kind, model-agnostic characterization of image atypicality based on human understanding. We consider the setting of image classification...
conference paper 2023
Balayn, A.M.A. (author), Yurrita Semperena, M. (author), Yang, J. (author), Gadiraju, Ujwal (author)
Fairness toolkits are developed to support machine learning (ML) practitioners in using algorithmic fairness metrics and mitigation methods. Past studies have investigated practical challenges for toolkit usage, which are crucial to understanding how to support practitioners. However, the extent to which fairness toolkits impact practitioners’...
conference paper 2023
Balayn, A.M.A. (author), Rikalo, N. (author), Yang, J. (author), Bozzon, A. (author)
Handling failures in computer vision systems that rely on deep learning models remains a challenge. While an increasing number of methods for bug identification and correction are proposed, little is known about how practitioners actually search for failures in these models. We perform an empirical study to understand the goals and needs of...
conference paper 2023
He, G. (author), Balayn, A.M.A. (author), Buijsman, S.N.R. (author), Yang, J. (author), Gadiraju, Ujwal (author)
With recent advances in explainable artificial intelligence (XAI), researchers have started to pay attention to concept-level explanations, which explain model predictions with a high level of abstraction. However, such explanations may be difficult to digest for laypeople due to the potential knowledge gap and the concomitant cognitive load....
conference paper 2022
Balayn, A.M.A. (author), He, G. (author), Hu, Andrea (author), Yang, J. (author), Gadiraju, Ujwal (author)
Access to commonsense knowledge is receiving renewed interest for developing neuro-symbolic AI systems, or debugging deep learning models. Little is currently understood about the types of knowledge that can be gathered using existing knowledge elicitation methods. Moreover, these methods fall short of meeting the evolving requirements of...
conference paper 2022
Balayn, A.M.A. (author), Rikalo, N. (author), Lofi, C. (author), Yang, J. (author), Bozzon, A. (author)
Deep learning models for image classification suffer from dangerous issues often discovered after deployment. The process of identifying the bugs that cause these issues remains limited and understudied. In particular, explainability methods are often presented as obvious tools for bug identification. Yet, the current practice lacks an understanding...
conference paper 2022
Balayn, A.M.A. (author), Yang, J. (author), Szlávik, Zoltán (author), Bozzon, A. (author)
The automatic detection of conflictual languages (harmful, aggressive, abusive, and offensive languages) is essential to provide a healthy conversation environment on the Web. To design and develop detection systems that are capable of achieving satisfactory performance, a thorough understanding of the nature and properties of the targeted type...
journal article 2021
Balayn, A.M.A. (author), Soilis, P. (author), Lofi, C. (author), Yang, J. (author), Bozzon, A. (author)
Global interpretability is a vital requirement for image classification applications. Existing interpretability methods mainly explain model behavior by identifying salient image patches, which require manual effort from users to interpret, and also do not typically support model validation with questions that investigate multiple...
conference paper 2021