Balayn, A.M.A., Rikalo, N., Lofi, C., Yang, J., Bozzon, A.
Deep learning models for image classification suffer from dangerous issues that are often discovered only after deployment. The process of identifying the bugs that cause these issues remains limited and understudied. In particular, explainability methods are often presented as obvious tools for bug identification. Yet, current practice lacks an understanding...
Conference paper, 2022
Balayn, A.M.A., Soilis, P., Lofi, C., Yang, J., Bozzon, A.
Global interpretability is a vital requirement for image classification applications. Existing interpretability methods mainly explain model behavior by identifying salient image patches, which require manual effort from users to make sense of, and which do not typically support model validation with questions that investigate multiple...
Conference paper, 2021
Mesbah, S., Yang, J., Sips, R.H.J., Valle Torre, M., Lofi, C., Bozzon, A., Houben, G.J.P.M.
Social media provides a timely yet challenging data source for adverse drug reaction (ADR) detection. Existing dictionary-based, semi-supervised learning approaches are intrinsically limited by the coverage and maintainability of lay health vocabularies. In this paper, we introduce a data augmentation approach that leverages variational...
Conference paper, 2019