Search results (1 - 4 of 4)
Bartlett, A.J. (author), Liem, C.C.S. (author), Panichella, A. (author)
Deep learning (DL) models are known to be highly accurate, yet vulnerable to adversarial examples. While earlier research focused on generating adversarial examples using white-box strategies, later research turned to black-box strategies, since models are often not accessible to external attackers. Prior studies showed that black-box approaches...
conference paper 2023
Panichella, A. (author), Liem, C.C.S. (author)
Mutation testing is a well-established technique for assessing a test suite’s quality by injecting artificial faults into production code. In recent years, mutation testing has been extended to machine learning (ML) systems, and deep learning (DL) in particular; researchers have proposed approaches, tools, and statistically sound heuristics to...
conference paper 2021
Yildiz, B. (author), Hung, H.S. (author), Krijthe, J.H. (author), Liem, C.C.S. (author), Loog, M. (author), Migut, M.A. (author), Oliehoek, F.A. (author), Panichella, A. (author), Pawełczak, Przemysław (author), Picek, S. (author), de Weerdt, M.M. (author), van Gemert, J.C. (author)
We present ReproducedPapers.org: an open online repository for teaching and structuring machine learning reproducibility. We evaluate doing a reproduction project among students and the added value of an online reproduction repository among AI researchers. We used anonymous self-assessment surveys and obtained 144 responses. Results suggest...
conference paper 2021
Liem, C.C.S. (author), Panichella, A. (author)
The rise in popularity of machine learning (ML), and deep learning in particular, has led to optimism about the achievements of artificial intelligence, as well as to concerns about possible weaknesses and vulnerabilities of ML pipelines. Within the software engineering community, this has prompted a considerable body of work on ML testing...
conference paper 2020