Searched for: contributor:"Roos, S. (mentor)"
(1 - 1 of 1)
Vigilanza Lorenzo, Pietro (author)
Machine Learning (ML) models are vulnerable to adversarial samples: human-imperceptible changes to regular inputs that elicit wrong outputs from a given model. Many adversarial attacks assume that the attacker has access to the underlying model or to the data used to train it. Instead, in this paper we focus on the effects the data...
Bachelor thesis, 2022
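As an aside, a minimal sketch of one classic white-box attack, the Fast Gradient Sign Method (Goodfellow et al., 2015), may help illustrate the "human-imperceptible changes" the abstract refers to. This is not the thesis's own method (the thesis focuses on data-side effects rather than model access); the sketch assumes PyTorch, and `model`, `x`, `y`, and `eps` are hypothetical placeholders.

```python
import torch
import torch.nn.functional as F

def fgsm_attack(model, x, y, eps=0.03):
    # Hypothetical illustration: nudge input x by a small step eps in the
    # direction that increases the classification loss.
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)
    loss.backward()
    x_adv = x + eps * x.grad.sign()        # step along the sign of the input gradient
    return x_adv.clamp(0.0, 1.0).detach()  # keep pixels in a valid [0, 1] range
```

With a small eps, the perturbed image looks unchanged to a human yet can flip the model's prediction, which is the vulnerability the abstract describes.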