Vigilanza Lorenzo, Pietro (author)
Machine Learning (ML) models are vulnerable to adversarial samples: human-imperceptible changes to regular input that elicit wrong output from a given model. Many adversarial attacks assume the attacker has access to the underlying model or to the data used to train it. Instead, in this paper we focus on the effects the data...
Bachelor thesis, 2022