- Psathas, Steffano (author): A machine learning classifier can be tricked using adversarial attacks, attacks that alter images slightly to make the target model misclassify the image. To create adversarial attacks on black-box classifiers, a substitute model can be created using model stealing. The research question this report addresses is the topic of using model... (bachelor thesis, 2022)
- van Veen, Floris (author): Model extraction attacks are attacks which generate a substitute model of a targeted victim neural network. It is possible to perform these attacks without a preexisting dataset, but doing so requires a very high number of queries to be sent to the victim model, often in the realm of several million queries. The more difficult the... (bachelor thesis, 2022)