Searched for: subject:"Adversarial\+attacks"
(1 - 7 of 7)
document
feng, Clio (author)
Recently, gaze estimation has improved substantially through deep learning models, but research has shown that neural networks are weak against adversarial attacks. Although numerous studies have been done on adversarial training, there are little to no studies on adversarial training in gaze estimation. Therefore, the objective...
bachelor thesis 2023
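The adversarial training these abstracts refer to can be sketched minimally with FGSM (Fast Gradient Sign Method) on a toy logistic-regression model. This is an illustrative sketch, not taken from any of the theses; every function name and parameter here is assumed for the example.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def grad_wrt_input(w, b, x, y):
    # Gradient of the binary cross-entropy loss with respect to the input x.
    p = sigmoid(w @ x + b)
    return (p - y) * w

def fgsm(w, b, x, y, eps=0.1):
    # FGSM: perturb the input a small step in the direction that
    # increases the loss.
    return x + eps * np.sign(grad_wrt_input(w, b, x, y))

def train(X, Y, eps=0.1, lr=0.5, steps=200):
    # Adversarial training: at every step, replace each clean example
    # with its FGSM-perturbed version before the gradient update.
    rng = np.random.default_rng(0)
    w = rng.normal(size=X.shape[1]) * 0.01
    b = 0.0
    for _ in range(steps):
        for x, y in zip(X, Y):
            x_adv = fgsm(w, b, x, y, eps)   # train on the perturbed input
            p = sigmoid(w @ x_adv + b)
            w -= lr * (p - y) * x_adv
            b -= lr * (p - y)
    return w, b
```

Training on the perturbed inputs rather than the clean ones is the core idea: the model is pushed to keep its decision correct inside an eps-ball around each example.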
document
Afriat, Eliott (author)
We seek to examine the vulnerability of BERT-based fact-checking. We implement a gradient-based adversarial attack strategy, based on HotFlip, that swaps individual tokens of the input. We use this on a pre-trained ExPred model for fact-checking. We find that gradient-based adversarial attacks are ineffective against ExPred. Uncertainties about...
bachelor thesis 2023
document
Psathas, Steffano (author)
A machine learning classifier can be tricked using adversarial attacks: attacks that alter images slightly to make the target model misclassify them. To create adversarial attacks on black-box classifiers, a substitute model can be created using model stealing. The research question this report addresses is the topic of using model...
bachelor thesis 2022
document
Vigilanza Lorenzo, Pietro (author)
Machine Learning (ML) models are vulnerable to adversarial samples: human-imperceptible changes to regular input that elicit wrong output from a given model. Plenty of adversarial attacks assume an attacker has access to the underlying model or to the data used to train it. Instead, in this paper we focus on the effects the data...
bachelor thesis 2022
document
Dwivedi, Kanish (author)
Adversarial training and its variants have become the standard defense against adversarial attacks: perturbed inputs designed to fool the model. Boosting techniques such as AdaBoost have been successful for binary classification problems; however, there is limited research on applying them to provide adversarial robustness. In this...
bachelor thesis 2022
document
van Veen, Floris (author)
Model extraction attacks generate a substitute model of a targeted victim neural network. It is possible to perform these attacks without a preexisting dataset, but doing so requires a very high number of queries to be sent to the victim model, often in the realm of several million queries. The more difficult the...
bachelor thesis 2022
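The query-based model extraction described in several of these abstracts can be sketched as: send synthetic inputs to the black-box victim, harvest its labels, and fit a substitute on the query/label pairs. The sketch below uses a toy 1-nearest-neighbour substitute; every name and parameter is illustrative, not taken from the theses.

```python
import numpy as np

def extract_substitute(victim_predict, n_queries, dim, seed=0):
    # Query the black-box victim on random synthetic inputs.
    rng = np.random.default_rng(seed)
    X = rng.uniform(-1, 1, size=(n_queries, dim))
    y = np.array([victim_predict(x) for x in X])  # victim's labels

    def substitute_predict(x):
        # Toy substitute: 1-nearest-neighbour lookup over the
        # harvested query/label pairs.
        return y[np.argmin(np.linalg.norm(X - x, axis=1))]

    return substitute_predict
```

The query budget (`n_queries`) is exactly the cost the abstract highlights: a faithful substitute of a real network typically needs orders of magnitude more queries than this toy setup.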
document
Mertzanis, Nick (author)
Convolutional Neural Networks are particularly vulnerable to attacks that manipulate their input data, usually called adversarial attacks. In this paper, a method of filtering images using the Fast Fourier Transform is explored, along with its potential to be used as a defense mechanism against such attacks. The main contribution that differs...
bachelor thesis 2021
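The FFT-based filtering this last entry explores can be sketched as a low-pass filter: transform the image to the frequency domain, zero out high frequencies (where small pixel-level perturbations tend to concentrate), and transform back. This is a generic sketch under that assumption, not the thesis's actual method; the cutoff radius is an illustrative parameter.

```python
import numpy as np

def fft_low_pass(image, cutoff):
    # Move to the frequency domain and centre the zero frequency.
    freq = np.fft.fftshift(np.fft.fft2(image))
    h, w = image.shape
    yy, xx = np.ogrid[:h, :w]
    # Keep only frequencies within `cutoff` of the centre; everything
    # farther out (the high-frequency content) is zeroed.
    mask = (yy - h // 2) ** 2 + (xx - w // 2) ** 2 <= cutoff ** 2
    filtered = freq * mask
    # Back to the spatial domain; the imaginary part is numerical noise.
    return np.real(np.fft.ifft2(np.fft.ifftshift(filtered)))
```

Used as a defense, the filter would be applied to every input before it reaches the classifier, trading some fine image detail for the removal of high-frequency perturbations.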