- Afriat, Eliott (author). Bachelor thesis, 2023.
  We seek to examine the vulnerability of BERT-based fact-checking. We implement a gradient-based adversarial attack strategy, based on HotFlip, that swaps individual tokens of the input. We apply it to a pre-trained ExPred model for fact-checking. We find that gradient-based adversarial attacks are ineffective against ExPred. Uncertainties about...
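The gradient-based token-swapping idea behind a HotFlip-style attack can be sketched in a few lines. This is a toy illustration: random numbers stand in for a real model's embeddings and input gradients, and all shapes and names are assumptions, not the thesis's ExPred setup.

```python
import numpy as np

rng = np.random.default_rng(0)

vocab_size, emb_dim, seq_len = 50, 8, 6
E = rng.normal(size=(vocab_size, emb_dim))          # embedding matrix (toy)
tokens = rng.integers(0, vocab_size, size=seq_len)  # current input token ids
# Gradient of the loss w.r.t. each input token's embedding. Stand-in values;
# a real attack would backpropagate through the fact-checking model.
grad = rng.normal(size=(seq_len, emb_dim))

# First-order estimate of the loss change for swapping the token at
# position p to candidate id j:  (E[j] - E[tokens[p]]) . grad[p]
scores = grad @ E.T                                  # shape (seq_len, vocab_size)
scores -= np.take_along_axis(scores, tokens[:, None], axis=1)
scores[np.arange(seq_len), tokens] = -np.inf         # forbid no-op swaps

# Greedily pick the single swap with the largest estimated loss increase.
pos, new_tok = np.unravel_index(np.argmax(scores), scores.shape)
print(f"flip position {pos}: token {tokens[pos]} -> {new_tok}")
```

A full attack would repeat this greedy step, re-deriving gradients after each flip, until the model's verdict changes or a swap budget is exhausted.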
- van Thiel, Erwin (author). Master thesis, 2022.
  Abstract: Multi-label classification is an important branch of classification problems, as in many real-world classification scenarios an object can belong to multiple classes simultaneously. Deep-learning-based classifiers perform well at image classification, but their predictions have been shown to be unstable when subject to small input...
- Psathas, Steffano (author). Bachelor thesis, 2022.
  A machine learning classifier can be tricked using adversarial attacks: attacks that alter images slightly to make the target model misclassify them. To create adversarial attacks on black-box classifiers, a substitute model can be created using model stealing. The research question this report addresses is the topic of using model...
- Vigilanza Lorenzo, Pietro (author). Bachelor thesis, 2022.
  Machine Learning (ML) models are vulnerable to adversarial samples: human-imperceptible changes to regular input that elicit wrong output from a given model. Many adversarial attacks assume an attacker has access to the underlying model or to the data used to train it. Instead, in this paper we focus on the effects the data...
- Dwivedi, Kanish (author). Bachelor thesis, 2022.
  Adversarial training and its variants have become the standard defense against adversarial attacks: perturbed inputs designed to fool the model. Boosting techniques such as AdaBoost have been successful for binary classification problems; however, there is limited research on applying them to provide adversarial robustness. In this...
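The adversarial-training loop referenced above can be sketched with a linear model and FGSM-style perturbations: at each step, perturb the inputs in the direction of the input gradient's sign, then update the model on the perturbed batch. This is a generic illustration on toy data, not the thesis's boosting-based method; all hyperparameters are assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

n, d, eps, lr = 200, 5, 0.1, 0.5
X = rng.normal(size=(n, d))
true_w = rng.normal(size=d)
y = (X @ true_w > 0).astype(float)       # toy linearly separable labels

w = np.zeros(d)
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

for _ in range(300):
    # Gradient of the logistic loss w.r.t. the *inputs* gives the FGSM direction.
    p = sigmoid(X @ w)
    grad_x = (p - y)[:, None] * w[None, :]
    X_adv = X + eps * np.sign(grad_x)    # worst-case L-inf perturbation
    # The "adversarial training" step: update the model on the perturbed batch.
    p_adv = sigmoid(X_adv @ w)
    grad_w = X_adv.T @ (p_adv - y) / n
    w -= lr * grad_w

acc = ((sigmoid(X @ w) > 0.5) == y.astype(bool)).mean()
print(f"clean accuracy after adversarial training: {acc:.2f}")
```

The inner perturbation/outer update structure is the same min-max pattern that deep adversarial training follows, just with the inner maximization solved in one signed-gradient step.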
- van Veen, Floris (author). Bachelor thesis, 2022.
  Model extraction attacks are attacks which generate a substitute model of a targeted victim neural network. It is possible to perform these attacks without a preexisting dataset, but doing so requires a very high number of queries to the victim model, often in the realm of several million. The more difficult the...
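The query-based extraction setting described above can be sketched end to end: probe the victim with synthetic inputs, record its answers, and fit a substitute on the stolen labels. The victim here is a toy linear classifier (an assumption for illustration); real attacks target neural networks and, as the abstract notes, may need millions of queries.

```python
import numpy as np

rng = np.random.default_rng(2)
d = 4
victim_w = rng.normal(size=d)
victim = lambda X: (X @ victim_w > 0).astype(float)   # black-box oracle (toy)

# Attacker: send synthetic queries, record the victim's answers.
queries = rng.normal(size=(500, d))
labels = victim(queries)

# Fit a substitute by plain logistic regression on the stolen labels.
w = np.zeros(d)
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))
for _ in range(500):
    p = sigmoid(queries @ w)
    w -= 0.5 * queries.T @ (p - labels) / len(queries)

# Agreement between substitute and victim on fresh inputs.
test_X = rng.normal(size=(1000, d))
agreement = ((sigmoid(test_X @ w) > 0.5) == victim(test_X).astype(bool)).mean()
print(f"substitute agrees with victim on {agreement:.1%} of fresh queries")
```

Agreement on fresh inputs, rather than accuracy on any ground truth, is the natural success metric here: the attacker wants a functional copy of the victim.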
- Wang, Yumeng (author), Lyu, Lijun (author), Anand, A. (author). Conference paper, 2022.
  Contextual ranking models based on BERT are now well established for a wide range of passage and document ranking tasks. However, the robustness of BERT-based ranking models under adversarial inputs is under-explored. In this paper, we argue that BERT-rankers are not immune to adversarial attacks targeting retrieved documents given a query....
- Wang, Z. (author), Loog, M. (author). Conference paper, 2022.
  We illustrate the detrimental effects, such as overconfident decisions, that exponential behavior can have in methods like classical LDA and logistic regression. We then show how polynomiality can remedy the situation. Among other things, this purposefully leads to random-level performance in the tails, away from the bulk of the training data. A...
- Apruzzese, Giovanni (author), Conti, M. (author), Yuan, Ying (author). Conference paper, 2022.
  Existing literature on adversarial Machine Learning (ML) focuses either on showing attacks that break every ML model, or on defenses that withstand most attacks. Unfortunately, little consideration is given to the actual cost of the attack or the defense. Moreover, adversarial samples are often crafted in the "feature-space", making the...
- Mertzanis, Nick (author). Bachelor thesis, 2021.
  Convolutional Neural Networks are particularly vulnerable to attacks that manipulate their input data, usually called adversarial attacks. In this paper, a method of filtering images using the Fast Fourier Transform is explored, along with its potential to serve as a defense mechanism against such attacks. The main contribution that differs...
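The general idea of FFT-based filtering as an input-preprocessing defense can be sketched as a low-pass mask in the frequency domain: keep frequencies near the DC term, zero out the rest, and transform back. The image, the noise standing in for an adversarial perturbation, and the cutoff radius below are all toy assumptions, not the paper's tuned setup.

```python
import numpy as np

rng = np.random.default_rng(3)

img = rng.normal(size=(32, 32))
noisy = img + 0.3 * rng.normal(size=img.shape)  # stand-in for a perturbed input

def fft_lowpass(x, keep=8):
    """Zero all frequency components farther than `keep` bins from the DC term."""
    F = np.fft.fftshift(np.fft.fft2(x))          # centre the spectrum
    h, w = x.shape
    yy, xx = np.ogrid[:h, :w]
    mask = (yy - h // 2) ** 2 + (xx - w // 2) ** 2 <= keep ** 2
    return np.real(np.fft.ifft2(np.fft.ifftshift(F * mask)))

filtered = fft_lowpass(noisy)
# White noise spreads evenly over frequencies, so a low-pass mask removes
# most of its energy while preserving the retained band of the image.
err_before = np.mean((noisy - img) ** 2)
err_after = np.mean((filtered - fft_lowpass(img)) ** 2)
print(f"MSE vs clean: {err_before:.3f} raw, {err_after:.3f} after filtering both")
```

The defensive bet is that adversarial perturbations, like the noise here, carry much of their energy in high frequencies the classifier can do without; whether that holds for a given attack is exactly what such a paper has to evaluate.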
- Lelekas, Ioannis (author). Master thesis, 2020.
  Biological vision adopts a coarse-to-fine information-processing pathway, from initial visual detection and binding of salient features of a visual scene to the enhanced, preferential processing given to relevant stimuli. In contrast, CNNs employ fine-to-coarse processing, moving from local, edge-detecting filters to more global ones...