Searched for: subject:"Adversarial attacks"
(1 - 15 of 15)
document
Pigmans, Max (author)
Most of the adversarial attacks suitable for attacking decision tree ensembles work by performing multiple local searches from randomly selected starting points around the victim to be attacked. In this thesis we investigate the impact of these starting points on the performance of the attack, and find that the starting points significantly impact...
master thesis 2024
document
Gao, Yuxing (author)
The rapid advancement in autonomous driving technology underscores the importance of studying the fragility of perception systems in autonomous vehicles, particularly due to their profound impact on public transportation safety. These systems are of paramount importance due to their direct impact on the lives of passengers and pedestrians....
master thesis 2023
document
Feng, Clio (author)
Recently, while gaze estimation has gained substantial improvement through the use of deep learning models, research has shown that neural networks are weak against adversarial attacks. Although numerous studies have been done on adversarial training, there is little to no research on adversarial training in gaze estimation. Therefore, the objective...
bachelor thesis 2023
document
Afriat, Eliott (author)
We seek to examine the vulnerability of BERT-based fact-checking. We implement a gradient-based adversarial attack strategy, based on HotFlip, swapping individual tokens from the input. We use this on a pre-trained ExPred model for fact-checking. We find that gradient-based adversarial attacks are ineffective against ExPred. Uncertainties about...
bachelor thesis 2023
document
Nowroozi, Ehsan (author), Mohammadi, Mohammadreza (author), Savas, Erkay (author), Mekdad, Yassine (author), Conti, M. (author)
In the past few years, Convolutional Neural Networks (CNN) have demonstrated promising performance in various real-world cybersecurity applications, such as network and multimedia security. However, the underlying fragility of CNN structures poses major security problems, making them inappropriate for use in security-oriented applications,...
journal article 2023
document
van Thiel, Erwin (author)
Multi-label classification is an important branch of classification problems, as in many real-world classification scenarios an object can belong to multiple classes simultaneously. Deep learning based classifiers perform well at image classification, but their predictions have been shown to be unstable when subject to small input...
master thesis 2022
document
Psathas, Steffano (author)
A machine learning classifier can be tricked using adversarial attacks: attacks that alter images slightly to make the target model misclassify the image. To create adversarial attacks on black-box classifiers, a substitute model can be created using model stealing. The research question this report addresses is the topic of using model...
bachelor thesis 2022
document
Vigilanza Lorenzo, Pietro (author)
Machine Learning (ML) models are vulnerable to adversarial samples: human-imperceptible changes to regular input that elicit wrong output from a given model. Plenty of adversarial attacks assume an attacker has access to the underlying model or to the data used to train the model. Instead, in this paper we focus on the effects the data...
bachelor thesis 2022
document
Dwivedi, Kanish (author)
Adversarial training and its variants have become the standard defense against adversarial attacks: perturbed inputs designed to fool the model. Boosting techniques such as AdaBoost have been successful for binary classification problems; however, there is limited research on their application for providing adversarial robustness. In this...
bachelor thesis 2022
document
van Veen, Floris (author)
Model extraction attacks are attacks which generate a substitute model of a targeted victim neural network. It is possible to perform these attacks without a preexisting dataset, but doing so requires a very high number of queries to be sent to the victim model, often in the realm of several million queries. The more difficult the...
bachelor thesis 2022
document
Wang, Yumeng (author), Lyu, Lijun (author), Anand, A. (author)
Contextual ranking models based on BERT are now well established for a wide range of passage and document ranking tasks. However, the robustness of BERT-based ranking models under adversarial inputs is under-explored. In this paper, we argue that BERT-rankers are not immune to adversarial attacks targeting retrieved documents given a query....
conference paper 2022
document
Wang, Z. (author), Loog, M. (author)
We illustrate the detrimental effect, such as overconfident decisions, that exponential behavior can have in methods like classical LDA and logistic regression. We then show how polynomiality can remedy the situation. This, among others, leads purposefully to random-level performance in the tails, away from the bulk of the training data. A...
conference paper 2022
document
Apruzzese, Giovanni (author), Conti, M. (author), Yuan, Ying (author)
Existing literature on adversarial Machine Learning (ML) focuses either on showing attacks that break every ML model, or defenses that withstand most attacks. Unfortunately, little consideration is given to the actual cost of the attack or the defense. Moreover, adversarial samples are often crafted in the "feature-space", making the...
conference paper 2022
document
Mertzanis, Nick (author)
Convolutional Neural Networks are particularly vulnerable to attacks that manipulate their data, which are usually called adversarial attacks. In this paper, a method of filtering images using the Fast Fourier Transform is explored, along with its potential to be used as a defense mechanism to such attacks. The main contribution that differs...
bachelor thesis 2021
document
Lelekas, Ioannis (author)
Biological vision adopts a coarse-to-fine information processing pathway, from initial visual detection and binding of salient features of a visual scene, to the enhanced and preferential processing of relevant stimuli. In contrast, CNNs employ a fine-to-coarse processing, moving from local, edge-detecting filters to more global ones...
master thesis 2020