Searched for: subject:"Backdoor Attacks"
(1 - 13 of 13)
document
Xu, J. (author)
Deep Neural Networks (DNNs) have found extensive applications across diverse fields, such as image classification, speech recognition, and natural language processing. However, their susceptibility to various adversarial attacks, notably the backdoor attack, has repeatedly been demonstrated in recent years. The backdoor attack aims to...
doctoral thesis 2025
document
van der Meulen, Jan (author)
Federated learning (FL) is a privacy-preserving machine learning approach which allows a machine learning model to be trained in a distributed fashion without ever sharing user data. Due to the large amount of valuable text and voice data stored on end-user devices, this approach works particularly well for natural language processing (NLP)...
bachelor thesis 2024
document
Simonov, Alex (author)
Machine learning, a pivotal aspect of artificial intelligence, has dramatically altered how we interact with technology and handle extensive data. Through its ability to learn and make decisions from patterns and previous experience, machine learning has a growing influence on many aspects of our lives. It has, however, been shown that...
master thesis 2024
document
Cai, Hanbo (author), Zhang, Pengcheng (author), Dong, Hai (author), Xiao, Yan (author), Koffas, S. (author), Li, Yiming (author)
Deep neural networks (DNNs) have been widely and successfully adopted and deployed in various applications of speech recognition. Recently, a few works revealed that these models are vulnerable to backdoor attacks, in which adversaries implant malicious prediction behaviors into victim models by poisoning their training process. In this...
journal article 2024
document
Chen, Congwen (author)
Current backdoor attacks against federated learning (FL) rely strongly on universal triggers or semantic patterns, which can be easily detected and filtered by certain defense mechanisms, such as norm clipping that compares parameter divergences among local updates. In this work, we propose a new stealthy and robust backdoor attack with flexible...
master thesis 2023
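The norm-clipping defense mentioned in the abstract above can be illustrated with a short sketch. This is a minimal illustration, assuming client updates arrive at the server as equally shaped NumPy parameter-delta vectors; the function name and threshold are illustrative choices, not taken from the thesis.

    import numpy as np

    def aggregate_with_norm_clipping(client_updates, clip_threshold=1.0):
        """Clip each local update to a maximum L2 norm, then average.

        Backdoored updates tend to have unusually large norms, so clipping
        bounds how much any single client can shift the global model.
        """
        clipped = []
        for delta in client_updates:
            norm = np.linalg.norm(delta)
            scale = min(1.0, clip_threshold / (norm + 1e-12))
            clipped.append(delta * scale)
        return np.mean(clipped, axis=0)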
document
Reda, Yuji (author)
BadNets is a type of backdoor attack that manipulates the behavior of Convolutional Neural Networks (CNNs). Training is modified such that the CNN behaves in an attacker-chosen way whenever certain triggers appear in its inputs. In this paper, we apply this type of backdoor attack to a regression task on gaze estimation. We examine different...
bachelor thesis 2023
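As a rough illustration of the BadNets-style poisoning described above, the sketch below stamps a trigger patch on a fraction of training images and replaces their gaze labels with an attacker-chosen target. It assumes HxWxC uint8 images and 2-D gaze vectors; the white corner patch, poisoning rate, and target value are illustrative assumptions, not the paper's exact setup.

    import numpy as np

    def poison_sample(image, target_gaze=(0.0, 0.0), patch_size=4):
        """Stamp a white square in the bottom-right corner; return the
        poisoned image and the attacker-chosen gaze target."""
        poisoned = image.copy()
        poisoned[-patch_size:, -patch_size:, :] = 255  # the trigger patch
        return poisoned, np.asarray(target_gaze)

    def poison_dataset(images, labels, rate=0.1, seed=0):
        """Poison a fraction `rate` of the training set (on copies)."""
        rng = np.random.default_rng(seed)
        idx = rng.choice(len(images), size=int(rate * len(images)),
                         replace=False)
        images, labels = images.copy(), labels.copy()
        for i in idx:
            images[i], labels[i] = poison_sample(images[i])
        return images, labels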
document
Mercier, Arthur (author), Smolin, Nikita (author), Sihlovec, Oliver (author), Koffas, S. (author), Picek, S. (author)
Outsourced training and crowdsourced datasets lead to a new threat for deep learning models: the backdoor attack. In this attack, the adversary inserts a secret functionality into a model, activated through malicious inputs. Backdoor attacks are an active research area due to the diverse settings in which they pose a real threat. Still,...
journal article 2023
document
Xu, J. (author), Abad, Gorka (author), Picek, S. (author)
Backdoor attacks have been demonstrated as a security threat for machine learning models. Traditional backdoor attacks inject backdoor functionality into the model such that the backdoored model performs abnormally on inputs with predefined backdoor triggers while retaining state-of-the-art performance on clean inputs. While...
conference paper 2023
document
Koffas, S. (author), Xu, J. (author), Conti, M. (author), Picek, S. (author)
This work explores backdoor attacks for automatic speech recognition systems where we inject inaudible triggers. By doing so, we make the backdoor attack challenging to detect for legitimate users and, consequently, potentially more dangerous. We conduct experiments on two versions of a speech dataset and three neural networks and explore the...
conference paper 2022
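The idea of an inaudible trigger described above can be sketched as mixing a tone near or above the limit of human hearing into the waveform. The frequency, amplitude, and placement below are illustrative assumptions and the paper's actual trigger design may differ; the sketch assumes a float32 waveform in [-1, 1].

    import numpy as np

    def add_inaudible_trigger(waveform, sample_rate=44100,
                              freq_hz=21000.0, amplitude=0.01,
                              duration_s=0.5, offset_s=0.0):
        """Mix a short high-frequency sine tone into the waveform.

        At 21 kHz the tone sits above most adults' hearing range but,
        at a 44.1 kHz sample rate, remains below the Nyquist frequency
        and is therefore present in the signal the model consumes.
        """
        start = int(offset_s * sample_rate)
        n = int(duration_s * sample_rate)
        t = np.arange(n) / sample_rate
        tone = amplitude * np.sin(2 * np.pi * freq_hz * t)
        poisoned = waveform.copy()
        end = min(start + n, len(poisoned))
        poisoned[start:end] += tone[:end - start]
        return np.clip(poisoned, -1.0, 1.0)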
document
Xu, J. (author), Picek, S. (author)
Graph Neural Networks (GNNs) have achieved impressive results in various graph learning tasks. They have found their way into many applications, such as fraud detection, molecular property prediction, or knowledge graph reasoning. However, GNNs have been recently demonstrated to be vulnerable to backdoor attacks. In this work, we explore a...
conference paper 2022
document
Xu, J. (author), Wang, R. (author), Koffas, S. (author), Liang, K. (author), Picek, S. (author)
Graph Neural Networks (GNNs) are a class of deep learning-based methods for processing graph domain information. GNNs have recently become a widely used graph analysis method due to their superior ability to learn representations for complex graph data. Due to privacy concerns and regulation restrictions, centralized GNNs can be difficult to...
conference paper 2022
document
Koffas, Stefanos (author)
Deep learning has achieved tremendous success in the past decade. As a result, it is becoming widely deployed in various safety- and security-critical applications like autonomous driving, malware detection, fingerprint identification, and financial fraud detection. It was recently shown that deep neural networks are susceptible to multiple attacks...
master thesis 2021
document
Xu, J. (author), Xue, Minhui (author), Picek, S. (author)
Backdoor attacks represent a serious threat to neural network models. A backdoored model will misclassify trigger-embedded inputs into an attacker-chosen target label while performing normally on other benign inputs. Numerous works already study backdoor attacks on neural networks, but only a few consider graph neural...
conference paper 2021
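A generic subgraph-trigger backdoor of the kind described in the abstract above can be sketched on dense adjacency matrices: append a small, fully connected trigger subgraph, wire it to a few existing nodes, and flip the graph's label to the attacker's target. The function and its parameters are illustrative assumptions; the paper's actual trigger construction is more involved.

    import numpy as np

    def inject_trigger(adj, trigger_size=3, attach_nodes=1, seed=0):
        """Append a fully connected trigger subgraph and link it
        to randomly chosen existing nodes."""
        rng = np.random.default_rng(seed)
        n = adj.shape[0]
        new_n = n + trigger_size
        poisoned = np.zeros((new_n, new_n), dtype=adj.dtype)
        poisoned[:n, :n] = adj
        # Fully connect the trigger nodes among themselves.
        poisoned[n:, n:] = 1 - np.eye(trigger_size, dtype=adj.dtype)
        # Attach the trigger subgraph to the original graph.
        for v in rng.choice(n, size=attach_nodes, replace=False):
            poisoned[v, n:] = poisoned[n:, v] = 1
        return poisoned

    # Usage: poisoned_adj = inject_trigger(adj); label = target_class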