Searched for: +
(1 - 4 of 4)
document
Xu, J. (author), Picek, S. (author)
Graph Neural Networks (GNNs) have achieved impressive results in various graph learning tasks. They have found their way into many applications, such as fraud detection, molecular property prediction, or knowledge graph reasoning. However, GNNs have been recently demonstrated to be vulnerable to backdoor attacks. In this work, we explore a...
conference paper 2022
document
Koffas, S. (author), Xu, J. (author), Conti, M. (author), Picek, S. (author)
This work explores backdoor attacks for automatic speech recognition systems where we inject inaudible triggers. By doing so, we make the backdoor attack challenging to detect for legitimate users and, consequently, potentially more dangerous. We conduct experiments on two versions of a speech dataset and three neural networks and explore the...
conference paper 2022
document
Xu, J. (author), Wang, R. (author), Koffas, S. (author), Liang, K. (author), Picek, S. (author)
Graph Neural Networks (GNNs) are a class of deep learning-based methods for processing graph domain information. GNNs have recently become a widely used graph analysis method due to their superior ability to learn representations for complex graph data. Due to privacy concerns and regulation restrictions, centralized GNNs can be difficult to...
conference paper 2022
document
Xu, J. (author), Xue, M. (author), Picek, S. (author)
Backdoor attacks represent a serious threat to neural network models. A backdoored model will misclassify the trigger-embedded inputs into an attacker-chosen target label while performing normally on other benign inputs. There are already numerous works on backdoor attacks on neural networks, but only a few works consider graph neural...
conference paper 2021