Poster

Clean-label Backdoor Attack on Graph Neural Networks

Conference Paper (2022)
Author(s)

Jing Xu (TU Delft - Cyber Security)

Stjepan Picek (Radboud Universiteit Nijmegen, TU Delft - Cyber Security)

Research Group
Cyber Security
Copyright
© 2022 J. Xu, S. Picek
DOI
https://doi.org/10.1145/3548606.3563531
Publication Year
2022
Language
English
Pages (from-to)
3491-3493
ISBN (electronic)
9781450394505
Reuse Rights

Other than for strictly personal use, it is not permitted to download, forward or distribute the text or part of it, without the consent of the author(s) and/or copyright holder(s), unless the work is under an open content license such as Creative Commons.

Abstract

Graph Neural Networks (GNNs) have achieved impressive results in various graph learning tasks and have found their way into many applications, such as fraud detection, molecular property prediction, and knowledge graph reasoning. However, GNNs have recently been shown to be vulnerable to backdoor attacks. In this work, we explore a new kind of backdoor attack on GNNs: a clean-label backdoor attack. Unlike prior backdoor attacks on GNNs, in which the adversary can introduce arbitrary, often clearly mislabeled, inputs to the training set, in a clean-label backdoor attack the poisoned inputs remain consistent with their labels and are thus less likely to be filtered out as outliers. Initial experimental results show that the adversary can achieve a high attack success rate (up to 98.47%) with a clean-label backdoor attack on GNNs for the graph classification task. We hope our work will raise awareness of this attack and inspire novel defenses against it.
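To make the clean-label idea concrete, here is a minimal sketch of the poisoning step for graph classification: a fixed trigger subgraph is attached only to training graphs that already belong to the target class, so no label is ever changed. This is an illustrative reconstruction using `networkx`, not the authors' exact method; the function name, the single connecting edge, and the `poison_rate` parameter are assumptions for the example.

```python
import random
import networkx as nx

def inject_clean_label_trigger(graphs, labels, target_class, trigger,
                               poison_rate=0.1, seed=0):
    """Clean-label poisoning sketch (illustrative, not the paper's method).

    Attaches `trigger` only to graphs whose label is already `target_class`,
    so every poisoned sample stays consistent with its label.
    Returns the poisoned graph list and the set of poisoned indices.
    """
    rng = random.Random(seed)
    # Candidates are restricted to the target class -- the clean-label constraint.
    candidates = [i for i, y in enumerate(labels) if y == target_class]
    k = max(1, int(poison_rate * len(candidates)))
    chosen = set(rng.sample(candidates, k))

    poisoned = []
    for i, g in enumerate(graphs):
        if i in chosen:
            n = g.number_of_nodes()
            # disjoint_union relabels g to 0..n-1 and trigger to n..n+|t|-1.
            merged = nx.disjoint_union(g, trigger)
            # One edge connects the trigger to the host graph (a simple
            # attachment choice made for this sketch).
            merged.add_edge(0, n)
            poisoned.append(merged)
        else:
            poisoned.append(g.copy())
    return poisoned, chosen
```

At test time, the adversary would attach the same trigger to an arbitrary graph, aiming for the backdoored GNN to classify it as `target_class`; since the training labels were never altered, the poisoned samples are harder to flag as mislabeled outliers.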