Backdoors on Manifold Learning

Conference Paper (2024)
Author(s)

Christina Kreza (Radboud Universiteit Nijmegen)

Stefanos Koffas (TU Delft - Cyber Security)

Behrad Tajalli (Radboud Universiteit Nijmegen)

Mauro Conti (Università degli Studi di Padova, TU Delft - Cyber Security)

Stjepan Picek (TU Delft - Cyber Security, Radboud Universiteit Nijmegen)

Research Group
Cyber Security
DOI
https://doi.org/10.1145/3649403.3656484
Publication Year
2024
Language
English
Pages (from-to)
1-7
ISBN (electronic)
9798400706028
Reuse Rights

Other than for strictly personal use, it is not permitted to download, forward or distribute the text or part of it, without the consent of the author(s) and/or copyright holder(s), unless the work is under an open content license such as Creative Commons.

Abstract

Recently, attackers have targeted machine learning systems with a variety of attacks. Among these, the backdoor attack is popular and is usually realized through data poisoning. To the best of our knowledge, we are the first to investigate whether backdoor attacks remain effective when manifold learning algorithms are applied to the poisoned dataset. We conducted experiments using two manifold learning techniques (Autoencoder and UMAP) on two benchmark datasets (MNIST and CIFAR10) with two backdoor strategies (clean label and dirty label). Across an array of experiments with different parameters, we found attack success rates of up to 95% and 75% even after reducing the data to two dimensions with Autoencoders and UMAP, respectively.
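To illustrate the dirty-label poisoning step the abstract refers to, the sketch below stamps a small trigger patch into a fraction of the training images and relabels them to an attacker-chosen target class. The trigger shape, location, poison rate, and target class here are illustrative assumptions, not the paper's exact configuration:

```python
import numpy as np

def poison_dirty_label(images, labels, target_class=7, poison_rate=0.1,
                       trigger_value=1.0, seed=0):
    """Dirty-label backdoor poisoning sketch: stamp a trigger patch into a
    random fraction of the images and flip their labels to target_class.
    Patch size/position and poison_rate are illustrative choices."""
    rng = np.random.default_rng(seed)
    images, labels = images.copy(), labels.copy()
    n_poison = int(poison_rate * len(images))
    idx = rng.choice(len(images), size=n_poison, replace=False)
    # 3x3 trigger patch in the bottom-right corner of each poisoned image
    images[idx, -3:, -3:] = trigger_value
    labels[idx] = target_class
    return images, labels, idx

# Toy example on random MNIST-shaped data (100 grayscale 28x28 images)
X = np.random.default_rng(1).random((100, 28, 28))
y = np.random.default_rng(2).integers(0, 10, size=100)
Xp, yp, idx = poison_dirty_label(X, y, target_class=7, poison_rate=0.1)
```

In the paper's setting, a manifold learning method (an Autoencoder or UMAP) would then be fit on such a poisoned set before training the downstream classifier, and the attack success rate measured on triggered test inputs.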