LADDER: Multi-Objective Backdoor Attack via Evolutionary Algorithm

Conference Paper (2025)
Author(s)

Dazhuang Liu (TU Delft - Cyber Security)

Yanqi Qiao (TU Delft - Cyber Security)

Rui Wang (TU Delft - Cyber Security)

Kaitai Liang (TU Delft - Cyber Security)

G. Smaragdakis (TU Delft - Cyber Security)

Research Group
Cyber Security
DOI
https://doi.org/10.14722/ndss.2025.241061
Publication Year
2025
Language
English
Reuse Rights

Other than for strictly personal use, it is not permitted to download, forward or distribute the text or part of it, without the consent of the author(s) and/or copyright holder(s), unless the work is under an open content license such as Creative Commons.

Abstract

Current black-box backdoor attacks on convolutional neural networks formulate their attack objective(s) as single-objective optimization problems in a single domain. Designing triggers in a single domain harms semantics and trigger robustness, and introduces visual and spectral anomalies. This work proposes a multi-objective black-box backdoor attack in dual domains via an evolutionary algorithm (LADDER), the first attack to achieve multiple objectives simultaneously by optimizing triggers without requiring prior knowledge of the victim model. In particular, we formulate LADDER as a multi-objective optimization problem (MOP) and solve it via a multi-objective evolutionary algorithm (MOEA). The MOEA maintains a population of triggers with trade-offs among attack objectives and uses non-dominated sorting to drive triggers toward optimal solutions. We further apply preference-based selection to the MOEA to exclude impractical triggers. LADDER introduces a new dual-domain perspective on trigger stealthiness by minimizing the anomaly between clean and poisoned samples in the spectral domain. Lastly, robustness against preprocessing operations is achieved by pushing triggers into low-frequency regions. Extensive experiments comprehensively show that, compared to current stealthy attacks measured by the average l2-norm across 5 public datasets, LADDER achieves attack effectiveness of at least 99%, attack robustness of 90.23% (50.09% higher than state-of-the-art attacks on average), superior natural stealthiness (1.12x to 196.74x improvement), and excellent spectral stealthiness (8.45x enhancement).
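The non-dominated sorting mentioned in the abstract is the core ranking step in MOEAs such as NSGA-II: candidates (here, triggers) are grouped into Pareto fronts by mutual dominance over their objective vectors. The sketch below is a minimal, generic illustration of that sorting step for minimization objectives; it is not LADDER's implementation, and the objective values are hypothetical placeholders.

```python
import numpy as np

def dominates(a, b):
    """True if objective vector a Pareto-dominates b (all objectives minimized)."""
    return bool(np.all(a <= b) and np.any(a < b))

def non_dominated_sort(objs):
    """Partition a population's objective vectors into Pareto fronts.

    objs: (n, m) array of n candidates with m objective values each.
    Returns a list of fronts, each a list of candidate indices; front 0
    contains the candidates no other candidate dominates.
    """
    n = len(objs)
    dominated_by = [set() for _ in range(n)]   # indices that candidate i dominates
    dom_count = np.zeros(n, dtype=int)         # number of candidates dominating i
    for i in range(n):
        for j in range(i + 1, n):
            if dominates(objs[i], objs[j]):
                dominated_by[i].add(j)
                dom_count[j] += 1
            elif dominates(objs[j], objs[i]):
                dominated_by[j].add(i)
                dom_count[i] += 1
    fronts = [[i for i in range(n) if dom_count[i] == 0]]
    while fronts[-1]:
        nxt = []
        for i in fronts[-1]:
            for j in dominated_by[i]:
                dom_count[j] -= 1
                if dom_count[j] == 0:
                    nxt.append(j)
        fronts.append(nxt)
    return fronts[:-1]

# Hypothetical 2-objective scores (e.g., attack loss vs. spectral anomaly):
scores = np.array([[1.0, 2.0], [2.0, 1.0], [3.0, 3.0], [2.0, 2.0]])
print(non_dominated_sort(scores))  # → [[0, 1], [3], [2]]
```

In an MOEA loop, this ranking (typically combined with a diversity measure such as crowding distance) decides which triggers survive to the next generation, preserving the trade-off surface among the attack objectives.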

Files

2025-1061-paper-1.pdf
(pdf | 3.56 MB)
License info not available