FLAB

Exploring anomaly bias in backdoor attacks

Journal Article (2026)
Author(s)

Hua Wang (Qufu Normal University)

Shaoxiong Wang (Qufu Normal University)

Lianhua Wang (Qufu Normal University)

Rui Wang (TU Delft - Data-Intensive Systems)

Research Group
Data-Intensive Systems
DOI
https://doi.org/10.1016/j.eswa.2025.130415
Publication Year
2026
Language
English
Volume number
300
Reuse Rights

Other than for strictly personal use, it is not permitted to download, forward or distribute the text or part of it, without the consent of the author(s) and/or copyright holder(s), unless the work is under an open content license such as Creative Commons.

Abstract

Federated learning (FL) allows multiple parties to collaboratively train machine learning models by uploading model updates instead of raw data, thereby protecting data privacy and reducing communication overhead. However, the open nature of public networks leaves FL vulnerable to attacks. By injecting poisoned samples carrying backdoor triggers during training and uploading malicious updates, an attacker can manipulate the global model into producing an attacker-specified target label. Existing defenses against backdoor attacks have limitations, such as high residual attack success rates or the need to know or restrict the number of compromised clients controlled by the attacker. To address these shortcomings, we propose FLAB, a novel defense that filters out malicious updates. Specifically, we introduce the concept of anomaly bias to characterize each model update and propose a detection mechanism to quantify how anomalous each update is. By clustering the anomaly biases and iteratively shrinking the cluster, the anomaly bias associated with the attacker is identified. Finally, all updates exhibiting this bias are considered malicious and removed. We conduct exhaustive evaluations of FLAB. Experimental results demonstrate that, compared to existing defenses, FLAB achieves comparable model accuracy while significantly reducing attack success rates. Furthermore, FLAB maintains robust performance even when the fraction of compromised clients exceeds 80%.
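The filtering pipeline the abstract describes (score each update, group the scores, iteratively shrink the suspected group, discard the matching updates) can be illustrated with a toy sketch. Everything below is an assumption for illustration: the deviation-from-median proxy for "anomaly bias", the `shrink` ratio, and the round count are hypothetical choices, not FLAB's actual mechanism.

```python
import statistics

def anomaly_bias(update, reference):
    # Hypothetical proxy for "anomaly bias": mean absolute deviation of a
    # client's (flattened) update from a coordinate-wise reference update.
    return sum(abs(u - r) for u, r in zip(update, reference)) / len(update)

def filter_updates(updates, shrink=0.8, rounds=3):
    """Toy filtering loop: score every update, rank by anomaly, then
    iteratively keep only the top `shrink` fraction of suspects so the
    suspect set converges on the most anomalous updates. Returns the
    indices of updates kept as benign and those removed as malicious."""
    dim = len(updates[0])
    # Coordinate-wise median across clients as the reference update.
    reference = [statistics.median(u[i] for u in updates) for i in range(dim)]
    scores = [anomaly_bias(u, reference) for u in updates]
    # Most anomalous first.
    suspects = sorted(range(len(updates)), key=lambda i: -scores[i])
    for _ in range(rounds):
        keep = max(1, int(len(suspects) * shrink))
        suspects = suspects[:keep]
    flagged = set(suspects)
    benign = [i for i in range(len(updates)) if i not in flagged]
    return benign, sorted(flagged)

# Usage: three tightly clustered honest updates, two outlier updates.
honest = [[0.10, 0.20], [0.11, 0.19], [0.09, 0.21]]
poisoned = [[5.0, -5.0], [5.1, -4.9]]
kept, removed = filter_updates(honest + poisoned)
```

In this toy run the shrinking loop isolates the two outlier updates as the malicious set; the real method identifies the attacker-associated bias by clustering rather than by a fixed keep ratio.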

Files

Taverne

File under embargo until 15-05-2026