FedCmp
Byzantine-robust federated learning through clustering model update parameters
Hua Wang (Qufu Normal University)
Shaoxiong Wang (Qufu Normal University)
R. Wang (TU Delft - Cyber Security)
Pengxiang Wang (Jinan Foreign Language School)
Abstract
Federated Learning (FL) is a distributed machine learning paradigm that enables multiple clients to collaboratively train a model without sharing their private data. However, its distributed nature makes FL vulnerable to Byzantine attacks. Most existing Byzantine-robust FL schemes have limitations, such as ineffective defense against well-crafted malicious updates or degraded performance in non-independent and identically distributed (non-IID) data scenarios. To address these challenges, we propose FedCmp, a robust FL framework with an anomaly detection mechanism. Our approach identifies malicious updates by exploiting the pronounced disparity in accumulated vote counts between benign and compromised clients. We first propose a clustering strategy over model update parameters, then apply a multi-round voting mechanism that accelerates vote accumulation for benign or compromised clients based on parameter diversity. Finally, following the majority principle, malicious updates are accurately filtered out without discarding the contributions of benign clients. Experimental results demonstrate that FedCmp outperforms existing robust FL schemes and maintains high accuracy even in highly non-IID data scenarios.
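The pipeline described in the abstract (cluster the update parameters, vote over multiple rounds, then keep the majority) can be illustrated with a minimal sketch. Note this is an assumption-laden toy, not FedCmp's actual algorithm: the paper's clustering strategy, voting rule, and thresholds are not given here, so plain 2-means clustering over random parameter slices and a simple majority-vote threshold stand in for them, and all function names are hypothetical.

```python
import numpy as np

def two_means(X, iters=10):
    """Plain 2-means clustering; returns a 0/1 label per row.
    Initializes one center at the first row and the other at the
    farthest row from it (a simple deterministic choice)."""
    c0 = X[0]
    c1 = X[((X - c0) ** 2).sum(axis=1).argmax()]
    centers = np.stack([c0, c1])
    labels = np.zeros(len(X), dtype=int)
    for _ in range(iters):
        d = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
        labels = d.argmin(axis=1)
        for k in range(2):
            if (labels == k).any():
                centers[k] = X[labels == k].mean(axis=0)
    return labels

def filter_clients(updates, rounds=10, seed=0):
    """Illustrative multi-round voting filter (not the paper's rule):
    each round clusters a random slice of the flattened update
    parameters and credits one vote to every client in the larger
    (majority) cluster; clients with a majority of votes are kept."""
    rng = np.random.default_rng(seed)
    X = np.stack([np.ravel(u) for u in updates])
    n, d = X.shape
    votes = np.zeros(n, dtype=int)
    for _ in range(rounds):
        dims = rng.choice(d, size=max(1, d // 2), replace=False)
        labels = two_means(X[:, dims])
        majority = np.bincount(labels, minlength=2).argmax()
        votes[labels == majority] += 1
    return [i for i in range(n) if votes[i] > rounds // 2]
```

Under these assumptions, eight benign updates near the origin and two far-away malicious ones separate cleanly: `filter_clients` returns only the benign indices, because the benign cluster wins the majority vote in every round.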
Files
File under embargo until 03-12-2025