Federated Learning (FL) is a distributed machine learning paradigm that enables multiple clients to collaboratively train a model without sharing their private data. However, its distributed nature makes FL vulnerable to Byzantine attacks. Most existing Byzantine-robust FL schemes have limitations, such as ineffective defense against well-crafted malicious updates or degraded performance in non-independent and identically distributed (non-IID) data scenarios. To address these challenges, we propose FedCmp, a robust FL framework with an anomaly detection mechanism. Our approach identifies malicious updates by exploiting the significant disparity in vote counts between benign and compromised clients. We first propose a clustering strategy for update parameters, then apply a multi-round voting mechanism that accelerates vote accumulation for benign or compromised clients based on parameter diversity. Finally, following the majority principle, malicious updates are accurately filtered out without discarding the contributions of benign clients. Experimental results demonstrate that FedCmp outperforms existing robust FL schemes and maintains high accuracy even in highly non-IID data scenarios.
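The pipeline sketched in the abstract (cluster updates each round, accumulate votes across rounds, then filter by majority) can be illustrated with a minimal simulation. This is not FedCmp's actual algorithm — the 2-means clustering, the sign-flipping attack model, the majority vote threshold, and all names below are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)

def two_means(X, iters=10):
    """Tiny 2-means clustering: one centroid at X[0], the other at the
    point farthest from it, then standard Lloyd iterations."""
    c = np.stack([X[0], X[np.linalg.norm(X - X[0], axis=1).argmax()]])
    labels = np.zeros(len(X), dtype=int)
    for _ in range(iters):
        d = np.linalg.norm(X[:, None] - c[None], axis=2)
        labels = d.argmin(axis=1)
        for k in range(2):
            if (labels == k).any():
                c[k] = X[labels == k].mean(axis=0)
    return labels

n_clients, dim, rounds = 10, 20, 5
malicious = {8, 9}                    # ground truth, unknown to the server
votes = np.zeros(n_clients)           # accumulated "suspicious" votes

for _ in range(rounds):
    true_grad = rng.normal(size=dim)
    # Benign clients send noisy versions of the true gradient; the
    # (assumed) attackers send scaled, sign-flipped updates.
    updates = np.stack([
        -5 * true_grad + rng.normal(scale=0.1, size=dim) if i in malicious
        else true_grad + rng.normal(scale=0.1, size=dim)
        for i in range(n_clients)
    ])
    labels = two_means(updates)
    # Majority principle: the minority cluster accumulates suspicion votes.
    minority = 0 if (labels == 0).sum() < (labels == 1).sum() else 1
    votes[labels == minority] += 1

# Filter out clients voted suspicious in more than half of the rounds,
# and aggregate only the remaining (presumed benign) updates.
flagged = set(np.where(votes > rounds / 2)[0])
benign_avg = updates[[i for i in range(n_clients) if i not in flagged]].mean(axis=0)
print(sorted(int(i) for i in flagged))  # → [8, 9]
```

The multi-round accumulation is the key point: a single noisy round may mis-cluster a benign client, but over several rounds the vote counts of benign and compromised clients diverge, so the final majority-based filter keeps benign contributions intact.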