Robustness Against Untargeted Attacks of Multi-Server Federated Learning for Image Classification

Are Defenses Based on Existing Methods Effective?

Abstract

Multi-Server Federated Learning (MSFL) is a decentralised approach to training a global model that takes a significant step toward enhanced privacy preservation while minimising communication costs through the use of edge servers with overlapping coverage areas. In this context, the FedMes algorithm facilitates the aggregation of gradients, contributing to the convergence of the global model. Attacks that aim to reduce the accuracy of the global model are called untargeted attacks; one such attack that is particularly difficult to detect is the Min-Max attack. This paper explores extending existing defenses to enhance the robustness of MSFL against the Min-Max attack.
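To make the threat model concrete, the core idea of the Min-Max attack (due to Shejwalkar and Houmansadr) can be sketched as follows: the adversary scales a perturbation of the benign mean so that the malicious update's maximum distance to any benign update stays within the maximum pairwise distance among the benign updates themselves, making it hard to flag as an outlier. This is a minimal illustrative sketch, not the exact procedure evaluated in the paper; the perturbation direction and search bounds below are common choices, not prescribed by the source.

```python
import numpy as np

def min_max_attack(benign_grads, iters=30):
    """Sketch of the Min-Max attack: binary-search the largest scale gamma
    such that the crafted update remains within the maximum pairwise
    distance observed among benign updates."""
    g = np.stack(benign_grads)                      # (n_benign, dim)
    mean = g.mean(axis=0)
    # One common perturbation direction: the negated unit vector of the mean.
    perturb = -mean / np.linalg.norm(mean)
    # The attack's "budget": largest distance between any two benign updates.
    max_benign = max(np.linalg.norm(a - b) for a in g for b in g)
    lo, hi = 0.0, 100.0                             # assumed search bounds
    for _ in range(iters):
        gamma = (lo + hi) / 2
        mal = mean + gamma * perturb
        if max(np.linalg.norm(mal - b) for b in g) <= max_benign:
            lo = gamma                              # still blends in; push further
        else:
            hi = gamma
    return mean + lo * perturb
```

Because the crafted update satisfies the same distance bound that benign updates satisfy among themselves, distance-based filters struggle to single it out.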

To do this, existing state-of-the-art defenses (Median, Krum, Multi-Krum, Trimmed-Mean, Bulyan, and DnC) are extended and examined for their adaptability to this context. We refer to the extended versions of these defenses as FMes-Defenses.
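For readers unfamiliar with these defenses, a minimal sketch of three of them in their standard single-server form may help; the FMes extensions evaluated in the paper build on these but are not reproduced here. Function names and the simple parameterisation are illustrative assumptions.

```python
import numpy as np

def median_aggregate(updates):
    """Median defense: coordinate-wise median of client updates."""
    return np.median(np.stack(updates), axis=0)

def trimmed_mean_aggregate(updates, beta=1):
    """Trimmed-Mean defense: per coordinate, drop the beta largest and
    beta smallest values, then average the remainder."""
    s = np.sort(np.stack(updates), axis=0)
    return s[beta:len(updates) - beta].mean(axis=0)

def krum_aggregate(updates, f):
    """Krum defense: select the single update whose summed squared distance
    to its n - f - 2 closest peers is smallest (f = assumed attacker count)."""
    g = np.stack(updates)
    n = len(g)
    scores = []
    for i in range(n):
        dists = sorted(np.sum((g - g[i]) ** 2, axis=1))
        scores.append(sum(dists[1:n - f - 1]))     # skip distance to self
    return g[int(np.argmin(scores))]
```

All three bound the influence of any single client: Median and Trimmed-Mean act per coordinate, while Krum discards all but one whole update.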

Our results indicate that FMes-Defenses are ineffective in preventing the Min-Max attack from diminishing the accuracy of the global model. Surprisingly, even FMes-DnC proves inadequate, despite its single-server counterpart (DnC) being renowned for mitigating the Min-Max attack.

These findings emphasise the need for novel defenses specifically tailored to the nuances of MSFL. While MSFL, complemented by the FedMes algorithm, represents a significant stride in communication efficiency, it may require additional measures to ensure robust security against sophisticated untargeted attacks. This research contributes valuable insights into the challenges and importance of enhancing the security of MSFL as it continues to develop.
