Analysis on the Vulnerability of Multi-Server Federated Learning Against Model Poisoning Attacks

Abstract

Federated Learning (FL) enables a network of clients to jointly train a machine learning model while keeping the training data private. Several designs exist for FL networks: most existing research focuses on a single-server design, but promising new variants are emerging that employ multiple servers, which have the benefit of speeding up the training process. Unfortunately, single-server FL networks are vulnerable to model poisoning attacks, in which malicious participants aim to reduce the accuracy of the trained model. This work demonstrates the inherent resilience of the multi-server design against existing state-of-the-art attacks tailored to single-server FL, and proposes two novel attacks that exploit the multi-server topology to reduce the knowledge an adversary must obtain to carry out an attack while remaining effective. The main findings are as follows. If the malicious party has compromised the entire network, existing single-server attacks suffice to completely prevent a model from training. If the adversary is limited to knowledge available within the local reach of its compromised clients, the effect is diminished to the point where the attacks may be mitigated without any defences being necessary. In such cases, however, a correlation can be observed between the location of the compromised clients and the effectiveness of an attack. The novel attacks proposed in this paper exploit this relation to remain sufficiently effective while requiring only the data already needed for the multi-server algorithm to function.