Byzantine Attacks and Defenses in Decentralized Learning Systems that Exchange Chunked Models
Robust Decentralized Learning
A.H. Donev (TU Delft - Electrical Engineering, Mathematics and Computer Science)
Jérémie Decouchant – Mentor (TU Delft - Data-Intensive Systems)
Anna Lukina – Graduation committee member (TU Delft - Algorithmics)
B.A. Cox – Mentor (TU Delft - Data-Intensive Systems)
Abstract
Decentralized learning (DL) enables collaborative model training without a central server, which removes the single point of failure but leaves the system vulnerable to the same adversarial model attacks that affect Federated Learning. In this paper, we evaluate two defense mechanisms designed for fully decentralized learning in the setting where peers exchange chunked models, and we introduce a new defense. We define a threat model in which malicious peers perform either a backdoor attack or untargeted label flipping. The existing defenses are Norm Clipping and Sentinel, and we propose Adaptive Norm Clipping. We evaluate the effectiveness of these defenses on the CIFAR-10 dataset. Our results indicate that both attacks significantly harm model accuracy in vanilla DL with chunking. While static defenses such as Norm Clipping and Adaptive Norm Clipping reduce the impact of the attacks, they lower the final average test accuracy in the chunked scenario. Robust aggregators such as Sentinel fail to mitigate the attacks, but do not lower the average test accuracy.
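For context, the following is a minimal sketch of how Norm Clipping could be applied to updates received from peers, assuming updates are flattened PyTorch tensors compared by L2 norm. The median-based threshold shown for the adaptive variant is an illustrative assumption, not necessarily the adaptation rule evaluated in this work.

```python
import torch

def clip_update(update: torch.Tensor, tau: float) -> torch.Tensor:
    """Norm Clipping: rescale a received update so its L2 norm is at most tau."""
    norm = update.norm(p=2).item()
    return update * min(1.0, tau / (norm + 1e-12))

def adaptive_clip(updates: list[torch.Tensor]) -> list[torch.Tensor]:
    """Adaptive variant (illustrative assumption): derive the threshold each
    round from the median norm of the peers' updates instead of a fixed tau.
    In the chunked setting, the same rule could be applied per received chunk."""
    tau = torch.stack([u.norm(p=2) for u in updates]).median().item()
    return [clip_update(u, tau) for u in updates]
```

The intuition behind both variants is the same: a poisoned update must have a large norm to shift the aggregate, so bounding update norms limits the damage any single malicious peer can inflict, at the cost of also shrinking large honest updates.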