Byzantine Attacks and Defenses in Decentralized Learning Systems that Exchange Chunked Models

Robust Decentralized Learning

Bachelor Thesis (2025)
Author(s)

A.H. Donev (TU Delft - Electrical Engineering, Mathematics and Computer Science)

Contributor(s)

Jérémie Decouchant – Mentor (TU Delft - Data-Intensive Systems)

Anna Lukina – Graduation committee member (TU Delft - Algorithmics)

B.A. Cox – Mentor (TU Delft - Data-Intensive Systems)

Faculty
Electrical Engineering, Mathematics and Computer Science
Publication Year
2025
Language
English
Graduation Date
25-06-2025
Awarding Institution
Delft University of Technology
Project
['CSE3000 Research Project']
Programme
['Computer Science and Engineering']
Reuse Rights

Other than for strictly personal use, it is not permitted to download, forward or distribute the text or part of it, without the consent of the author(s) and/or copyright holder(s), unless the work is under an open content license such as Creative Commons.

Abstract

Decentralized learning (DL) enables collaborative model training without a central server, increasing resilience, yet it remains vulnerable to the same adversarial model attacks as federated learning. In this paper, we evaluate two defense mechanisms designed specifically for fully decentralized learning in the setting of chunked models, and we introduce one new defense. We clearly define the threat model, in which malicious peers attempt either a backdoor attack or untargeted label flipping. The existing defenses are Norm Clipping and Sentinel, and we propose Adaptive Norm Clipping. We evaluate the effectiveness of these defenses on the CIFAR-10 dataset. Our results indicate that the model attacks (backdoor attack and untargeted label flipping) significantly harm model accuracy in vanilla DL with chunking. While static defenses such as Norm Clipping and Adaptive Norm Clipping reduce the impact of the attacks, they lower the final average test accuracy in the chunked scenario. Conversely, robust aggregators such as Sentinel fail to mitigate the attacks, but do not lower the average test accuracy.
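To make the norm-clipping idea concrete, the following is a minimal illustrative sketch, not the thesis's actual implementation: each model chunk received from a neighbour is scaled down whenever its L2 norm exceeds a threshold, and the clipped chunks are then averaged with the peer's own chunk. The function names, the fixed threshold, and the plain-averaging aggregation are assumptions made for illustration; chunks are assumed to be NumPy arrays.

```python
import numpy as np

def clip_chunk(chunk, threshold):
    """Scale a received model chunk down so its L2 norm is at most `threshold`.

    Chunks within the threshold pass through unchanged, which bounds the
    influence any single (possibly malicious) peer can have on the average.
    """
    norm = np.linalg.norm(chunk)
    if norm > threshold:
        return chunk * (threshold / norm)
    return chunk

def aggregate_chunks(own_chunk, received_chunks, threshold=1.0):
    """Average a peer's own chunk with norm-clipped chunks from its neighbours."""
    clipped = [clip_chunk(c, threshold) for c in received_chunks]
    return np.mean([own_chunk] + clipped, axis=0)
```

An adaptive variant, as the name "Adaptive Norm Clipping" suggests, would choose the threshold from observed statistics (e.g., the median norm of received chunks) rather than fixing it in advance.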

Files

Cse3000-adonev-thesis.pdf
(pdf | 1.07 Mb)
License info not available