Client-Level Unlearning in Decentralized Learning

Bachelor Thesis (2025)
Author(s)

R.D. Dinu (TU Delft - Electrical Engineering, Mathematics and Computer Science)

Contributor(s)

Jérémie Decouchant – Mentor (TU Delft - Data-Intensive Systems)

Bart Cox – Mentor (TU Delft - Data-Intensive Systems)

Anna Lukina – Graduation committee member (TU Delft - Algorithmics)

Faculty
Electrical Engineering, Mathematics and Computer Science
Publication Year
2025
Language
English
Graduation Date
27-06-2025
Awarding Institution
Delft University of Technology
Project
CSE3000 Research Project
Programme
Computer Science and Engineering
Reuse Rights

Other than for strictly personal use, it is not permitted to download, forward or distribute the text or part of it, without the consent of the author(s) and/or copyright holder(s), unless the work is under an open content license such as Creative Commons.

Abstract

Decentralized Learning is becoming increasingly popular due to its ability to protect user privacy and scale across large distributed systems. However, when clients leave the network, either by choice or due to failure, their past contributions remain in the model. This raises privacy concerns and may violate the right to be forgotten. In some applications, it is also undesirable to retain the outdated influence of clients that no longer reflect the current state of the system. While Machine Unlearning has seen significant progress in Federated Learning, similar solutions for Decentralized Learning are limited because there is no central server to orchestrate these operations. This work adapts and extends a state-of-the-art Federated Unlearning algorithm, QuickDrop, to operate in a decentralized setting. Our method uses synthetic data to reverse the influence of dropped clients and efficiently restore the model’s generalization performance. It also supports unannounced client crashes and performs reliably in sparse network topologies. We evaluate the algorithm on MNIST and CIFAR-10 using different graph structures and show that it remains competitive with oracle baselines that require access to sensitive data. Finally, we discuss the limitations of our approach and suggest directions for future work in Decentralized Unlearning.
