Decentralized Learning is becoming increasingly popular due to its ability to protect user privacy and scale across large distributed systems. However, when clients leave the network, either by choice or due to failure, their past contributions remain embedded in the model. This raises privacy concerns and may violate the right to be forgotten. In some applications, it is also undesirable to retain outdated contributions that no longer reflect the current state of the system. While Machine Unlearning has seen significant progress in Federated Learning, comparable solutions for Decentralized Learning remain limited because there is no central server to orchestrate these operations. This work adapts and extends a state-of-the-art Federated Unlearning algorithm, QuickDrop, to operate in a decentralized setting. Our method uses synthetic data to reverse the influence of dropped clients and efficiently restore the model's generalization performance. It also handles unannounced client crashes and performs reliably in sparse network topologies. We evaluate the algorithm on MNIST and CIFAR-10 across different graph structures and show that it remains competitive with oracle baselines that require access to sensitive data. Finally, we discuss the limitations of our approach and suggest directions for future work in Decentralized Unlearning.
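The abstract only hints at the mechanism, so the following is a minimal sketch of the general QuickDrop-style idea it builds on: reversing a departed client's influence via gradient ascent on that client's distilled synthetic data, followed by a few descent steps on the remaining synthetic data to restore generalization. All names (unlearn_client, recover), hyperparameters, and the loop structure are illustrative assumptions, not the paper's actual implementation.

```python
import torch

def unlearn_client(model, synth_x, synth_y, loss_fn,
                   ascent_lr=0.01, ascent_steps=5):
    """Sketch: reverse a departed client's influence by gradient
    *ascent* on its distilled synthetic data (values are placeholders)."""
    opt = torch.optim.SGD(model.parameters(), lr=ascent_lr)
    for _ in range(ascent_steps):
        opt.zero_grad()
        # Negate the loss so the optimizer ascends instead of descends.
        loss = -loss_fn(model(synth_x), synth_y)
        loss.backward()
        opt.step()
    return model

def recover(model, remaining_synth, loss_fn, lr=0.01, steps=5):
    """Sketch: restore generalization with a few ordinary descent
    steps on the remaining clients' synthetic data."""
    opt = torch.optim.SGD(model.parameters(), lr=lr)
    for _ in range(steps):
        for x, y in remaining_synth:  # iterable of (inputs, labels)
            opt.zero_grad()
            loss_fn(model(x), y).backward()
            opt.step()
    return model
```

In the decentralized setting described above, such steps would have to run on the remaining peers using synthetic data shared beforehand, since no central server survives to coordinate them after a client drops or crashes.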