Privacy Attacks in Decentralized Learning Systems that Exchange Chunked Models

Robust Decentralized Learning

Bachelor Thesis (2025)
Author(s)

H. Betmezoglu (TU Delft - Electrical Engineering, Mathematics and Computer Science)

Contributor(s)

Jérémie Decouchant – Mentor (TU Delft - Data-Intensive Systems)

B.A. Cox – Mentor (TU Delft - Data-Intensive Systems)

Anna Lukina – Graduation committee member (TU Delft - Algorithmics)

Faculty
Electrical Engineering, Mathematics and Computer Science
Publication Year
2025
Language
English
Graduation Date
27-06-2025
Awarding Institution
Delft University of Technology
Project
CSE3000 Research Project
Programme
Computer Science and Engineering
Reuse Rights

Other than for strictly personal use, it is not permitted to download, forward or distribute the text or part of it, without the consent of the author(s) and/or copyright holder(s), unless the work is under an open content license such as Creative Commons.

Abstract

Decentralized Learning (DL) is a key tool for training machine learning models on sensitive, distributed data. However, peer-to-peer model exchange in DL systems exposes participants to privacy attacks. Existing defenses often degrade model utility, introduce communication overhead, or are not applicable to DL systems. Model chunking (splitting a model into parts before sharing them individually) has been proposed as an alternative, but its standalone privacy implications have not been investigated. This work implements and evaluates three distinct model chunking methods (static, cyclic, and random) against two privacy attacks: membership inference and linkability. We introduce a Hungarian matching-based enhancement to the linkability attack, and relax prior assumptions by evaluating attack success when the attacker has only limited access to neighbors' datasets. Our results show that chunking increases vulnerability to membership inference. However, static and random chunking are effective against linkability attacks under specific conditions, particularly when full epochs are used during training.
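The sketch below illustrates the two techniques the abstract names, for orientation only. It is not the thesis's implementation: the function names `chunk_indices` and `link_chunks_to_senders`, their signatures, and the use of a flattened parameter vector are assumptions. It shows one plausible reading of static, cyclic, and random chunking, and how a linkability attacker could use the Hungarian algorithm (via SciPy's `linear_sum_assignment`) to match received chunks to candidate senders given a distance matrix; how that matrix is built is left abstract.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment


def chunk_indices(n_params, n_chunks, scheme, round_idx=0, rng=None):
    """Partition parameter indices 0..n_params-1 into n_chunks chunks.

    Hypothetical sketch of the three schemes the abstract names:
      - static: the same partition is reused every round
      - cyclic: the partition rotates by one position each round
      - random: the partition is reshuffled every round
    """
    base = np.array_split(np.arange(n_params), n_chunks)
    if scheme == "static":
        return base
    if scheme == "cyclic":
        shift = round_idx % n_chunks
        return base[shift:] + base[:shift]
    if scheme == "random":
        rng = rng or np.random.default_rng()
        return np.array_split(rng.permutation(n_params), n_chunks)
    raise ValueError(f"unknown scheme: {scheme}")


def link_chunks_to_senders(cost):
    """Hungarian matching step of the linkability attack (illustrative).

    cost[i, j] = attacker's distance between received chunk i and the
    corresponding slice of candidate sender j's last observed model.
    Returns the one-to-one mapping {chunk index: sender index} that
    minimizes the total matching cost.
    """
    row_ind, col_ind = linear_sum_assignment(cost)
    return {int(r): int(c) for r, c in zip(row_ind, col_ind)}


if __name__ == "__main__":
    # Toy example: 10 parameters, 3 chunks, round 1 of cyclic chunking.
    print([c.tolist() for c in chunk_indices(10, 3, "cyclic", round_idx=1)])
    # Toy cost matrix: 3 received chunks vs. 3 candidate senders.
    cost = np.array([[0.2, 0.9, 0.8],
                     [0.7, 0.1, 0.9],
                     [0.8, 0.9, 0.3]])
    print(link_chunks_to_senders(cost))  # {0: 0, 1: 1, 2: 2}
```

The toy matching recovers the diagonal assignment, which is the minimum-cost one-to-one mapping for this matrix; unlike a greedy per-chunk nearest-neighbor rule, the assignment is globally optimal.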

Files

PrivacyDL_hbetmezoglu.pdf
(pdf | 0.893 MB)