Privacy Attacks in Decentralized Learning Systems that Exchange Chunked Models
Robust Decentralized Learning
H. Betmezoglu (TU Delft - Electrical Engineering, Mathematics and Computer Science)
Jérémie Decouchant – Mentor (TU Delft - Data-Intensive Systems)
B.A. Cox – Mentor (TU Delft - Data-Intensive Systems)
Anna Lukina – Graduation committee member (TU Delft - Algorithmics)
Abstract
Decentralized Learning (DL) is a key tool for training machine learning models on sensitive, distributed data. However, the peer-to-peer model exchange at the core of DL systems exposes participants to privacy attacks. Existing defenses often degrade model utility, introduce communication overhead, or are not applicable to DL systems. Model chunking (splitting a model into parts before sharing them individually) has been proposed as an alternative, but its standalone privacy implications have not been investigated. This work implements and evaluates three distinct model chunking methods (static, cyclic, and random) against two privacy attacks: membership inference and linkability. We introduce a Hungarian matching-based enhancement to the linkability attack, and relax prior assumptions by evaluating attack success under limited access to neighbor datasets. Our results show that chunking increases vulnerability to membership inference. However, static and random chunking are effective against linkability attacks under specific conditions, particularly when full epochs are used during training.
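To make the abstract's two techniques concrete, here is a minimal NumPy sketch of the three chunking strategies. The function name `chunk_indices`, its parameters, and the exact partitioning scheme are illustrative assumptions, not the thesis' implementation.

```python
import numpy as np

def chunk_indices(n_params, n_chunks, method, round_idx=0, rng=None):
    """Partition the indices of a flattened parameter vector into chunks.

    Illustrative only: the thesis' actual chunk construction may differ.
    """
    base = np.arange(n_params)
    if method == "static":
        # Static: the same contiguous partition is shared every round.
        return np.array_split(base, n_chunks)
    if method == "cyclic":
        # Cyclic: rotate the partition by one chunk per round, so a given
        # neighbor sees a different slice of the model over time.
        shift = (round_idx % n_chunks) * (n_params // n_chunks)
        return np.array_split(np.roll(base, -shift), n_chunks)
    if method == "random":
        # Random: draw a fresh permutation each round before partitioning.
        rng = rng if rng is not None else np.random.default_rng()
        return np.array_split(rng.permutation(n_params), n_chunks)
    raise ValueError(f"unknown chunking method: {method!r}")

# Example: in round 1, cyclic chunking hands each neighbor a shifted slice.
print(chunk_indices(9, 3, "cyclic", round_idx=1))
# [array([3, 4, 5]), array([6, 7, 8]), array([0, 1, 2])]
```

The Hungarian matching enhancement can be sketched in the same spirit: given an attacker-side cost matrix whose entry (i, j) scores how plausibly received update i originates from node j, Hungarian matching replaces greedy per-update guessing with a globally optimal one-to-one assignment. The sketch below uses SciPy's `linear_sum_assignment`; how the cost scores are built (here, hypothetically, a loss on samples of each node's data) is an assumption, not the thesis' exact attack.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def link_updates_to_nodes(cost):
    """Assign each received model update to the candidate node that most
    plausibly produced it, minimizing the total assignment cost.

    cost[i, j] might be the loss of update i evaluated on a sample of
    node j's data (lower = more plausible origin).
    """
    rows, cols = linear_sum_assignment(cost)
    return dict(zip(rows.tolist(), cols.tolist()))

# Toy example with three received updates and three candidate nodes.
cost = np.array([[0.2, 0.9, 0.8],
                 [0.7, 0.1, 0.9],
                 [0.6, 0.8, 0.3]])
print(link_updates_to_nodes(cost))  # {0: 0, 1: 1, 2: 2}
```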