Provable Privacy Advantages of Decentralized Federated Learning via Distributed Optimization

Journal Article (2025)
Authors

Wenrui Yu (CISPA Helmholtz Center for Information Security)

Qiongxiu Li (Aalborg University)

Milan Lopuhaä-Zwakenberg (University of Twente)

Mads Græsbøll Christensen (Aalborg University)

Richard Heusdens (TU Delft - Signal Processing Systems; Netherlands Defence Academy)

Research Group
Signal Processing Systems
To reference this document use:
https://doi.org/10.1109/TIFS.2024.3516564
Publication Year
2025
Language
English
Bibliographical Note
Green Open Access added to TU Delft Institutional Repository as part of the Taverne project ('You share, we take care!', https://www.openaccess.nl/en/you-share-we-take-care). Otherwise, as indicated in the copyright section: the publisher is the copyright holder of this work and the author uses Dutch legislation to make this work public.
Volume number
20
Pages (from-to)
822-838
Reuse Rights

Other than for strictly personal use, it is not permitted to download, forward or distribute the text or part of it, without the consent of the author(s) and/or copyright holder(s), unless the work is under an open content license such as Creative Commons.

Abstract

Federated learning (FL) emerged as a paradigm designed to improve data privacy by keeping data at its source, making privacy a core consideration in FL architectures, whether centralized or decentralized. In contrast to recent findings by Pasquini et al., which suggest that decentralized FL empirically offers no additional privacy or security benefits over centralized models, our study provides compelling evidence to the contrary. We demonstrate that decentralized FL, when deployed with distributed optimization, provides enhanced privacy protection, both theoretically and empirically, compared to centralized approaches. The difficulty of quantifying privacy loss through iterative processes has traditionally constrained the theoretical analysis of FL protocols. We overcome this by conducting a pioneering, in-depth information-theoretic privacy analysis of both frameworks. Our analysis, covering both eavesdropping and passive-adversary models, establishes bounds on privacy leakage. In particular, we show information-theoretically that the privacy loss in decentralized FL is upper bounded by the loss in centralized FL. In contrast to the centralized case, where the local gradients of individual participants are revealed directly, the information available to an adversary in optimization-based decentralized FL consists of differences of local gradients over successive iterations and the aggregated sum of different nodes' gradients over the network. This information complicates the adversary's attempt to infer private data. To bridge our theoretical insights with practical applications, we present detailed case studies on logistic regression and deep neural networks. These examples show that while privacy leakage remains comparable for simpler models, complex models such as deep neural networks exhibit lower privacy risks under decentralized FL. Extensive numerical tests further validate that decentralized FL is more resistant to privacy attacks, in line with our theoretical findings.
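To make the gradient-leakage distinction concrete, the following minimal Python sketch (our illustration only, not the protocol analyzed in the paper; the quadratic local losses, the step size, and all variable names are assumptions) contrasts what a passive adversary observes per round: raw local gradients in centralized FL versus differences of gradients over successive iterations and the network-wide gradient sum in optimization-based decentralized FL.

    # Toy illustration (assumption: quadratic local losses; not the
    # paper's exact protocol). Compares per-round adversary observations
    # in centralized FL vs. optimization-based decentralized FL.
    import numpy as np

    rng = np.random.default_rng(0)
    n_nodes, dim = 4, 3

    # Private data a_i; local losses f_i(x) = 0.5 * ||x - a_i||^2 give
    # local gradients grad_i(x) = x - a_i.
    a = rng.normal(size=(n_nodes, dim))

    x_prev = np.zeros(dim)
    g_prev = x_prev - a                      # all nodes' gradients at t-1
    x = x_prev - 0.1 * g_prev.mean(axis=0)   # one averaged-gradient step
    g = x - a                                # gradients at t

    # Centralized FL: raw local gradients are revealed. Since the model
    # x is broadcast, the adversary reconstructs every a_i exactly.
    a_recovered = x - g
    print("centralized: max reconstruction error =",
          np.abs(a_recovered - a).max())     # ~0, i.e. full leakage

    # Decentralized FL (per the paper's characterization): the adversary
    # sees gradient differences over successive iterations and the
    # aggregated sum over the network.
    diff = g - g_prev                        # = (x - x_prev) for every node
    total = g.sum(axis=0)                    # = n*x - sum_i a_i: only the
                                             # sum of the a_i is exposed
    print("decentralized: difference carries no per-node data:",
          np.allclose(diff, np.tile(x - x_prev, (n_nodes, 1))))

In this linear toy case the per-iteration gradient difference equals the public model update x - x_prev at every node, carrying no per-node information, and the aggregated sum exposes only the sum of the private vectors; this mirrors, in the simplest possible setting, the abstract's claim that these quantities complicate inference of individual data.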

Files

License info not available

File under embargo until 23-06-2025