On the Privacy Bound of Distributed Optimization and its Application in Federated Learning

Conference Paper (2024)
Author(s)

Qiongxiu Li (Tsinghua University)

Milan Lopuhaä-Zwakenberg (University of Twente)

Wangyang Yu (TU Delft - Signal Processing Systems)

R Heusdens (TU Delft - Signal Processing Systems)

Research Group
Signal Processing Systems
DOI
https://doi.org/10.23919/EUSIPCO63174.2024.10715187
Publication Year
2024
Language
English
Bibliographical Note
Green Open Access added to TU Delft Institutional Repository ‘You share, we take care!’ – Taverne project, https://www.openaccess.nl/en/you-share-we-take-care. Otherwise, as indicated in the copyright section: the publisher is the copyright holder of this work and the author uses the Dutch legislation to make this work public.
Pages (from-to)
2232-2236
ISBN (electronic)
9789464593617
Reuse Rights

Other than for strictly personal use, it is not permitted to download, forward or distribute the text or part of it, without the consent of the author(s) and/or copyright holder(s), unless the work is under an open content license such as Creative Commons.

Abstract

Analyzing privacy leakage in distributed algorithms is challenging because it is difficult to track how information leaks across iterations. In this paper, we take a first step toward a theoretical analysis of the information flow in distributed optimization when the gradients at every iteration are kept concealed from other parties. Specifically, we derive a privacy bound on the minimum information available to an adversary when the optimization accuracy is left uncompromised. By analyzing this bound, we show that the privacy leakage depends heavily on the optimization objective, in particular on the linearity of the system. To understand how the bound affects privacy in practice, we consider two canonical federated learning (FL) applications: linear regression and neural networks. We find that in the linear regression case, protecting the gradients alone is inadequate for protecting the private data, as the derived bound shows that all sensitive information can potentially be exposed. For more complex applications such as neural networks, protecting the gradients does provide certain privacy advantages, since it becomes more difficult for the adversary to infer the private inputs. Numerical validations are presented to support our theoretical results.
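The linear regression finding can be illustrated with a minimal sketch (not taken from the paper): for a single-sample squared-error update with a bias term, the shared gradient determines the private input in closed form, since the weight gradient is the residual times the input and the bias gradient is the residual itself. The variable names and setup below are illustrative assumptions, not the paper's notation.

```python
# Minimal sketch (assumption: single-sample linear regression with squared loss).
# It shows why sharing gradients in the linear case can expose the private input.
import numpy as np

rng = np.random.default_rng(0)
x_true = rng.normal(size=5)        # private input held by a client
y = 1.7                            # private label
w, b = rng.normal(size=5), 0.3     # current global model, known to the adversary

# Loss: L = 0.5 * (w @ x + b - y)**2
residual = w @ x_true + b - y
grad_w = residual * x_true         # gradient w.r.t. weights (shared with the server)
grad_b = residual                  # gradient w.r.t. bias (also shared)

# An adversary observing (grad_w, grad_b) recovers the input exactly:
x_recovered = grad_w / grad_b
print(np.allclose(x_recovered, x_true))   # True whenever the residual is nonzero
```

For a network with nonlinear hidden layers, the map from input to gradient is no longer invertible in such a closed form, which is consistent with the abstract's observation that gradient protection offers more benefit in that setting.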
