On the Privacy Bound of Distributed Optimization and its Application in Federated Learning
Qiongxiu Li (Tsinghua University)
Milan Lopuhaä-Zwakenberg (University of Twente)
Wangyang Yu (TU Delft - Signal Processing Systems)
R. Heusdens (TU Delft - Signal Processing Systems)
Abstract
Analyzing privacy leakage in distributed algorithms is challenging because it is difficult to track the information leakage across iterations. In this paper, we take a first step toward a theoretical analysis of the information flow in distributed optimization under the constraint that the gradients at every iteration remain concealed from other participants. Specifically, we derive a privacy bound on the minimum amount of information available to the adversary when the optimization accuracy is kept uncompromised. By analyzing the derived bound, we show that the privacy leakage depends heavily on the optimization objective, in particular on the linearity of the system. To understand how the bound affects privacy in practice, we consider two canonical federated learning (FL) applications: linear regression and neural networks. We find that in the former case, protecting the gradients alone is inadequate for protecting the private data, since the derived bound shows that all sensitive information can potentially be exposed. For more complex applications such as neural networks, protecting the gradients does provide certain privacy advantages, as it becomes more difficult for the adversary to infer the private inputs. Numerical validations are presented to corroborate our theoretical results.
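To give intuition for why linearity matters, the following is a minimal sketch, not taken from the paper, of the well-known observation that for a linear (least-squares) model the gradient is itself linear in the private input, so an adversary observing it, or any quantity carrying the same information, can often recover the input exactly. All variable names and values below are illustrative assumptions, and the sketch assumes a single private sample with a scalar label.

```python
# Minimal illustrative sketch (not the paper's method): for a single-sample
# least-squares loss
#     L(w, b) = 0.5 * (w @ x + b - y) ** 2,
# the gradients are grad_w = r * x and grad_b = r, with residual
# r = w @ x + b - y. Since the gradient is linear in the private input x,
# an adversary seeing it can recover x exactly whenever r != 0.

import numpy as np

rng = np.random.default_rng(0)

# Private training sample held by a client (hypothetical values).
x = rng.normal(size=5)   # private input features
y = 1.3                  # private label

# Current model parameters.
w = rng.normal(size=5)
b = 0.1

# Client-side gradient computation for the least-squares loss.
residual = w @ x + b - y
grad_w = residual * x
grad_b = residual

# Adversary-side reconstruction from the observed gradient alone.
x_reconstructed = grad_w / grad_b

print(np.allclose(x, x_reconstructed))  # True: the private input is fully exposed
```

For nonlinear models such as neural networks, the gradient is no longer a linear function of the input, which is consistent with the abstract's statement that inferring the private inputs becomes harder in that setting.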