Privacy-Preserving Distributed Processing

Metrics, Bounds and Algorithms

Journal Article (2021)
Authors

Qiongxiu Li (Aalborg University)

Jaron Skovsted Gundersen (Aalborg University)

R. Heusdens (Netherlands Defence Academy, TU Delft - Signal Processing Systems)

Mads Græsbøll Christensen (Aalborg University)

Research Group
Signal Processing Systems
Copyright
© 2021 Qiongxiu Li, Jaron Skovsted Gundersen, R. Heusdens, Mads Græsbøll Christensen
To reference this document use:
https://doi.org/10.1109/TIFS.2021.3050064
Publication Year
2021
Language
English
Volume number
16
Pages (from-to)
2090-2103
Reuse Rights

Other than for strictly personal use, it is not permitted to download, forward or distribute the text or part of it, without the consent of the author(s) and/or copyright holder(s), unless the work is under an open content license such as Creative Commons.

Abstract

Privacy-preserving distributed processing has recently attracted considerable attention. It aims to design solutions for conducting signal processing tasks over networks in a decentralized fashion without violating privacy. Many existing techniques can be adopted to solve this problem, such as differential privacy, secure multiparty computation, and the recently proposed distributed optimization-based subspace perturbation algorithms. However, since each of these is derived from a different context, with different metrics and assumptions, it is hard to choose or design an appropriate algorithm in the context of distributed processing. To address this problem, we first propose general information-theoretic metrics based on mutual information that can compare and relate these existing algorithms in terms of two key aspects: output utility and individual privacy. We consider two widely used adversary models: the passive and the eavesdropping adversary. Moreover, we derive a lower bound on individual privacy that helps to understand the nature of the problem and provides insight into which algorithm is preferable under different conditions. To validate these claims, we investigate a concrete example and compare a number of state-of-the-art approaches with respect to these aspects, using both theoretical analysis and numerical validation. Finally, we discuss and provide principles for designing appropriate algorithms for different applications.
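The abstract's central idea of measuring individual privacy via mutual information can be illustrated with a minimal sketch. The code below is not the paper's algorithm: the plug-in estimator, the toy masking scheme, and all variable names are illustrative assumptions. It only shows the intuition that stronger perturbation of a node's private value drives the mutual information between the value and what an adversary observes toward zero.

```python
from collections import Counter
from math import log2
import random

def mutual_information(xs, ys):
    """Plug-in estimate of I(X;Y) in bits from paired discrete samples."""
    n = len(xs)
    pxy = Counter(zip(xs, ys))          # empirical joint distribution
    px = Counter(xs)                    # empirical marginal of X
    py = Counter(ys)                    # empirical marginal of Y
    return sum((c / n) * log2((c / n) / ((px[x] / n) * (py[y] / n)))
               for (x, y), c in pxy.items())

# Toy setting (assumption, not from the paper): a node holds a private
# value x uniform on {0,1,2,3} and transmits an observation y.
random.seed(0)
x = [random.randint(0, 3) for _ in range(10000)]

# No protection: the observation equals the private value, so the
# adversary learns everything, I(X;Y) = H(X) = 2 bits.
y_clear = x

# Uniform additive masking mod 4: the observation is statistically
# independent of x, so I(X;Y) is (nearly) 0 bits.
y_noisy = [(v + random.randint(0, 3)) % 4 for v in x]

print(mutual_information(x, y_clear))   # close to 2 bits (full leakage)
print(mutual_information(x, y_noisy))   # close to 0 bits (strong privacy)
```

In this reading, output utility and individual privacy are both expressed in the same mutual-information currency, which is what makes algorithms from different frameworks (differential privacy, secure multiparty computation, subspace perturbation) comparable at all.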

Files

IFStrans2020.pdf
(PDF | 0.757 MB)
License info not available