Distributed Optimisation Using Stochastic PDMM

Convergence, transmission losses and privacy


Abstract

In recent years, the rapid growth in connected devices, and in the data these devices collect, has heightened interest in distributed processing. Many practical distributed networks are heterogeneous in nature, so algorithms operating within them must be simple, robust against network dynamics and resource efficient. Moreover, properly implemented privacy-preservation methods add to the power of distributed processing by making it possible to leverage the data of many different users without infringing on the privacy of the individuals involved.
In this study we focus on the primal-dual method of multipliers (PDMM), a promising distributed optimisation algorithm that appears well suited to distributed optimisation in heterogeneous networks. Most theoretical work in the existing literature focuses on synchronous versions of PDMM. In heterogeneous networks, however, asynchronous algorithms are preferable to synchronous ones. So far, simulation results have indicated that asynchronous PDMM converges, and can even converge in the presence of transmission losses.
In this work we analyse the properties of stochastic PDMM, a general framework that can model variations of PDMM such as asynchronous PDMM and PDMM with transmission losses. We build upon previous empirical results on PDMM and formulate theoretical proofs to substantiate them. After defining stochastic PDMM and proving its convergence, we compare a number of PDMM variations that have been proposed in the literature. Lastly, we derive a lower bound on the variance of the auxiliary variable in the context of stochastic PDMM, assuming uniform updating probabilities. This lower bound indicates that subspace-based privacy preservation is applicable to certain instances of stochastic PDMM, such as asynchronous PDMM.
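As a concrete illustration (not taken from the thesis itself), asynchronous PDMM for distributed averaging can be sketched as follows. Each iteration, one randomly chosen node updates its primal variable and transmits auxiliary variables to its neighbours; the ring graph, step size `c`, and all variable names are our own assumptions. Transmission losses could be modelled by randomly skipping the transmission step.

```python
import random

# Hypothetical sketch of asynchronous PDMM for distributed averaging:
# each node i holds a private measurement a[i] and the network should
# agree on the average. Graph, step size and names are illustrative.
random.seed(1)

# Ring graph of 4 nodes.
neighbours = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [2, 0]}
a = [1.0, 5.0, 3.0, 7.0]  # network average: 4.0
c = 1.0                   # PDMM step-size parameter

# Directed auxiliary variable z[(i, j)], held by node i for edge (i, j).
z = {(i, j): 0.0 for i in neighbours for j in neighbours[i]}
x = [0.0] * len(a)
A = lambda i, j: 1.0 if i < j else -1.0  # edge sign convention: A(i,j) = -A(j,i)

for _ in range(2000):
    i = random.randrange(len(a))  # asynchronous: one random active node
    d = len(neighbours[i])
    # Local primal update for f_i(x) = 0.5 * (x - a_i)^2.
    x[i] = (a[i] - sum(A(i, j) * z[(i, j)] for j in neighbours[i])) / (1 + c * d)
    # Transmit updated auxiliary variables to the neighbours.
    for j in neighbours[i]:
        z[(j, i)] = z[(i, j)] + 2 * c * A(i, j) * x[i]

print([round(v, 3) for v in x])  # entries approach the network average
```

With nonzero updating probabilities for every node, the iterates converge to the average, matching the almost-sure convergence result stated below for stochastic PDMM.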
The main result of this work is a theoretical proof that stochastic PDMM converges almost surely if the updating probability of each auxiliary variable is nonzero. Two important conclusions that follow from this proof are the almost sure convergence of asynchronous PDMM and of unicast PDMM with transmission losses. Another useful result is that subspace-based privacy preservation is effective when using asynchronous PDMM.