Consensus-Based Distributed Sparse Bayesian Learning by Fast Marginal Likelihood Maximization
C. Manss (Deutsches Zentrum für Luft- und Raumfahrt (DLR))
Dmitriy Shutin (Deutsches Zentrum für Luft- und Raumfahrt (DLR))
G. Leus (TU Delft - Signal Processing Systems)
Abstract
For swarm systems, distributed processing is of paramount importance, and Bayesian methods are preferred for their robustness. Existing distributed sparse Bayesian learning (SBL) methods either rely on automatic relevance determination (ARD), which involves a computationally complex reweighted ℓ1-norm optimization, or use loopy belief propagation, which is not guaranteed to converge. This paper therefore builds on the fast marginal likelihood maximization (FMLM) method to develop a faster distributed SBL variant. The proposed method has low communication overhead and can be distributed by simple consensus methods. The performed simulations indicate better performance than the distributed ARD version, and the same performance as the original FMLM.
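As a hedged illustration of the two ingredients named above (this is a minimal sketch under assumed variable names, not the paper's implementation): FMLM decides per basis function whether to add, re-estimate, or prune it from its sparsity factor s and quality factor q, and in a distributed setting the statistics entering such decisions can be averaged across agents by simple linear consensus.

```python
import numpy as np

def fmlm_alpha_update(s, q):
    """Per-basis FMLM-style rule: given the sparsity factor s and the
    quality factor q of one basis function, return the updated prior
    precision alpha, or infinity to prune the basis."""
    theta = q**2 - s
    if theta > 0:
        return s**2 / theta   # keep: re-estimate (or add) the basis
    return np.inf             # prune: alpha -> infinity zeroes the weight

def average_consensus(values, adjacency, eps=0.2, num_iters=50):
    """Simple linear average consensus over a connected graph: each
    node repeatedly mixes its scalar with its neighbours' values until
    all nodes agree on the network-wide mean (the step size eps must
    stay below 1 / max node degree for convergence)."""
    x = np.asarray(values, dtype=float)
    A = np.asarray(adjacency, dtype=float)
    L = np.diag(A.sum(axis=1)) - A   # graph Laplacian
    W = np.eye(len(x)) - eps * L     # consensus weight matrix
    for _ in range(num_iters):
        x = W @ x                    # uses only neighbour values per node
    return x
```

For example, on a three-node path graph the consensus iterate drives every node's value toward the arithmetic mean of the initial values, which is how per-agent sufficient statistics could be fused without a central node.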