Learning Sparse Graphs under Smoothness Prior

Conference Paper (2017)
Author(s)

Sundeep Prabhakar Chepuri (TU Delft - Signal Processing Systems)

Sijia Liu (University of Michigan)

Geert Leus (TU Delft - Signal Processing Systems)

Alfred O. Hero (University of Michigan)

Research Group
Signal Processing Systems
DOI
https://doi.org/10.1109/ICASSP.2017.7953410 (final published version)
Publication Year
2017
Language
English
Article number
7953410
Pages (from-to)
6508-6512
ISBN (electronic)
978-1-5090-4117-6
Event
ICASSP 2017 (2017-03-05 - 2017-03-09), Hilton New Orleans Riverside, New Orleans, LA, United States

Abstract

In this paper, we are interested in learning the underlying graph structure behind training data. Solving this basic problem is essential for carrying out any graph signal processing or machine learning task. To realize this, we assume that the data is smooth with respect to the graph topology, and we parameterize the graph topology using an edge sampling function. That is, the graph Laplacian is expressed in terms of a sparse edge selection vector, which provides an explicit handle to control the sparsity level of the graph. We solve the sparse graph learning problem given some training data in both the noiseless and noisy settings. Given the true smooth data, the posed sparse graph learning problem admits an optimal solution based on simple rank ordering. Given the noisy data, we show that the joint sparse graph learning and denoising problem reduces to designing only the sparse edge selection vector, which can be solved using convex optimization.
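The noiseless rank-ordering idea in the abstract can be sketched as follows. With the Laplacian parameterized by an edge selection vector w, the smoothness of the data decomposes edge-wise: tr(X^T L(w) X) = sum_m w_m * c_m, where c_m = ||x_i - x_j||^2 is the signal dissimilarity across candidate edge m = (i, j). Minimizing this sum subject to selecting exactly K edges then amounts to keeping the K edges with the smallest costs. The snippet below is a minimal sketch of this reasoning, not the paper's implementation; the function name, the complete graph as the candidate-edge set, and the unit edge weights are assumptions for illustration.

```python
import numpy as np

def learn_sparse_graph(X, K):
    """Rank-ordering sketch: select the K edges whose endpoint signals agree most.

    X : (N, T) array, one row of T signal samples per node (assumed layout).
    K : number of edges to keep in the learned graph.
    Returns the binary edge selection vector w and the list of kept edges.
    """
    N = X.shape[0]
    # Candidate edges: all pairs (i, j), i < j, of the complete graph (assumption).
    edges = [(i, j) for i in range(N) for j in range(i + 1, N)]
    # Per-edge smoothness cost c_m = ||x_i - x_j||^2 accumulated over all samples.
    costs = np.array([np.sum((X[i] - X[j]) ** 2) for i, j in edges])
    # Rank ordering: keep the K cheapest edges, i.e. set those entries of w to 1.
    keep = np.argsort(costs)[:K]
    w = np.zeros(len(edges))
    w[keep] = 1.0
    return w, [edges[m] for m in keep]
```

For example, if two nodes carry identical signals while a third differs strongly, selecting K = 1 edge connects the matching pair, since its cost c_m is zero.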