A novel one-layer recurrent neural network for the l1-regularized least square problem

Journal Article (2018)
Research Group
Information and Communication Technology
Copyright
© 2018 Majid Mohammadi, Y. Tan, Wout Hofman, S. Hamid Mousavi
DOI (related publication)
https://doi.org/10.1016/j.neucom.2018.07.007
Publication Year
2018
Language
English
Reuse Rights

Other than for strictly personal use, it is not permitted to download, forward or distribute the text or part of it, without the consent of the author(s) and/or copyright holder(s), unless the work is under an open content license such as Creative Commons.

Abstract

The l1-regularized least squares problem arises in diverse fields. However, finding its solution is challenging because its objective function is not differentiable. In this paper, we propose a new one-layer recurrent neural network to find the optimal solution of the l1-regularized least squares problem. We first convert the problem into a smooth quadratic minimization by splitting the desired variable into its positive and negative parts. We then propose a novel neural network to solve the resulting problem, and the network is guaranteed to converge to the optimal solution. Furthermore, the rate of convergence depends on a scaling parameter rather than on the size of the dataset. The proposed neural network is further adjusted to accommodate total variation regularization. Extensive experiments on l1- and total-variation-regularized problems illustrate the reasonable performance of the proposed neural network.
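As a rough illustration of the variable-splitting idea described above, the sketch below rewrites min_x 0.5||Ax - b||^2 + lam*||x||_1 as a nonnegatively constrained quadratic program via x = u - v with u, v >= 0, and drives it to a solution with an Euler discretization of a generic projection dynamical system. The specific dynamics, step sizes, and the name l1_least_squares_pnn are illustrative assumptions, not the authors' exact network.

```python
import numpy as np

def l1_least_squares_pnn(A, b, lam, alpha=None, h=0.1, n_steps=5000):
    """Illustrative sketch (not the paper's exact network):
    solve min_x 0.5*||Ax - b||^2 + lam*||x||_1 by splitting
    x = u - v, u, v >= 0, and integrating the projection dynamics
        dz/dt = -z + max(0, z - alpha * (Q z + c)),  z = [u; v].
    """
    m, n = A.shape
    AtA = A.T @ A
    Atb = A.T @ b
    # QP data for the stacked variable z = [u; v]
    Q = np.block([[AtA, -AtA], [-AtA, AtA]])
    c = np.concatenate([lam * np.ones(n) - Atb,
                        lam * np.ones(n) + Atb])
    if alpha is None:
        # scaling parameter: kept below 1/L, L = largest eigenvalue of Q
        alpha = 1.0 / np.linalg.norm(Q, 2)
    z = np.zeros(2 * n)
    for _ in range(n_steps):
        # forward-Euler step of the projection dynamical system
        z = z + h * (-z + np.maximum(0.0, z - alpha * (Q @ z + c)))
    u, v = z[:n], z[n:]
    return u - v

# sanity check: with A = I the minimizer is soft-thresholding of b
if __name__ == "__main__":
    rng = np.random.default_rng(0)
    b = rng.normal(size=5)
    x = l1_least_squares_pnn(np.eye(5), b, lam=0.5)
    soft = np.sign(b) * np.maximum(np.abs(b) - 0.5, 0.0)
    print(np.allclose(x, soft, atol=1e-4))  # expected: True
```

A fixed point of the dynamics satisfies z = max(0, z - alpha*(Qz + c)), which is exactly the KKT condition of the split quadratic program, so the state converges to the l1-regularized solution; as the abstract notes, the speed of convergence is governed by the scaling parameter alpha rather than by the problem size.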

Files

1_s2.0_S0925231218308336_main.... (pdf)
(pdf | 2.98 MB)
- Embargo expired on 10-01-2019
License info not available