CaltechAUTHORS
  A Caltech Library Service

Sharp MSE Bounds for Proximal Denoising

Oymak, Samet and Hassibi, Babak (2016) Sharp MSE Bounds for Proximal Denoising. Foundations of Computational Mathematics, 16 (4). pp. 965-1029. ISSN 1615-3375. https://resolver.caltech.edu/CaltechAUTHORS:20160825-102038882

PDF - Submitted Version (1188 Kb). See Usage Policy.

Use this Persistent URL to link to this item: https://resolver.caltech.edu/CaltechAUTHORS:20160825-102038882

Abstract

Denoising is the problem of estimating a signal x_0 from its noisy observations y = x_0 + z. In this paper, we focus on the “structured denoising problem,” where the signal x_0 possesses a certain structure and z has independent normally distributed entries with mean zero and variance σ^2. We employ a structure-inducing convex function f(⋅) and solve min_x {1/2 ∥y − x∥^2_2 + σλf(x)} to estimate x_0, for some λ > 0. Common choices for f(⋅) include the ℓ_1 norm for sparse vectors, the ℓ_1–ℓ_2 norm for block-sparse signals, and the nuclear norm for low-rank matrices. The metric we use to evaluate the performance of an estimate x* is the normalized mean-squared error NMSE(σ) = E∥x* − x_0∥^2_2 / σ^2. We show that the NMSE is maximized as σ → 0, and we find the exact worst-case NMSE, which has a simple geometric interpretation: the mean-squared distance of a standard normal vector to the λ-scaled subdifferential λ∂f(x_0). When λ is optimally tuned to minimize the worst-case NMSE, our results can be related to the constrained denoising problem min_{f(x) ≤ f(x_0)} ∥y − x∥_2. The paper also connects these results to the generalized LASSO problem, in which one solves min_{f(x) ≤ f(x_0)} ∥y − Ax∥_2 to estimate x_0 from noisy linear observations y = Ax_0 + z. We show that certain properties of the LASSO problem are closely related to the denoising problem. In particular, we characterize the normalized LASSO cost and show that it exhibits a “phase transition” as a function of the number of observations. We also provide an order-optimal bound for the LASSO error in terms of the mean-squared distance. Our results are significant in two ways. First, we find a simple formula for the performance of a general convex estimator. Second, we establish a connection between denoising and linear inverse problems.
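The estimator in the abstract, min_x {1/2 ∥y − x∥^2_2 + σλf(x)}, is a proximal-operator evaluation. For f(⋅) = ℓ_1 norm it has the well-known closed form of soft thresholding, which gives a minimal sketch of the setup (this is an illustrative implementation, not the paper's code; the dimensions, sparsity level, noise level σ, and weight λ below are arbitrary choices for demonstration):

```python
import numpy as np

def prox_l1(y, tau):
    """Proximal operator of tau * ||.||_1, i.e. the closed-form solution of
    min_x 1/2 * ||y - x||_2^2 + tau * ||x||_1 (soft thresholding)."""
    return np.sign(y) * np.maximum(np.abs(y) - tau, 0.0)

rng = np.random.default_rng(0)
n, k = 1000, 50             # ambient dimension and sparsity (illustrative)
sigma, lam = 0.05, 2.0      # noise standard deviation and weight lambda

x0 = np.zeros(n)
x0[:k] = rng.standard_normal(k)          # a k-sparse signal x_0
y = x0 + sigma * rng.standard_normal(n)  # noisy observations y = x_0 + z

# Proximal denoising estimate x* and its normalized mean-squared error,
# the empirical counterpart of NMSE(sigma) = E||x* - x_0||_2^2 / sigma^2.
x_star = prox_l1(y, sigma * lam)
nmse = np.sum((x_star - x0) ** 2) / sigma**2
```

Averaging `nmse` over many noise draws and letting σ shrink would empirically approach the worst-case value that the paper characterizes geometrically, namely the mean-squared distance of a standard normal vector to λ∂f(x_0).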


Item Type: Article
Related URLs:
URL | URL Type | Description
http://dx.doi.org/10.1007/s10208-015-9278-4 | DOI | Article
http://link.springer.com/article/10.1007%2Fs10208-015-9278-4 | Publisher | Article
https://arxiv.org/abs/1305.2714 | arXiv | Discussion Paper
http://rdcu.be/ts5k | Publisher | Free ReadCube access
Additional Information: © 2015 SFoCM. Received 13 November 2013; Revised 30 March 2015; Accepted 15 April 2015; First Online: 06 October 2015. This work was supported in part by the National Science Foundation under grants CCF-0729203, CNS-0932428 and CIF-1018927, by the Office of Naval Research under the MURI grant N00014-08-1-0747, and by a grant from Qualcomm Inc.
Funders:
Funding Agency | Grant Number
NSF | CCF-0729203
NSF | CNS-0932428
NSF | CIF-1018927
Office of Naval Research (ONR) | N00014-08-1-0747
Qualcomm Inc. | UNSPECIFIED
Subject Keywords: Convex optimization; Proximity operator; Structured sparsity; Statistical estimation; Model fitting; Stochastic noise; Linear inverse; Generalized LASSO
Issue or Number: 4
Classification Code: Mathematics Subject Classification: 90C25; 52A41; 90C59; 60G15
Record Number: CaltechAUTHORS:20160825-102038882
Persistent URL: https://resolver.caltech.edu/CaltechAUTHORS:20160825-102038882
Official Citation: Oymak, S. & Hassibi, B. Found Comput Math (2016) 16: 965. doi:10.1007/s10208-015-9278-4
Usage Policy: No commercial reproduction, distribution, display or performance rights in this work are provided.
ID Code: 69930
Collection: CaltechAUTHORS
Deposited By: Tony Diaz
Deposited On: 25 Aug 2016 19:34
Last Modified: 03 Oct 2019 10:26
