
The Squared-Error of Generalized LASSO: A Precise Analysis

Oymak, Samet and Thrampoulidis, Christos and Hassibi, Babak (2013) The Squared-Error of Generalized LASSO: A Precise Analysis. (Unpublished)



We consider the problem of estimating an unknown signal x_0 from noisy linear observations y = Ax_0 + z ∈ ℝ^m. In many practical instances, x_0 has a certain structure that can be captured by a structure-inducing convex function f(•); for example, the ℓ_1 norm can be used to encourage a sparse solution. To estimate x_0 with the aid of f(•), we consider the well-known LASSO method and provide a sharp characterization of its performance. Our study falls under a generic framework in which the entries of the measurement matrix A and the noise vector z have zero-mean normal distributions with variances 1 and σ^2, respectively. For the LASSO estimator x^*, we ask: "What is the precise estimation error as a function of the noise level σ, the number of observations m, and the structure of the signal?" In particular, we attempt to calculate the Normalized Squared Error (NSE), defined as ∥x^* − x_0∥_2^2/σ^2. We show that the structure of the signal x_0 and the choice of the function f(•) enter the error formulae through the summary parameters D_f(x_0, ℝ^+) and D_f(x_0, λ), which are defined as the "Gaussian squared-distances" to the subdifferential cone and to the λ-scaled subdifferential of f at x_0, respectively. The first estimator assumes a priori knowledge of f(x_0) and is given by arg min_x {∥y − Ax∥_2 subject to f(x) ≤ f(x_0)}. We prove that its worst-case NSE is achieved as σ → 0 and concentrates around D_f(x_0, ℝ^+)/(m − D_f(x_0, ℝ^+)). Secondly, we consider arg min_x {∥y − Ax∥_2 + λf(x)}, for some penalty parameter λ ≥ 0. This time, the NSE formula depends on the choice of λ and is given by D_f(x_0, λ)/(m − D_f(x_0, λ)) over a range of λ. The last estimator is arg min_x {(1/2)∥y − Ax∥_2^2 + στf(x)}. We establish a mapping between this and the second estimator and propose a formula for its NSE. As useful side results, we find explicit formulae for the optimal estimation performance and the optimal penalty parameters λ_best and τ_best.
Finally, for a number of important structured signal classes, we translate our abstract formulae to closed-form upper bounds on the NSE.
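As an illustrative sketch (not code from the paper), the third estimator, arg min_x {(1/2)∥y − Ax∥_2^2 + στf(x)} with f the ℓ_1 norm, can be approximated by proximal gradient descent (ISTA). The problem dimensions, random seed, and penalty scaling τ below are hypothetical choices for demonstration only:

```python
import numpy as np

# Sketch of the l1-penalized estimator
#   arg min_x 1/2 * ||y - A x||_2^2 + sigma * tau * ||x||_1
# solved with ISTA (proximal gradient). All parameter values here are
# illustrative assumptions, not prescriptions from the paper.

rng = np.random.default_rng(0)
m, n, k = 200, 400, 10          # observations, ambient dimension, sparsity
sigma = 0.1                     # noise standard deviation

x0 = np.zeros(n)                # k-sparse ground-truth signal
x0[rng.choice(n, size=k, replace=False)] = rng.standard_normal(k)

A = rng.standard_normal((m, n))     # i.i.d. N(0, 1) entries, as in the paper's model
z = sigma * rng.standard_normal(m)  # i.i.d. N(0, sigma^2) noise
y = A @ x0 + z

tau = np.sqrt(2 * m * np.log(n))    # illustrative (not optimal) penalty scaling
lam = sigma * tau                   # effective l1 weight sigma * tau

step = 1.0 / np.linalg.norm(A, 2) ** 2   # 1 / Lipschitz constant of the gradient
x = np.zeros(n)
for _ in range(5000):
    v = x - step * (A.T @ (A @ x - y))                        # gradient step
    x = np.sign(v) * np.maximum(np.abs(v) - step * lam, 0.0)  # soft-threshold

nse = float(np.sum((x - x0) ** 2) / sigma ** 2)   # Normalized Squared Error
print(round(nse, 3))
```

Averaging such an empirical NSE over many independent draws of A and z would give a Monte Carlo estimate to compare against the paper's predicted concentration value D_f(x_0, λ)/(m − D_f(x_0, λ)).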

Item Type: Report or Paper (Discussion Paper)
Related URLs: Paper
ORCID: Thrampoulidis, Christos 0000-0001-9053-9365
Additional Information: This work was supported in part by the National Science Foundation under grants CCF-0729203, CNS-0932428 and CIF-1018927, by the Office of Naval Research under the MURI grant N00014-08-1-0747, and by a grant from Qualcomm Inc.
Funding Agency | Grant Number
Office of Naval Research (ONR) | N00014-08-1-0747
Subject Keywords: convex optimization, generalized LASSO, structured sparsity, Gaussian processes, statistical estimation, duality, model fitting, linear inverse, first order approximation, noisy compressed sensing, random noise
Record Number: CaltechAUTHORS:20150121-073901094
Persistent URL:
Usage Policy: No commercial reproduction, distribution, display or performance rights in this work are provided.
ID Code: 53913
Deposited By: Shirley Slattery
Deposited On: 22 Jan 2015 00:41
Last Modified: 09 Mar 2020 13:19
