
Near-Optimal Signal Recovery From Random Projections: Universal Encoding Strategies?

Candès, Emmanuel J. and Tao, Terence (2006) Near-Optimal Signal Recovery From Random Projections: Universal Encoding Strategies? IEEE Transactions on Information Theory, 52 (12). pp. 5406-5425. ISSN 0018-9448. doi:10.1109/TIT.2006.885507. https://resolver.caltech.edu/CaltechAUTHORS:CANieeetit06b


Use this Persistent URL to link to this item: https://resolver.caltech.edu/CaltechAUTHORS:CANieeetit06b

Abstract

Suppose we are given a vector f in a class F ⊂ ℝN, e.g., a class of digital signals or digital images. How many linear measurements do we need to make about f to be able to recover f to within precision ε in the Euclidean (ℓ2) metric? This paper shows that if the objects of interest are sparse in a fixed basis or compressible, then it is possible to reconstruct f to within very high accuracy from a small number of random measurements by solving a simple linear program. More precisely, suppose that the nth largest entry of the vector |f| (or of its coefficients in a fixed basis) obeys |f|(n) ≤ R · n^(−1/p), where R > 0 and p > 0. Suppose that we take measurements yk = 〈f, Xk〉, k = 1, . . ., K, where the Xk are N-dimensional Gaussian vectors with independent standard normal entries. Then for each f obeying the decay estimate above for some 0 < p < 1 and with overwhelming probability, our reconstruction f#, defined as the solution to the constraints yk = 〈f#, Xk〉 with minimal ℓ1 norm, obeys ‖f − f#‖ℓ2 ≤ Cp · R · (K/log N)^(−r), where r = 1/p − 1/2. There is a sense in which this result is optimal; it is generally impossible to obtain a higher accuracy from any set of K measurements whatsoever. The methodology extends to various other random measurement ensembles; for example, we show that similar results hold if one observes a few randomly sampled Fourier coefficients of f. In fact, the results are quite general and require only two hypotheses on the measurement ensemble, which are detailed.
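
The recovery procedure described in the abstract, minimizing the ℓ1 norm subject to agreement with the Gaussian measurements, can be cast as an ordinary linear program. The sketch below (not taken from the paper; all names such as N, K, S, X, and f_sharp are illustrative, and scipy's generic LP solver stands in for whatever solver one prefers) shows one way to set this up for a synthetic sparse signal.

```python
# Minimal sketch: recover a sparse vector from K Gaussian measurements
# by l1 minimization, written as a linear program over z = [x; t]:
#   minimize sum(t)  subject to  X x = y,  -t <= x <= t.
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(0)
N, K, S = 128, 48, 5                 # signal length, measurements, sparsity (illustrative)

# Sparse test signal f with S nonzero entries.
f = np.zeros(N)
f[rng.choice(N, S, replace=False)] = rng.standard_normal(S)

# Measurement ensemble: K Gaussian vectors X_k with i.i.d. standard normal entries.
X = rng.standard_normal((K, N))
y = X @ f                            # y_k = <f, X_k>

# LP data: objective counts only the auxiliary variables t.
c = np.concatenate([np.zeros(N), np.ones(N)])
A_eq = np.hstack([X, np.zeros((K, N))])          # X x = y
A_ub = np.block([[ np.eye(N), -np.eye(N)],       #  x - t <= 0
                 [-np.eye(N), -np.eye(N)]])      # -x - t <= 0
b_ub = np.zeros(2 * N)
bounds = [(None, None)] * N + [(0, None)] * N    # x free, t >= 0

res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=y, bounds=bounds)
f_sharp = res.x[:N]

print("relative l2 error:", np.linalg.norm(f - f_sharp) / np.linalg.norm(f))
```

With K on the order of S · log N measurements, a run of this sketch typically recovers the sparse test signal to near machine precision, in line with the error bound quoted above.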


Item Type:Article
Related URLs:
URL: https://doi.org/10.1109/TIT.2006.885507 (URL Type: DOI)
ORCID:
Candès, Emmanuel J.: 0000-0001-9234-924X
Additional Information:© Copyright 2006 IEEE. Reprinted with permission. Manuscript received November 1, 2004; revised April 3, 2006. [Posted online: 2006-11-30] The work of E. J. Candès was supported in part by the National Science Foundation under Grants DMS 01-40698 (FRG) and ACI-0204932 (ITR), and by an Alfred P. Sloan Fellowship. The work of T. Tao was supported in part by a grant from the Packard Foundation. The material in this paper was presented at Multiscale Geometry and Analysis in High Dimensions, Los Angeles, CA, October 2004. E. J. Candès would like to thank Justin Romberg and Houman Owhadi for conversations related to this project, and Noureddine El Karoui for sending us an early draft of his Ph.D. dissertation. E. J. Candès would also like to thank Joel Tropp for sharing some notes. T. Tao thanks Wilhelm Schlag and Gerd Mockenhaupt for pointing out the relevance of Bourgain’s work [1], [2]. Finally, they would like to thank the anonymous referees for providing useful references (see Section I-D) and for their role in improving the original manuscript.
Subject Keywords:Concentration of measure, convex optimization, duality in optimization, linear programming, random matrices, random projections, signal recovery, singular values of random matrices, sparsity, trigonometric expansions, uncertainty principle
Issue or Number:12
DOI:10.1109/TIT.2006.885507
Record Number:CaltechAUTHORS:CANieeetit06b
Persistent URL:https://resolver.caltech.edu/CaltechAUTHORS:CANieeetit06b
Usage Policy:No commercial reproduction, distribution, display or performance rights in this work are provided.
ID Code:6806
Collection:CaltechAUTHORS
Deposited By: Archive Administrator
Deposited On:23 Dec 2006
Last Modified:08 Nov 2021 20:37
