
Sparse Recovery of Positive Signals with Minimal Expansion

Khajehnejad, M. Amin; Dimakis, Alexandros G.; Xu, Weiyu; Hassibi, Babak (2009) Sparse Recovery of Positive Signals with Minimal Expansion. (Unpublished) http://resolver.caltech.edu/CaltechAUTHORS:20150130-072455533

PDF (Submitted Version, 331 kB) - See Usage Policy.


Abstract

We investigate the sparse recovery problem of reconstructing a high-dimensional non-negative sparse vector from lower-dimensional linear measurements. While much work has focused on dense measurement matrices, sparse measurement schemes are crucial in applications such as DNA microarrays and sensor networks, where dense measurements are not practically feasible. One possible construction uses the adjacency matrices of expander graphs, which often leads to recovery algorithms much more efficient than ℓ_1 minimization. To date, however, constructions based on expanders have required very high expansion coefficients, which can make such graphs difficult to construct and the recoverable sets small. In this paper, we construct sparse measurement matrices for the recovery of non-negative vectors, using perturbations of the adjacency matrix of an expander graph with a much smaller expansion coefficient. We present a necessary and sufficient condition for ℓ_1 optimization to successfully recover the unknown vector and obtain expressions for the recovery threshold. For certain classes of measurement matrices, this necessary and sufficient condition is further equivalent to the existence of a "unique" vector in the constraint set, which opens the door to alternatives to ℓ_1 minimization. We further show that the minimal expansion we use is necessary for any graph for which sparse recovery is possible, so our construction is tight. We also present a novel recovery algorithm that exploits expansion and is much faster than ℓ_1 optimization. Finally, we demonstrate through theoretical bounds, as well as simulation, that our method is robust to noise and approximate sparsity.
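The ℓ_1 recovery step described in the abstract can be sketched concretely: for a non-negative vector x, ||x||_1 = 1ᵀx, so recovery reduces to a linear program. The sketch below uses a random sparse 0/1 matrix with a fixed number of ones per column as a stand-in for the paper's perturbed expander adjacency matrix (it is not a verified expander, and the dimensions are illustrative assumptions, not values from the paper).

```python
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(0)
n, m, k, d = 40, 20, 3, 5  # signal length, measurements, sparsity, ones per column

# Sparse 0/1 measurement matrix: each column has d ones in random rows.
# This only illustrates the sparse-measurement setting; the paper uses
# perturbed expander adjacency matrices.
A = np.zeros((m, n))
for j in range(n):
    A[rng.choice(m, size=d, replace=False), j] = 1.0

# Non-negative k-sparse ground truth and its measurements.
x_true = np.zeros(n)
x_true[rng.choice(n, size=k, replace=False)] = rng.uniform(1.0, 2.0, size=k)
b = A @ x_true

# For x >= 0, ||x||_1 = sum(x), so l1 minimization is the LP:
#   minimize 1^T x   subject to   A x = b,  x >= 0
res = linprog(c=np.ones(n), A_eq=A, b_eq=b, bounds=[(0, None)] * n)
x_hat = res.x
print("recovery error:", np.linalg.norm(x_hat - x_true))
```

Because x_true is itself feasible, the LP optimum can never exceed ||x_true||_1; exact recovery holds precisely when the condition characterized in the paper is met.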


Item Type: Report or Paper (Discussion Paper)
Related URLs: http://arxiv.org/abs/0902.4045 (arXiv, Discussion Paper)
Record Number: CaltechAUTHORS:20150130-072455533
Persistent URL: http://resolver.caltech.edu/CaltechAUTHORS:20150130-072455533
Usage Policy: No commercial reproduction, distribution, display or performance rights in this work are provided.
ID Code: 54234
Collection: CaltechAUTHORS
Deposited By: Shirley Slattery
Deposited On: 30 Jan 2015 22:14
Last Modified: 30 Jan 2015 22:14
