CaltechAUTHORS
  A Caltech Library Service

Finding Structure with Randomness: Probabilistic Algorithms for Constructing Approximate Matrix Decompositions

Halko, N. and Martinsson, P. G. and Tropp, J. A. (2011) Finding Structure with Randomness: Probabilistic Algorithms for Constructing Approximate Matrix Decompositions. SIAM Review, 53 (2). pp. 217-288. ISSN 0036-1445. https://resolver.caltech.edu/CaltechAUTHORS:20111025-085943917

PDF - Published Version - 1311 kB (See Usage Policy)
PDF - Submitted Version - 1257 kB (See Usage Policy)

Use this Persistent URL to link to this item: https://resolver.caltech.edu/CaltechAUTHORS:20111025-085943917

Abstract

Low-rank matrix approximations, such as the truncated singular value decomposition and the rank-revealing QR decomposition, play a central role in data analysis and scientific computing. This work surveys and extends recent research which demonstrates that randomization offers a powerful tool for performing low-rank matrix approximation. These techniques exploit modern computational architectures more fully than classical methods and open the possibility of dealing with truly massive data sets.

This paper presents a modular framework for constructing randomized algorithms that compute partial matrix decompositions. These methods use random sampling to identify a subspace that captures most of the action of a matrix. The input matrix is then compressed—either explicitly or implicitly—to this subspace, and the reduced matrix is manipulated deterministically to obtain the desired low-rank factorization. In many cases, this approach beats its classical competitors in terms of accuracy, robustness, and/or speed. These claims are supported by extensive numerical experiments and a detailed error analysis.

The specific benefits of randomized techniques depend on the computational environment. Consider the model problem of finding the k dominant components of the singular value decomposition of an m × n matrix. (i) For a dense input matrix, randomized algorithms require O(mn log(k)) floating-point operations (flops) in contrast to O(mnk) for classical algorithms. (ii) For a sparse input matrix, the flop count matches classical Krylov subspace methods, but the randomized approach is more robust and can easily be reorganized to exploit multiprocessor architectures. (iii) For a matrix that is too large to fit in fast memory, the randomized techniques require only a constant number of passes over the data, as opposed to O(k) passes for classical algorithms. In fact, it is sometimes possible to perform matrix approximation with a single pass over the data.
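The two-stage scheme described in the abstract (use random sampling to identify a subspace that captures most of the action of the matrix, then factor the compressed matrix deterministically) can be illustrated in a few lines of NumPy. The prototype below is a minimal sketch of the basic dense Gaussian case only; the function name, interface, and the oversampling parameter p are illustrative choices rather than the paper's own pseudocode, and the paper's refinements (power iterations for slowly decaying spectra, structured test matrices that yield the O(mn log(k)) cost, single-pass variants) are omitted.

```python
import numpy as np

def randomized_svd(A, k, p=10, seed=None):
    """Approximate rank-k SVD of A via randomized range finding.

    Stage 1 samples the range of A with a Gaussian test matrix and
    orthonormalizes the result; Stage 2 compresses A to that subspace
    and factors the small reduced matrix with a deterministic SVD.
    """
    rng = np.random.default_rng(seed)
    n = A.shape[1]

    # Stage 1: Y = A @ Omega spans (with high probability) a subspace that
    # captures most of the action of A; QR gives an orthonormal basis Q
    # such that A ~= Q @ (Q.T @ A).
    Omega = rng.standard_normal((n, k + p))
    Q, _ = np.linalg.qr(A @ Omega)

    # Stage 2: form the small (k + p) x n matrix B = Q.T @ A, compute its
    # exact SVD, and lift the left singular vectors back to m dimensions.
    B = Q.T @ A
    U_hat, s, Vt = np.linalg.svd(B, full_matrices=False)
    U = Q @ U_hat

    # Truncate to the requested rank.
    return U[:, :k], s[:k], Vt[:k, :]

# Example: a 2000 x 500 matrix with rapidly decaying singular values.
rng = np.random.default_rng(0)
A = (rng.standard_normal((2000, 50)) * 0.5 ** np.arange(50)) @ rng.standard_normal((50, 500))
U, s, Vt = randomized_svd(A, k=20)
print(np.linalg.norm(A - (U * s) @ Vt, 2))  # spectral-norm error of the rank-20 approximation
```

In this sketch the only randomness is the Gaussian test matrix Omega; everything downstream is standard deterministic linear algebra, which is what makes the approach easy to reorganize for parallel or pass-limited settings.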


Item Type:Article
Related URLs:
http://dx.doi.org/10.1137/090771806 (DOI) - Article
http://epubs.siam.org/sirev/resource/1/siread/v53/i2/p217_s1 (Publisher) - Article
https://arxiv.org/abs/0909.4061 (arXiv) - Discussion Paper
ORCID:
Tropp, J. A.: 0000-0003-1024-1791
Additional Information:© 2011 Society for Industrial and Applied Mathematics. Received by the editors September 21, 2009; accepted for publication (in revised form) December 2, 2010; published electronically May 5, 2011. The authors have benefited from valuable discussions with many researchers, among them Inderjit Dhillon, Petros Drineas, Ming Gu, Edo Liberty, Michael Mahoney, Vladimir Rokhlin, Yoel Shkolnisky, and Arthur Szlam. In particular, we would like to thank Mark Tygert for his insightful remarks on early drafts of this paper. The example in section 7.2 was provided by François Meyer of the University of Colorado at Boulder. The example in section 7.3 comes from the FERET database of facial images collected under the FERET program, sponsored by the DoD Counterdrug Technology Development Program Office. The work reported was initiated during the program Mathematics of Knowledge and Search Engines held at IPAM in the fall of 2007. Finally, we would like to thank the anonymous referees, whose thoughtful remarks have helped us to improve the manuscript dramatically.
Subject Keywords:dimension reduction; eigenvalue decomposition; interpolative decomposition; Johnson–Lindenstrauss lemma; matrix approximation; parallel algorithm; pass-efficient algorithm; principal component analysis; randomized algorithm; random matrix; rank-revealing QR factorization; singular value decomposition; streaming algorithm
Issue or Number:2
Classification Code:AMS subject classifications: Primary, 65F30; Secondary, 68W20, 60B20
Record Number:CaltechAUTHORS:20111025-085943917
Persistent URL:https://resolver.caltech.edu/CaltechAUTHORS:20111025-085943917
Usage Policy:No commercial reproduction, distribution, display or performance rights in this work are provided.
ID Code:27399
Collection:CaltechAUTHORS
Deposited By: Jason Perez
Deposited On:25 Oct 2011 20:10
Last Modified:03 Oct 2019 03:23
