CaltechAUTHORS
  A Caltech Library Service

A sparse decomposition of low rank symmetric positive semi-definite matrices

Hou, Thomas Y. and Li, Qin and Zhang, Pengchuan (2017) A sparse decomposition of low rank symmetric positive semi-definite matrices. Multiscale Modeling and Simulation, 15 (1). pp. 410-444. ISSN 1540-3459. https://resolver.caltech.edu/CaltechAUTHORS:20170413-141136299

PDF - Published Version (2222 KB). See Usage Policy.
PDF - Submitted Version (2595 KB). See Usage Policy.


Abstract

Suppose that A ∈ R^(N×N) is symmetric positive semidefinite with rank K ≤ N. Our goal is to decompose A into K rank-one matrices, A = ∑_(k=1)^K g_k g_k^T, where the modes {g_k}_(k=1)^K are required to be as sparse as possible. In contrast to eigendecomposition, these sparse modes are not required to be orthogonal. Such a problem arises in random field parametrization, where A is the covariance function; it is intractable to solve in general. In this paper, we partition the indices from 1 to N into several patches and propose to quantify the sparseness of a vector by the number of patches on which it is nonzero, which is called patchwise sparseness. Our aim is to find the decomposition which minimizes the total patchwise sparseness of the decomposed modes. We propose a domain-decomposition type method, called intrinsic sparse mode decomposition (ISMD), which follows the "local-modes-construction + patching-up" procedure. The key step in the ISMD is to construct local pieces of the intrinsic sparse modes by solving a joint diagonalization problem. Thereafter, a pivoted Cholesky decomposition is utilized to glue these local pieces together. Optimal sparse decomposition, consistency across different domain decompositions, and robustness to small perturbation are proved under the so-called regular-sparse assumption (see Definition 1.2). We provide simulation results to show the efficiency and robustness of the ISMD. We also compare the ISMD to other existing methods, e.g., eigendecomposition, pivoted Cholesky decomposition, and convex relaxation of sparse principal component analysis [R. Lai, J. Lu, and S. Osher, Comm. Math. Sci., to appear; V. Q. Vu, J. Cho, J. Lei, and K. Rohe, Fantope projection and selection: A near-optimal convex relaxation of sparse PCA, in Proceedings in Advances in Neural Information Processing Systems 26, 2013, pp. 2670--2678].
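One of the abstract's baselines, pivoted Cholesky decomposition, already produces the claimed form A = ∑_(k=1)^K g_k g_k^T with exactly K = rank(A) terms, though (unlike ISMD) those modes are not guaranteed to be the sparsest. A minimal NumPy sketch of the greedy pivoted Cholesky idea, with an illustrative low-rank matrix built from three localized modes (the matrix and supports below are examples of ours, not data from the paper):

```python
import numpy as np

def pivoted_cholesky(A, tol=1e-10):
    """Greedy pivoted Cholesky: A = sum_k g_k g_k^T with rank(A) terms."""
    R = A.astype(float).copy()          # residual, updated in place
    modes = []
    for _ in range(R.shape[0]):
        d = np.diag(R)
        p = int(np.argmax(d))           # pivot: largest residual diagonal
        if d[p] <= tol:                 # residual numerically zero -> done
            break
        g = R[:, p] / np.sqrt(d[p])     # rank-one factor from pivot column
        modes.append(g)
        R = R - np.outer(g, g)          # peel off the rank-one term
    return modes

# Rank-3 PSD test matrix built from three modes with localized supports.
rng = np.random.default_rng(0)
G = np.zeros((8, 3))
G[0:3, 0] = rng.standard_normal(3)      # mode supported on indices 0..2
G[3:6, 1] = rng.standard_normal(3)      # mode supported on indices 3..5
G[5:8, 2] = rng.standard_normal(3)      # mode supported on indices 5..7
A = G @ G.T

modes = pivoted_cholesky(A)
print(len(modes))                                        # 3 rank-one terms
print(np.allclose(sum(np.outer(g, g) for g in modes), A))  # True
```

Note that the recovered modes reproduce A exactly but their supports spread over unions of the original patches wherever supports overlap; recovering the intrinsically sparsest modes is precisely the gap ISMD addresses.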


Item Type: Article
Related URLs:
URL | URL Type | Description
http://dx.doi.org/10.1137/16M107760X | DOI | Article
http://epubs.siam.org/doi/10.1137/16M107760X | Publisher | Article
https://arxiv.org/abs/1607.00702 | arXiv | Discussion Paper
ORCID:
Author | ORCID
Zhang, Pengchuan | 0000-0003-1155-9507
Additional Information: © 2017 Society for Industrial and Applied Mathematics. Received by the editors May 31, 2016; accepted for publication (in revised form) November 22, 2016; published electronically March 16, 2017. This research was in part supported by Air Force MURI grant FA9550-09-1-0613, DOE grant DE-FG02-06ER25727, and NSF grants DMS-1318377 and DMS-1159138.
Funders:
Funding Agency | Grant Number
Air Force Office of Scientific Research (AFOSR) | FA9550-09-1-0613
Department of Energy (DOE) | DE-FG02-06ER25727
NSF | DMS-1318377
NSF | DMS-1159138
Subject Keywords: intrinsic sparse mode decomposition, principal component analysis, sparse PCA, joint diagonalization, pivoted Cholesky decomposition, matrix factorization
Issue or Number: 1
Classification Code: AMS subject classifications: 68Q25, 68R10, 68U05
Record Number: CaltechAUTHORS:20170413-141136299
Persistent URL: https://resolver.caltech.edu/CaltechAUTHORS:20170413-141136299
Usage Policy: No commercial reproduction, distribution, display or performance rights in this work are provided.
ID Code: 76554
Collection: CaltechAUTHORS
Deposited By: Ruth Sustaita
Deposited On: 13 Apr 2017 22:04
Last Modified: 03 Oct 2019 17:02
