CaltechAUTHORS
  A Caltech Library Service

High-Dimensional Covariance Decomposition into Sparse Markov and Independence Models

Janzamin, Majid and Anandkumar, Animashree (2014) High-Dimensional Covariance Decomposition into Sparse Markov and Independence Models. Journal of Machine Learning Research, 15, pp. 1549-1591. ISSN 1533-7928. https://resolver.caltech.edu/CaltechAUTHORS:20170927-142820777

PDF - Published Version (1451 KB). See Usage Policy.
PDF - Submitted Version (506 KB). See Usage Policy.


Abstract

Fitting high-dimensional data involves a delicate tradeoff between faithful representation and the use of sparse models. Too often, sparsity assumptions on the fitted model are too restrictive to provide a faithful representation of the observed data. In this paper, we present a novel framework incorporating sparsity in different domains. We decompose the observed covariance matrix into a sparse Gaussian Markov model (with a sparse precision matrix) and a sparse independence model (with a sparse covariance matrix). Our framework incorporates sparse covariance and sparse precision estimation as special cases and thus introduces a richer class of high-dimensional models. We posit the observed data as generated from a linear combination of a sparse Gaussian Markov model (with a sparse precision matrix) and a sparse Gaussian independence model (with a sparse covariance matrix). We characterize sufficient conditions for identifiability of the two models, viz., Markov and independence models. We propose an efficient decomposition method based on a modification of the popular ℓ_1-penalized maximum-likelihood estimator (ℓ_1-MLE). We establish that our estimator is consistent in both the domains, i.e., it successfully recovers the supports of both Markov and independence models, when the number of samples n scales as n=Ω(d^2 log p), where p is the number of variables and d is the maximum node degree in the Markov model. Our experiments validate these results and also demonstrate that our models have better inference accuracy under simple algorithms such as loopy belief propagation.
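The generative model described in the abstract can be illustrated with a small simulation: the observed data are a sum of a sample from a sparse Gaussian Markov model (sparse precision matrix) and a sample from a sparse Gaussian independence model (sparse covariance matrix), so the observed covariance is the sum of the two component covariances. The sketch below is illustrative only, assuming a chain-structured precision matrix and one off-diagonal covariance entry chosen for demonstration; the specific parameter values are not from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
p = 5  # number of variables (illustrative; the paper's regime is high-dimensional)

# Sparse precision matrix J for the Markov component: a chain graph,
# so only adjacent variables interact (tridiagonal, positive definite).
J = np.eye(p) + 0.3 * (np.eye(p, k=1) + np.eye(p, k=-1))

# Sparse covariance matrix S for the independence component:
# diagonal plus a single correlated pair (hypothetical choice).
S = np.eye(p)
S[0, 4] = S[4, 0] = 0.2

# The observed covariance is the sum of the two component covariances,
# since the two latent Gaussian samples are independent.
Sigma = np.linalg.inv(J) + S

# Draw observations as the sum of the two independent Gaussian components.
n = 200_000
x_markov = rng.multivariate_normal(np.zeros(p), np.linalg.inv(J), size=n)
x_indep = rng.multivariate_normal(np.zeros(p), S, size=n)
x = x_markov + x_indep

# The empirical covariance of the summed data approaches Sigma as n grows.
Sigma_hat = np.cov(x, rowvar=False)
err = np.max(np.abs(Sigma_hat - Sigma))
print(f"max entrywise error: {err:.3f}")
```

The paper's estimator then works in the reverse direction, recovering the sparse supports of both J and S from samples of x alone via a modified ℓ_1-penalized MLE; the snippet above only verifies the forward (generative) direction of the model.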


Item Type: Article
Related URLs:
URL | URL Type | Description
http://www.jmlr.org/papers/v15/janzamin14a.html | Publisher | Article
https://arxiv.org/abs/1211.0919 | arXiv | Discussion Paper
Additional Information: © 2014 Majid Janzamin and Animashree Anandkumar. We thank Karthik Mohan for helpful discussions on running experiments. We also acknowledge useful discussions with Max Welling, Babak Hassibi and Martin Wainwright. We also thank Bin Yu and the JMLR reviewers for valuable comments that have significantly improved the manuscript. M. Janzamin is supported by NSF Award CCF-1219234 and ARO Award W911NF-12-1-0404. A. Anandkumar is supported in part by Microsoft Faculty Fellowship, NSF Career award CCF-1254106, NSF Award CCF-1219234, AFOSR Award FA9550-10-1-0310, and ARO Award W911NF-12-1-0404.
Funders:
Funding Agency | Grant Number
NSF | CCF-1219234
Army Research Office (ARO) | W911NF-12-1-0404
Microsoft Research | UNSPECIFIED
NSF | CCF-1254106
Air Force Office of Scientific Research (AFOSR) | FA9550-10-1-0310
Subject Keywords: high-dimensional covariance estimation, sparse graphical model selection, sparse covariance models, sparsistency, convex optimization
Record Number: CaltechAUTHORS:20170927-142820777
Persistent URL: https://resolver.caltech.edu/CaltechAUTHORS:20170927-142820777
Usage Policy: No commercial reproduction, distribution, display or performance rights in this work are provided.
ID Code: 81883
Collection: CaltechAUTHORS
Deposited By: Tony Diaz
Deposited On: 27 Sep 2017 21:50
Last Modified: 03 Oct 2019 18:48
