CaltechAUTHORS
  A Caltech Library Service

Learning Semidefinite-Representable Regularizers

Soh, Yong Sheng and Chandrasekaran, Venkat (2019) Learning Semidefinite-Representable Regularizers. Foundations of Computational Mathematics, 19 (2). pp. 375-434. ISSN 1615-3375. doi:10.1007/s10208-018-9386-z. https://resolver.caltech.edu/CaltechAUTHORS:20170614-100357188

PDF (Submitted Version, 1 MB) - See Usage Policy.

Use this Persistent URL to link to this item: https://resolver.caltech.edu/CaltechAUTHORS:20170614-100357188

Abstract

Regularization techniques are widely employed in optimization-based approaches for solving ill-posed inverse problems in data analysis and scientific computing. These methods are based on augmenting the objective with a penalty function, which is specified based on prior domain-specific expertise to induce a desired structure in the solution. We consider the problem of learning suitable regularization functions from data in settings in which precise domain knowledge is not directly available. Previous work under the title of ‘dictionary learning’ or ‘sparse coding’ may be viewed as learning a regularization function that can be computed via linear programming. We describe generalizations of these methods to learn regularizers that can be computed and optimized via semidefinite programming. Our framework for learning such semidefinite regularizers is based on obtaining structured factorizations of data matrices, and our algorithmic approach for computing these factorizations combines recent techniques for rank minimization problems along with an operator analog of Sinkhorn scaling. Under suitable conditions on the input data, our algorithm provides a locally linearly convergent method for identifying the correct regularizer that promotes the type of structure contained in the data. Our analysis is based on the stability properties of Operator Sinkhorn scaling and their relation to geometric aspects of determinantal varieties (in particular tangent spaces with respect to these varieties). The regularizers obtained using our framework can be employed effectively in semidefinite programming relaxations for solving inverse problems.
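The abstract refers to an operator analog of Sinkhorn scaling as a building block of the factorization algorithm. The snippet below is a minimal, illustrative sketch of the classical Operator Sinkhorn iteration for square Kraus operators, not the paper's full regularizer-learning procedure; the function name operator_sinkhorn, the iteration count, and the tolerance are assumptions made for illustration.

```python
import numpy as np

def operator_sinkhorn(A_list, num_iters=200, tol=1e-10):
    """Illustrative Operator Sinkhorn iteration (assumed interface).

    Given Kraus operators A_1, ..., A_k of the completely positive map
    Phi(X) = sum_i A_i X A_i^T, alternately rescale on the left and right
    so that Phi(I) = I and Phi^*(I) = I, i.e. the map becomes doubly
    stochastic whenever such a scaling exists.
    """
    A = [np.array(a, dtype=float) for a in A_list]
    d = A[0].shape[0]
    for _ in range(num_iters):
        # Left scaling: force Phi(I) = sum_i A_i A_i^T to the identity.
        S = sum(a @ a.T for a in A)
        L = np.linalg.inv(np.linalg.cholesky(S))
        A = [L @ a for a in A]
        # Right scaling: force Phi^*(I) = sum_i A_i^T A_i to the identity.
        T = sum(a.T @ a for a in A)
        R = np.linalg.inv(np.linalg.cholesky(T))
        A = [a @ R.T for a in A]
        # After the right step Phi^*(I) = I exactly; check how far Phi(I) drifted.
        if np.linalg.norm(sum(a @ a.T for a in A) - np.eye(d)) < tol:
            break
    return A

# Example usage with random Kraus operators (hypothetical data):
rng = np.random.default_rng(0)
A_scaled = operator_sinkhorn([rng.standard_normal((4, 4)) for _ in range(6)])
```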


Item Type: Article
Related URLs:
- https://doi.org/10.1007/s10208-018-9386-z (DOI): Article
- http://rdcu.be/J9be (Publisher): Free ReadCube access
- https://arxiv.org/abs/1701.01207 (arXiv): Discussion Paper
ORCID:
- Soh, Yong Sheng: 0000-0003-3367-1401
Alternate Title: A Matrix Factorization Approach for Learning Semidefinite-Representable Regularizers
Additional Information: © 2018 SFoCM. Received: 5 January 2017; Revised: 9 January 2018; Accepted: 11 January 2018. The authors were supported in part by NSF CAREER award CCF-1350590, by Air Force Office of Scientific Research Grants FA9550-14-1-0098 and FA9550-16-1-0210, by a Sloan Research Fellowship, and by an A*STAR (Agency for Science, Technology, and Research, Singapore) fellowship. The authors thank Joel Tropp for a helpful remark that improved the result in Proposition 17.
Funders:
- NSF: CCF-1350590
- Air Force Office of Scientific Research (AFOSR): FA9550-14-1-0098
- Air Force Office of Scientific Research (AFOSR): FA9550-16-1-0210
- Alfred P. Sloan Foundation: UNSPECIFIED
- Agency for Science, Technology and Research (A*STAR): UNSPECIFIED
Subject Keywords: Atomic norm; Convex optimization; Low-rank matrices; Nuclear norm; Operator scaling; Representation learning
Issue or Number: 2
Classification Code: MSC: Primary 90C25; Secondary 90C22; 15A24; 41A45; 52A41
DOI: 10.1007/s10208-018-9386-z
Record Number: CaltechAUTHORS:20170614-100357188
Persistent URL: https://resolver.caltech.edu/CaltechAUTHORS:20170614-100357188
Usage Policy: No commercial reproduction, distribution, display or performance rights in this work are provided.
ID Code: 78200
Collection: CaltechAUTHORS
Deposited By: Ruth Sustaita
Deposited On: 14 Jun 2017 17:11
Last Modified: 15 Nov 2021 17:37
