CaltechAUTHORS
  A Caltech Library Service

Provable Methods for Training Neural Networks with Sparse Connectivity

Sedghi, Hanie and Anandkumar, Anima (2014) Provable Methods for Training Neural Networks with Sparse Connectivity. In: Deep Learning and Representation Learning Workshop: NIPS 2014, 12 December 2014, Montreal, Canada. https://resolver.caltech.edu/CaltechAUTHORS:20190402-163306528

PDF (Published Version, 252 kB) - See Usage Policy.
PDF (Submitted Version, 152 kB) - See Usage Policy.

Use this Persistent URL to link to this item: https://resolver.caltech.edu/CaltechAUTHORS:20190402-163306528

Abstract

We provide novel guaranteed approaches for training feedforward neural networks with sparse connectivity. We leverage the techniques developed previously for learning linear networks and show that they can also be effectively adapted to learning non-linear networks. We operate on the moments involving the label and the score function of the input, and show that their factorization provably yields the weight matrix of the first layer of a deep network under mild conditions. In practice, the output of our method can serve as an effective initializer for gradient descent.
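To make the moment identity the abstract refers to concrete, below is a minimal NumPy sketch, not the authors' code. It assumes a Gaussian input (for which the first-order score function is simply x), a one-hidden-layer tanh network, and illustrative names (A1, a2, d, k, n); it checks the Stein's-lemma identity E[y · S_1(x)] = E[∇f(x)], whose value lies in the row span of the first-layer weight matrix A1. The paper itself works with higher-order score functions and ℓ_1-optimization to recover the sparse rows of A1; none of that is implemented here.

```python
# Minimal illustrative sketch (assumptions: Gaussian input, tanh activation,
# hypothetical names A1, a2). Not the authors' implementation.
import numpy as np

rng = np.random.default_rng(0)
d, k, n = 20, 3, 200_000                      # input dim, hidden units, samples

# Sparse first-layer weight matrix (the object the method aims to recover).
A1 = rng.normal(size=(k, d)) * (rng.random((k, d)) < 0.2)
a2 = rng.normal(size=k)                       # second-layer weights

x = rng.normal(size=(n, d))                   # x ~ N(0, I), so S_1(x) = x
y = np.tanh(x @ A1.T) @ a2                    # label from the non-linear network

# Cross-moment E[y * S_1(x)]: by Stein's lemma this equals E[grad f(x)],
# a vector lying in the row span of A1.
M1 = (y[:, None] * x).mean(axis=0)

# Direct check: E[grad f(x)] = sum_j a2_j * E[tanh'(A1 x)_j] * A1[j].
sig_prime = 1.0 - np.tanh(x @ A1.T) ** 2      # tanh derivative, shape (n, k)
grad_mean = (sig_prime * a2).mean(axis=0) @ A1
print(np.linalg.norm(M1 - grad_mean))         # small, up to sampling error
```

In the sketch, the first-order moment pins down only one direction in the span of A1; the factorization result stated in the abstract concerns higher-order analogues of M1, from which the individual sparse rows of A1 can provably be extracted.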


Item Type: Conference or Workshop Item (Paper)
Related URLs:
    https://sites.google.com/site/deeplearningworkshopnips2014/66.pdf (arXiv) - Article
    http://arxiv.org/abs/1412.2693 (arXiv) - Discussion Paper
Subject Keywords: Deep feedforward networks, sparse connectivity, ℓ_1-optimization, Stein's lemma
DOI: 10.48550/arXiv.1412.2693
Record Number: CaltechAUTHORS:20190402-163306528
Persistent URL: https://resolver.caltech.edu/CaltechAUTHORS:20190402-163306528
Usage Policy: No commercial reproduction, distribution, display or performance rights in this work are provided.
ID Code: 94389
Collection: CaltechAUTHORS
Deposited By: George Porter
Deposited On: 03 Apr 2019 14:45
Last Modified: 02 Jun 2023 00:21
