CaltechAUTHORS
  A Caltech Library Service

Stochastic Gradient/Mirror Descent: Minimax Optimality and Implicit Regularization

Azizan, Navid and Hassibi, Babak (2018) Stochastic Gradient/Mirror Descent: Minimax Optimality and Implicit Regularization. (Unpublished) http://resolver.caltech.edu/CaltechAUTHORS:20190402-101505900

PDF (Submitted Version), 465 kB. See Usage Policy.

Use this Persistent URL to link to this item: http://resolver.caltech.edu/CaltechAUTHORS:20190402-101505900

Abstract

Stochastic descent methods (of the gradient and mirror varieties) have become increasingly popular in optimization. In fact, it is now widely recognized that the success of deep learning is not only due to the special deep architecture of the models, but also due to the behavior of the stochastic descent methods used, which play a key role in reaching "good" solutions that generalize well to unseen data. In an attempt to shed some light on why this is the case, we revisit some minimax properties of stochastic gradient descent (SGD) for the square loss of linear models (originally developed in the 1990s) and extend them to general stochastic mirror descent (SMD) algorithms for general loss functions and nonlinear models. In particular, we show that there is a fundamental identity which holds for SMD (and SGD) under very general conditions, and which implies the minimax optimality of SMD (and SGD) for sufficiently small step size, and for a general class of loss functions and general nonlinear models. We further show that this identity can be used to naturally establish other properties of SMD (and SGD), namely convergence and implicit regularization for over-parameterized linear models (in what is now being called the "interpolating regime"), some of which have been shown in certain cases in prior literature. We also argue how this identity can be used in the so-called "highly over-parameterized" nonlinear setting (where the number of parameters far exceeds the number of data points) to provide insights into why SMD (and SGD) may have similar convergence and implicit regularization properties for deep learning.
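For readers who want the update rule the abstract refers to, here is a minimal sketch in our own notation (the potential ψ, step size η, loss ℓ, model f, and initialization w_0 are placeholders; the paper spells out the exact assumptions). Stochastic mirror descent updates the parameter vector after seeing the i-th data point (x_i, y_i) as

\[
\nabla \psi(w_{i+1}) \;=\; \nabla \psi(w_i) \;-\; \eta\, \nabla_w\, \ell\big(y_i, f(x_i, w_i)\big),
\]

and choosing the potential \(\psi(w) = \tfrac{1}{2}\|w\|^2\) reduces this to ordinary SGD, \(w_{i+1} = w_i - \eta\, \nabla_w \ell(y_i, f(x_i, w_i))\). The implicit-regularization claim for over-parameterized (interpolating) linear models can then be read, informally, as: among all parameter vectors that fit the training data exactly, SMD with a sufficiently small step size converges to the one closest to its initialization \(w_0\) in the Bregman divergence of ψ,

\[
w_\infty \;=\; \operatorname*{argmin}_{w \,:\, y_j = x_j^\top w \ \forall j}\; D_\psi(w, w_0),
\]

so that, for example, SGD started from \(w_0 = 0\) returns the minimum-\(\ell_2\)-norm interpolating solution.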


Item Type: Report or Paper (Discussion Paper)
Related URLs:
  http://arxiv.org/abs/1806.00952 (arXiv) - Discussion Paper
ORCID:
  Azizan, Navid: 0000-0002-4299-2963
Record Number: CaltechAUTHORS:20190402-101505900
Persistent URL: http://resolver.caltech.edu/CaltechAUTHORS:20190402-101505900
Usage Policy: No commercial reproduction, distribution, display or performance rights in this work are provided.
ID Code: 94358
Collection: CaltechAUTHORS
Deposited By: Tony Diaz
Deposited On: 02 Apr 2019 17:26
Last Modified: 02 Apr 2019 17:26
