CaltechAUTHORS
  A Caltech Library Service

Deep regularization and direct training of the inner layers of Neural Networks with Kernel Flows

Yoo, Gene Ryan and Owhadi, Houman (2021) Deep regularization and direct training of the inner layers of Neural Networks with Kernel Flows. Physica D: Nonlinear Phenomena, 426 . Art. No. 132952. ISSN 0167-2789. doi:10.1016/j.physd.2021.132952. https://resolver.caltech.edu/CaltechAUTHORS:20201110-075343797

PDF - Accepted Version (1MB) - See Usage Policy.
PDF - Submitted Version (1MB) - See Usage Policy.

Use this Persistent URL to link to this item: https://resolver.caltech.edu/CaltechAUTHORS:20201110-075343797

Abstract

We introduce a new regularization method for Artificial Neural Networks (ANNs) based on the Kernel Flow (KF) algorithm. The algorithm was introduced in Owhadi and Yoo (2019) as a method for kernel selection in regression/kriging based on the minimization of the loss of accuracy incurred by halving the number of interpolation points in random batches of the dataset. Writing f_θ(x) = (f^(n)_{θ_n} ∘ f^(n−1)_{θ_{n−1}} ∘ ⋯ ∘ f^(1)_{θ_1})(x) for the functional representation of the compositional structure of the ANN (where θ_i are the weights and biases of layer i), the outputs of the inner layers h^(i)(x) = (f^(i)_{θ_i} ∘ f^(i−1)_{θ_{i−1}} ∘ ⋯ ∘ f^(1)_{θ_1})(x) define a hierarchy of feature maps and a hierarchy of kernels k^(i)(x, x′) = exp(−γ_i ∥h^(i)(x) − h^(i)(x′)∥₂²). When combined with a batch of the dataset, these kernels produce KF losses e₂^(i) (defined as the L² regression error incurred by using a random half of the batch to predict the other half), which depend on the parameters of the inner layers θ_1, …, θ_i (and on γ_i). The proposed method simply consists of aggregating (as a weighted sum) a subset of these KF losses with a classical output loss (e.g., cross-entropy). We test the proposed method on Convolutional Neural Networks (CNNs) and Wide Residual Networks (WRNs) without altering their structure or their output classifier, and report reduced test errors, decreased generalization gaps, and increased robustness to distribution shift, without a significant increase in computational complexity relative to standard CNN and WRN training (with Dropout and Batch Normalization). We suspect that these results might be explained by the fact that, while conventional training only employs a linear functional (a generalized moment) of the empirical distribution defined by the dataset and can be prone to being trapped in the Neural Tangent Kernel regime (under over-parameterization), the proposed loss function (defined as a nonlinear functional of the empirical distribution) effectively trains the underlying kernel defined by the CNN beyond regressing the data with that kernel.
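To make the per-layer KF loss concrete, the following is a minimal sketch (not the authors' implementation): a random half of a batch is used to kernel-interpolate the other half with the Gaussian kernel induced by an inner-layer feature map, and the relative L² prediction error is returned. The use of NumPy, the nugget regularizer, and all function and variable names are illustrative assumptions.

import numpy as np

def gaussian_kernel(A, B, gamma):
    """k(x, x') = exp(-gamma * ||x - x'||^2) for all pairs of rows in A and B."""
    sq_dists = np.sum(A**2, 1)[:, None] + np.sum(B**2, 1)[None, :] - 2.0 * A @ B.T
    return np.exp(-gamma * sq_dists)

def kf_l2_loss(h, y, gamma, nugget=1e-8, rng=None):
    """Sketch of the KF L2 loss for one inner-layer feature map.

    h : (n, d) array of inner-layer outputs h^(i)(x) for a batch.
    y : (n, c) array of targets (e.g., one-hot labels).
    Returns the relative L2 error of predicting a random half of the batch
    from the other half with the kernel k^(i) induced by h.
    """
    rng = np.random.default_rng() if rng is None else rng
    n = h.shape[0]
    perm = rng.permutation(n)
    fit, held = perm[: n // 2], perm[n // 2 :]
    # Kernel interpolation of the held-out half from the fitted half
    # (small nugget added for numerical stability; an illustrative choice).
    K_ff = gaussian_kernel(h[fit], h[fit], gamma) + nugget * np.eye(len(fit))
    K_hf = gaussian_kernel(h[held], h[fit], gamma)
    y_pred = K_hf @ np.linalg.solve(K_ff, y[fit])
    return np.sum((y[held] - y_pred) ** 2) / np.sum(y[held] ** 2)

# Illustrative total loss, as described in the abstract: a classical output loss
# (e.g., cross-entropy) plus a weighted sum of KF losses over selected inner layers:
# total_loss = output_loss + sum(w_i * kf_l2_loss(h_i, y, gamma_i) for selected i)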


Item Type:Article
Related URLs:
  https://doi.org/10.1016/j.physd.2021.132952 (DOI) - Article
  https://arxiv.org/abs/2002.08335 (arXiv) - Discussion Paper
ORCID:
  Yoo, Gene Ryan: 0000-0002-5319-5599
  Owhadi, Houman: 0000-0002-5677-1600
Additional Information:© 2021 Published by Elsevier B.V. Received 31 December 2020, Revised 17 March 2021, Accepted 13 April 2021, Available online 18 July 2021. The authors gratefully acknowledge support by the Air Force Office of Scientific Research, USA under award number FA9550-18-1-0271 (Games for Computation and Learning), Beyond Limits, USA (Learning Optimal Models), and NASA/JPL, USA (Earth 2050). We also thank an anonymous referee for comments and suggestions. CRediT authorship contribution statement: Gene Ryan Yoo: Conceptualization, Methodology, Numerical experiments, Writing – original draft. Houman Owhadi: Conceptualization, Writing – review & editing. The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.
Funders:
  Air Force Office of Scientific Research (AFOSR): FA9550-18-1-0271
  NASA/JPL: Earth 2050
Subject Keywords:Kernel Flows; Gaussian process regression; Artificial neural networks; Machine learning; Image classification; Inner layer training
DOI:10.1016/j.physd.2021.132952
Record Number:CaltechAUTHORS:20201110-075343797
Persistent URL:https://resolver.caltech.edu/CaltechAUTHORS:20201110-075343797
Official Citation:Gene Ryan Yoo, Houman Owhadi, Deep regularization and direct training of the inner layers of Neural Networks with Kernel Flows, Physica D: Nonlinear Phenomena, Volume 426, 2021, 132952, ISSN 0167-2789, https://doi.org/10.1016/j.physd.2021.132952.
Usage Policy:No commercial reproduction, distribution, display or performance rights in this work are provided.
ID Code:106580
Collection:CaltechAUTHORS
Deposited By: Tony Diaz
Deposited On:10 Nov 2020 16:16
Last Modified:13 Aug 2021 21:22
