Kernel Flows: From learning kernels from data into the abyss

Owhadi, Houman and Yoo, Gene Ryan (2019) Kernel Flows: From learning kernels from data into the abyss. Journal of Computational Physics, 389. pp. 22-47. ISSN 0021-9991.

Learning can be seen as approximating an unknown function by interpolating the training data. Although Kriging offers a solution to this problem, it requires the prior specification of a kernel and it is not scalable to large datasets. We explore a numerical approximation approach to kernel selection/construction based on the simple premise that a kernel must be good if the number of interpolation points can be halved without significant loss in accuracy (measured using the intrinsic RKHS norm ∥·∥ associated with the kernel). We first test and motivate this idea on the simple problem of recovering the Green's function of an elliptic PDE (with inhomogeneous coefficients) from the sparse observation of one of its solutions. Next we consider the problem of learning non-parametric families of deep kernels of the form K_1(F_n(x), F_n(x')) with F_(n+1) = (I_d + ϵG_(n+1)) ◦ F_n and G_(n+1) ∈ span{K_1(F_n(x_i), ·)}. With the proposed approach, constructing the kernel becomes equivalent to integrating a stochastic, data-driven dynamical system, which allows for the training of very deep (bottomless) networks and the exploration of their properties. These networks learn by constructing flow maps in the kernel and input spaces via incremental data-dependent deformations/perturbations (appearing as the cooperative counterpart of adversarial examples) and, at profound depths, they (1) can achieve accurate classification from only one data point per class, (2) appear to learn archetypes of each class, and (3) expand distances between points in different classes while contracting distances between points in the same class. For kernels parameterized by the weights of a convolutional neural network, minimizing the approximation errors incurred by halving random subsets of interpolation points appears to outperform training the same CNN architecture with relative entropy and dropout.
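The halving criterion described in the abstract can be sketched concretely: if u is the minimum-norm kernel interpolant of all the training data and u^s the interpolant built from a random half, the relative error ρ = ∥u − u^s∥²/∥u∥² (in the RKHS norm) reduces to a ratio of quadratic forms, ρ = 1 − y_s^T K(X_s, X_s)^{-1} y_s / (y^T K(X, X)^{-1} y). The following is a minimal illustrative sketch, not the authors' implementation; the Gaussian kernel, the `lengthscale` parameter, and the jitter term are assumptions made for the example.

```python
import numpy as np

def gaussian_kernel(X, Y, lengthscale):
    """Gaussian (RBF) kernel matrix between point sets X (n,d) and Y (m,d)."""
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2.0 * lengthscale ** 2))

def kf_rho(X, y, lengthscale, rng):
    """Relative RKHS-norm error incurred by rebuilding the kernel
    interpolant from a random half of the interpolation points:
        rho = 1 - y_s^T K(X_s, X_s)^{-1} y_s / (y^T K(X, X)^{-1} y).
    A small jitter is added for numerical stability (an assumption,
    not part of the formula)."""
    n = len(X)
    idx = rng.choice(n, size=n // 2, replace=False)  # random half of the points
    K_full = gaussian_kernel(X, X, lengthscale) + 1e-10 * np.eye(n)
    K_half = gaussian_kernel(X[idx], X[idx], lengthscale) + 1e-10 * np.eye(len(idx))
    num = y[idx] @ np.linalg.solve(K_half, y[idx])  # ||u^s||^2 in the RKHS norm
    den = y @ np.linalg.solve(K_full, y)            # ||u||^2 in the RKHS norm
    return 1.0 - num / den
```

In this picture, selecting a kernel amounts to choosing parameters (here, the lengthscale) that keep the average of ρ over many random halvings small; in the paper this averaged stochastic loss is what drives the flow on the kernel parameters.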

Item Type: Article
ORCID: Owhadi, Houman: 0000-0002-5677-1600
Additional Information: © 2019 Published by Elsevier. Received 28 September 2018, Revised 17 March 2019, Accepted 20 March 2019, Available online 28 March 2019. The authors gratefully acknowledge support for this work from the Air Force Office of Scientific Research and the DARPA EQUiPS Program under award number FA9550-16-1-0054 (Computational Information Games) and from the Air Force Office of Scientific Research under award number FA9550-18-1-0271 (Games for Computation and Learning). We also thank Andrew Stuart and Yifan Chen for helpful discussions clarifying Section 7.4.
Funding Agencies and Grant Numbers:
Air Force Office of Scientific Research (AFOSR): FA9550-16-1-0054
Defense Advanced Research Projects Agency (DARPA): UNSPECIFIED
Air Force Office of Scientific Research (AFOSR): FA9550-18-1-0271
Record Number: CaltechAUTHORS:20190328-180953225
Persistent URL:
Official Citation: Houman Owhadi, Gene Ryan Yoo, Kernel Flows: From learning kernels from data into the abyss, Journal of Computational Physics, Volume 389, 2019, Pages 22-47, ISSN 0021-9991.
Usage Policy: No commercial reproduction, distribution, display or performance rights in this work are provided.
ID Code: 94269
Deposited By: George Porter
Deposited On: 29 Mar 2019 16:39
Last Modified: 03 Oct 2019 21:02
