
Neural Operator: Learning Maps Between Function Spaces

Kovachki, Nikola and Li, Zongyi and Liu, Burigede and Azizzadenesheli, Kamyar and Bhattacharya, Kaushik and Stuart, Andrew and Anandkumar, Anima (2021) Neural Operator: Learning Maps Between Function Spaces. (Unpublished)

PDF - Submitted Version (see Usage Policy)

The classical development of neural networks has primarily focused on learning mappings between finite-dimensional Euclidean spaces or finite sets. We propose a generalization of neural networks tailored to learn operators mapping between infinite-dimensional function spaces. We formulate the approximation of operators as a composition of a class of linear integral operators and nonlinear activation functions, so that the composed operator can approximate complex nonlinear operators. Furthermore, we introduce four classes of operator parameterizations: graph-based operators, low-rank operators, multipole graph-based operators, and Fourier operators, and describe efficient algorithms for computing with each one. The proposed neural operators are resolution-invariant: they share the same network parameters between different discretizations of the underlying function spaces and can be used for zero-shot super-resolution. Numerically, the proposed models show superior performance compared to existing machine-learning-based methodologies on Burgers' equation, Darcy flow, and the Navier-Stokes equation, while being several orders of magnitude faster than conventional PDE solvers.
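The Fourier-operator idea described in the abstract (a learned linear map applied mode-by-mode in frequency space, composed with a pointwise linear map and a nonlinearity) can be sketched as a single layer in NumPy. This is a minimal illustration, not the authors' implementation; the names `fourier_layer`, `R`, `W`, and `k_max` are illustrative choices. It also demonstrates the resolution-invariance claim: the same spectral weights are applied to the same function sampled at two different resolutions.

```python
import numpy as np

def fourier_layer(u, R, W, k_max):
    """One illustrative Fourier layer: v = ReLU(W u + IFFT(R * FFT(u))).

    u: (n, d) real-valued function sampled on n grid points, d channels.
    R: (k_max, d, d) complex spectral weights, shared across resolutions.
    W: (d, d) pointwise linear weights.
    """
    n, _ = u.shape
    # norm="forward" makes low-mode coefficients independent of the grid size n
    u_hat = np.fft.rfft(u, axis=0, norm="forward")
    out_hat = np.zeros_like(u_hat)
    k = min(k_max, u_hat.shape[0])
    # Learned linear map applied mode-by-mode on the lowest k Fourier modes
    out_hat[:k] = np.einsum("kij,kj->ki", R[:k], u_hat[:k])
    conv = np.fft.irfft(out_hat, n=n, axis=0, norm="forward")
    return np.maximum(0.0, conv + u @ W.T)  # pointwise term + ReLU

rng = np.random.default_rng(0)
d, k_max = 2, 8
R = rng.standard_normal((k_max, d, d)) + 1j * rng.standard_normal((k_max, d, d))
W = rng.standard_normal((d, d))

# Same parameters applied to the same function at two different resolutions
x64 = np.linspace(0, 1, 64, endpoint=False)
x128 = np.linspace(0, 1, 128, endpoint=False)
u64 = np.stack([np.sin(2 * np.pi * x64), np.cos(2 * np.pi * x64)], axis=1)
u128 = np.stack([np.sin(2 * np.pi * x128), np.cos(2 * np.pi * x128)], axis=1)
v64 = fourier_layer(u64, R, W, k_max)
v128 = fourier_layer(u128, R, W, k_max)
```

Because the layer's parameters live in function space (per Fourier mode) rather than per grid point, `v128` evaluated on every other grid point coincides with `v64` — the discretization can change without retraining.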

Item Type: Report or Paper (Discussion Paper)
ORCIDs:
Kovachki, Nikola: 0000-0002-3650-2972
Li, Zongyi: 0000-0003-2081-9665
Liu, Burigede: 0000-0002-6518-3368
Azizzadenesheli, Kamyar: 0000-0001-8507-1868
Bhattacharya, Kaushik: 0000-0003-2908-5469
Stuart, Andrew: 0000-0001-9091-7266
Additional Information: Z. Li gratefully acknowledges the financial support from the Kortschak Scholars Program. A. Anandkumar is supported in part by Bren endowed chair, LwLL grants, Beyond Limits, Raytheon, Microsoft, Google, Adobe faculty fellowships, and DE Logi grant. K. Bhattacharya, N. B. Kovachki, B. Liu and A. M. Stuart gratefully acknowledge the financial support of the Army Research Laboratory through the Cooperative Agreement Number W911NF-12-0022. Research was sponsored by the Army Research Laboratory and was accomplished under Cooperative Agreement Number W911NF-12-2-0022. AMS is also supported by NSF (award DMS-1818977). The views and conclusions contained in this document are those of the authors and should not be interpreted as representing the official policies, either expressed or implied, of the Army Research Laboratory or the U.S. Government. The U.S. Government is authorized to reproduce and distribute reprints for Government purposes notwithstanding any copyright notation herein. The computations presented here were conducted on the Caltech High Performance Cluster, partially supported by a grant from the Gordon and Betty Moore Foundation.
Funding Agency / Grant Number:
Kortschak Scholars Program: UNSPECIFIED
Bren Professor of Computing and Mathematical Sciences: UNSPECIFIED
Learning with Less Labels (LwLL): UNSPECIFIED
Raytheon Company: UNSPECIFIED
Microsoft Faculty Fellowship: UNSPECIFIED
Google Faculty Research Award: UNSPECIFIED
Caltech De Logi Fund: UNSPECIFIED
Army Research Laboratory: W911NF-12-0022
Gordon and Betty Moore Foundation: UNSPECIFIED
Subject Keywords: Deep Learning, Operator Inference, Partial Differential Equations, Navier-Stokes Equation
Record Number: CaltechAUTHORS:20210831-204010794
Usage Policy: No commercial reproduction, distribution, display or performance rights in this work are provided.
ID Code: 110666
Deposited By: George Porter
Deposited On: 01 Sep 2021 14:38
Last Modified: 01 Sep 2021 14:38
