Neural Operator: Learning Maps Between Function Spaces
Abstract
The classical development of neural networks has primarily focused on learning mappings between finite dimensional Euclidean spaces or finite sets. We propose a generalization of neural networks tailored to learn operators mapping between infinite dimensional function spaces. We formulate the approximation of operators by composition of a class of linear integral operators and nonlinear activation functions, so that the composed operator can approximate complex nonlinear operators. Furthermore, we introduce four classes of operator parameterizations: graph-based operators, low-rank operators, multipole graph-based operators, and Fourier operators, and describe efficient algorithms for computing with each one. The proposed neural operators are resolution-invariant: they share the same network parameters across different discretizations of the underlying function spaces and can be used for zero-shot super-resolution. Numerically, the proposed models show superior performance compared to existing machine learning based methodologies on Burgers' equation, Darcy flow, and the Navier-Stokes equation, while being several orders of magnitude faster than conventional PDE solvers.
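To make the construction concrete, below is a minimal sketch of the core building block behind the Fourier operator parameterization: a spectral convolution layer that transforms the input to Fourier space, applies a learned complex-linear transform to a truncated set of low-frequency modes, and transforms back. This is not the authors' released code; the class name, shapes, and hyperparameters are illustrative assumptions.

```python
import torch
import torch.nn as nn


class SpectralConv1d(nn.Module):
    """Illustrative 1D Fourier layer: FFT -> keep low modes -> learned
    complex-linear transform -> inverse FFT. (Hypothetical sketch, not
    the authors' implementation.)"""

    def __init__(self, in_channels, out_channels, modes):
        super().__init__()
        # Number of low-frequency Fourier modes retained; must satisfy
        # modes <= n_gridpoints // 2 + 1 for the inputs used.
        self.modes = modes
        scale = 1.0 / (in_channels * out_channels)
        # Complex weights acting on each retained mode.
        self.weights = nn.Parameter(
            scale * torch.randn(in_channels, out_channels, modes,
                                dtype=torch.cfloat)
        )

    def forward(self, x):
        # x: (batch, in_channels, n_gridpoints), real-valued
        x_ft = torch.fft.rfft(x)  # FFT over the spatial dimension
        out_ft = torch.zeros(
            x.shape[0], self.weights.shape[1], x_ft.shape[-1],
            dtype=torch.cfloat, device=x.device,
        )
        # Mode-wise matrix multiply on the lowest `modes` frequencies;
        # higher frequencies are truncated (set to zero).
        out_ft[:, :, :self.modes] = torch.einsum(
            "bix,iox->box", x_ft[:, :, :self.modes], self.weights
        )
        # Back to physical space on the original grid.
        return torch.fft.irfft(out_ft, n=x.shape[-1])


# Because the weights live on Fourier modes rather than grid points, the
# same parameters can be applied to functions sampled at any resolution,
# which is the mechanism behind the zero-shot super-resolution claim.
layer = SpectralConv1d(in_channels=1, out_channels=1, modes=16)
coarse = torch.randn(8, 1, 64)    # functions on a 64-point grid
fine = torch.randn(8, 1, 256)     # functions on a 4x finer grid
print(layer(coarse).shape)        # torch.Size([8, 1, 64])
print(layer(fine).shape)          # torch.Size([8, 1, 256]) -- same weights
```

In a full neural operator, several such layers are composed with pointwise linear transforms and nonlinear activations, mirroring the integral-operator-plus-activation formulation described in the abstract.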
Additional Information
Z. Li gratefully acknowledges the financial support from the Kortschak Scholars Program. A. Anandkumar is supported in part by the Bren endowed chair, LwLL grants, Beyond Limits, Raytheon, Microsoft, Google, and Adobe faculty fellowships, and the De Logi grant. K. Bhattacharya, N. B. Kovachki, B. Liu and A. M. Stuart gratefully acknowledge the financial support of the Army Research Laboratory; this research was sponsored by the Army Research Laboratory and was accomplished under Cooperative Agreement Number W911NF-12-2-0022. AMS is also supported by NSF (award DMS-1818977). The views and conclusions contained in this document are those of the authors and should not be interpreted as representing the official policies, either expressed or implied, of the Army Research Laboratory or the U.S. Government. The U.S. Government is authorized to reproduce and distribute reprints for Government purposes notwithstanding any copyright notation herein. The computations presented here were conducted on the Caltech High Performance Cluster, partially supported by a grant from the Gordon and Betty Moore Foundation.
Files
Name | Size | MD5
---|---|---
2108.08481.pdf (Submitted) | 4.6 MB | a3544f9e5365dd78249efedbf4a9344f
Additional details
- Eprint ID
- 110666
- Resolver ID
- CaltechAUTHORS:20210831-204010794
- Funders
- Kortschak Scholars Program
- Bren Professor of Computing and Mathematical Sciences
- Learning with Less Labels (LwLL)
- Beyond Limits
- Raytheon Company
- Microsoft Faculty Fellowship
- Google Faculty Research Award
- Adobe
- Caltech De Logi Fund
- Army Research Laboratory (W911NF-12-2-0022)
- NSF (DMS-1818977)
- Gordon and Betty Moore Foundation
- Created
- 2021-09-01 (from EPrint's datestamp field)
- Updated
- 2023-06-02 (from EPrint's last_modified field)
- Caltech groups
- Center for Autonomous Systems and Technologies (CAST)