A Caltech Library Service

Neural Operator: Graph Kernel Network for Partial Differential Equations

Li, Zongyi and Kovachki, Nikola and Azizzadenesheli, Kamyar and Liu, Burigede and Bhattacharya, Kaushik and Stuart, Andrew and Anandkumar, Anima (2020) Neural Operator: Graph Kernel Network for Partial Differential Equations. (Unpublished)

PDF (Submitted Version). See Usage Policy.


The classical development of neural networks has been primarily for mappings between a finite-dimensional Euclidean space and a set of classes, or between two finite-dimensional Euclidean spaces. The purpose of this work is to generalize neural networks so that they can learn mappings between infinite-dimensional spaces (operators). The key innovation in our work is that a single set of network parameters, within a carefully designed network architecture, may be used to describe mappings between infinite-dimensional spaces and between different finite-dimensional approximations of those spaces. We formulate approximation of the infinite-dimensional mapping by composing nonlinear activation functions with a class of integral operators. The kernel integration is computed by message passing on graph networks. This approach has substantial practical consequences, which we illustrate in the context of mappings from input data for partial differential equations (PDEs) to their solutions. In this context, such learned networks can generalize among different approximation methods for the PDE (such as finite difference or finite element methods) and among approximations corresponding to different underlying levels of resolution and discretization. Experiments confirm that the proposed graph kernel network does have the desired properties and show competitive performance compared to state-of-the-art solvers.
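The abstract's core idea, composing nonlinear activations with learned integral kernel operators evaluated by message passing, can be sketched in a few lines. The following is a minimal illustrative sketch, not the authors' implementation: all names (`gkn_layer`, `kernel`, the random weights) are hypothetical, the kernel network is a fixed two-layer map on point coordinates, and the key property shown is that one set of parameters applies unchanged to grids of different resolutions.

```python
import numpy as np

rng = np.random.default_rng(0)
WIDTH = 8  # channel dimension of the node features v

# Parameters are shared across resolutions: nothing here depends on grid size.
W = rng.normal(scale=0.1, size=(WIDTH, WIDTH))
K1 = rng.normal(scale=0.1, size=(16, 2))              # kernel net: (x_i, x_j) -> hidden
K2 = rng.normal(scale=0.1, size=(WIDTH * WIDTH, 16))  # hidden -> kernel matrix entries

def kernel(xi, xj):
    """Matrix-valued kernel kappa(x_i, x_j) from a tiny two-layer network."""
    h = np.tanh(K1 @ np.array([xi, xj]))
    return (K2 @ h).reshape(WIDTH, WIDTH)

def gkn_layer(x, v, radius=0.25):
    """One message-passing step on a 1-D point cloud:
    v'(x_i) = tanh( W v(x_i) + mean over neighbours j of kappa(x_i, x_j) v(x_j) ),
    with neighbours taken within a fixed radius (a Nystrom-style truncation
    of the integral operator)."""
    n = len(x)
    out = np.zeros_like(v)
    for i in range(n):
        nbrs = [j for j in range(n) if abs(x[j] - x[i]) <= radius]
        msg = np.mean([kernel(x[i], x[j]) @ v[j] for j in nbrs], axis=0)
        out[i] = np.tanh(W @ v[i] + msg)
    return out

# The same parameters act on two different discretizations of [0, 1]:
for n in (16, 64):
    x = np.linspace(0.0, 1.0, n)                        # grid points
    v = np.stack([np.sin(np.pi * x)] * WIDTH, axis=1)   # lifted input function
    print(n, gkn_layer(x, v).shape)
```

Because the layer is defined through point coordinates and a kernel, not through a fixed input size, refining the grid from 16 to 64 points changes only the quadrature of the integral, not the parameters, which is the discretization-invariance the abstract describes.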

Item Type: Report or Paper (Discussion Paper)
Related URLs:
URL Type: Paper
ORCID:
Kovachki, Nikola: 0000-0002-3650-2972
Azizzadenesheli, Kamyar: 0000-0001-8507-1868
Liu, Burigede: 0000-0002-6518-3368
Bhattacharya, Kaushik: 0000-0003-2908-5469
Additional Information: Z. Li gratefully acknowledges the financial support from the Kortschak Scholars Program. K. Azizzadenesheli is supported in part by Raytheon and Amazon Web Services. A. Anandkumar is supported in part by the Bren endowed chair, DARPA PAIHR00111890035, LwLL grants, Raytheon, Microsoft, Google, and Adobe faculty fellowships, and a De Logi grant. K. Bhattacharya, N. B. Kovachki, B. Liu and A. M. Stuart gratefully acknowledge the financial support of the Army Research Laboratory through Cooperative Agreement Number W911NF-12-2-0022. Research was sponsored by the Army Research Laboratory and was accomplished under Cooperative Agreement Number W911NF-12-2-0022. The views and conclusions contained in this document are those of the authors and should not be interpreted as representing the official policies, either expressed or implied, of the Army Research Laboratory or the U.S. Government. The U.S. Government is authorized to reproduce and distribute reprints for Government purposes notwithstanding any copyright notation herein.
Funding Agency: Grant Number
Kortschak Scholars Program: UNSPECIFIED
Amazon Web Services: UNSPECIFIED
Bren Professor of Computing and Mathematical Sciences: UNSPECIFIED
Defense Advanced Research Projects Agency (DARPA): PAIHR00111890035
Learning with Less Labels (LwLL): UNSPECIFIED
Microsoft Faculty Fellowship: UNSPECIFIED
Google Faculty Research Award: UNSPECIFIED
Caltech De Logi Fund: UNSPECIFIED
Army Research Laboratory: W911NF-12-0022
Army Research Laboratory: W911NF-12-2-0022
Record Number: CaltechAUTHORS:20200402-133318521
Persistent URL:
Usage Policy: No commercial reproduction, distribution, display or performance rights in this work are provided.
ID Code: 102271
Deposited By: Tony Diaz
Deposited On: 02 Apr 2020 20:40
Last Modified: 11 Nov 2020 00:54
