CaltechAUTHORS
  A Caltech Library Service

Physics-Informed Neural Operator for Learning Partial Differential Equations

Li, Zongyi and Zheng, Hongkai and Kovachki, Nikola and Jin, David and Chen, Haoxuan and Liu, Burigede and Azizzadenesheli, Kamyar and Anandkumar, Anima (2021) Physics-Informed Neural Operator for Learning Partial Differential Equations. . (Unpublished) https://resolver.caltech.edu/CaltechAUTHORS:20220714-224647405

PDF - Published Version (5MB). See Usage Policy.

Use this Persistent URL to link to this item: https://resolver.caltech.edu/CaltechAUTHORS:20220714-224647405

Abstract

Machine learning methods have recently shown promise in solving partial differential equations (PDEs). They can be classified into two broad categories: approximating the solution function and learning the solution operator. The Physics-Informed Neural Network (PINN) is an example of the former, while the Fourier Neural Operator (FNO) is an example of the latter. Both approaches have shortcomings. The optimization in PINN is challenging and prone to failure, especially on multi-scale dynamic systems. FNO does not suffer from this optimization issue since it carries out supervised learning on a given dataset, but obtaining such data may be too expensive or infeasible. In this work, we propose the Physics-Informed Neural Operator (PINO), where we combine the operator-learning and function-optimization frameworks. This integrated approach improves convergence rates and accuracy over both PINN and FNO models. In the operator-learning phase, PINO learns the solution operator over multiple instances of the parametric PDE family. In the test-time optimization phase, PINO optimizes the pre-trained operator ansatz for the queried instance of the PDE. Experiments show that PINO outperforms previous ML methods on many popular PDE families while retaining the extraordinary speed-up of FNO compared to solvers. In particular, PINO accurately solves challenging long temporal transient flows and Kolmogorov flows where other baseline ML methods fail to converge.
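As an illustrative sketch only (not the authors' implementation), the physics-informed objective described in the abstract can be thought of as a weighted sum of a supervised data loss and a PDE-residual loss. The minimal NumPy example below uses the 1D heat equation, with finite-difference residuals standing in for the Fourier-space derivatives the paper uses; all function names, grid choices, and weights here are hypothetical.

```python
import numpy as np

def mse(a, b):
    """Mean squared error between two arrays."""
    return float(np.mean((a - b) ** 2))

def heat_residual(u, dt, dx, nu):
    """Finite-difference residual of the 1D heat equation u_t - nu * u_xx
    on the interior of a space-time grid u of shape (nt, nx)."""
    u_t = (u[2:, 1:-1] - u[:-2, 1:-1]) / (2.0 * dt)          # central in time
    u_xx = (u[1:-1, 2:] - 2.0 * u[1:-1, 1:-1]
            + u[1:-1, :-2]) / dx ** 2                        # central in space
    return u_t - nu * u_xx

def pino_loss(pred, target, dt, dx, nu, w_data=1.0, w_pde=1.0):
    """Combined objective: supervised data loss (operator learning)
    plus PDE-residual loss (physics-informed constraint)."""
    data_loss = mse(pred, target)
    res = heat_residual(pred, dt, dx, nu)
    pde_loss = mse(res, np.zeros_like(res))
    return w_data * data_loss + w_pde * pde_loss
```

In the operator-learning phase the data term dominates (labeled solution pairs are available); at test time, the residual term alone can be minimized to fine-tune the pre-trained model on a new PDE instance without any solution data.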


Item Type: Report or Paper (Discussion Paper)
Related URLs:
https://doi.org/10.48550/arXiv.2111.03794 (arXiv, Discussion Paper)
ORCID:
Li, Zongyi: 0000-0003-2081-9665
Kovachki, Nikola: 0000-0002-3650-2972
Chen, Haoxuan: 0000-0001-8398-9093
Liu, Burigede: 0000-0002-6518-3368
Azizzadenesheli, Kamyar: 0000-0001-8507-1868
Anandkumar, Anima: 0000-0002-6974-6797
Additional Information: Z. Li gratefully acknowledges the financial support from the Kortschak Scholars Program. N. Kovachki is partially supported by the Amazon AI4Science Fellowship. A. Anandkumar is supported in part by the Bren endowed chair, Microsoft, Google, and Adobe faculty fellowships, and a De Logi grant. The authors thank Sifan Wang for meaningful discussions.
Funders (grant numbers unspecified):
Kortschak Scholars Program
Amazon AI4Science Fellowship
Bren Professor of Computing and Mathematical Sciences
Microsoft Faculty Fellowship
Google Faculty Research Award
Adobe
Caltech De Logi Fund
Record Number: CaltechAUTHORS:20220714-224647405
Persistent URL: https://resolver.caltech.edu/CaltechAUTHORS:20220714-224647405
Usage Policy: No commercial reproduction, distribution, display, or performance rights in this work are provided.
ID Code: 115605
Collection: CaltechAUTHORS
Deposited By: George Porter
Deposited On: 15 Jul 2022 23:16
Last Modified: 15 Jul 2022 23:16
