Model Reduction and Neural Networks for Parametric PDEs
Abstract
We develop a general framework for data-driven approximation of input-output maps between infinite-dimensional spaces. The proposed approach is motivated by the recent successes of neural networks and deep learning, in combination with ideas from model reduction. This combination results in a neural network approximation which, in principle, is defined on infinite-dimensional spaces and, in practice, is robust to the dimension of finite-dimensional approximations of these spaces required for computation. For a class of input-output maps, and suitably chosen probability measures on the inputs, we prove convergence of the proposed approximation methodology. Numerically we demonstrate the effectiveness of the method on a class of parametric elliptic PDE problems, showing convergence and robustness of the approximation scheme with respect to the size of the discretization, and compare our method with existing algorithms from the literature.
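The abstract describes composing a model-reduction step on the input and output function spaces with a neural network acting between the reduced coordinates. A minimal sketch of that pattern is given below, using PCA as the reduction and a small multilayer perceptron as the network; the synthetic data, latent dimensions, and all names are illustrative assumptions, not the paper's actual construction or experiments.

```python
# Illustrative sketch (not the authors' exact method): approximate a map
# G: a(x) -> u(x) between function spaces by (i) reducing discretized inputs
# and outputs to a few PCA coefficients and (ii) learning a neural network
# between the two coefficient spaces. Data and dimensions are placeholders.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)

# Synthetic stand-in data: n samples of input/output functions on a grid of
# size m (in practice these would come from solving the parametric PDE).
n, m = 500, 256
A = rng.standard_normal((n, m))          # discretized input functions a_1..a_n
U = np.cumsum(A, axis=1) / m             # placeholder "solutions" u_j = G(a_j)

d_in, d_out = 20, 20                     # reduced (latent) dimensions
pca_in = PCA(n_components=d_in).fit(A)   # model reduction of the input space
pca_out = PCA(n_components=d_out).fit(U) # model reduction of the output space

# Neural network between the low-dimensional coefficient spaces.
net = MLPRegressor(hidden_layer_sizes=(128, 128), max_iter=2000, random_state=0)
net.fit(pca_in.transform(A), pca_out.transform(U))

# Evaluate the learned operator on a new input: reduce, map, reconstruct.
a_new = rng.standard_normal((1, m))
u_pred = pca_out.inverse_transform(net.predict(pca_in.transform(a_new)))
print(u_pred.shape)                      # (1, m): a full discretized output
```

Because the network acts only on the reduced coefficients, the construction is, in principle, independent of the grid size m; refining the discretization changes the PCA projections but not the learned map between coefficient spaces, which is the dimension-robustness the abstract refers to.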
Additional Information
Submitted to the editors May 8, 2020. The authors are grateful to Anima Anandkumar, Kamyar Azizzadenesheli, Zongyi Li and Nicholas H. Nelsen for helpful discussions in the general area of neural networks for PDE-defined maps between Hilbert spaces. The work is supported by MEDE-ARL funding (W911NF-12-0022). AMS is also partially supported by NSF (DMS-1818977) and AFOSR (FA9550-17-1-0185). BH is partially supported by a von Kármán instructorship at the California Institute of Technology.
Files
- Submitted - 2005.03180.pdf (2.0 MB)
Additional details
- Eprint ID: 103483
- Resolver ID: CaltechAUTHORS:20200527-074228185
- Funders: Army Research Laboratory (W911NF-12-0022); NSF (DMS-1818977); Air Force Office of Scientific Research (AFOSR) (FA9550-17-1-0185); Caltech
- Created: 2020-05-27
- Updated: 2023-06-02