Published March 2025 | Version: Published
Journal Article

An operator learning perspective on parameter-to-observable maps

  • California Institute of Technology

Abstract

Computationally efficient surrogates for parametrized physical models play a crucial role in science and engineering. Operator learning provides data-driven surrogates that map between function spaces. However, instead of full-field measurements, the available data are often only finite-dimensional parametrizations of model inputs or finite observables of model outputs. Building on Fourier Neural Operators, this paper introduces the Fourier Neural Mappings (FNMs) framework, which accommodates such finite-dimensional vector inputs or outputs, and develops universal approximation theorems for the method. Moreover, in many applications the underlying parameter-to-observable (PtO) map is defined implicitly through an infinite-dimensional operator, such as the solution operator of a partial differential equation. A natural question is whether it is more data-efficient to learn the PtO map end-to-end or to first learn the solution operator and subsequently compute the observable from the full-field solution. A theoretical analysis of Bayesian nonparametric regression of linear functionals, which is of independent interest, suggests that the end-to-end approach can have worse sample complexity than the full-field approach. Going beyond the theory, numerical results for the FNM approximation of three nonlinear PtO maps demonstrate the benefits of the operator learning perspective that this paper adopts.
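
For intuition about the architecture described above, the following is a minimal sketch of a vector-to-vector Fourier Neural Mapping in PyTorch. It is illustrative only and is not the implementation from the repository linked below: the names (SpectralConv1d, FNMSketch) and all hyperparameters are hypothetical. A learned linear layer decodes the finite-dimensional parameter into a latent function on a 1D grid, a Fourier layer acts in function space, and a learned functional (a grid average followed by a linear map) produces the finite-dimensional observable.

    import torch
    import torch.nn as nn

    class SpectralConv1d(nn.Module):
        """Fourier layer: FFT, keep `modes` low frequencies, apply learned complex weights, inverse FFT."""
        def __init__(self, channels, modes):
            super().__init__()
            self.modes = modes
            scale = 1.0 / channels
            self.weight = nn.Parameter(
                scale * torch.randn(channels, channels, modes, dtype=torch.cfloat)
            )

        def forward(self, x):  # x: (batch, channels, grid)
            x_hat = torch.fft.rfft(x)
            out_hat = torch.zeros_like(x_hat)
            out_hat[..., :self.modes] = torch.einsum(
                "bim,iom->bom", x_hat[..., :self.modes], self.weight
            )
            return torch.fft.irfft(out_hat, n=x.size(-1))

    class FNMSketch(nn.Module):
        """Hypothetical FNM-style map: vector parameter -> latent function -> vector observable."""
        def __init__(self, dim_in, dim_out, channels=32, modes=12, grid=128):
            super().__init__()
            self.channels, self.grid = channels, grid
            self.lift = nn.Linear(dim_in, channels * grid)  # decode vector to a function on the grid
            self.spectral = SpectralConv1d(channels, modes)
            self.pointwise = nn.Conv1d(channels, channels, 1)
            self.project = nn.Linear(channels, dim_out)     # learned functional after grid averaging

        def forward(self, y):  # y: (batch, dim_in)
            v = self.lift(y).view(-1, self.channels, self.grid)
            v = torch.relu(self.spectral(v) + self.pointwise(v))
            return self.project(v.mean(dim=-1))             # average over the grid, map to observable

    model = FNMSketch(dim_in=5, dim_out=3)
    print(model(torch.randn(8, 5)).shape)  # torch.Size([8, 3])

The function-input and function-output variants treated in the paper plausibly follow the same pattern, with the linear decoder or the averaging functional dropped on whichever side already lives in function space.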

Copyright and License

© 2024 American Institute of Mathematical Sciences.

Acknowledgement

The first author is supported by the high-performance computing platform of Peking University. The second author acknowledges support from the National Science Foundation Graduate Research Fellowship Program under award number DGE-1745301 and from the Amazon/Caltech AI4Science Fellowship, and partial support from the Air Force Office of Scientific Research under MURI award number FA9550-20-1-0358 (Machine Learning and Physics-Based Modeling and Simulation). The third author is supported by the Department of Energy Computational Science Graduate Fellowship under award number DE-SC00211. The second and third authors are also grateful for partial support from the Department of Defense Vannevar Bush Faculty Fellowship held by Andrew M. Stuart under Office of Naval Research award number N00014-22-1-2790.

The computations presented in this paper were partially conducted on the Resnick High Performance Computing Center, a facility supported by the Resnick Sustainability Institute at the California Institute of Technology. The authors thank Kaushik Bhattacharya for useful discussions about learning functionals, Andrew Stuart for helpful remarks about the universal approximation theory, and Zachary Morrow for providing the code for the advection–diffusion equation solver. The authors are also grateful for the helpful feedback from two anonymous referees.

Data Availability

Links to datasets and all code used to produce the numerical results and figures in this paper are available at https://github.com/nickhnelsen/fourier-neural-mappings.

Additional details

Related works

Is new version of
Discussion Paper: arXiv:2402.06031 (arXiv)
Is supplemented by
Dataset: https://github.com/nickhnelsen/fourier-neural-mappings (URL)

Funding

Peking University
National Science Foundation (DGE-1745301)
California Institute of Technology (Amazon/Caltech AI4Science Fellowship)
United States Air Force Office of Scientific Research (FA9550-20-1-0358)
United States Department of Energy (DE-SC00211)
United States Department of Defense (Vannevar Bush Faculty Fellowship)
Office of Naval Research (N00014-22-1-2790)

Dates

Available: 2024-08 (early access)

Caltech Custom Metadata

Caltech groups
Resnick Sustainability Institute
Publication Status
Published