CaltechAUTHORS
  A Caltech Library Service

A Framework for Machine Learning of Model Error in Dynamical Systems

Levine, Matthew E. and Stuart, Andrew M. (2021) A Framework for Machine Learning of Model Error in Dynamical Systems. (Unpublished) https://resolver.caltech.edu/CaltechAUTHORS:20210719-210139286

PDF (Submitted Version) - 4MB - Creative Commons Attribution license

Use this Persistent URL to link to this item: https://resolver.caltech.edu/CaltechAUTHORS:20210719-210139286

Abstract

The development of data-informed predictive models for dynamical systems is of widespread interest in many disciplines. We present a unifying framework for blending mechanistic and machine-learning approaches to identify dynamical systems from data. We compare pure data-driven learning with hybrid models which incorporate imperfect domain knowledge. We cast the problem in both continuous- and discrete-time, for problems in which the model error is memoryless and in which it has significant memory, and we compare data-driven and hybrid approaches experimentally. Our formulation is agnostic to the chosen machine learning model. Using Lorenz '63 and Lorenz '96 Multiscale systems, we find that hybrid methods substantially outperform solely data-driven approaches in terms of data hunger, demands for model complexity, and overall predictive performance. We also find that, while a continuous-time framing allows for robustness to irregular sampling and desirable domain-interpretability, a discrete-time framing can provide similar or better predictive performance, especially when data are undersampled and the vector field cannot be resolved. We study model error from the learning theory perspective, defining excess risk and generalization error; for a linear model of the error used to learn about ergodic dynamical systems, both errors are bounded by terms that diminish with the square-root of T. We also illustrate scenarios that benefit from modeling with memory, proving that continuous-time recurrent neural networks (RNNs) can, in principle, learn memory-dependent model error and reconstruct the original system arbitrarily well; numerical results depict challenges in representing memory by this approach. We also connect RNNs to reservoir computing and thereby relate the learning of memory-dependent error to recent work on supervised learning between Banach spaces using random features.
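The hybrid approach summarized in the abstract, augmenting an imperfect mechanistic model with a learned correction for the model error, can be sketched in a few lines. The example below is an illustration only, not the authors' code: a hypothetical "imperfect" Lorenz '63 model drops the nonlinear -xz coupling in the y-equation, and a random-features ridge regression (one of the model classes the paper considers) learns the resulting memoryless residual in the vector field. All specific choices here (feature count, weight scales, regularization strength, sampling box) are arbitrary for demonstration.

```python
import numpy as np

# True Lorenz '63 vector field.
def lorenz63(u, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    x, y, z = u
    return np.array([sigma * (y - x), x * (rho - z) - y, x * y - beta * z])

# Hypothetical imperfect mechanistic model: the -x*z term in dy/dt is missing,
# so the true model error (residual) is [0, -x*z, 0].
def imperfect(u, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    x, y, z = u
    return np.array([sigma * (y - x), x * rho - y, x * y - beta * z])

rng = np.random.default_rng(0)

# Sample states and form residual targets: true minus mechanistic vector field.
U = rng.uniform(-20, 20, size=(2000, 3))
R = np.array([lorenz63(u) - imperfect(u) for u in U])

# Random features: phi(u) = tanh(W1 u + b) with fixed random W1, b;
# only the output weights W2 are trained, by ridge regression.
D = 300
W1 = rng.normal(scale=0.1, size=(D, 3))
b = rng.uniform(-np.pi, np.pi, size=D)
Phi = np.tanh(U @ W1.T + b)  # (N, D) feature matrix
lam = 1e-8
W2 = np.linalg.solve(Phi.T @ Phi + lam * np.eye(D), Phi.T @ R)  # (D, 3)

# Hybrid vector field: mechanistic part plus learned correction.
def hybrid(u):
    return imperfect(u) + np.tanh(u @ W1.T + b) @ W2

# Training-set fit of the residual is strictly better than ignoring it
# (the zero predictor is in the hypothesis class, so ridge cannot do worse).
mse_fit = np.mean((Phi @ W2 - R) ** 2)
mse_zero = np.mean(R ** 2)
print(mse_fit < mse_zero)
```

In the paper's framing this is the memoryless, continuous-time case: the correction is a function of the current state only, and the hybrid vector field `hybrid` would then be integrated forward in time; memory-dependent error instead calls for the recurrent architectures discussed in the abstract.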


Item Type: Report or Paper (Discussion Paper)
Related URLs:
URL | URL Type | Description
http://arxiv.org/abs/2107.06658 | arXiv | Discussion Paper
ORCID:
Author | ORCID
Levine, Matthew E. | 0000-0002-5627-3169
Additional Information: Attribution 4.0 International (CC BY 4.0). The authors are grateful to David Albers, Oliver Dunbar, Ian Melbourne, and Yisong Yue for helpful discussions. The work of MEL and AMS was supported by NIH RO1 LM012734 "Mechanistic Machine Learning". MEL is also supported by the National Science Foundation Graduate Research Fellowship under Grant No. DGE-1745301. AMS is also supported by NSF (award AGS-1835860), NSF (award DMS-1818977), the Office of Naval Research (award N00014-17-1-2079), and the AFOSR under MURI award number FA9550-20-1-0358 (Machine Learning and Physics-Based Modeling and Simulation).
Funders:
Funding Agency | Grant Number
NIH | RO1 LM012734
NSF Graduate Research Fellowship | DGE-1745301
NSF | AGS-1835860
NSF | DMS-1818977
Office of Naval Research (ONR) | N00014-17-1-2079
Air Force Office of Scientific Research (AFOSR) | FA9550-20-1-0358
Subject Keywords: Dynamical Systems, Model Error, Statistical Learning, Random Features, Recurrent Neural Networks, Reservoir Computing
Record Number: CaltechAUTHORS:20210719-210139286
Persistent URL: https://resolver.caltech.edu/CaltechAUTHORS:20210719-210139286
Usage Policy: No commercial reproduction, distribution, display or performance rights in this work are provided.
ID Code: 109919
Collection: CaltechAUTHORS
Deposited By: George Porter
Deposited On: 19 Jul 2021 21:29
Last Modified: 19 Jul 2021 21:29
