Published March 17, 2022 | Submitted + Published + Supplemental Material
Journal Article | Open Access

Scientific multi-agent reinforcement learning for wall-models of turbulent flows


The predictive capabilities of turbulent flow simulations, critical for aerodynamic design and weather prediction, hinge on the choice of turbulence models. The abundance of data from experiments and simulations and the advent of machine learning have provided a boost to turbulence modeling efforts. However, simulations of turbulent flows remain hindered by the inability of heuristics and supervised learning to model the near-wall dynamics. We address this challenge by introducing scientific multi-agent reinforcement learning (SciMARL) for the discovery of wall models for large-eddy simulations (LES). In SciMARL, discretization points also act as cooperating agents that learn to supply the LES closure model. The agents self-learn using limited data and generalize to extreme Reynolds numbers and previously unseen geometries. The present simulations reduce the computational cost by several orders of magnitude compared with fully resolved simulations while reproducing key flow quantities. We believe that SciMARL creates unprecedented capabilities for the simulation of turbulent flows.
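The core idea of the abstract — grid points acting as cooperating agents that share one learned policy supplying the wall-model closure — can be illustrated with a deliberately simplified toy sketch. This is not the authors' implementation (which uses an in-house LES solver and the smarties reinforcement learning library); here the "environment" is synthetic log-law data, the shared policy holds the two log-law coefficients, and the reward is the negative squared mismatch between each agent's predicted and reference off-wall velocity. All names and parameters below are illustrative assumptions.

```python
import math
import random

# Reference log-law constants used only to synthesize "ground truth" data.
KAPPA_TRUE, B_TRUE = 0.41, 5.2

def u_plus(y_plus, inv_kappa, b):
    """Log-law mean velocity in wall units: u+ = (1/kappa) * ln(y+) + B."""
    return inv_kappa * math.log(y_plus) + b

class SharedPolicy:
    """One policy shared by all agents, echoing the paper's idea that
    discretization points cooperate to learn a single wall model."""
    def __init__(self):
        # Deliberately poor initial guess for (1/kappa, B).
        self.inv_kappa, self.b = 1.0 / 0.30, 4.0

    def update(self, y_plus, target, lr=5e-3):
        # Reward is -(prediction error)^2; ascend it via SGD on the
        # shared parameters (a stand-in for the paper's policy update).
        err = u_plus(y_plus, self.inv_kappa, self.b) - target
        self.inv_kappa -= lr * 2.0 * err * math.log(y_plus)
        self.b -= lr * 2.0 * err

def train(policy, steps=20000, seed=0):
    rng = random.Random(seed)
    for _ in range(steps):
        # One interaction: an "agent" at a random off-wall matching
        # location y+ in [30, 300] compares its prediction to the data.
        yp = rng.uniform(30.0, 300.0)
        policy.update(yp, u_plus(yp, 1.0 / KAPPA_TRUE, B_TRUE))
    return policy

policy = train(SharedPolicy())
print(1.0 / policy.inv_kappa, policy.b)  # approaches (0.41, 5.2)
```

The point of the sketch is the structure, not the learning rule: many spatially distributed agents draw local observations but update one shared policy, so experience gathered anywhere on the wall improves the closure everywhere. The actual SciMARL agents observe LES flow quantities and are trained with a full reinforcement learning algorithm rather than this direct gradient step.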

Additional Information

© The Author(s) 2022. This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons license, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons license and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this license, visit http://creativecommons.org/licenses/by/4.0/.

Received 29 May 2021; Accepted 14 February 2022; Published 17 March 2022.

The authors acknowledge the support of the Air Force Office of Scientific Research (AFOSR) Multidisciplinary University Research Initiative (MURI) project: Prediction, Statistical Quantification, and Mitigation of Extreme Events Caused by Exogenous Causes or Intrinsic Instabilities, under grant number FA9550-21-1-0058. Computational resources were provided by the Swiss National Supercomputing Centre (CSCS), Project s929.

Data availability: All the data analyzed in this paper were produced with an in-house flow solver and an open-source reinforcement learning software described in the code availability statement. Reference data and the scripts used to produce the data figures are available through GitHub (https://github.com/hjbae/SciMARL_WMLES).

Code availability: The wall-modeled large-eddy simulations were performed with an in-house flow solver, which is available on demand. The wall models were trained with the reinforcement learning library smarties (https://github.com/cselab/smarties).

Contributions: H.J.B. jointly conceived the study with P.K., designed and performed experiments, analyzed the data, and wrote the paper; P.K. devised the concept of SciMARL, supervised the project, and edited the manuscript. The authors declare no competing interests.

Peer review information: Nature Communications thanks the anonymous reviewer(s) for their contribution to the peer review of this work. Peer reviewer reports are available.

Attached Files

Published - s41467-022-28957-7.pdf

Submitted - 2106.11144.pdf

Supplemental Material - 41467_2022_28957_MOESM1_ESM.pdf


Files (5.5 MB)

Additional details

August 22, 2023
October 23, 2023