Deep reinforcement learning of airfoil pitch control in a highly disturbed environment using partial observations
Abstract
This study explores the application of deep reinforcement learning (RL) to design an airfoil pitch controller capable of minimizing lift variations in randomly disturbed flows. The controller, treated as an agent in a partially observable Markov decision process, receives non-Markovian observations from the environment, simulating practical constraints where flow information is limited to force and pressure sensors. Deep RL, specifically the twin-delayed deep deterministic policy gradient (TD3) algorithm, is used to approximate an optimal control policy under such conditions. Testing is conducted for a flat-plate airfoil in two environments: a classical unsteady environment with vertical acceleration disturbances (i.e., a Wagner setup) and a viscous flow model with pulsed point force disturbances. In both cases, augmenting observations of the lift, pitch angle, and angular velocity with extra wake information (e.g., from pressure sensors) and retaining memory of past observations enhances RL control performance. Results demonstrate the capability of RL control to match or exceed standard linear controllers in minimizing lift variations. Special attention is given to the choice of training data and the generalization to unseen disturbances.
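As an informal illustration of the observation-augmentation idea described in the abstract, the Python sketch below shows one standard way to retain a memory of past observations so that a policy trained with an otherwise memoryless algorithm such as TD3 can act on non-Markovian measurements. This is not the authors' implementation: the class name, window length, sensor counts, and sample values are all hypothetical.

```python
import numpy as np
from collections import deque

# Illustrative sketch only (not the paper's code): stack the k most recent
# partial observations -- lift, pitch angle, angular velocity, and optional
# pressure sensor readings -- into a single input vector for the actor network.
# All names and dimensions below are assumptions for illustration.

K_HISTORY = 8    # hypothetical number of past observations retained
N_PRESSURE = 3   # hypothetical number of surface pressure sensors


class ObservationStacker:
    """Rolling window of past observations, so a feedforward policy
    (e.g., one trained with TD3) can exploit temporal context."""

    def __init__(self, k: int):
        self.buffer = deque(maxlen=k)
        self.k = k

    def reset(self, obs: np.ndarray) -> np.ndarray:
        # Fill the window with copies of the initial observation.
        self.buffer.clear()
        for _ in range(self.k):
            self.buffer.append(obs)
        return self.stacked()

    def step(self, obs: np.ndarray) -> np.ndarray:
        # Append the newest observation; the oldest one drops out.
        self.buffer.append(obs)
        return self.stacked()

    def stacked(self) -> np.ndarray:
        # Concatenate the window into one flat vector for the policy input.
        return np.concatenate(self.buffer)


if __name__ == "__main__":
    # One raw observation: [lift, pitch angle, angular velocity, pressures...]
    # (placeholder values only).
    obs = np.array([0.5, 0.02, -0.1] + [0.0] * N_PRESSURE)
    stacker = ObservationStacker(K_HISTORY)
    x = stacker.reset(obs)
    print(x.shape)  # (K_HISTORY * obs.size,) = (48,)
```

In this framing, the stacked vector would simply replace the raw observation as the input to the TD3 actor and critics; the algorithm itself is unchanged.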
Copyright and License
© 2024 American Physical Society.
Funding
Support for this work by the National Science Foundation under Award No. 2247005 is gratefully acknowledged.
Files
Name | Size
---|---
md5:a1fff29d715d1e4e28b443961e7c3ab2 | 3.0 MB
Additional details
- Funding: National Science Foundation, CBET-2247005
- Accepted: 2024-08-28
- Caltech groups: GALCIT
- Publication Status: Published