CaltechAUTHORS
  A Caltech Library Service

The driver and the engineer: Reinforcement learning and robust control

Bernat, Natalie and Chen, Jiexin and Matni, Nikolai and Doyle, John (2020) The driver and the engineer: Reinforcement learning and robust control. In: 2020 American Control Conference (ACC). IEEE, Piscataway, NJ, pp. 3932-3939. ISBN 9781538682661. https://resolver.caltech.edu/CaltechAUTHORS:20200730-143943072

Full text is not posted in this repository. Consult Related URLs below.

Use this Persistent URL to link to this item: https://resolver.caltech.edu/CaltechAUTHORS:20200730-143943072

Abstract

Reinforcement learning (RL) and other AI methods are exciting approaches to data-driven control design, but RL's emphasis on maximizing expected performance contrasts with robust control theory (RCT), which places central emphasis on the impact of model uncertainty and worst-case scenarios. This paper argues that these approaches are potentially complementary, roughly analogous to the relationship between a driver and an engineer in, say, Formula One racing. Each is indispensable, but their roles differ radically. If RL takes the driver's seat in safety-critical applications, RCT may still play a role in plant design, and also in diagnosing and mitigating the effects of performance degradation due to changes or failures in components or environments. While much RCT research emphasizes synthesis of controllers, as does RL, in practice RCT's impact has perhaps already been greater in using hard limits and tradeoffs on robust performance to provide insight into plant design, interpreted broadly as including sensor, actuator, communications, and computer selection and placement in addition to core plant dynamics. More automation may ultimately require more rigor and theory, not less, if our systems are to be both more efficient and robust. Here we use the simplest possible toy model to illustrate how RCT can potentially augment RL in finding mechanistic explanations when control is not merely hard, but impossible, and we discuss issues in making the two approaches more compatibly data-driven. Despite the model's simplicity, questions abound. We also discuss the relevance of these ideas to more realistic challenges.
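As one classic illustration of the kind of hard limit the abstract refers to (this is standard robust control background, not necessarily the toy model used in the paper itself), Bode's sensitivity integral quantifies an unavoidable tradeoff:

```latex
% Bode's sensitivity integral ("waterbed effect"):
% for an open-loop transfer function L(s) with right-half-plane
% poles p_k and relative degree at least two, the closed-loop
% sensitivity S(s) = 1/(1 + L(s)) must satisfy
\int_0^\infty \ln \lvert S(j\omega) \rvert \, d\omega
  \;=\; \pi \sum_k \operatorname{Re}(p_k)
```

Because the integral is fixed by the plant's unstable poles, pushing sensitivity below one over some frequency band necessarily pushes it above one elsewhere. No controller, whether synthesized by RCT or learned by RL, can circumvent this; such limits are properties of the plant, which is why RCT's greatest practical impact has arguably been on plant design.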


Item Type: Book Section
Related URLs:
URL | URL Type | Description
https://doi.org/10.23919/acc45564.2020.9147347 | DOI | Article
ORCID:
Author | ORCID
Matni, Nikolai | 0000-0003-4936-3921
Doyle, John | 0000-0002-1828-2486
Additional Information: © 2020 AACC.
DOI: 10.23919/acc45564.2020.9147347
Record Number: CaltechAUTHORS:20200730-143943072
Persistent URL: https://resolver.caltech.edu/CaltechAUTHORS:20200730-143943072
Official Citation: N. Bernat, J. Chen, N. Matni and J. Doyle, "The driver and the engineer: Reinforcement learning and robust control," 2020 American Control Conference (ACC), Denver, CO, USA, 2020, pp. 3932-3939, doi: 10.23919/ACC45564.2020.9147347
Usage Policy: No commercial reproduction, distribution, display or performance rights in this work are provided.
ID Code: 104665
Collection: CaltechAUTHORS
Deposited By: George Porter
Deposited On: 31 Jul 2020 14:31
Last Modified: 16 Nov 2021 18:33
