Explore More and Improve Regret in Linear Quadratic Regulators
Abstract
Stabilizing the unknown dynamics of a control system and minimizing regret in control of an unknown system are among the main goals in control theory and reinforcement learning. In this work, we pursue both of these goals for adaptive control of linear quadratic regulators (LQR). Prior works accomplish either one of these goals at the cost of the other. The algorithms that are guaranteed to find a stabilizing controller suffer from high regret, whereas algorithms that focus on achieving low regret assume that a stabilizing controller is available at the early stages of the agent-environment interaction. In the absence of such a stabilizing controller, the lack of reasonable model estimates at the early stages, needed for (i) strategic exploration and (ii) the design of controllers that stabilize the system, results in regret that scales exponentially in the problem dimensions. We propose a framework for adaptive control that exploits the characteristics of linear dynamical systems and deploys additional exploration in the early stages of the agent-environment interaction to guarantee that stabilizing controllers are designed sooner. We show that for the classes of controllable and stabilizable LQRs, where the latter is a generalization of prior works, these methods achieve O(√T) regret with a polynomial dependence on the problem dimensions.
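The paper itself specifies the exact algorithm; as a rough illustration of the "explore more in the early stages" idea, the sketch below (not the authors' method) runs an unknown LQR with a pure-exploration warm-up phase, estimates (A, B) by least squares from that data, and then switches to the certainty-equivalent controller computed from the estimated model. All system matrices, the horizon T, the warm-up length T0, and the noise scales are hypothetical choices made for the example.

```python
# Minimal sketch, assuming a small synthetic LQR: explore with random inputs
# for T0 steps, fit (A, B) by least squares, then apply the certainty-equivalent
# (CE) controller from the estimated model. Not the paper's algorithm.
import numpy as np
from scipy.linalg import solve_discrete_are

rng = np.random.default_rng(0)
n, d = 3, 2                       # state and input dimensions (assumed)
A_true = rng.normal(size=(n, n)) * 0.3   # hypothetical true dynamics
B_true = rng.normal(size=(n, d))
Q, R = np.eye(n), np.eye(d)       # quadratic cost weights (assumed)

def ce_gain(A, B):
    """Certainty-equivalent LQR gain from the discrete algebraic Riccati equation."""
    P = solve_discrete_are(A, B, Q, R)
    return -np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)

T, T0 = 2000, 200                 # total horizon and early exploration phase (assumed)
x = np.zeros(n)
X, U, Xn = [], [], []             # regression data: states, inputs, next states
K = np.zeros((d, n))              # no controller available before the first estimate

for t in range(T):
    if t < T0:
        u = rng.normal(size=d)                   # isotropic exploratory input
    else:
        u = K @ x + 0.1 * rng.normal(size=d)     # CE control plus small excitation
    x_next = A_true @ x + B_true @ u + 0.1 * rng.normal(size=n)
    X.append(x); U.append(u); Xn.append(x_next)
    x = x_next
    if t == T0 - 1:
        # Least-squares estimate of [A B] from the warm-up data.
        Z = np.hstack([np.array(X), np.array(U)])
        Theta, *_ = np.linalg.lstsq(Z, np.array(Xn), rcond=None)
        A_hat, B_hat = Theta.T[:, :n], Theta.T[:, n:]
        K = ce_gain(A_hat, B_hat)

print("closed-loop spectral radius:",
      max(abs(np.linalg.eigvals(A_true + B_true @ K))))
```

A spectral radius below 1 in the printout indicates that the controller designed from the warm-up estimates stabilizes the (hypothetical) true system; the paper's contribution is to make such early stabilization provable while keeping the overall regret at O(√T).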
Attached Files
- Submitted: 2007.12291.pdf (896.5 kB)
Additional details
- Eprint ID: 106484
- Resolver ID: CaltechAUTHORS:20201106-120155157
- Created: 2020-11-06 (from EPrint's datestamp field)
- Updated: 2023-06-02 (from EPrint's last_modified field)