Thompson Sampling Achieves Õ(√T) Regret in Linear Quadratic Control
Abstract
Thompson Sampling (TS) is an efficient method for decision-making under uncertainty, where an action is sampled from a carefully prescribed distribution that is updated based on the observed data. In this work, we study the problem of adaptive control of stabilizable linear-quadratic regulators (LQRs) using TS, where the system dynamics are unknown. Previous works have established that Õ(√T) frequentist regret is optimal for the adaptive control of LQRs. However, the existing methods either work only in restrictive settings, require a priori known stabilizing controllers, or utilize computationally intractable approaches. We propose an efficient TS algorithm for the adaptive control of LQRs, TS-based Adaptive Control (TSAC), that attains Õ(√T) regret, even for multidimensional systems, thereby solving the open problem posed in Abeille and Lazaric (2018). TSAC does not require an a priori known stabilizing controller and achieves fast stabilization of the underlying system by effectively exploring the environment in the early stages. Our result hinges on developing a novel lower bound on the probability that TS provides an optimistic sample. By carefully prescribing an early exploration strategy and a policy update rule, we show that TS achieves order-optimal regret in adaptive control of multidimensional stabilizable LQRs. We empirically demonstrate the performance and the efficiency of TSAC in several adaptive control tasks.
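The abstract describes the TS recipe at a high level: sample system dynamics from a posterior, solve the resulting LQR problem for a controller, act, and update the posterior from the observed transition. Below is a minimal illustrative sketch of that loop in Python. It uses a simplified Gaussian posterior over the dynamics; all names (A_true, dlqr_gain, etc.) are hypothetical, and it omits the paper's early-exploration strategy and policy-update schedule, so it should not be read as the authors' TSAC algorithm.

```python
# Minimal Thompson Sampling loop for an LQR with unknown dynamics.
# Simplified sketch, NOT the paper's TSAC algorithm: no early
# exploration phase and no policy-update schedule.
import numpy as np
from scipy.linalg import solve_discrete_are

rng = np.random.default_rng(0)

# True (unknown) system: x_{t+1} = A x_t + B u_t + w_t
A_true = np.array([[1.0, 0.1], [0.0, 1.0]])
B_true = np.array([[0.0], [0.1]])
n_x, n_u = B_true.shape
Q, R = np.eye(n_x), np.eye(n_u)

# Gaussian posterior over Theta = [A; B] via Bayesian linear regression:
# precision V and sufficient statistic S, with posterior mean V^{-1} S.
V = np.eye(n_x + n_u)
S = np.zeros((n_x + n_u, n_x))
mu = np.linalg.solve(V, S)

def dlqr_gain(A, B):
    """Infinite-horizon discrete-time LQR gain via the Riccati equation."""
    P = solve_discrete_are(A, B, Q, R)
    return np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)

x = np.zeros((n_x, 1))
for t in range(500):
    # --- Thompson step: sample dynamics from the posterior ---
    cov = np.linalg.inv(V)
    Theta = np.stack(
        [rng.multivariate_normal(mu[:, i], cov) for i in range(n_x)], axis=1
    )
    A_s, B_s = Theta[:n_x].T, Theta[n_x:].T

    # Control with the sampled model; fall back to zero input if the
    # sampled system is not stabilizable and the Riccati solve fails.
    try:
        u = -dlqr_gain(A_s, B_s) @ x
    except (np.linalg.LinAlgError, ValueError):
        u = np.zeros((n_u, 1))

    # --- act, observe the next state, and update the posterior ---
    z = np.vstack([x, u])  # regressor [x; u]
    x_next = A_true @ x + B_true @ u + 0.1 * rng.standard_normal((n_x, 1))
    V += z @ z.T
    S += z @ x_next.T
    mu = np.linalg.solve(V, S)
    x = x_next
```

As the abstract notes, the paper's contribution is showing that a carefully scheduled version of this loop, with an explicit early exploration strategy, attains Õ(√T) regret without a known stabilizing controller; the sketch above conveys only the basic sample-then-control structure.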
Additional Information
© 2022 T. Kargin, S. Lale, K. Azizzadenesheli, A. Anandkumar & B. Hassibi.
Attached Files
Published - kargin22a.pdf
Name | Size
---|---
kargin22a.pdf (md5:381dff1af93fe0715f5877653c9da3cd) | 617.4 kB
Additional details
- Eprint ID: 115576
- Resolver ID: CaltechAUTHORS:20220714-212445251
- Created: 2022-07-14 (from EPrint's datestamp field)
- Updated: 2023-06-02 (from EPrint's last_modified field)