Published June 3, 2022 | Version: Submitted
Discussion Paper | Open Access

KCRL: Krasovskii-Constrained Reinforcement Learning with Guaranteed Stability in Nonlinear Dynamical Systems

Abstract

Learning a dynamical system requires stabilizing the unknown dynamics to avoid state blow-ups. However, current reinforcement learning (RL) methods lack stabilization guarantees, which limits their applicability to the control of safety-critical systems. We propose Krasovskii-Constrained RL (KCRL), a model-based RL framework with formal stability guarantees that adopts Krasovskii's family of Lyapunov functions as a stability constraint. The proposed method learns the system dynamics up to a confidence interval using a feature representation, e.g., Random Fourier Features. It then solves a constrained policy optimization problem, with a stability constraint based on Krasovskii's method, via a primal-dual approach to recover a stabilizing policy. We show that KCRL is guaranteed to learn a stabilizing policy after finitely many interactions with the underlying unknown system. We also derive a sample complexity upper bound for the stabilization of unknown nonlinear dynamical systems via the KCRL framework.
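To make the two ingredients of the abstract concrete, below is a minimal Python sketch of (i) fitting unknown dynamics with Random Fourier Features and (ii) a primal-dual update that penalizes violation of a Krasovskii-type stability constraint. All function names (make_rff, fit_dynamics, krasovskii_violation, primal_dual_step), the fixed matrix P, and the hyperparameters are illustrative assumptions, not the paper's implementation.

import numpy as np

rng = np.random.default_rng(0)

def make_rff(dim_in, num_features=200, bandwidth=1.0):
    # Random Fourier Features approximating an RBF kernel:
    # phi(z) = sqrt(2/m) * cos(W z + b)
    W = rng.normal(scale=1.0 / bandwidth, size=(num_features, dim_in))
    b = rng.uniform(0.0, 2.0 * np.pi, size=num_features)
    return lambda z: np.sqrt(2.0 / num_features) * np.cos(W @ z + b)

def fit_dynamics(phi, X, U, X_next, ridge=1e-3):
    # Ridge-regression model x_{t+1} ~= Theta @ phi([x_t; u_t]).
    F = np.stack([phi(np.concatenate([x, u])) for x, u in zip(X, U)])
    A = F.T @ F + ridge * np.eye(F.shape[1])
    return np.linalg.solve(A, F.T @ np.asarray(X_next)).T

def krasovskii_violation(J_cl, P, margin=1e-3):
    # Krasovskii's method uses V(x) = f(x)^T P f(x) with P > 0; stability
    # requires J_cl(x)^T P + P J_cl(x) to be negative definite, where J_cl
    # is the closed-loop Jacobian. Returns the largest eigenvalue of the
    # constraint matrix; a value <= 0 means the condition holds here.
    S = J_cl.T @ P + P @ J_cl + margin * np.eye(P.shape[0])
    return np.linalg.eigvalsh(S)[-1]

def primal_dual_step(theta, lam, grad_cost, grad_violation, violation,
                     eta_p=1e-2, eta_d=1e-1):
    # One step on the Lagrangian cost(theta) + lam * violation(theta):
    # gradient descent on the policy parameters, projected gradient
    # ascent on the dual variable lam >= 0.
    theta = theta - eta_p * (grad_cost + lam * grad_violation)
    lam = max(0.0, lam + eta_d * violation)
    return theta, lam

Note that this sketch evaluates the Krasovskii condition on a point estimate of the learned model for brevity; in the paper's framework the dynamics are only known up to a confidence interval, so the constraint would be enforced robustly over that set.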

Attached Files

Submitted - 2206.01704.pdf (219.4 kB)
md5:ca1b5d20df394b1fcb5f79340c449d4d

Additional details

Identifiers

Eprint ID
115581
Resolver ID
CaltechAUTHORS:20220714-212504090

Dates

Created
2022-07-15 (from the EPrints datestamp field)
Updated
2023-06-02 (from the EPrints last_modified field)