Qin, Zengyi and Chen, Yuxiao and Fan, Chuchu (2021) Density Constrained Reinforcement Learning. Proceedings of Machine Learning Research, 139 . pp. 8682-8692. ISSN 2640-3498. https://resolver.caltech.edu/CaltechAUTHORS:20220622-204900353
Available files:
- PDF (Published Version), 2MB. See Usage Policy.
- PDF (Accepted Version), 3MB. See Usage Policy.
- PDF (Supplemental Material), 1MB. See Usage Policy.
Use this Persistent URL to link to this item: https://resolver.caltech.edu/CaltechAUTHORS:20220622-204900353
Abstract
We study constrained reinforcement learning (CRL) from a novel perspective by setting constraints directly on state density functions, rather than on the value functions considered by previous works. State density has a clear physical and mathematical interpretation and can express a wide variety of constraints, such as resource limits and safety requirements. Density constraints also avoid the time-consuming process of designing and tuning the cost functions that value function-based constraints require to encode system specifications. We leverage the duality between density functions and Q functions to develop an effective algorithm that solves the density constrained RL problem optimally while guaranteeing that the constraints are satisfied. We prove that the proposed algorithm converges to a near-optimal solution with a bounded error even when the policy update is imperfect. We use a set of comprehensive experiments to demonstrate the advantages of our approach over state-of-the-art CRL methods, on a wide range of density constrained tasks as well as standard CRL benchmarks such as Safety-Gym.
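The density/Q-function duality described in the abstract lends itself to a primal-dual scheme: the density constraint's Lagrange multiplier enters the reward as a state-wise penalty, the penalized MDP is solved for a policy, and the multiplier is updated by dual ascent on the resulting state density. The sketch below illustrates that general pattern on a hypothetical random tabular MDP; the toy dynamics, step sizes, and exact update rule are illustrative assumptions, not the paper's algorithm or benchmarks.

```python
import numpy as np

# Hypothetical toy MDP (illustrative assumption, not the paper's setup):
# 5 states, 2 actions, random transitions and rewards.
n_s, n_a, gamma = 5, 2, 0.9
rng = np.random.default_rng(0)
P = rng.dirichlet(np.ones(n_s), size=(n_s, n_a))  # P[s, a] = next-state dist
R = rng.uniform(size=(n_s, n_a))                  # reward r(s, a)
mu0 = np.ones(n_s) / n_s                          # initial-state distribution
rho_max = np.full(n_s, 0.3)                       # density upper bound per state

def solve_q(reward, iters=500):
    # Value iteration on the Lagrangian-penalized reward.
    Q = np.zeros((n_s, n_a))
    for _ in range(iters):
        Q = reward + gamma * P @ Q.max(axis=1)
    return Q

def greedy_policy(Q):
    pi = np.zeros((n_s, n_a))
    pi[np.arange(n_s), Q.argmax(axis=1)] = 1.0
    return pi

def state_density(pi):
    # Discounted state density: rho = (1 - gamma) (I - gamma P_pi^T)^{-1} mu0.
    P_pi = np.einsum("sap,sa->sp", P, pi)
    return (1 - gamma) * np.linalg.solve(np.eye(n_s) - gamma * P_pi.T, mu0)

lam = np.zeros(n_s)  # one dual variable per state-density constraint
for _ in range(200):
    Q = solve_q(R - lam[:, None])   # penalized reward r(s, a) - lambda(s)
    pi = greedy_policy(Q)
    rho = state_density(pi)
    # Projected dual ascent on the constraint violation rho - rho_max.
    lam = np.maximum(lam + 0.5 * (rho - rho_max), 0.0)
```

Each outer iteration alternates a primal step (an RL solve under the penalized reward) with a dual step (raising the multiplier wherever the density bound is violated); the paper's contribution is showing such a loop converges to a near-optimal feasible solution even with imperfect policy updates.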
Item Type: Article
Additional Information: © 2021 by the author(s). The authors acknowledge support from the DARPA Assured Autonomy program under contract FA8750-19-C-0089 and from the Defense Science and Technology Agency in Singapore. The views, opinions, and/or findings expressed are those of the authors and should not be interpreted as representing the official views or policies of the Department of Defense, the U.S. Government, DSTA Singapore, or the Singapore Government.
Record Number: CaltechAUTHORS:20220622-204900353
Persistent URL: https://resolver.caltech.edu/CaltechAUTHORS:20220622-204900353
Official Citation: Qin, Z., Chen, Y., Fan, C. (2021). Density Constrained Reinforcement Learning. Proceedings of the 38th International Conference on Machine Learning, in Proceedings of Machine Learning Research, 139:8682-8692.
Usage Policy: No commercial reproduction, distribution, display or performance rights in this work are provided.
ID Code: 115234
Collection: CaltechAUTHORS
Deposited By: Tony Diaz
Deposited On: 28 Jun 2022 19:20
Last Modified: 28 Jun 2022 19:20