Akella, Ravi Tej and Azizzadenesheli, Kamyar and Ghavamzadeh, Mohammad and Anandkumar, Anima and Yue, Yisong (2021) Deep Bayesian Quadrature Policy Optimization. In: Thirty-Fifth AAAI Conference on Artificial Intelligence (AAAI-21). Proceedings of the AAAI Conference on Artificial Intelligence. Vol. 35. Association for the Advancement of Artificial Intelligence, Palo Alto, CA, pp. 6600-6608. https://resolver.caltech.edu/CaltechAUTHORS:20201106-120212166
PDF - Submitted Version (9MB). See Usage Policy.
Use this Persistent URL to link to this item: https://resolver.caltech.edu/CaltechAUTHORS:20201106-120212166
Abstract
We study the problem of obtaining accurate policy gradient estimates using a finite number of samples. Monte-Carlo methods have been the default choice for policy gradient estimation, despite suffering from high variance in the gradient estimates. On the other hand, more sample-efficient alternatives like Bayesian quadrature methods have received little attention due to their high computational complexity. In this work, we propose deep Bayesian quadrature policy gradient (DBQPG), a computationally efficient high-dimensional generalization of Bayesian quadrature, for policy gradient estimation. We show that DBQPG can substitute Monte-Carlo estimation in policy gradient methods, and demonstrate its effectiveness on a set of continuous control benchmarks. In comparison to Monte-Carlo estimation, DBQPG provides (i) more accurate gradient estimates with significantly lower variance, (ii) a consistent improvement in sample complexity and average return for several deep policy gradient algorithms, and (iii) an uncertainty estimate for the gradient that can be incorporated to further improve performance.
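To make the idea in the abstract concrete: Bayesian quadrature replaces the uniform Monte-Carlo weights on the sampled score-times-return terms with posterior-mean quadrature weights derived from a Gaussian-process model of the action-value function. The snippet below is a minimal toy sketch under assumed simplifications (synthetic data, a plain Fisher kernel, dense linear algebra, hypothetical variable names such as `U`, `y`, and `sigma2`); it is not the paper's DBQPG implementation, which is specifically designed to avoid the cubic-cost solve shown here.

```python
# Toy contrast of Monte-Carlo vs. Bayesian-quadrature policy gradient estimation.
# Illustrative sketch only; NOT the paper's DBQPG method.
import numpy as np

rng = np.random.default_rng(0)
d, n = 10, 200        # policy-parameter dimension, number of state-action samples
sigma2 = 0.1          # assumed GP observation-noise variance

# Synthetic stand-ins for quantities normally computed from rollouts:
U = rng.normal(size=(d, n))   # score vectors u_i = grad_theta log pi(a_i | s_i)
y = rng.normal(size=n)        # action-value / return estimates Q(s_i, a_i)

# Monte-Carlo estimator: uniform 1/n weights on the integrand u_i * Q_i.
g_mc = (U @ y) / n

# Bayesian quadrature with a Fisher kernel k(x, x') = u(x)^T u(x'):
# a GP prior on Q yields closed-form quadrature weights in place of 1/n.
K = U.T @ U                   # kernel Gram matrix over the samples (n x n)
G = (U @ U.T) / n             # empirical Fisher information matrix (d x d)
Z = G @ U                     # z_j = E_x[u(x) k(x, x_j)], approximated empirically
g_bq = Z @ np.linalg.solve(K + sigma2 * np.eye(n), y)

print("MC estimate :", g_mc[:3])
print("BQ estimate :", g_bq[:3])
```

The dense n-by-n solve above is the computational bottleneck that motivates the "computationally efficient high-dimensional generalization" claimed in the abstract.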
Item Type: Book Section
Related URLs:
ORCID:
Additional Information: © 2021 Association for the Advancement of Artificial Intelligence. Published: 2021-05-18. K. Azizzadenesheli is supported in part by Raytheon and Amazon Web Services. A. Anandkumar is supported in part by a Bren endowed chair, DARPA PAI HR00111890035 and LwLL grants, Raytheon, Microsoft, Google, and Adobe faculty fellowships.
Funders:
Subject Keywords: Reinforcement Learning
Series Name: Proceedings of the AAAI Conference on Artificial Intelligence
DOI: 10.48550/arXiv.2006.15637
Record Number: CaltechAUTHORS:20201106-120212166
Persistent URL: https://resolver.caltech.edu/CaltechAUTHORS:20201106-120212166
Official Citation: Akella, R. T., Azizzadenesheli, K., Ghavamzadeh, M., Anandkumar, A., & Yue, Y. (2021). Deep Bayesian Quadrature Policy Optimization. Proceedings of the AAAI Conference on Artificial Intelligence, 35(8), 6600-6608.
Usage Policy: No commercial reproduction, distribution, display or performance rights in this work are provided.
ID Code: 106489
Collection: CaltechAUTHORS
Deposited By: George Porter
Deposited On: 06 Nov 2020 22:36
Last Modified: 02 Jun 2023 01:08