CaltechAUTHORS
  A Caltech Library Service

Imitation-Projected Policy Gradient for Programmatic Reinforcement Learning

Verma, Abhinav and Le, Hoang M. and Yue, Yisong and Chaudhuri, Swarat (2019) Imitation-Projected Policy Gradient for Programmatic Reinforcement Learning. In: 33rd Conference on Neural Information Processing Systems. Neural Information Processing Systems Foundation, Inc., Art. No. 9705. https://resolver.caltech.edu/CaltechAUTHORS:20190905-154314013

PDF - Published Version, 685kB. See Usage Policy.
PDF - Submitted Version, 812kB. See Usage Policy.
Archive (ZIP) - Supplemental Material, 669kB. See Usage Policy.

Use this Persistent URL to link to this item: https://resolver.caltech.edu/CaltechAUTHORS:20190905-154314013

Abstract

We study the problem of programmatic reinforcement learning, in which policies are represented as short programs in a symbolic language. Programmatic policies can be more interpretable, generalizable, and amenable to formal verification than neural policies; however, designing rigorous learning approaches for such policies remains a challenge. Our approach to this challenge - a meta-algorithm called PROPEL - is based on three insights. First, we view our learning task as optimization in policy space, modulo the constraint that the desired policy has a programmatic representation, and solve this optimization problem using a form of mirror descent that takes a gradient step into the unconstrained policy space and then projects back onto the constrained space. Second, we view the unconstrained policy space as mixing neural and programmatic representations, which enables employing state-of-the-art deep policy gradient approaches. Third, we cast the projection step as program synthesis via imitation learning, and exploit contemporary combinatorial methods for this task. We present theoretical convergence results for PROPEL and empirically evaluate the approach in three continuous control domains. The experiments show that PROPEL can significantly outperform state-of-the-art approaches for learning programmatic policies.
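The abstract describes PROPEL as a mirror-descent-style loop that alternates an unconstrained policy-gradient step with a projection back onto programmatic policies via imitation. The sketch below is only a toy illustration of that loop under simplifying assumptions: a one-dimensional control objective, a coarse grid of coefficients standing in for the symbolic program space, a finite-difference gradient step standing in for deep policy gradients, and least-squares imitation standing in for program synthesis. None of the names or choices here come from the authors' implementation.

```python
import numpy as np

# Toy sketch of the PROPEL-style loop: an unconstrained gradient step,
# then projection onto a constrained "programmatic" space via imitation.
# All modeling choices below are illustrative assumptions, not the paper's code.

rng = np.random.default_rng(0)

def reward(theta, states):
    """Toy control objective: actions a = theta * s should match 2 * s."""
    actions = theta * states
    return -np.mean((actions - 2.0 * states) ** 2)

def policy_gradient_step(theta, states, lr=0.3, eps=1e-3):
    """Finite-difference gradient ascent in the unconstrained (real-valued) space."""
    grad = (reward(theta + eps, states) - reward(theta - eps, states)) / (2 * eps)
    return theta + lr * grad

def project_by_imitation(theta_unconstrained, states, grid):
    """Stand-in for program synthesis: pick the grid coefficient that best
    imitates the unconstrained policy's actions on sampled states."""
    target_actions = theta_unconstrained * states
    losses = [np.mean((g * states - target_actions) ** 2) for g in grid]
    return grid[int(np.argmin(losses))]

# Constrained "programmatic" space: coefficients restricted to a coarse grid.
program_grid = np.round(np.arange(-3.0, 3.01, 0.5), 2)

theta_prog = 0.0                       # initial programmatic policy
for _ in range(20):
    states = rng.normal(size=64)       # sampled rollout states
    theta_mixed = policy_gradient_step(theta_prog, states)               # unconstrained update
    theta_prog = project_by_imitation(theta_mixed, states, program_grid)  # projection step

print("learned programmatic coefficient:", theta_prog)  # should move toward 2.0 here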


Item Type: Book Section
Related URLs:
URL | URL Type | Description
https://papers.nips.cc/paper/9705-imitation-projected-programmatic-reinforcement-learning | Publisher | Article
https://arxiv.org/abs/1907.05431 | arXiv | Discussion Paper
ORCID:
Author | ORCID
Verma, Abhinav | 0000-0002-9820-8285
Yue, Yisong | 0000-0001-9127-1989
Additional Information: © 2019 Neural Information Processing Systems Foundation, Inc. This work was supported in part by United States Air Force Contract # FA8750-19-C-0092, NSF Award # 1645832, NSF Award # CCF-1704883, the Okawa Foundation, Raytheon, PIMCO, and Intel.
Funders:
Funding Agency | Grant Number
Air Force Office of Scientific Research (AFOSR) | FA8750-19-C-0092
NSF | CNS-1645832
NSF | CCF-1704883
Okawa Foundation | UNSPECIFIED
Raytheon Company | UNSPECIFIED
PIMCO | UNSPECIFIED
Intel | UNSPECIFIED
Record Number: CaltechAUTHORS:20190905-154314013
Persistent URL: https://resolver.caltech.edu/CaltechAUTHORS:20190905-154314013
Usage Policy: No commercial reproduction, distribution, display or performance rights in this work are provided.
ID Code: 98460
Collection: CaltechAUTHORS
Deposited By: George Porter
Deposited On: 05 Sep 2019 23:10
Last Modified: 09 Jul 2020 21:39
