Published October 20, 2020 | Version: Accepted
Discussion Paper | Open Access

Iterative Amortized Policy Optimization

Abstract

Policy networks are a central feature of deep reinforcement learning (RL) algorithms for continuous control, enabling the estimation and sampling of high-value actions. From the variational inference perspective on RL, policy networks, when employed with entropy or KL regularization, are a form of amortized optimization, optimizing network parameters rather than the policy distributions directly. However, this direct amortized mapping can, in practice, yield suboptimal policy estimates. Motivated by this perspective, we consider the more flexible class of iterative amortized optimizers. We demonstrate that the resulting technique, iterative amortized policy optimization, yields performance improvements over conventional direct amortization methods on benchmark continuous control tasks.
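The abstract contrasts direct amortization (a policy network that maps a state to distribution parameters in a single forward pass) with iterative amortization (refining the policy estimate over several optimization steps). The sketch below is not the authors' implementation; it only illustrates the distinction on a toy entropy-regularized objective. The critic, dimensions, temperature, and the use of plain gradient ascent in place of the paper's learned iterative optimizer are all assumptions made for illustration.

```python
# Minimal sketch: direct vs. iterative amortization of a Gaussian policy
# under an entropy-regularized objective  J = E_pi[Q(s, a)] + alpha * H(pi).
# The critic, sizes, and gradient-ascent refinement are illustrative assumptions,
# not the paper's learned iterative optimizer.
import torch
import torch.nn as nn

state_dim, action_dim = 8, 2
alpha = 0.1  # entropy-regularization temperature (assumed value)

# Toy critic Q(s, a) -> scalar, randomly initialized for illustration.
critic = nn.Sequential(nn.Linear(state_dim + action_dim, 64), nn.Tanh(),
                       nn.Linear(64, 1))

def objective(state, mean, log_std, n_samples=16):
    """Monte Carlo estimate of E_pi[Q(s, a)] + alpha * H(pi) for a diagonal Gaussian."""
    dist = torch.distributions.Normal(mean, log_std.exp())
    actions = dist.rsample((n_samples,))                       # reparameterized samples
    q = critic(torch.cat([state.expand(n_samples, -1), actions], dim=-1))
    return q.mean() + alpha * dist.entropy().sum()

# Direct amortization: one forward pass maps the state to policy parameters.
direct_net = nn.Sequential(nn.Linear(state_dim, 64), nn.Tanh(),
                           nn.Linear(64, 2 * action_dim))
state = torch.randn(1, state_dim)
mean, log_std = direct_net(state).squeeze(0).chunk(2)
print("direct estimate:   ", objective(state, mean, log_std).item())

# Iterative amortization (sketch): start from an initial estimate and refine the
# distribution parameters over several steps using gradients of the objective.
# Plain gradient ascent stands in for the paper's learned iterative update network.
mean = torch.zeros(action_dim, requires_grad=True)
log_std = torch.zeros(action_dim, requires_grad=True)
opt = torch.optim.Adam([mean, log_std], lr=0.05)
for _ in range(20):
    loss = -objective(state, mean, log_std)   # minimize the negative objective
    opt.zero_grad()
    loss.backward()
    opt.step()
print("iterative estimate:", objective(state, mean, log_std).item())
```

Under this toy setup, the refined estimate typically attains a higher objective value than the single-pass estimate, which is the intuition behind replacing direct amortization with an iterative optimizer.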

Additional Information

JM acknowledges Scott Fujimoto for helpful discussions. This work was funded in part by NSF #1918839 and Beyond Limits. JM is currently employed by Google DeepMind. The authors declare no other competing interests related to this work.

Attached Files

Accepted Version - 2010.10670.pdf (10.4 MB, md5:8581ae891f85d38907cb4a159bedda1e)


Additional details

Identifiers

Eprint ID: 106584
Resolver ID: CaltechAUTHORS:20201110-082336091

Funding

NSF: CCF-1918839
Beyond Limits

Dates

Created: 2020-11-10 (from EPrint's datestamp field)
Updated: 2023-06-02 (from EPrint's last_modified field)

Caltech Custom Metadata

Caltech groups: Center for Autonomous Systems and Technologies (CAST)