
Online Inverse Reinforcement Learning via Bellman Gradient Iteration

Li, Kun and Burdick, Joel W. (2017) Online Inverse Reinforcement Learning via Bellman Gradient Iteration. (Unpublished)



This paper develops an online inverse reinforcement learning algorithm aimed at efficiently recovering a reward function from ongoing observations of an agent's actions. To reduce the computation time and storage space needed for reward estimation, this work assumes that each observed action implies a change in the Q-value distribution, and relates that change to the reward function via the gradient of the Q-value with respect to the reward function parameters. The gradients are computed with a novel Bellman Gradient Iteration method that allows the reward function to be updated whenever a new observation becomes available. The method's convergence to a local optimum is proved. This work tests the proposed method in two simulated environments and evaluates the algorithm's performance under both a linear and a non-linear reward function. The results show that the proposed algorithm requires only limited computation time and storage space, yet achieves increasing accuracy as the number of observations grows. We also present a potential application to robot cleaners at home.
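The core idea above can be sketched in code. The following is a minimal, hypothetical illustration (not the paper's implementation): it uses a smoothed Bellman backup (log-sum-exp in place of the hard max, one of several possible smooth approximations) so that the Q-values are differentiable in the reward parameters, iterates the gradient recursion alongside value iteration, and performs one online gradient step on the log-likelihood of each observed action under a Boltzmann action model. All names, the toy two-state MDP, and the hyperparameters are assumptions for illustration.

```python
import numpy as np

def soft_q_gradient(P, Phi, theta, gamma=0.9, beta=5.0, iters=200):
    """Smoothed value iteration with a linear reward r = Phi @ theta,
    tracking dQ/dtheta alongside Q (a simplified stand-in for the
    paper's Bellman Gradient Iteration; the smoothing choice is ours)."""
    S, A, _ = P.shape            # P[s, a, s'] = transition probability
    k = theta.size
    r = Phi @ theta              # per-state reward, shape (S,)
    Q = np.zeros((S, A))
    dQ = np.zeros((S, A, k))     # gradient of Q w.r.t. theta
    for _ in range(iters):
        # Smoothed max: V(s) = (1/beta) * logsumexp(beta * Q(s, .))
        m = Q.max(axis=1, keepdims=True)
        w = np.exp(beta * (Q - m))
        V = m[:, 0] + np.log(w.sum(axis=1)) / beta
        pi = w / w.sum(axis=1, keepdims=True)        # soft "argmax" weights
        # Envelope of log-sum-exp: dV/dtheta = sum_a pi(a|s) dQ(s,a)/dtheta
        dV = np.einsum('sa,sak->sk', pi, dQ)
        Q = r[:, None] + gamma * P @ V
        dQ = Phi[:, None, :] + gamma * np.einsum('sat,tk->sak', P, dV)
    return Q, dQ

def online_update(theta, s, a, P, Phi, lr=0.1, beta=5.0):
    """One online step: raise the log-likelihood of the observed action a
    in state s under a Boltzmann action model P(a|s) proportional to
    exp(beta * Q(s, a))."""
    Q, dQ = soft_q_gradient(P, Phi, theta)
    p = np.exp(beta * (Q[s] - Q[s].max()))
    p /= p.sum()
    # grad of log p(a|s) = beta * (dQ[s, a] - sum_b p(b|s) * dQ[s, b])
    g = beta * (dQ[s, a] - p @ dQ[s])
    return theta + lr * g

# Hypothetical demo: two-state chain, features are state indicators.
P = np.zeros((2, 2, 2))
P[:, 0, 0] = 1.0       # action 0 always moves to state 0
P[:, 1, 1] = 1.0       # action 1 always moves to state 1
Phi = np.eye(2)
theta = np.zeros(2)
for _ in range(20):    # repeatedly observe action 0 taken in state 1
    theta = online_update(theta, s=1, a=0, P=P, Phi=Phi)
# Repeated observations of action 0 raise the inferred reward of state 0.
```

Each observation triggers one gradient step rather than a full re-estimation from the whole trajectory, which is what keeps the per-observation computation and storage bounded as the observation stream grows.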

Item Type: Report or Paper (Discussion Paper)
Related URLs: Paper
Additional Information: This work was supported by the National Institutes of Health, NIBIB.
Record Number: CaltechAUTHORS:20190410-120637140
Persistent URL:
Usage Policy: No commercial reproduction, distribution, display or performance rights in this work are provided.
ID Code: 94634
Deposited By: George Porter
Deposited On: 11 Apr 2019 14:28
Last Modified: 03 Oct 2019 21:05
