A Caltech Library Service

Inverse Reinforcement Learning in Large State Spaces via Function Approximation

Li, Kun and Burdick, Joel W. (2017) Inverse Reinforcement Learning in Large State Spaces via Function Approximation. (Unpublished)

PDF (Submitted Version); see Usage Policy.


This paper introduces a new method for inverse reinforcement learning in large-scale, high-dimensional state spaces. To avoid solving computationally expensive reinforcement learning problems during reward learning, we propose a function approximation method that ensures the Bellman Optimality Equation always holds, and then estimate a function that maximizes the likelihood of the observed motion. The time complexity of the proposed method is linear in the cardinality of the action set, so it handles large state spaces efficiently. We test the proposed method in a simulated environment and show that it is more accurate than existing methods and scales significantly better. We also show that the proposed method can extend many existing methods to high-dimensional state spaces. We then apply the method to evaluate the effect of rehabilitative stimulation on patients with spinal cord injuries, based on the observed patient motions.
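The abstract describes the method only at a high level. As an illustration of the general idea, not the authors' implementation, the sketch below parameterizes the optimal action-value function Q directly, so that the Bellman Optimality Equation holds by construction, fits Q by maximizing the likelihood of observed state-action pairs under a softmax (Boltzmann) demonstrator model, and then recovers a consistent reward as the Bellman residual r(s, a) = Q(s, a) − γ·E[max Q(s′, ·)]. The toy MDP, sample sizes, and distributions are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
n_states, n_actions, gamma = 5, 3, 0.9

def softmax(x):
    # Numerically stable softmax over the last axis.
    z = x - x.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

# Synthetic demonstrations drawn from a softmax policy over a hidden Q
# (stands in for the "observed motion"; purely illustrative).
true_Q = rng.normal(size=(n_states, n_actions))
states = rng.integers(0, n_states, size=3000)
actions = np.array([rng.choice(n_actions, p=p)
                    for p in softmax(true_Q)[states]])

# Empirical action frequencies per state (the sufficient statistics here).
counts = np.zeros((n_states, n_actions))
np.add.at(counts, (states, actions), 1.0)
freq = counts / np.maximum(counts.sum(axis=1, keepdims=True), 1.0)

# Gradient ascent on the per-state average log-likelihood:
# d/dQ[s] log-lik = freq[s] - softmax(Q[s]).  Each update costs
# O(|S| * |A|), i.e. linear in the action set, with no RL solve in the loop.
Q = np.zeros((n_states, n_actions))
for _ in range(500):
    Q += freq - softmax(Q)

# Reward recovery via the Bellman residual, assuming known transition
# probabilities P[s, a, s'] (random here for illustration).
P = rng.dirichlet(np.ones(n_states), size=(n_states, n_actions))
V = Q.max(axis=1)            # state values implied by the fitted Q
r = Q - gamma * (P @ V)      # r[s, a] = Q[s, a] - gamma * E[V(s')]
```

Because Q is estimated directly and the reward is read off afterwards, the inner loop never solves a forward reinforcement learning problem, which is the property the abstract credits for the method's scalability.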

Item Type: Report or Paper (Discussion Paper)
Additional Information: This work was supported by the National Institutes of Health, NIBIB.
Record Number: CaltechAUTHORS:20190410-120633713
Persistent URL:
Usage Policy: No commercial reproduction, distribution, display or performance rights in this work are provided.
ID Code: 94633
Deposited By: George Porter
Deposited On: 11 Apr 2019 14:30
Last Modified: 03 Oct 2019 21:05
