
Experimental results: Reinforcement Learning of POMDPs using Spectral Methods

Azizzadenesheli, Kamyar and Lazaric, Alessandro and Anandkumar, Animashree (2017) Experimental results: Reinforcement Learning of POMDPs using Spectral Methods. (Unpublished) http://resolver.caltech.edu/CaltechAUTHORS:20190327-085721979

PDF - Submitted Version (365 kB). See Usage Policy.

Use this Persistent URL to link to this item: http://resolver.caltech.edu/CaltechAUTHORS:20190327-085721979

Abstract

We propose a new reinforcement learning algorithm for partially observable Markov decision processes (POMDPs) based on spectral decomposition methods. While spectral methods have been previously employed for consistent learning of (passive) latent variable models such as hidden Markov models, POMDPs are more challenging because the learner interacts with the environment and may change the future observations in the process. We devise a learning algorithm that runs through epochs; in each epoch, we employ spectral techniques to learn the POMDP parameters from a trajectory generated by a fixed policy. At the end of the epoch, an optimization oracle returns the optimal memoryless planning policy, which maximizes the expected reward under the estimated POMDP model. We prove an order-optimal regret bound with respect to the optimal memoryless policy and efficient scaling with respect to the dimensionality of the observation and action spaces.
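
The following is a minimal Python sketch of the epoch-based scheme described in the abstract. The environment interface (env), the spectral estimator, and the planning oracle are hypothetical placeholders introduced only for illustration; they are not the authors' implementation.

import numpy as np

def run_spectral_pomdp_learning(env, num_epochs, epoch_length,
                                spectral_estimator, planning_oracle):
    """Epoch-based loop sketched from the abstract: collect data under a
    fixed memoryless policy, estimate the POMDP with spectral methods,
    then replan with an optimization oracle."""
    # Start from an arbitrary memoryless policy: a map from observation to action.
    policy = lambda obs: np.random.randint(env.num_actions)

    for epoch in range(num_epochs):
        # 1. Run the current fixed memoryless policy to generate a trajectory.
        #    (Assumed interface: env.reset() -> obs, env.step(a) -> (obs, reward).)
        trajectory = []
        obs = env.reset()
        for _ in range(epoch_length):
            action = policy(obs)
            next_obs, reward = env.step(action)
            trajectory.append((obs, action, reward))
            obs = next_obs

        # 2. Spectral estimation of the POMDP parameters (observation,
        #    transition, and reward models) from the collected trajectory.
        estimated_model = spectral_estimator(trajectory)

        # 3. The planning oracle returns the memoryless policy that maximizes
        #    expected reward under the estimated POMDP model.
        policy = planning_oracle(estimated_model)

    return policy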


Item Type: Report or Paper (Discussion Paper)
Related URLs: http://arxiv.org/abs/1705.02553 (arXiv, Discussion Paper)
Record Number: CaltechAUTHORS:20190327-085721979
Persistent URL: http://resolver.caltech.edu/CaltechAUTHORS:20190327-085721979
Usage Policy: No commercial reproduction, distribution, display or performance rights in this work are provided.
ID Code: 94166
Collection: CaltechAUTHORS
Deposited By: George Porter
Deposited On: 28 Mar 2019 22:27
Last Modified: 28 Mar 2019 22:27
