
A Multi-Armed Bandit Approach for Online Expert Selection in Markov Decision Processes

Mazumdar, Eric and Dong, Roy and Rúbies Royo, Vicenç and Tomlin, Claire and Sastry, S. Shankar (2017) A Multi-Armed Bandit Approach for Online Expert Selection in Markov Decision Processes. . (Unpublished)



We formulate a multi-armed bandit (MAB) approach to choosing expert policies online in Markov decision processes (MDPs). Given a set of expert policies trained on a shared state and action space, the goal is to maximize the cumulative reward of our agent by quickly identifying the best expert in the set. The MAB formulation allows us to quantify the performance of an algorithm in terms of the regret incurred by not choosing the best expert from the beginning. We first develop the theoretical framework for MABs in MDPs and present a basic regret decomposition identity. We then adapt the classical Upper Confidence Bounds algorithm to the problem of choosing experts in MDPs and prove that the expected regret grows at worst at a logarithmic rate. Lastly, we validate the theory on a small MDP.
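The selection scheme described in the abstract can be sketched as a standard UCB1 loop over experts, treating each expert policy as an arm and one episode's return as the arm's reward sample. This is a minimal illustration, not the paper's algorithm: the function names, the toy Bernoulli "experts," and the exploration constant are all assumptions made for the sketch.

```python
import math
import random

def ucb_expert_selection(experts, run_episode, horizon, seed=0):
    """UCB1 over a set of expert policies: each round, play the expert
    with the highest upper confidence bound on its mean episode reward."""
    rng = random.Random(seed)
    n = [0] * len(experts)        # episodes played per expert
    mean = [0.0] * len(experts)   # empirical mean episode reward per expert
    total = 0.0
    for t in range(1, horizon + 1):
        if t <= len(experts):
            i = t - 1             # play each expert once to initialize
        else:
            # UCB1 index: empirical mean plus exploration bonus
            i = max(range(len(experts)),
                    key=lambda k: mean[k] + math.sqrt(2 * math.log(t) / n[k]))
        r = run_episode(experts[i], rng)
        n[i] += 1
        mean[i] += (r - mean[i]) / n[i]   # incremental mean update
        total += r
    return total, n

# Toy stand-in for an MDP rollout: each "expert" is just a mean episode
# reward, and run_episode draws a Bernoulli return. In the paper's setting,
# run_episode would roll out the chosen expert policy in the MDP.
experts = [0.3, 0.5, 0.8]

def run_episode(mu, rng):
    return 1.0 if rng.random() < mu else 0.0

total, pulls = ucb_expert_selection(experts, run_episode, horizon=2000)
```

Because the regret of UCB1 grows only logarithmically, the counts in `pulls` concentrate on the best expert as the horizon grows, which mirrors the logarithmic-regret guarantee the paper proves for its adapted algorithm.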

Item Type: Report or Paper (Discussion Paper)
ORCID:
Mazumdar, Eric: 0000-0002-1815-269X
Dong, Roy: 0000-0001-8034-4329
Tomlin, Claire: 0000-0003-3192-3185
Record Number: CaltechAUTHORS:20210903-213649911
Persistent URL:
Usage Policy: No commercial reproduction, distribution, display or performance rights in this work are provided.
ID Code: 110718
Deposited By: George Porter
Deposited On: 07 Sep 2021 16:44
Last Modified: 07 Sep 2021 16:44
