Published July 2012 | Version Published
Book Section - Chapter (Open Access)

Hierarchical Exploration for Accelerating Contextual Bandits

Abstract

Contextual bandit learning is an increasingly popular approach to optimizing recommender systems via user feedback, but can be slow to converge in practice due to the need for exploring a large feature space. In this paper, we propose a coarse-to-fine hierarchical approach for encoding prior knowledge that drastically reduces the amount of exploration required. Intuitively, user preferences can be reasonably embedded in a coarse low-dimensional feature space that can be explored efficiently, requiring exploration in the high-dimensional space only as necessary. We introduce a bandit algorithm that explores within this coarse-to-fine spectrum, and prove performance guarantees that depend on how well the coarse space captures the user's preferences. We demonstrate substantial improvement over conventional bandit algorithms through extensive simulation as well as a live user study in the setting of personalized news recommendation.
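The coarse-to-fine idea can be illustrated with a small simulation. The sketch below is not the paper's algorithm; it simply runs a standard LinUCB-style bandit twice, once over the full high-dimensional feature space and once over a hand-picked low-dimensional coarse projection, under the assumption (as in the abstract's intuition) that the user's true preference vector lies mostly in the coarse subspace. All names (`linucb`, `P`, `theta_star`, dimensions `D` and `K`) are hypothetical choices for the demo.

```python
import numpy as np

rng = np.random.default_rng(0)

def linucb(contexts, reward_fn, d, alpha=1.0, T=500):
    """Basic LinUCB over d-dimensional arm features (not the paper's algorithm)."""
    A = np.eye(d)            # ridge-regularized Gram matrix
    b = np.zeros(d)          # accumulated reward-weighted features
    total = 0.0
    for t in range(T):
        X = contexts(t)                      # (n_arms, d) features this round
        theta = np.linalg.solve(A, b)        # ridge-regression estimate
        A_inv = np.linalg.inv(A)
        # Upper-confidence score: estimated reward + exploration bonus
        bonus = np.sqrt(np.einsum('ij,jk,ik->i', X, A_inv, X))
        a = int(np.argmax(X @ theta + alpha * bonus))
        r = reward_fn(a)
        A += np.outer(X[a], X[a])
        b += r * X[a]
        total += r
    return total

# Hypothetical setup: full features have D = 20 dims, but the true
# preference vector theta_star lies entirely in a K = 4-dim coarse subspace.
D, K, n_arms = 20, 4, 10
P = np.zeros((D, K)); P[:K, :K] = np.eye(K)   # projection onto coarse subspace
theta_star = np.zeros(D); theta_star[:K] = rng.normal(size=K)
arms = rng.normal(size=(n_arms, D))           # fixed arm feature vectors

def reward(a):
    return float(arms[a] @ theta_star + 0.1 * rng.normal())

reward_full = linucb(lambda t: arms, reward, d=D)       # explore all D dims
reward_coarse = linucb(lambda t: arms @ P, reward, d=K) # explore only K dims
```

Because the coarse run only has to learn `K` parameters instead of `D`, it typically accumulates reward faster early on, which is the intuition behind reducing exploration via a coarse embedding; the paper's contribution is an algorithm that moves along this coarse-to-fine spectrum adaptively, with guarantees tied to how well the coarse space captures the preferences.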

Additional Information

© 2012 by the author(s)/owner(s). The authors thank the anonymous reviewers for their helpful comments. The authors also thank Khalid El-Arini for help with data collection and processing. This work was supported in part by ONR (PECASE) N00014-10-1-0672, ONR Young Investigator Program N00014-08-1-0752, and by the Intel Science and Technology Center for Embedded Computing.

Attached Files

Published - 933.pdf (1.6 MB, md5:0f5b7454b681841a83e7531f44312d2e)

Additional details

Identifiers

Eprint ID
94188
Resolver ID
CaltechAUTHORS:20190327-085835163

Funding

Office of Naval Research (ONR)
N00014-10-1-0672
Office of Naval Research (ONR)
N00014-08-1-0752
Intel Science and Technology Center for Embedded Computing

Dates

Created
2019-03-27
Created from EPrint's datestamp field
Updated
2023-06-02
Created from EPrint's last_modified field