Published July 2012
Book Section - Chapter (Open Access)

Hierarchical Exploration for Accelerating Contextual Bandits


Contextual bandit learning is an increasingly popular approach to optimizing recommender systems via user feedback, but can be slow to converge in practice due to the need for exploring a large feature space. In this paper, we propose a coarse-to-fine hierarchical approach for encoding prior knowledge that drastically reduces the amount of exploration required. Intuitively, user preferences can be reasonably embedded in a coarse low-dimensional feature space that can be explored efficiently, requiring exploration in the high-dimensional space only as necessary. We introduce a bandit algorithm that explores within this coarse-to-fine spectrum, and prove performance guarantees that depend on how well the coarse space captures the user's preferences. We demonstrate substantial improvement over conventional bandit algorithms through extensive simulation as well as a live user study in the setting of personalized news recommendation.
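The coarse-to-fine idea described above can be illustrated with a minimal sketch: a LinUCB-style linear bandit whose exploration bonus is computed primarily in a low-dimensional coarse subspace, falling back on a small full-dimensional term. This is an illustrative approximation under assumed details, not the paper's exact algorithm; the projection matrix `U`, the weighting of the two bonus terms, and all parameter values are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

d, k = 20, 2                                # full and coarse feature dimensions (illustrative)
U = rng.normal(size=(d, k)) / np.sqrt(d)    # hypothetical coarse projection (e.g., from prior data)


class CoarseToFineLinUCB:
    """Sketch of coarse-to-fine linear UCB.

    The mean reward estimate uses the full d-dimensional ridge-regression
    weights, but the confidence width is dominated by the k-dimensional
    coarse subspace, so early exploration is effectively low-dimensional.
    """

    def __init__(self, d, U, lam=1.0, alpha=1.0, fine_weight=0.1):
        self.U = U
        self.A = lam * np.eye(d)             # full-space regularized Gram matrix
        self.b = np.zeros(d)
        self.A_c = lam * np.eye(U.shape[1])  # coarse-space Gram matrix
        self.b_c = np.zeros(U.shape[1])
        self.alpha = alpha
        self.fine_weight = fine_weight       # hypothetical down-weighting of fine-space bonus

    def score(self, x):
        w = np.linalg.solve(self.A, self.b)  # full-space ridge estimate
        z = self.U.T @ x                     # coarse representation of the context/arm
        coarse_width = np.sqrt(z @ np.linalg.solve(self.A_c, z))
        fine_width = np.sqrt(x @ np.linalg.solve(self.A, x))
        # Optimism: mean plus a bonus dominated by coarse-space uncertainty.
        return x @ w + self.alpha * (coarse_width + self.fine_weight * fine_width)

    def update(self, x, r):
        self.A += np.outer(x, x)
        self.b += r * x
        z = self.U.T @ x
        self.A_c += np.outer(z, z)
        self.b_c += r * z


# Usage: score candidate arms, play the optimistic argmax, observe reward, update.
bandit = CoarseToFineLinUCB(d, U)
arms = rng.normal(size=(5, d))
chosen = int(np.argmax([bandit.score(x) for x in arms]))
bandit.update(arms[chosen], r=1.0)
```

Because the coarse Gram matrix `A_c` concentrates after only a few pulls, the coarse bonus shrinks quickly; the fine-space term keeps a (down-weighted) incentive to explore directions the coarse subspace cannot represent, mirroring the "explore the high-dimensional space only as necessary" intuition.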

Additional Information

© 2012 by the author(s)/owner(s). The authors thank the anonymous reviewers for their helpful comments. The authors also thank Khalid El-Arini for help with data collection and processing. This work was supported in part by ONR (PECASE) N000141010672, ONR Young Investigator Program N00014-08-1-0752, and by the Intel Science and Technology Center for Embedded Computing.

Attached Files

Published - 933.pdf


Files (1.6 MB)

Additional details

August 19, 2023