Published October 6, 2014 | Version: Published
Book Section - Chapter | Open Access

Clinical Online Recommendation with Subgroup Rank Feedback

  • 1. California Institute of Technology

Abstract

Many real applications in experimental design need to make decisions online. Each decision leads to a stochastic reward with an initially unknown distribution, and new decisions are made based on observations of previous rewards. To maximize the total reward, one must balance exploring different strategies against exploiting the currently optimal strategy. This kind of tradeoff can be formalized as a multi-armed bandit problem: we recommend strategies in series and generate new recommendations based on the noisy rewards of previous strategies. When the reward for a strategy is difficult to quantify, classical bandit algorithms are no longer optimal. This paper studies the multi-armed bandit problem with feedback given as a stochastic rank list instead of quantified reward values. We propose an algorithm for this new problem and show its optimality. A real clinical application of this algorithm is helping paralyzed patients regain the ability to stand on their own feet.
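For readers unfamiliar with the classical setting the abstract departs from, below is a minimal sketch of a standard index-based bandit algorithm (UCB1) that observes quantified numeric rewards directly. The paper's contribution replaces these numeric rewards with stochastic rank-list feedback over subgroups, which this sketch does not implement; the `pull_arm` callback and the toy Bernoulli arms are illustrative assumptions, not part of the paper.

```python
import math
import random

def ucb1(pull_arm, n_arms, horizon):
    """Classic UCB1: pull each arm once, then pick the arm with the
    highest empirical mean plus an exploration bonus."""
    counts = [0] * n_arms   # number of pulls per arm
    sums = [0.0] * n_arms   # cumulative reward per arm
    history = []
    for t in range(1, horizon + 1):
        if t <= n_arms:
            arm = t - 1     # initialization round: try every arm once
        else:
            # exploitation term (empirical mean) + exploration bonus
            arm = max(
                range(n_arms),
                key=lambda a: sums[a] / counts[a]
                + math.sqrt(2.0 * math.log(t) / counts[a]),
            )
        reward = pull_arm(arm)   # observe a noisy numeric reward
        counts[arm] += 1
        sums[arm] += reward
        history.append((arm, reward))
    return history

# Toy usage: three Bernoulli arms with hidden success probabilities.
probs = [0.2, 0.5, 0.8]
random.seed(0)
ucb1(lambda a: float(random.random() < probs[a]), n_arms=3, horizon=1000)
```

The exploration bonus shrinks as an arm accumulates pulls, which is exactly the quantified-reward machinery that becomes unavailable when, as in this paper, feedback arrives only as a rank list.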

Additional Information

Copyright is held by the owner/author(s). Publication rights licensed to ACM. This work was supported by the Helmsley Foundation, the Christopher and Dana Reeve Foundation, and the National Institutes of Health (NIH).

Files

Published - p289-sui.pdf (895.2 kB)
md5:fba79c2c91ec719b0fd440b84577b8b3

Additional details

Identifiers

Eprint ID: 50398
Resolver ID: CaltechAUTHORS:20141015-100508890

Funding

Helmsley Foundation
Christopher and Dana Reeve Foundation
NIH

Dates

Created: 2020-03-09 (from EPrint's datestamp field)
Updated: 2021-11-10 (from EPrint's last_modified field)