Computer Science – Learning
Scientific paper
2011-03-23
Published at ICML 2011, 8 pages, 6 figures
We study decision making in environments where the reward is only partially observed, but can be modeled as a function of an action and an observed context. This setting, known as contextual bandits, encompasses a wide variety of applications including health-care policy and Internet advertising. A central task is evaluation of a new policy given historic data consisting of contexts, actions and received rewards. The key challenge is that the past data typically does not faithfully represent proportions of actions taken by a new policy. Previous approaches rely either on models of rewards or models of the past policy. The former are plagued by a large bias whereas the latter have a large variance. In this work, we leverage the strength and overcome the weaknesses of the two approaches by applying the doubly robust technique to the problems of policy evaluation and optimization. We prove that this approach yields accurate value estimates when we have either a good (but not necessarily consistent) model of rewards or a good (but not necessarily consistent) model of past policy. Extensive empirical comparison demonstrates that the doubly robust approach uniformly improves over existing techniques, achieving both lower variance in value estimation and better policies. As such, we expect the doubly robust approach to become common practice.
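To make the abstract's estimator concrete, here is a minimal Python sketch of a doubly robust value estimate for a deterministic target policy, computed from logged (context, action, reward) triples. The names (dr_value, reward_model, logging_prob, target_policy) are illustrative placeholders of mine, not identifiers from the paper; the sketch shows the standard doubly robust combination of a reward-model prediction with an importance-weighted correction.

from typing import Callable, Sequence

def dr_value(
    contexts: Sequence,                              # logged contexts x_i
    actions: Sequence[int],                          # logged actions a_i
    rewards: Sequence[float],                        # observed rewards r_i
    logging_prob: Callable[[object, int], float],    # model of p(a | x) under the past policy
    reward_model: Callable[[object, int], float],    # model of E[r | x, a]
    target_policy: Callable[[object], int],          # deterministic policy to evaluate
) -> float:
    """Doubly robust estimate of the target policy's expected reward."""
    n = len(contexts)
    total = 0.0
    for x, a, r in zip(contexts, actions, rewards):
        pi_a = target_policy(x)
        # Model-based term: predicted reward of the action the new policy would take.
        estimate = reward_model(x, pi_a)
        # Importance-weighted correction: applied only when the logged action matches
        # the target policy's action; it cancels the reward model's bias wherever the
        # logging-policy model is accurate.
        if a == pi_a:
            estimate += (r - reward_model(x, a)) / logging_prob(x, a)
        total += estimate
    return total / n

The estimate remains accurate if either reward_model or logging_prob is accurate, which is the "doubly robust" property the abstract claims; if both are misspecified, no such guarantee holds.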
Dudik Miroslav
Langford John
Li Lihong