Robustness of Anytime Bandit Policies

Statistics – Machine Learning

Scientific paper

Details

This paper studies the deviations of the regret in a stochastic multi-armed bandit problem. When the total number of plays n is known to the agent beforehand, Audibert et al. (2009) exhibit a policy whose regret is of order log(n) with probability at least 1-1/n. They also show that this property is not shared by the popular UCB1 policy of Auer et al. (2002). This work first answers an open question by extending this negative result to any anytime policy, i.e. any policy that does not require knowledge of n in advance. Its second contribution is the design of anytime robust policies for specific multi-armed bandit problems in which the set of possible distributions of the arms is restricted.
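For readers unfamiliar with the policy discussed above, the following is a minimal sketch of the UCB1 index policy of Auer et al. (2002) on Bernoulli arms, together with the pseudo-regret it accumulates. The arm means and the horizon n used in the example are illustrative choices, not values taken from the paper; the quantity whose deviations the paper studies is the (random) regret returned by such a run.

    import math
    import random

    def ucb1(means, n):
        """Run the UCB1 policy of Auer et al. (2002) for n plays on
        Bernoulli arms with the given means; return the pseudo-regret."""
        k = len(means)
        counts = [0] * k          # number of plays of each arm
        sums = [0.0] * k          # sum of rewards observed on each arm
        best = max(means)
        regret = 0.0

        for t in range(1, n + 1):
            if t <= k:
                arm = t - 1       # play each arm once to initialize
            else:
                # UCB1 index: empirical mean + sqrt(2 log t / T_i(t))
                arm = max(
                    range(k),
                    key=lambda i: sums[i] / counts[i]
                    + math.sqrt(2.0 * math.log(t) / counts[i]),
                )
            reward = 1.0 if random.random() < means[arm] else 0.0
            counts[arm] += 1
            sums[arm] += reward
            regret += best - means[arm]   # pseudo-regret of the chosen arm

        return regret

    # Illustrative example: two Bernoulli arms. The regret is typically of
    # order log(n), but its deviations across runs are what the paper studies.
    print(ucb1([0.5, 0.6], n=10_000))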
