The best of both worlds: stochastic and adversarial bandits
Sébastien Bubeck, Aleksandrs Slivkins
Scientific paper, Computer Science – Learning
2012-02-20
We present a new bandit algorithm, SAO (Stochastic and Adversarial Optimal), whose regret is, essentially, optimal both for adversarial rewards and for stochastic rewards. Specifically, SAO combines the square-root worst-case regret of Exp3 (Auer et al., SIAM J. on Computing 2002) and the (poly)logarithmic regret of UCB1 (Auer et al., Machine Learning 2002) for stochastic rewards. Adversarial rewards and stochastic rewards are the two main settings in the literature on (non-Bayesian) multi-armed bandits. Prior work on multi-armed bandits treats them separately, and does not attempt to jointly optimize for both. Our result falls into a general theme of achieving good worst-case performance while also taking advantage of "nice" problem instances, an important issue in the design of algorithms with partially known inputs.
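The abstract does not spell out SAO's construction, so as a concrete point of reference the sketch below implements the two baselines it interpolates between: UCB1 (logarithmic regret on stochastic rewards) and Exp3 (square-root regret on adversarial rewards). This is only an illustration of those baselines, not the authors' SAO algorithm; the function names, the Bernoulli test instance, and the exploration parameter gamma are assumptions made for the example.

```python
import math
import random


def ucb1(pull, n_arms, horizon):
    """UCB1 (Auer et al. 2002): logarithmic regret on stochastic rewards."""
    counts = [0] * n_arms
    means = [0.0] * n_arms
    total = 0.0
    for t in range(1, horizon + 1):
        if t <= n_arms:
            arm = t - 1  # play each arm once to initialize the estimates
        else:
            # optimism in the face of uncertainty: empirical mean + confidence radius
            arm = max(range(n_arms),
                      key=lambda i: means[i] + math.sqrt(2.0 * math.log(t) / counts[i]))
        r = pull(arm)
        counts[arm] += 1
        means[arm] += (r - means[arm]) / counts[arm]  # incremental mean update
        total += r
    return total


def exp3(pull, n_arms, horizon, gamma=0.1):
    """Exp3 (Auer et al. 2002): O(sqrt(T K log K)) regret for rewards in [0, 1]."""
    weights = [1.0] * n_arms
    total = 0.0
    for _ in range(horizon):
        total_w = sum(weights)
        probs = [(1.0 - gamma) * w / total_w + gamma / n_arms for w in weights]
        arm = random.choices(range(n_arms), weights=probs)[0]
        r = pull(arm)
        # importance-weighted reward estimate keeps the exponential update unbiased
        weights[arm] *= math.exp(gamma * (r / probs[arm]) / n_arms)
        # renormalize so weights cannot overflow over long horizons
        max_w = max(weights)
        weights = [w / max_w for w in weights]
        total += r
    return total


if __name__ == "__main__":
    # Toy stochastic instance: three Bernoulli arms (means are illustrative).
    arm_means = [0.3, 0.5, 0.7]

    def pull(arm):
        return 1.0 if random.random() < arm_means[arm] else 0.0

    T = 10_000
    print("UCB1 total reward:", ucb1(pull, len(arm_means), T))
    print("Exp3 total reward:", exp3(pull, len(arm_means), T))
```

On a stochastic instance like this one, UCB1's confidence-bound rule typically accumulates more reward, reflecting the logarithmic-versus-square-root regret gap the abstract highlights; on an adversarially chosen reward sequence the comparison reverses. That tension between the two regimes is exactly what SAO is designed to resolve.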