Regret Analysis of Stochastic and Nonstochastic Multi-armed Bandit Problems

Computer Science – Learning

Scientific paper


Details

Submitted to Foundations and Trends in Machine Learning

Multi-armed bandit problems are the most basic examples of sequential decision problems with an exploration-exploitation trade-off: the balance between staying with the option that gave the highest payoffs in the past and exploring new options that might give higher payoffs in the future. Although the study of bandit problems dates back to the 1930s, exploration-exploitation trade-offs arise in several modern applications, such as ad placement, website optimization, and packet routing. Mathematically, a multi-armed bandit is defined by the payoff process associated with each option. In this survey, we focus on two extreme cases in which the analysis of regret is particularly simple and elegant: i.i.d. payoffs and adversarial payoffs. Besides the basic setting of finitely many actions, we also analyze some of the most important variants and extensions, such as the contextual bandit model.
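As an illustration of the i.i.d. (stochastic) setting described in the abstract, below is a minimal sketch of the UCB1 index policy (Auer, Cesa-Bianchi, and Fischer, 2002), a standard algorithm for this setting: each arm's index is its empirical mean plus a confidence bonus, so under-sampled arms keep getting explored while well-sampled good arms get exploited. The two-armed Bernoulli bandit, the means 0.4 and 0.6, and the names pull, ucb1, n_arms, and horizon are illustrative assumptions for this sketch, not taken from the paper itself.

```python
import math
import random

def ucb1(pull, n_arms, horizon):
    """Play a stochastic bandit with the UCB1 index policy.

    pull(arm) -> payoff in [0, 1]; the usual regret guarantees
    assume the payoffs of each arm are i.i.d. across rounds.
    """
    counts = [0] * n_arms   # times each arm has been played
    sums = [0.0] * n_arms   # cumulative payoff per arm
    history = []
    for t in range(1, horizon + 1):
        if t <= n_arms:
            arm = t - 1     # play each arm once to initialize its estimate
        else:
            # index = empirical mean + confidence bonus (exploration term)
            arm = max(
                range(n_arms),
                key=lambda i: sums[i] / counts[i]
                              + math.sqrt(2 * math.log(t) / counts[i]),
            )
        payoff = pull(arm)
        counts[arm] += 1
        sums[arm] += payoff
        history.append((arm, payoff))
    return history

# Hypothetical two-armed Bernoulli bandit: arm 1 is better in expectation.
means = [0.4, 0.6]
plays = ucb1(lambda a: 1.0 if random.random() < means[a] else 0.0,
             n_arms=2, horizon=1000)
print("arm-1 pull fraction:", sum(a == 1 for a, _ in plays) / len(plays))
```

Over the 1000 rounds the confidence bonus shrinks for frequently played arms, so the policy concentrates its pulls on the empirically better arm while still occasionally revisiting the other, which is exactly the exploration-exploitation balance the abstract describes.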
