FPL Analysis for Adaptive Bandits

Computer Science – Learning

Scientific paper

Details

A main problem of "Follow the Perturbed Leader" (FPL) strategies for online decision problems is that regret bounds are typically proven only against an oblivious adversary. In the partial observation case it was not clear how to obtain performance guarantees against an adaptive adversary without worsening the bounds. We propose a conceptually simple argument that resolves this problem. Using it, we show a regret bound of O(t^(2/3)) for FPL in the adversarial multi-armed bandit problem. This bound holds for the common FPL variant that uses only the observations from designated exploration rounds. Using all observations allows for the stronger bound of O(t^(1/2)), matching the best bound known so far (and essentially the known lower bound) for adversarial bandits. Surprisingly, this variant does not even need explicit exploration; it is self-stabilizing. However, the sampling probabilities have to be either externally provided or approximated to sufficient accuracy, using O(t^2 log t) samples in each step.
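For illustration, the following is a minimal Python sketch of an FPL strategy with designated exploration rounds for the K-armed adversarial bandit, in the spirit of the O(t^(2/3)) variant described above. The exploration rate, perturbation scale, loss-estimator weighting, and the loss_fn callback are illustrative assumptions, not the paper's exact choices.

import numpy as np

def fpl_bandit(loss_fn, K, T, seed=0):
    """Sketch of Follow the Perturbed Leader with designated exploration
    rounds for the adversarial K-armed bandit.

    loss_fn(t, arm) is an assumed callback returning the adversary's loss
    in [0, 1] for the pulled arm; the rates below are illustrative, not
    the paper's exact constants.
    """
    rng = np.random.default_rng(seed)
    est_loss = np.zeros(K)                    # importance-weighted cumulative loss estimates
    total_loss = 0.0
    for t in range(1, T + 1):
        gamma = min(1.0, t ** (-1.0 / 3.0))   # exploration probability (assumed rate)
        eta = t ** (-2.0 / 3.0)               # learning rate; perturbation scale is 1/eta
        if rng.random() < gamma:
            arm = int(rng.integers(K))        # designated exploration round: uniform arm
            explored = True
        else:
            perturbation = rng.exponential(scale=1.0 / eta, size=K)
            arm = int(np.argmin(est_loss - perturbation))   # follow the perturbed leader
            explored = False
        loss = loss_fn(t, arm)                # only the pulled arm's loss is observed
        total_loss += loss
        if explored:
            # Unbiased estimate built from exploration rounds only:
            # the pulled arm is observed here with probability gamma / K.
            est_loss[arm] += loss * K / gamma
    return total_loss

The O(t^(1/2)) variant discussed above would instead update the estimates from every round, using the (externally provided or approximated) sampling probabilities as importance weights, and would not need the explicit exploration branch.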

