Online Algorithms for the Multi-Armed Bandit Problem with Markovian Rewards

Mathematics – Optimization and Control

Scientific paper

We consider the classical multi-armed bandit problem with Markovian rewards. When played, an arm changes its state in a Markovian fashion; when not played, its state remains frozen. The player receives a state-dependent reward each time it plays an arm. The number of states and the state transition probabilities of each arm are unknown to the player. The player's objective is to maximize its long-term total reward by learning the best arm over time. We show that, under certain conditions on the state transition probabilities of the arms, a sample-mean-based index policy achieves logarithmic regret uniformly over the total number of trials. This result shows that sample-mean-based index policies can be applied to learning problems under the rested Markovian bandit model without loss of optimality in the order of regret. Moreover, a comparison between Anantharam's index policy and UCB shows that, by choosing a small exploration parameter, UCB can achieve smaller regret than Anantharam's index policy.
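
The policy in question is a sample-mean-based UCB index: after each play the player updates the empirical mean reward of the arm, and at round t plays the arm maximizing mean_i + sqrt(L * ln(t) / n_i), where n_i is the number of times arm i has been played and L is the exploration parameter. Below is a minimal Python sketch of such a policy on a simulated rested Markovian bandit; the class name RestedMarkovArm, the two-state example chains, and the choice L = 2.0 are illustrative assumptions, not details taken from the paper.

import numpy as np

rng = np.random.default_rng(0)

class RestedMarkovArm:
    """An arm whose state evolves as a Markov chain only when played ("rested")."""
    def __init__(self, P, rewards, state=0):
        self.P = np.asarray(P)              # transition matrix, unknown to the player
        self.rewards = np.asarray(rewards)  # reward attached to each state
        self.state = state

    def play(self):
        r = self.rewards[self.state]
        # The state transitions only on a play; it stays frozen otherwise.
        self.state = rng.choice(len(self.rewards), p=self.P[self.state])
        return r

def ucb(arms, horizon, L=2.0):
    """Play argmax of sample mean + sqrt(L * ln(t) / n_i) at each round t."""
    K = len(arms)
    counts = np.zeros(K)
    sums = np.zeros(K)
    for t in range(1, horizon + 1):
        if t <= K:
            i = t - 1                       # play each arm once to initialize
        else:
            index = sums / counts + np.sqrt(L * np.log(t) / counts)
            i = int(np.argmax(index))
        sums[i] += arms[i].play()
        counts[i] += 1
    return sums.sum(), counts

# Two-state chains; under their stationary distributions the mean rewards differ.
arms = [RestedMarkovArm([[0.9, 0.1], [0.1, 0.9]], [0.0, 1.0]),   # stationary mean 0.5
        RestedMarkovArm([[0.5, 0.5], [0.5, 0.5]], [0.0, 0.6])]   # stationary mean 0.3
total, counts = ucb(arms, horizon=10_000)
print("total reward:", total, "plays per arm:", counts)

Because an arm's state is frozen while it is not played, the long-run value of an arm is its mean reward under the chain's stationary distribution, and the sample mean of observed rewards converges to that quantity; this is why a UCB index built on the sample mean can still identify the best arm, and why the number of plays of each suboptimal arm, and hence the regret, grows only logarithmically in the horizon under the paper's conditions on the transition probabilities.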
