Variance-Based Rewards for Approximate Bayesian Reinforcement Learning

Computer Science – Learning

Scientific paper


Details

Appears in Proceedings of the Twenty-Sixth Conference on Uncertainty in Artificial Intelligence (UAI2010)


The explore-exploit dilemma is one of the central challenges in Reinforcement Learning (RL). Bayesian RL resolves the dilemma by providing the agent with information in the form of a prior distribution over environments; however, full Bayesian planning is intractable. Planning with the mean MDP is a common myopic approximation of Bayesian planning. We derive a novel reward bonus that is a function of the posterior distribution over environments and that, when added to the reward in planning with the mean MDP, yields an agent that explores efficiently and effectively. Although our method behaves similarly to existing methods when given an uninformative or unstructured prior, unlike those methods it can exploit structured priors. We prove that our method has polynomial sample complexity and empirically demonstrate its advantages in a structured exploration task.
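To make the idea concrete, the sketch below shows one simple instance of a variance-based exploration bonus. It is not the paper's actual derivation: it assumes a Beta posterior over a single Bernoulli reward parameter, and the bonus (the posterior standard deviation, weighted by a hypothetical `scale` parameter) is simply added to the mean-MDP reward estimate, so the bonus decays as evidence about a state-action accumulates.

```python
import math

def beta_posterior_stats(alpha: float, beta: float):
    """Mean and variance of a Beta(alpha, beta) posterior over a
    Bernoulli reward parameter."""
    mean = alpha / (alpha + beta)
    var = (alpha * beta) / ((alpha + beta) ** 2 * (alpha + beta + 1.0))
    return mean, var

def bonus_augmented_reward(alpha: float, beta: float, scale: float = 1.0):
    """Mean reward plus a variance-based exploration bonus.

    `scale` is a hypothetical tuning parameter, not from the paper.
    The bonus is the posterior standard deviation, so it shrinks as
    observations accumulate and well-explored actions lose their bonus.
    """
    mean, var = beta_posterior_stats(alpha, beta)
    return mean + scale * math.sqrt(var)

# A barely-visited action keeps a large bonus...
r_new = bonus_augmented_reward(1.0, 1.0)
# ...while a well-explored one with the same empirical mean does not.
r_old = bonus_augmented_reward(50.0, 50.0)
```

Planning in the mean MDP with these augmented rewards drives the agent toward state-actions whose posterior is still uncertain, which is the mechanism the abstract describes.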
