Computer Science – Learning
Scientific paper
2012-03-15
Appears in Proceedings of the Twenty-Sixth Conference on Uncertainty in Artificial Intelligence (UAI2010)
The explore-exploit dilemma is one of the central challenges in Reinforcement Learning (RL). Bayesian RL solves the dilemma by providing the agent with information in the form of a prior distribution over environments; however, full Bayesian planning is intractable. Planning with the mean MDP is a common myopic approximation of Bayesian planning. We derive a novel reward bonus that is a function of the posterior distribution over environments, which, when added to the reward in planning with the mean MDP, results in an agent which explores efficiently and effectively. Although our method is similar to existing methods when given an uninformative or unstructured prior, unlike existing methods, our method can exploit structured priors. We prove that our method results in a polynomial sample complexity and empirically demonstrate its advantages in a structured exploration task.
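As a rough illustration of the idea described in the abstract (not the paper's exact derivation), the sketch below assumes a Bernoulli bandit with independent Beta posteriors standing in for the unknown rewards of an MDP, and has the agent act greedily on the posterior-mean reward plus a bonus equal to the posterior standard deviation. The arm count, horizon, and variable names are all hypothetical.

```python
import numpy as np

# Minimal sketch, assuming a 5-armed Bernoulli bandit as a stand-in for the
# unknown-reward part of an MDP, with an independent Beta(1, 1) posterior per
# arm. The "variance-based" bonus here is simply the posterior standard
# deviation of each arm's mean reward.
rng = np.random.default_rng(0)
true_means = rng.uniform(0.2, 0.8, size=5)   # hypothetical environment
alpha = np.ones(5)                            # Beta posterior parameters
beta = np.ones(5)

for t in range(2000):
    post_mean = alpha / (alpha + beta)                            # "mean MDP" reward estimate
    post_var = alpha * beta / ((alpha + beta) ** 2 * (alpha + beta + 1))
    bonus = np.sqrt(post_var)                                     # posterior std-dev bonus
    action = int(np.argmax(post_mean + bonus))                    # greedy w.r.t. bonus-augmented reward

    reward = float(rng.random() < true_means[action])             # Bernoulli reward draw
    alpha[action] += reward
    beta[action] += 1.0 - reward

print("true means:     ", np.round(true_means, 3))
print("posterior means:", np.round(alpha / (alpha + beta), 3))
```

In the setting of the abstract, this kind of posterior-based bonus is added to the rewards of the mean MDP before planning; in this toy sketch the planning step degenerates to a one-step argmax, and the bonus shrinks as the posterior concentrates, so exploration decays naturally.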
Richard L. Lewis
Satinder Singh
Jonathan Sorg