Tree Exploration for Bayesian RL Exploration

Statistics – Machine Learning

Scientific paper


Details

13 pages, 1 figure. Slightly extended and corrected version (notation errors and lower bound calculation) of homonymous paper

Research in reinforcement learning has produced algorithms for optimal decision making under uncertainty that fall within two main types. The first employs a Bayesian framework, where optimality improves with increased computational time. This is because the resulting planning task takes the form of a dynamic programming problem on a belief tree with an infinite number of states. The second type employs relatively simple algorithms that are shown to suffer small regret within a distribution-free framework. This paper presents a lower bound and a high-probability upper bound on the optimal value function for the nodes in the Bayesian belief tree, which are analogous to similar bounds in POMDPs. The bounds are then used to create more efficient strategies for exploring the tree. The resulting algorithms are compared with the distribution-free algorithm UCB1, as well as a simpler baseline algorithm, on multi-armed bandit problems.
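For context, the distribution-free comparison algorithm UCB1 is simple to state. The following is a minimal sketch of UCB1 on Bernoulli bandit arms; the arm means, horizon, and helper names are illustrative choices, not taken from the paper.

```python
import math
import random

def ucb1(arm_means, horizon, rng):
    """Run UCB1 on Bernoulli arms with the given success probabilities.

    Returns the per-arm pull counts after `horizon` rounds.
    """
    k = len(arm_means)
    counts = [0] * k    # times each arm has been pulled
    sums = [0.0] * k    # total reward collected per arm

    def pull(i):
        # Bernoulli reward: 1 with probability arm_means[i], else 0.
        reward = 1.0 if rng.random() < arm_means[i] else 0.0
        counts[i] += 1
        sums[i] += reward

    # Initialisation: play each arm once.
    for i in range(k):
        pull(i)

    for t in range(k + 1, horizon + 1):
        # Pick the arm maximising its empirical mean plus an
        # upper-confidence bonus that shrinks as the arm is sampled.
        ucb = [sums[i] / counts[i] + math.sqrt(2.0 * math.log(t) / counts[i])
               for i in range(k)]
        pull(max(range(k), key=lambda i: ucb[i]))

    return counts

# Illustrative run: three arms, the last one best, fixed seed.
counts = ucb1([0.2, 0.5, 0.8], horizon=2000, rng=random.Random(0))
```

With a sufficiently long horizon, the pull counts concentrate on the best arm, which is the small-regret behaviour the distribution-free framework guarantees.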

