Robust Bayesian reinforcement learning through tight lower bounds

Computer Science – Learning

Scientific paper


Details

Corrected version. 12 pages, 3 figures, 1 table

In the Bayesian approach to sequential decision making, exact calculation of the (subjective) utility is intractable. This extends to most special cases of interest, such as reinforcement learning problems. While utility bounds are known to exist for this problem, so far none of them has been particularly tight. In this paper, we show how to efficiently calculate a lower bound that corresponds to the utility of a near-optimal memoryless policy for the decision problem. This policy is generally different from both the Bayes-optimal policy and the policy that is optimal for the expected MDP under the current belief. We then show how these bounds can be applied to obtain robust exploration policies in a Bayesian reinforcement learning setting.
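
To illustrate the core idea, below is a minimal Monte Carlo sketch, not the paper's algorithm: the expected utility of any fixed memoryless policy under the current belief is a lower bound on the Bayes-optimal utility, so evaluating one candidate policy against MDPs sampled from the posterior yields a lower-bound estimate. The product-of-Dirichlets belief over transitions, the known rewards, and the names evaluate_policy and utility_lower_bound are all assumptions made for illustration.

    import numpy as np

    def evaluate_policy(P, R, policy, gamma):
        # Exact evaluation of a memoryless (stationary) policy on one MDP.
        # P: (S, A, S) transition probabilities, R: (S, A) rewards,
        # policy: (S,) deterministic action per state.
        # Solves the Bellman equation V = r_pi + gamma * P_pi V directly.
        S = P.shape[0]
        idx = np.arange(S)
        P_pi = P[idx, policy]          # (S, S) transitions under the policy
        r_pi = R[idx, policy]          # (S,) rewards under the policy
        return np.linalg.solve(np.eye(S) - gamma * P_pi, r_pi)

    def utility_lower_bound(counts, R, policy, gamma, start,
                            n_samples=1000, seed=0):
        # Monte Carlo estimate of E_belief[V^pi(start)]: the expected
        # utility of the fixed memoryless policy under a product-of-
        # Dirichlets belief over transitions (rewards assumed known).
        # Since the policy is feasible, this lower-bounds the
        # Bayes-optimal utility under the same belief.
        rng = np.random.default_rng(seed)
        values = []
        for _ in range(n_samples):
            # Sample one MDP from the posterior: each (s, a) transition
            # row is Dirichlet(counts[s, a]), drawn via normalized Gammas.
            g = rng.gamma(counts)
            P = g / g.sum(axis=-1, keepdims=True)
            values.append(evaluate_policy(P, R, policy, gamma)[start])
        return float(np.mean(values))

For instance, with counts = np.ones((S, A, S)) (a uniform Dirichlet prior) and any candidate policy, the returned value estimates that policy's expected utility under the belief; maximizing this bound over memoryless policies, as the paper does approximately, is what tightens it.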


Profile ID: LFWR-SCP-O-465741
