Optimism in Reinforcement Learning and Kullback-Leibler Divergence

Computer Science – Learning

Scientific paper


Details

This work was accepted and presented at the 48th Annual Allerton Conference on Communication, Control, and Computing (Allerton 2010).

DOI: 10.1109/ALLERTON.2010.5706896

We consider model-based reinforcement learning in finite Markov Decision Processes (MDPs), focussing on so-called optimistic strategies. In MDPs, optimism can be implemented by carrying out extended value iterations under a constraint of consistency with the estimated model transition probabilities. The UCRL2 algorithm by Auer, Jaksch and Ortner (2009), which follows this strategy, has recently been shown to guarantee near-optimal regret bounds. In this paper, we strongly argue in favor of using the Kullback-Leibler (KL) divergence for this purpose. By studying the linear maximization problem under KL constraints, we provide an efficient algorithm, termed KL-UCRL, for solving KL-optimistic extended value iteration. Using recent deviation bounds on the KL divergence, we prove that KL-UCRL provides the same guarantees as UCRL2 in terms of regret. However, numerical experiments on classical benchmarks show a significantly improved behavior, particularly when the MDP has reduced connectivity. To support this observation, we provide elements of comparison between the two algorithms based on geometric considerations.
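
The core computational step named in the abstract is a linear maximization over a KL ball: given an estimated transition vector p_hat, a value vector v, and a radius eps, find the distribution q that maximizes q . v subject to KL(p_hat || q) <= eps. The paper derives an efficient specialized algorithm for this step; the Python sketch below instead hands the same program to a generic constrained solver, purely to make the optimization problem concrete. The function names (kl_optimistic_q, kl_divergence) and the test data are illustrative assumptions, not taken from the paper.

import numpy as np
from scipy.optimize import minimize


def kl_divergence(p, q, tiny=1e-12):
    """KL(p || q), with the convention 0 * log(0/q) = 0."""
    mask = p > 0
    return float(np.sum(p[mask] * np.log(p[mask] / np.maximum(q[mask], tiny))))


def kl_optimistic_q(p_hat, v, eps):
    """Most optimistic transition vector within KL radius eps of p_hat.

    Solves: maximize q . v  subject to  KL(p_hat || q) <= eps and
    q in the probability simplex. This sketch uses a generic SLSQP
    solver; the paper's own routine exploits the structure of the
    problem and is far more efficient.
    """
    n = len(p_hat)
    result = minimize(
        lambda q: -np.dot(q, v),            # maximize q . v
        x0=np.asarray(p_hat, dtype=float),  # the estimate itself is feasible (KL = 0)
        method="SLSQP",
        bounds=[(0.0, 1.0)] * n,
        constraints=[
            {"type": "eq", "fun": lambda q: np.sum(q) - 1.0},
            {"type": "ineq", "fun": lambda q: eps - kl_divergence(p_hat, q)},
        ],
    )
    return result.x


if __name__ == "__main__":
    p_hat = np.array([0.7, 0.2, 0.1])  # estimated transition probabilities
    v = np.array([0.0, 1.0, 5.0])      # value estimates of the successor states
    q = kl_optimistic_q(p_hat, v, eps=0.05)
    print(q, np.dot(q, v))             # optimism shifts mass toward the high-value state

One geometric property worth noting, and plausibly behind the abstract's closing remark: because KL(p_hat || q) blows up whenever q removes all mass from an observed transition, the optimistic q always keeps support on observed successor states, while it may still place mass on high-value unobserved ones; an L1 ball of the kind used in UCRL2 has no such asymmetry.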


