PAC Bounds for Discounted MDPs

Computer Science – Learning

Scientific paper


Details

25 LaTeX pages

We study upper and lower bounds on the sample complexity of learning near-optimal behaviour in finite-state discounted Markov Decision Processes (MDPs). For the upper bound we assume that each action leads to at most two possible next-states, and we prove a new bound on the number of time-steps at which a UCRL-style algorithm is not Probably Approximately Correct (PAC). The new lower bound strengthens previous work by being both more general (it applies to all policies) and tighter. The upper and lower bounds match up to logarithmic factors.
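For context, the PAC criterion referred to in the abstract can be sketched as follows. This is the standard PAC-MDP definition with assumed notation (the symbols below are not taken from the paper itself):

```latex
% Hedged sketch: standard PAC-MDP criterion (notation assumed, not from the paper).
% An algorithm following policy \pi_t at time-step t is PAC with accuracy
% \epsilon and confidence \delta if, with probability at least 1 - \delta,
% the number of time-steps on which it acts more than \epsilon sub-optimally
% is bounded:
\[
  \bigl|\{\, t : V^{\pi_t}(s_t) < V^{*}(s_t) - \epsilon \,\}\bigr|
  \;\le\; N(\epsilon, \delta),
\]
% where V^{*} is the optimal discounted value function and N(\epsilon, \delta)
% is the sample complexity, typically polynomial in the number of states and
% actions, 1/\epsilon, \log(1/\delta), and 1/(1-\gamma) for discount \gamma.
```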


