PAC Bounds for Discounted MDPs
Computer Science – Learning
Scientific paper
2012-02-17
25 LaTeX pages
We study upper and lower bounds on the sample-complexity of learning near-optimal behaviour in finite-state discounted Markov Decision Processes (MDPs). For the upper bound we make the assumption that each action leads to at most two possible next-states and prove a new bound for a UCRL-style algorithm on the number of time-steps when it is not Probably Approximately Correct (PAC). The new lower bound strengthens previous work by being both more general (it applies to all policies) and tighter. The upper and lower bounds match up to logarithmic factors.
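For readers unfamiliar with the PAC criterion referred to in the abstract, the condition can be written out as follows. This is a standard PAC-MDP formulation rather than a statement quoted from the paper; the symbols $s_t$, $\pi_t$, $V^{*}$ and the bound $F$ are illustrative notation, not the paper's own.

\[
\Pr\Bigl( \bigl|\{\, t \;:\; V^{\pi_t}(s_t) < V^{*}(s_t) - \epsilon \,\}\bigr| \;\le\; F(|S|, |A|, \epsilon, \delta, \gamma) \Bigr) \;\ge\; 1 - \delta ,
\]

where $s_t$ is the state visited at time $t$, $\pi_t$ is the policy the learning algorithm follows at that time, $V^{*}$ is the optimal discounted value function, and $F$ is the sample-complexity bound on the number of time-steps at which the algorithm is more than $\epsilon$ sub-optimal. The paper's upper and lower estimates of such a bound match up to logarithmic factors.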
Marcus Hutter
Tor Lattimore