Bounds on sample size for policy evaluation in Markov environments

Computer Science – Learning

Scientific paper


Details

14 pages


Reinforcement learning is the problem of finding an optimal course of action in a Markovian environment without knowledge of the environment's dynamics. The stochastic optimization algorithms used in the field rely on estimates of the value of a policy. Typically, the value of a policy is estimated from the results of simulating that very policy in the environment, an approach that requires a large amount of simulation as different points in the policy space are considered. In this paper, we develop value estimators that use data gathered while following one policy to estimate the value of another policy, yielding much more data-efficient algorithms. We consider the question of accumulating sufficient experience and give PAC-style bounds.
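The core idea of reusing experience from one policy to evaluate another can be illustrated with importance sampling: rewards observed under a behavior policy are reweighted by the likelihood ratio of the target policy. The sketch below is a hypothetical single-state, two-action example, not the paper's estimator or its bounds; all policy and reward values are invented for illustration.

```python
import random

random.seed(0)

# Hypothetical single-state, two-action environment (illustrative only).
mu = {0: 0.5, 1: 0.5}      # behavior policy: the policy used to gather data
pi = {0: 0.2, 1: 0.8}      # target policy: the policy we want to evaluate
reward = {0: 0.0, 1: 1.0}  # deterministic reward for each action

# True value of pi: sum over actions of pi(a) * r(a).
true_value = sum(pi[a] * reward[a] for a in pi)

# Gather experience under mu, then reweight each observed reward by
# the importance ratio pi(a)/mu(a) so the average estimates V(pi).
n = 100_000
estimate = 0.0
for _ in range(n):
    a = 0 if random.random() < mu[0] else 1  # act according to mu
    estimate += (pi[a] / mu[a]) * reward[a]
estimate /= n

print(true_value, estimate)  # estimate should be close to true_value
```

The reweighted average is an unbiased estimator of the target policy's value, which is why a single batch of experience can serve many points in policy space; the price is higher variance when the two policies differ sharply, which is the kind of effect PAC-style sample-size bounds must account for.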


Profile ID: LFWR-SCP-O-184622
