Learning from Scarce Experience

Computer Science – Artificial Intelligence

Scientific paper


Details

8 pages, 4 figures


Searching the space of policies directly for the optimal policy has been a popular method for solving partially observable reinforcement learning problems. Typically, each time the target policy changes, its value is estimated from the results of following that very policy, which requires a large number of interactions with the environment as different policies are considered. We present a family of algorithms based on likelihood-ratio estimation that use data gathered while executing one policy (or a collection of policies) to estimate the value of a different policy. The algorithms combine an estimation stage and an optimization stage: the former uses experience to build a non-parametric representation of the function being optimized, and the latter performs optimization on this estimate. We show positive empirical results and provide a sample complexity bound.
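The core idea of likelihood-ratio (importance sampling) estimation described in the abstract can be sketched on a toy problem. The snippet below is an illustrative assumption, not the paper's actual algorithm: it estimates the value of a target policy from samples drawn under a different behavior policy, weighting each observed reward by the ratio of the two policies' action probabilities. The two-armed bandit setup, policy probabilities, and function names are all hypothetical choices for the example.

```python
import random

def importance_sampling_value(samples, behavior_probs, target_probs):
    """Off-policy value estimate: weight each observed reward by the
    likelihood ratio target_prob / behavior_prob of the action taken."""
    total = 0.0
    for action, reward in samples:
        total += (target_probs[action] / behavior_probs[action]) * reward
    return total / len(samples)

# Hypothetical two-armed bandit: arm 0 pays 1, arm 1 pays 0 (deterministic).
behavior = {0: 0.5, 1: 0.5}   # policy used to gather experience
target   = {0: 0.8, 1: 0.2}   # policy whose value we want to estimate

random.seed(0)
samples = []
for _ in range(10000):
    a = 0 if random.random() < behavior[0] else 1
    r = 1.0 if a == 0 else 0.0
    samples.append((a, r))

# True value of the target policy is 0.8; the estimate converges to it
# without ever executing the target policy itself.
est = importance_sampling_value(samples, behavior, target)
```

This is exactly why such estimators suit scarce experience: one batch of trajectories from the behavior policy can be reused to evaluate many candidate target policies during optimization.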
