Temporal Difference Updating without a Learning Rate

Computer Science – Learning

Scientific paper

Details

12 pages, 6 figures

We derive an equation for temporal difference learning from statistical principles. Specifically, we start with the variational principle and then bootstrap to produce an updating rule for discounted state value estimates. The resulting equation is similar to the standard equation for temporal difference learning with eligibility traces, so-called TD(λ); however, it lacks the parameter α that specifies the learning rate. In place of this free parameter there is now an equation for the learning rate that is specific to each state transition. We experimentally test this new learning rule against TD(λ) and find that it offers superior performance in various settings. Finally, we make some preliminary investigations into how to extend our new temporal difference algorithm to reinforcement learning. To do this, we combine our update equation with both Watkins' Q(λ) and Sarsa(λ) and find that it again offers superior performance without a learning rate parameter.
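
For context, the sketch below shows standard tabular TD(λ) with accumulating eligibility traces, the baseline against which the abstract compares the new rule. It is a minimal illustration, not the paper's algorithm: the names (td_lambda, transitions, and so on) are invented for this example, and the fixed rate alpha is a stand-in at exactly the point where the paper derives a transition-specific learning rate, whose formula the abstract does not give and which is therefore not reproduced here.

    import numpy as np

    def td_lambda(transitions, n_states, gamma=0.9, lam=0.8, alpha=0.1):
        """Tabular TD(lambda) with accumulating eligibility traces.

        `transitions` is an iterable of (state, reward, next_state)
        tuples from a single trajectory. All names are illustrative;
        the paper replaces the fixed rate `alpha` with a derived,
        per-transition rate not shown here.
        """
        V = np.zeros(n_states)   # discounted state value estimates
        e = np.zeros(n_states)   # eligibility traces

        for s, r, s_next in transitions:
            delta = r + gamma * V[s_next] - V[s]  # TD error
            e[s] += 1.0                           # accumulate trace for s
            # The paper's variant would compute the rate per transition
            # here; a constant alpha is used as a placeholder.
            V += alpha * delta * e
            e *= gamma * lam                      # decay all traces
        return V

    # Example: evaluate a two-state chain on a short illustrative trajectory.
    trajectory = [(0, 0.0, 1), (1, 1.0, 0), (0, 0.0, 1), (1, 1.0, 0)]
    print(td_lambda(trajectory, n_states=2))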
