Computer Science – Learning
Scientific paper
2008-10-31
Advances in Neural Information Processing Systems 20 (NIPS 2008) pages 705-712
12 pages, 6 figures
We derive an equation for temporal difference learning from statistical principles. Specifically, we start with the variational principle and then bootstrap to produce an updating rule for discounted state value estimates. The resulting equation is similar to the standard equation for temporal difference learning with eligibility traces, the so-called TD(lambda); however, it lacks the parameter alpha that specifies the learning rate. In place of this free parameter there is now an equation for the learning rate that is specific to each state transition. We experimentally test this new learning rule against TD(lambda) and find that it offers superior performance in various settings. Finally, we make some preliminary investigations into how to extend our new temporal difference algorithm to reinforcement learning. To do this we combine our update equation with both Watkins' Q(lambda) and Sarsa(lambda) and find that it again offers superior performance without a learning rate parameter.
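For context, the baseline the abstract compares against is standard tabular TD(lambda) with accumulating eligibility traces, which does require a free learning-rate parameter alpha. The sketch below is a minimal illustration of that baseline, not the paper's learning-rate-free update rule (whose per-transition learning-rate equation is given in the paper itself); the function name, episode format, and default parameter values are illustrative assumptions.

```python
import numpy as np

def td_lambda(episodes, n_states, alpha=0.1, gamma=0.9, lam=0.8):
    """Tabular TD(lambda) with accumulating eligibility traces.

    episodes: iterable of episodes, each a list of (state, reward, next_state)
    transitions. Note the free parameter alpha, which the paper's
    derivation replaces with a per-transition learning-rate equation.
    """
    V = np.zeros(n_states)          # state value estimates
    for episode in episodes:
        e = np.zeros(n_states)      # eligibility traces, reset per episode
        for (s, r, s_next) in episode:
            delta = r + gamma * V[s_next] - V[s]  # TD error
            e[s] += 1.0             # accumulating trace for visited state
            V += alpha * delta * e  # update all states by trace-weighted error
            e *= gamma * lam        # decay all traces
    return V
```

On a simple two-state chain where state 0 yields reward 1 and transitions to a terminal state 1, repeated episodes drive `V[0]` toward 1, illustrating convergence of the baseline update.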
Marcus Hutter
Shane Legg