Temporal Second Difference Traces
Computer Science – Learning
Scientific paper
2011-04-24
Q-learning is a reliable but inefficient off-policy temporal-difference method, backing up reward only one step at a time. Replacing traces, which use a recency heuristic, are more efficient but less reliable. In this work, we introduce model-free, off-policy temporal-difference methods that make better use of experience than Watkins' Q(λ). We introduce both Optimistic Q(λ) and the temporal second difference trace (TSDT). TSDT is particularly powerful in deterministic domains. TSDT uses neither recency nor frequency heuristics, storing (s, a, r, s', δ) tuples so that off-policy updates can be performed after apparently suboptimal actions have been taken. There are additional advantages when using state abstraction, as in MAXQ. We demonstrate that TSDT does significantly better than both Q-learning and Watkins' Q(λ) in a deterministic cliff-walking domain. Results in a noisy cliff-walking domain are less advantageous for TSDT, but demonstrate the efficacy of Optimistic Q(λ), a replacing trace with some of the advantages of TSDT.
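To make the contrast concrete, below is a minimal sketch of the standard one-step Q-learning backup that the abstract characterizes as reliable but inefficient, extended with the (s, a, r, s', δ) storage that TSDT is said to keep. The environment interface (env.reset, env.step, env.actions) is a hypothetical stand-in, and how the stored tuples would later be replayed is not specified by the abstract; this is not the paper's implementation.

    import random
    from collections import defaultdict

    def q_learning_episode(env, Q, alpha=0.1, gamma=0.99, epsilon=0.1):
        """One episode of one-step Q-learning, recording (s, a, r, s', delta) tuples."""
        s = env.reset()
        stored = []  # TSDT-style storage of (s, a, r, s', delta) tuples
        done = False
        while not done:
            # epsilon-greedy behaviour policy (off-policy w.r.t. the greedy target)
            if random.random() < epsilon:
                a = random.choice(env.actions(s))
            else:
                a = max(env.actions(s), key=lambda x: Q[(s, x)])
            s_next, r, done = env.step(a)
            # one-step TD error toward the greedy (off-policy) target
            best_next = 0.0 if done else max(Q[(s_next, x)] for x in env.actions(s_next))
            delta = r + gamma * best_next - Q[(s, a)]
            Q[(s, a)] += alpha * delta
            # The abstract says TSDT stores (s, a, r, s', delta) so that off-policy
            # updates can still be made after apparently suboptimal actions; the
            # replay scheme for these tuples is not given here.
            stored.append((s, a, r, s_next, delta))
            s = s_next
        return stored

    # Usage sketch: Q = defaultdict(float); stored = q_learning_episode(env, Q)

The point of the stored tuples is that a later change to Q(s', ·) invalidates the TD error computed at the time of the transition; keeping the transition around allows the update to be redone, which the one-step method above cannot do.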