Temporal Second Difference Traces

Computer Science – Learning

Scientific paper


Details


Q-learning is a reliable but inefficient off-policy temporal-difference method, backing up reward only one step at a time. Replacing traces, using a recency heuristic, are more efficient but less reliable. In this work, we introduce model-free, off-policy temporal-difference methods that make better use of experience than Watkins' Q(λ). We introduce both Optimistic Q(λ) and the temporal second difference trace (TSDT). TSDT is particularly powerful in deterministic domains. TSDT uses neither recency nor frequency heuristics, storing (s, a, r, s', δ) so that off-policy updates can be performed after apparently suboptimal actions have been taken. There are additional advantages when using state abstraction, as in MAXQ. We demonstrate that TSDT does significantly better than both Q-learning and Watkins' Q(λ) in a deterministic cliff-walking domain. Results in a noisy cliff-walking domain are less advantageous for TSDT, but demonstrate the efficacy of Optimistic Q(λ), a replacing trace with some of the advantages of TSDT.
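
The abstract only outlines the methods, so the Python sketch below is illustrative rather than the paper's algorithm. It shows the standard one-step Q-learning backup the abstract refers to, plus a hypothetical TSDT-style memory of (s, a, r, s', δ) tuples that can be re-backed-up off-policy later; the function names, replay rule, and toy values are assumptions introduced here for clarity.

from collections import defaultdict

# Standard one-step Q-learning backup: the "reliable but inefficient" baseline
# the abstract describes, propagating each reward only one state back per update.
def q_learning_update(Q, s, a, r, s_next, actions, alpha=0.1, gamma=0.99):
    target = r + gamma * max(Q[(s_next, b)] for b in actions)
    delta = target - Q[(s, a)]            # temporal-difference error
    Q[(s, a)] += alpha * delta
    return delta

# Hypothetical TSDT-style memory, inferred only from the abstract: keep
# (s, a, r, s', delta) tuples so transitions can be re-backed-up off-policy
# later, even after apparently suboptimal actions have been taken.
# This replay rule is an assumption, not the paper's exact method.
def replay_stored_transitions(Q, memory, actions, alpha=0.1, gamma=0.99):
    for i, (s, a, r, s_next, _) in enumerate(memory):
        target = r + gamma * max(Q[(s_next, b)] for b in actions)
        delta = target - Q[(s, a)]
        Q[(s, a)] += alpha * delta
        memory[i] = (s, a, r, s_next, delta)   # refresh the stored TD error

# Example usage on a single made-up transition:
Q = defaultdict(float)
actions = [0, 1, 2, 3]
memory = []
d = q_learning_update(Q, s=(3, 0), a=1, r=-1.0, s_next=(3, 1), actions=actions)
memory.append(((3, 0), 1, -1.0, (3, 1), d))
replay_stored_transitions(Q, memory, actions)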


