Reducing Commitment to Tasks with Off-Policy Hierarchical Reinforcement Learning
Computer Science – Learning
Scientific paper
2011-04-27
In experimenting with off-policy temporal difference (TD) methods in hierarchical reinforcement learning (HRL) systems, we have observed unwanted on-policy learning under reproducible conditions. Here we present modifications to several TD methods that prevent unintentional on-policy learning from occurring. These modifications create a tension between exploration and learning. Traditional TD methods require commitment to finishing subtasks without exploration in order to update Q-values for early actions with high probability. One-step intra-option learning and temporal second difference traces (TSDT) do not suffer from this limitation. We demonstrate that our HRL system learns efficiently without commitment to the completion of subtasks in a cliff-walking domain, contrary to a widespread claim in the literature that such commitment is critical for learning efficiency. Furthermore, decreasing commitment as exploration progresses is shown to improve both online performance and the resulting policy in the taxicab domain, opening a new avenue for research into when it is more beneficial to continue with the current subtask or to replan.
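As background for the one-step intra-option learning the abstract mentions, the standard formulation (Sutton, Precup and Singh, 1999) updates every option whose policy is consistent with the executed primitive action after each step, so Q-values for early actions can improve even when a subtask is abandoned before termination. The Python sketch below illustrates only that standard update; the Option structure, function names, and parameters are illustrative assumptions, not the paper's implementation.

```python
# Minimal sketch of one-step intra-option Q-learning (standard formulation,
# not the paper's code). Every option consistent with the executed action is
# updated, so no commitment to finishing a subtask is required for learning.

from dataclasses import dataclass
from typing import Callable, Dict, Hashable, List, Tuple

State = Hashable
Action = Hashable


@dataclass
class Option:
    name: str
    policy: Callable[[State], Action]       # deterministic intra-option policy
    termination: Callable[[State], float]   # beta(s): probability of terminating in s


def intra_option_q_update(
    Q: Dict[Tuple[State, str], float],
    options: List[Option],
    s: State,
    a: Action,
    r: float,
    s_next: State,
    alpha: float = 0.1,
    gamma: float = 0.99,
) -> None:
    """Apply the one-step intra-option update to every option whose policy
    would have chosen the executed action `a` in state `s`."""
    # Value of acting greedily over options in the next state.
    greedy_next = max(Q.get((s_next, o.name), 0.0) for o in options)
    for o in options:
        if o.policy(s) != a:
            continue  # option is inconsistent with the executed action; skip it
        beta = o.termination(s_next)
        # U(s', o): continue with o (prob. 1 - beta) or terminate and switch
        # to the greedy option (prob. beta).
        u_next = (1.0 - beta) * Q.get((s_next, o.name), 0.0) + beta * greedy_next
        key = (s, o.name)
        q_old = Q.get(key, 0.0)
        Q[key] = q_old + alpha * (r + gamma * u_next - q_old)
```

Because this update is applied after every primitive transition to all consistent options, it remains off-policy with respect to subtask completion, which is the property the abstract contrasts with traditional TD methods that require committed subtask execution.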