State Abstraction in MAXQ Hierarchical Reinforcement Learning

Computer Science – Learning

Scientific paper

Details

7 pages, 2 figures

Many researchers have explored methods for hierarchical reinforcement learning (RL) with temporal abstractions, in which abstract actions are defined that can perform many primitive actions before terminating. However, little is known about learning with state abstractions, in which aspects of the state space are ignored. In previous work, we developed the MAXQ method for hierarchical RL. In this paper, we define five conditions under which state abstraction can be combined with the MAXQ value function decomposition. We prove that the MAXQ-Q learning algorithm converges under these conditions and show experimentally that state abstraction is important for the successful application of MAXQ-Q learning.
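
The abstract's two key objects can be illustrated concretely. The MAXQ decomposition writes the value of invoking subtask a inside task i as Q(i, s, a) = V(a, s) + C(i, s, a), where V is the value of the subtask itself and C is the "completion function" for the parent task. The Python sketch below shows this recursion together with the two MAXQ-Q update rules; the table layout, helper names, and hyperparameters are illustrative assumptions, and the full algorithm's pseudo-rewards and exploration policy are omitted.

    from collections import defaultdict

    # Illustrative tables (assumptions, not the paper's data structures).
    V = defaultdict(float)   # V[(action, state)]: values of primitive actions
    C = defaultdict(float)   # C[(task, state, subtask)]: completion functions

    def q_value(task, state, subtask, children):
        """MAXQ decomposition: Q(i, s, a) = V(a, s) + C(i, s, a)."""
        return value(subtask, state, children) + C[(task, state, subtask)]

    def value(task, state, children):
        """V(i, s): stored directly for primitive actions, computed
        recursively as max_a Q(i, s, a) for composite tasks."""
        if not children.get(task):  # primitive action: no child subtasks
            return V[(task, state)]
        return max(q_value(task, state, a, children) for a in children[task])

    def update_primitive(action, state, reward, alpha=0.1):
        # Primitive actions learn their expected one-step reward:
        # V(a, s) <- (1 - alpha) * V(a, s) + alpha * r
        V[(action, state)] = (1 - alpha) * V[(action, state)] + alpha * reward

    def update_completion(task, state, subtask, next_state, n_steps,
                          children, alpha=0.1, gamma=0.95):
        # After subtask a runs for N steps inside task i, update
        # C(i, s, a) <- (1 - alpha) * C(i, s, a) + alpha * gamma^N * V(i, s').
        target = (gamma ** n_steps) * value(task, next_state, children)
        C[(task, state, subtask)] = ((1 - alpha) * C[(task, state, subtask)]
                                     + alpha * target)

State abstraction, the paper's subject, would amount to keying these tables on a reduced projection of the state rather than the full state: a node that provably does not depend on certain state variables (the kind of condition the paper formalizes) can drop them from its table keys, shrinking the tables that MAXQ-Q must learn.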
