Approximate Policy Iteration with a Policy Language Bias: Solving Relational Markov Decision Processes

Computer Science – Artificial Intelligence

Scientific paper



DOI: 10.1613/jair.1700

We study an approach to policy selection for large relational Markov Decision Processes (MDPs). We consider a variant of approximate policy iteration (API) that replaces the usual value-function learning step with a learning step in policy space. This is advantageous in domains where good policies are easier to represent and learn than the corresponding value functions, which is often the case for the relational MDPs we are interested in. In order to apply API to such problems, we introduce a relational policy language and corresponding learner. In addition, we introduce a new bootstrapping routine for goal-based planning domains, based on random walks. Such bootstrapping is necessary for many large relational MDPs, where reward is extremely sparse, as API is ineffective in such domains when initialized with an uninformed policy. Our experiments show that the resulting system is able to find good policies for a number of classical planning domains and their stochastic variants by solving them as extremely large relational MDPs. The experiments also point to some limitations of our approach, suggesting future work.
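
To make the procedure described above concrete, the following is a minimal Python sketch of an API loop that learns in policy space rather than value-function space, together with a random-walk bootstrapping helper for goal-based domains. It is an illustration under assumed interfaces only: the MDP methods (`mdp.step`, `mdp.actions`, `mdp.sample_states`, `mdp.gamma`), the `learner` object, and `random_walk_problem` are hypothetical placeholders, not the authors' implementation, and the relational policy language and learner are abstracted behind `learner.fit`.

```python
import random


def rollout_q(mdp, state, action, policy, horizon=20, width=10):
    """Estimate Q(state, action) by sampling `width` trajectories that take
    `action` first and then follow `policy` for `horizon` steps."""
    total = 0.0
    for _ in range(width):
        s, r = mdp.step(state, action)       # take the candidate action first
        ret, discount = r, 1.0
        for _ in range(horizon):
            discount *= mdp.gamma
            s, r = mdp.step(s, policy(s))    # then follow the current policy
            ret += discount * r
        total += ret
    return total / width


def approximate_policy_iteration(mdp, learner, policy,
                                 iterations=10, n_states=100):
    """API variant with a learning step in policy space: label sampled states
    with the greedy rollout action and fit a policy learner (e.g. a relational
    decision-list learner) to the (state, action) examples, instead of
    fitting a value function."""
    for _ in range(iterations):
        examples = []
        for state in mdp.sample_states(policy, n_states):
            best = max(mdp.actions(state),
                       key=lambda a: rollout_q(mdp, state, a, policy))
            examples.append((state, best))
        policy = learner.fit(examples)       # learning step in policy space
    return policy


def random_walk_problem(mdp, start, walk_length):
    """Hypothetical bootstrapping helper for goal-based domains: random-walk
    `walk_length` steps from `start` and treat the endpoint as the goal,
    yielding easy training problems for the early iterations."""
    s = start
    for _ in range(walk_length):
        s, _ = mdp.step(s, random.choice(mdp.actions(s)))
    return start, s                          # (initial state, goal state)
```

In a sparse-reward setting, a bootstrapping routine along these lines would supply the first training problems; a natural refinement, consistent with the abstract's emphasis on sparse reward, is to lengthen the random walk as the learned policy improves so that the training problems grow progressively harder.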


