Dynamic Policy Programming

Computer Science – Learning

Scientific paper

Details

Submitted to Journal of Machine Learning Research

In this paper, we propose a novel policy iteration method, called dynamic policy programming (DPP), to estimate the optimal policy in infinite-horizon Markov decision processes. We prove finite-iteration and asymptotic ℓ∞-norm performance-loss bounds for DPP in the presence of approximation/estimation error. The bounds are expressed in terms of the ℓ∞-norm of the average accumulated error, as opposed to the ℓ∞-norm of the per-iteration error in the case of standard approximate value iteration (AVI) and approximate policy iteration (API). This suggests that DPP can achieve better performance than AVI and API, since it averages out the simulation noise caused by Monte-Carlo sampling throughout the learning process. We examine these theoretical results numerically by comparing the performance of the approximate variants of DPP with existing reinforcement learning (RL) methods on different problem domains. Our results show that, in all cases, DPP-based algorithms outperform the other RL methods by a wide margin.
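
Below is a minimal, illustrative sketch of a DPP-style update on a tabular MDP, assuming action preferences are updated with a Boltzmann-weighted average of the preferences (a soft-max operator) as described for this family of methods. The toy two-state MDP, the parameter values, and all variable names are hypothetical; the paper itself should be consulted for the exact operator and its analysis.

```python
# Illustrative DPP-style preference iteration on a small tabular MDP (not the authors' code).
import numpy as np

def boltzmann_average(psi_s, eta):
    """Boltzmann-weighted average of action preferences for one state."""
    w = np.exp(eta * (psi_s - psi_s.max()))  # shift by the max for numerical stability
    w /= w.sum()
    return float(w @ psi_s)

def dpp_sweep(P, r, psi, gamma=0.95, eta=1.0):
    """One synchronous sweep of a DPP-style update over preferences psi[s, a].

    P: transition tensor of shape (S, A, S); r: reward table of shape (S, A).
    """
    S, _ = r.shape
    m = np.array([boltzmann_average(psi[s], eta) for s in range(S)])  # soft value of each state
    next_m = P @ m  # expected soft value of the successor state, shape (S, A)
    return psi - m[:, None] + r + gamma * next_m

# Hypothetical two-state, two-action MDP; numbers are purely illustrative.
P = np.array([[[0.9, 0.1], [0.2, 0.8]],
              [[0.5, 0.5], [0.1, 0.9]]])
r = np.array([[1.0, 0.0],
              [0.0, 2.0]])
eta = 1.0
psi = np.zeros((2, 2))
for _ in range(200):
    psi = dpp_sweep(P, r, psi, gamma=0.95, eta=eta)

# The induced policy is the softmax of the action preferences.
policy = np.exp(eta * (psi - psi.max(axis=1, keepdims=True)))
policy /= policy.sum(axis=1, keepdims=True)
print(policy)
```

In this sketch the preferences of suboptimal actions drift downward over iterations, so the softmax policy concentrates on the better action in each state; the averaging of errors discussed in the abstract would enter through noisy estimates of the reward and transition terms in `dpp_sweep`.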
