Computer Science – Learning
Scientific paper
2011-07-22
This paper presents specific examples of divergence under value iteration for several major Reinforcement Learning and Adaptive Dynamic Programming algorithms when a function approximator is used for the value function. These examples differ from previous divergence examples in the literature in that they apply under a greedy policy, i.e. in a "value iteration" scenario. Perhaps surprisingly, divergence is possible under a greedy policy even for the algorithms TD(1) and Sarsa(1). In addition, we demonstrate divergence for the Adaptive Dynamic Programming algorithms HDP, DHP and GDHP.
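To make the setting concrete, the following is a minimal sketch of value iteration with a linear function approximator and a greedy policy, the combination the abstract describes. The two-state MDP, feature map, discount factor, and step size are illustrative assumptions of this sketch, not the paper's divergence examples (which are constructed specifically so that this kind of update diverges).

```python
import numpy as np

# Illustrative sketch: TD(0)-style value iteration with a linear
# approximator V(s) = w . phi(s) and a greedy one-step-lookahead policy.
# The toy MDP, features, gamma and alpha below are assumptions for
# illustration only; they are NOT the paper's divergence examples.

def phi(s):
    """Hand-chosen feature vector for state s (assumed for this sketch)."""
    return np.array([1.0, float(s)])

def model(s, a):
    """Deterministic toy transition model: returns (reward, next state)."""
    s_next = (s + a) % 2
    return float(s == 0), s_next

def greedy_action(s, w, actions, gamma):
    """Pick the action maximising the one-step lookahead r + gamma*V(s')."""
    def q(a):
        r, s_next = model(s, a)
        return r + gamma * (w @ phi(s_next))
    return max(actions, key=q)

def td0_step(w, s, gamma=0.9, alpha=0.1, actions=(0, 1)):
    """One TD(0) update along the greedy policy; returns (new w, next s)."""
    a = greedy_action(s, w, actions, gamma)
    r, s_next = model(s, a)
    delta = r + gamma * (w @ phi(s_next)) - (w @ phi(s))  # TD error
    return w + alpha * delta * phi(s), s_next

w, s = np.zeros(2), 0
for _ in range(50):
    w, s = td0_step(w, s)
```

On this benign toy MDP the weights settle down; the paper's contribution is exhibiting MDPs and function approximators for which this same greedy-policy update scheme, and its TD(1), Sarsa(1), HDP, DHP and GDHP counterparts, diverges.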
Michael Fairbank
Eduardo Alonso