Reinforcement Learning with Linear Function Approximation and LQ control Converges

Computer Science – Learning

Scientific paper

Details

9 pages

Reinforcement learning is commonly used with function approximation. However, very few positive results are known about the convergence of function-approximation-based RL control algorithms. In this paper we show that TD(0) and Sarsa(0) with linear function approximation are convergent for a simple class of problems, where the system is linear and the costs are quadratic (the LQ control problem). Furthermore, we show that for systems with Gaussian noise and partially observable states (the LQG problem), these RL algorithms remain convergent when combined with Kalman filtering.

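As a rough illustration of the setting described in the abstract (not code from the paper itself), the sketch below runs TD(0) policy evaluation with linear function approximation on a small discounted LQ problem: a fixed linear policy u = -Kx acts on linear dynamics with quadratic costs, and the value function is approximated as V(x) ≈ θᵀ vec(xxᵀ). All numerical choices (A, B, Q, R, K, step size, discount factor, noise level) are illustrative assumptions.

import numpy as np

# Minimal sketch: TD(0) with linear (quadratic-in-state) features on an LQ problem.
# All matrices and constants below are illustrative assumptions, not values from the paper.

rng = np.random.default_rng(0)

A = np.array([[1.0, 0.1],
              [0.0, 1.0]])       # linear dynamics: x' = A x + B u (+ noise)
B = np.array([[0.0],
              [0.1]])
Q = np.eye(2)                    # quadratic state cost  x^T Q x
R = np.array([[0.1]])            # quadratic control cost u^T R u
K = np.array([[1.0, 1.5]])       # fixed stabilising linear policy u = -K x
gamma = 0.95                     # discount factor
alpha = 1e-3                     # TD(0) step size

def features(x):
    # Quadratic features phi(x) = vec(x x^T), so V(x) ~ theta . phi(x) = x^T P x
    return np.outer(x, x).ravel()

theta = np.zeros(4)              # weights of the linear value approximation

x = rng.normal(size=2)
for t in range(50000):
    u = -K @ x                                          # act with the fixed policy
    cost = x @ Q @ x + u @ R @ u                        # quadratic stage cost
    x_next = A @ x + B @ u + 0.01 * rng.normal(size=2)  # linear system with Gaussian noise
    # TD(0) update of the linear value estimate for the current policy
    delta = cost + gamma * theta @ features(x_next) - theta @ features(x)
    theta += alpha * delta * features(x)
    x = x_next

P = theta.reshape(2, 2)          # recovered value matrix, V(x) ~ x^T P x
print("learned value matrix P:\n", P)

For the partially observable LQG case discussed in the abstract, the state x fed to these updates would be replaced by a Kalman-filter estimate of the state.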

Profile ID: LFWR-SCP-O-154114
