Convergence Rate of Stochastic Gradient Search in the Case of Multiple and Non-Isolated Minima

Mathematics – Optimization and Control

Scientific paper


The convergence rate of stochastic gradient search is analyzed in this paper. Using arguments based on differential geometry and Łojasiewicz inequalities, tight bounds on the convergence rate of general stochastic gradient algorithms are derived. In contrast to existing results, the results presented in this paper allow the objective function to have multiple, non-isolated minima, impose no restrictions on the values of the Hessian of the objective function, and do not require the algorithm estimates to have a single limit point. Applying these new results, the convergence rate of recursive prediction error identification algorithms is studied. The convergence rates of supervised and temporal-difference learning algorithms are also analyzed using the results derived in the paper.
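For context, stochastic gradient search refers to recursions of the form θ_{n+1} = θ_n − γ_n(∇f(θ_n) + ξ_n), where f is the objective function, {γ_n} is a diminishing step-size sequence, and {ξ_n} is zero-mean gradient noise. The minimal Python sketch below illustrates the setting the abstract emphasizes, an objective with multiple, non-isolated minima. The toy objective f(θ) = (‖θ‖² − 1)²/4 (whose minimum set is the whole unit circle and whose Hessian is singular there), the step sizes, and the noise model are all illustrative assumptions, not taken from the paper.

    import numpy as np

    rng = np.random.default_rng(0)

    def grad_f(theta):
        # Gradient of the assumed toy objective f(theta) = (||theta||^2 - 1)^2 / 4.
        # Its minimum set is the entire unit circle: multiple, non-isolated minima,
        # with a singular Hessian on that set.
        return (theta @ theta - 1.0) * theta

    theta = np.array([0.5, 0.2])                  # initial estimate
    for n in range(1, 100001):
        gamma = 1.0 / n                           # steps: sum gamma_n = inf, sum gamma_n^2 < inf
        xi = 0.1 * rng.standard_normal(2)         # zero-mean gradient noise
        theta = theta - gamma * (grad_f(theta) + xi)

    print(theta, np.linalg.norm(theta))           # the norm approaches 1 (the minimum set)

Because every point of the unit circle is a minimum, the estimates need not settle on a single limit point; they approach the minimum set as a whole, which is the regime the paper's bounds are designed to cover.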


