Mathematics – Optimization and Control
Scientific paper
2009-07-06
The asymptotic behavior of stochastic gradient algorithms is studied. Relying on results from differential geometry (the Łojasiewicz gradient inequality), the single limit-point convergence of the algorithm iterates is demonstrated and relatively tight bounds on the convergence rate are derived. In sharp contrast to existing asymptotic results, the new results presented here do not require the objective function to have an isolated minimum at which it is strongly convex in an open vicinity. On the contrary, these new results allow the objective function to have multiple, non-isolated minima. They also offer new insights into the asymptotic properties of several classes of recursive algorithms routinely used in machine learning, statistics, engineering and operations research.
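The setting the abstract describes can be illustrated with a small sketch (not the paper's algorithm, just an assumed toy instance): stochastic gradient descent on f(x, y) = (x² + y² − 1)², whose minimum set is the entire unit circle, so every minimizer is non-isolated. With diminishing step sizes and additive gradient noise, the iterates still settle toward a single point on the circle. All names and parameter values below are illustrative assumptions.

```python
import math
import random

def grad_f(x, y):
    # Gradient of f(x, y) = (x^2 + y^2 - 1)^2.
    # The minimum set {f = 0} is the whole unit circle: no minimum is isolated.
    c = 4.0 * (x * x + y * y - 1.0)
    return c * x, c * y

def sgd(x, y, n_steps=20000, a=0.1, noise=0.05, seed=0):
    # Stochastic gradient descent with step sizes a/n (divergent sum,
    # square-summable) and i.i.d. Gaussian noise added to each gradient.
    rng = random.Random(seed)
    for n in range(1, n_steps + 1):
        gx, gy = grad_f(x, y)
        step = a / n
        x -= step * (gx + noise * rng.gauss(0.0, 1.0))
        y -= step * (gy + noise * rng.gauss(0.0, 1.0))
    return x, y

x, y = sgd(1.5, 0.0)
print(math.hypot(x, y))  # radius drifts toward 1, i.e. toward the minimum set
```

Which point on the circle the iterates approach depends on the initial condition and the noise realization; the single-limit-point behavior (rather than wandering along the connected minimum set) is exactly what the Łojasiewicz-inequality argument rules in.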
Paper: Convergence and Convergence Rate of Stochastic Gradient Search in the Case of Multiple and Non-Isolated Extrema