A Stochastic Gradient Method with an Exponential Convergence Rate for Strongly-Convex Optimization with Finite Training Sets
Mathematics – Optimization and Control
Scientific paper
2012-02-28
We propose a new stochastic gradient method for optimizing the sum of a finite set of smooth functions, where the sum is strongly convex. While standard stochastic gradient methods converge at sublinear rates for this problem, the proposed method incorporates a memory of previous gradient values in order to achieve a linear convergence rate. Numerical experiments indicate that the new algorithm can dramatically outperform standard algorithms.
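The abstract describes the key idea: keep a stored gradient for each training example and step along the average of the stored values, refreshing one entry per iteration. The paper itself gives the precise algorithm and step sizes; the following is only a minimal sketch of that gradient-memory idea on an illustrative ridge-regularized least-squares problem (a strongly convex finite sum). The problem data, step-size choice, and variable names are assumptions for the sketch, not the paper's exact formulation.

```python
import numpy as np

# Illustrative strongly convex finite sum:
# minimize (1/n) * sum_i [ 0.5*(a_i . x - b_i)^2 + 0.5*lam*||x||^2 ]
# (synthetic data; sizes and seed are arbitrary assumptions)
rng = np.random.default_rng(0)
n, d = 50, 4
A = rng.standard_normal((n, d))
b = rng.standard_normal(n)
lam = 0.1                                   # ridge term ensures strong convexity

def grad_i(x, i):
    """Gradient of the i-th summand f_i at x."""
    return A[i] * (A[i] @ x - b[i]) + lam * x

# Conservative step size based on a Lipschitz-constant estimate
# (a 1/(16*L) scaling is one common choice; assumed here for the sketch).
L_max = float(np.max(np.sum(A**2, axis=1))) + lam
step = 1.0 / (16.0 * L_max)

x = np.zeros(d)
memory = np.zeros((n, d))                   # one stored gradient per example
g_sum = np.zeros(d)                         # running sum of stored gradients

for _ in range(20000):
    i = rng.integers(n)                     # sample one example uniformly
    g_new = grad_i(x, i)
    g_sum += g_new - memory[i]              # swap in the fresh gradient for example i
    memory[i] = g_new
    x -= step * g_sum / n                   # step along the average stored gradient

# Closed-form solution of the same ridge problem, for comparison.
x_star = np.linalg.solve(A.T @ A / n + lam * np.eye(d), A.T @ b / n)
err = float(np.linalg.norm(x - x_star))
```

The cost per iteration is a single gradient evaluation, the same as plain stochastic gradient descent, with O(n·d) extra memory for the stored gradients; on this small instance the iterate ends up close to the closed-form solution.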
Nicolas Le Roux
Mark Schmidt
Francis Bach