Computer Science – Learning
Scientific paper
2011-06-22
Mini-batch algorithms have been proposed as a way to speed up stochastic convex optimization. We study how such algorithms can be improved using accelerated gradient methods. We provide a novel analysis showing that standard gradient methods may sometimes be insufficient to obtain a significant speed-up, and we propose a novel accelerated gradient algorithm that remedies this deficiency, enjoys a uniformly superior guarantee, and works well in practice.
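To make the setting concrete, the following is a minimal sketch of applying Nesterov-style acceleration to mini-batch stochastic gradients on a convex problem (here, least squares). It is a generic illustration of the accelerated mini-batch idea, not the specific algorithm or analysis proposed in the paper; all function names, the synthetic problem, and the step-size/momentum values are assumptions for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic least-squares instance: min_x (1/2n) * ||A x - b||^2
n, d, batch = 200, 10, 20
A = rng.standard_normal((n, d))
x_true = rng.standard_normal(d)
b = A @ x_true + 0.01 * rng.standard_normal(n)

def minibatch_grad(x):
    # Stochastic gradient estimated on a random mini-batch of size `batch`.
    idx = rng.choice(n, size=batch, replace=False)
    Ab, bb = A[idx], b[idx]
    return Ab.T @ (Ab @ x - bb) / batch

def nesterov_minibatch(x0, lr=0.05, momentum=0.9, steps=300):
    # Nesterov acceleration applied to mini-batch gradients: the
    # gradient is evaluated at a momentum "look-ahead" point rather
    # than at the current iterate.
    x, v = x0.copy(), np.zeros_like(x0)
    for _ in range(steps):
        g = minibatch_grad(x + momentum * v)
        v = momentum * v - lr * g
        x = x + v
    return x

def loss(x):
    r = A @ x - b
    return 0.5 * float(r @ r) / n

x0 = np.zeros(d)
x_hat = nesterov_minibatch(x0)
print(loss(x0), loss(x_hat))
```

With larger mini-batches the gradient estimate has lower variance, which is what lets accelerated updates of this kind use more aggressive steps than single-example SGD.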
Andrew Cotter
Ohad Shamir
Nathan Srebro
Karthik Sridharan