Optimal Distributed Online Prediction using Mini-Batches
Computer Science – Learning
Scientific paper
2010-12-07
Final version of paper to appear in Journal of Machine Learning Research (JMLR)
Online prediction methods are typically presented as serial algorithms running on a single processor. However, in the age of web-scale prediction problems, it is increasingly common to encounter situations where a single processor cannot keep up with the high rate at which inputs arrive. In this work, we present the distributed mini-batch algorithm, a method of converting many serial gradient-based online prediction algorithms into distributed algorithms. We prove a regret bound for this method that is asymptotically optimal for smooth convex loss functions and stochastic inputs. Moreover, our analysis explicitly takes into account communication latencies between nodes in the distributed environment. We show how our method can be used to solve the closely related distributed stochastic optimization problem, achieving an asymptotically linear speed-up over multiple processors. Finally, we demonstrate the merits of our approach on a web-scale online prediction problem.
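As a rough illustration of the mini-batch idea described in the abstract (not the paper's exact algorithm, regret analysis, or latency model), the following Python sketch simulates several workers accumulating gradients over a shared batch of stochastic inputs before a single synchronized averaged update is applied. All function names, parameters, and the toy least-squares loss are assumptions chosen for illustration.

```python
import numpy as np

def distributed_mini_batch_sgd(sample_input, grad_loss, dim, num_workers=4,
                               batch_per_worker=25, rounds=200, step_size=0.1):
    """Serial simulation of the mini-batch scheme: each of k workers processes
    b/k inputs, their local gradient sums are aggregated, and one averaged
    gradient step is taken per round (communication latency is not modeled)."""
    w = np.zeros(dim)
    for _ in range(rounds):
        grad_sum = np.zeros(dim)
        for _ in range(num_workers):            # in a real deployment these run in parallel
            for _ in range(batch_per_worker):
                x, y = sample_input()           # stochastic input arrives
                grad_sum += grad_loss(w, x, y)  # local accumulation, no update yet
        # single synchronized update with the averaged mini-batch gradient
        w -= step_size * grad_sum / (num_workers * batch_per_worker)
    return w

# Toy usage (hypothetical data source): least-squares prediction of a noisy linear target.
rng = np.random.default_rng(0)
w_true = np.array([1.0, -2.0, 0.5])

def sample_input():
    x = rng.normal(size=3)
    return x, x @ w_true + 0.1 * rng.normal()

def grad_loss(w, x, y):
    return (w @ x - y) * x  # gradient of the smooth loss 0.5 * (w @ x - y)**2

w_hat = distributed_mini_batch_sgd(sample_input, grad_loss, dim=3)
print(w_hat)  # should approach w_true as rounds increase
```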
Ofer Dekel, Ran Gilad-Bachrach, Ohad Shamir, Lin Xiao