Distributed Delayed Stochastic Optimization

Mathematics – Optimization and Control

Scientific paper


Details

27 pages, 4 figures

We analyze the convergence of gradient-based optimization algorithms that base their updates on delayed stochastic gradient information. The main application of our results is to the development of gradient-based distributed optimization algorithms in which a master node performs parameter updates while worker nodes compute stochastic gradients in parallel based on local information; the resulting asynchrony may give rise to delays. We take motivation from statistical problems where the data are too large to fit on a single computer; with the advent of huge datasets in biology, astronomy, and the internet, such problems are now common. Our main contribution is to show that for smooth stochastic problems the delays are asymptotically negligible and order-optimal convergence rates can be achieved. In application to distributed optimization, we develop procedures that overcome communication bottlenecks and synchronization requirements. We exhibit $n$-node architectures whose optimization error in stochastic problems, in spite of asynchronous delays, scales asymptotically as $\mathcal{O}(1/\sqrt{nT})$ after $T$ iterations; this rate is known to be optimal for a distributed system with $n$ nodes even in the absence of delays. We complement our theoretical results with numerical experiments on a statistical machine learning task.
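
As a rough illustration of the master-worker setup the abstract describes, the Python sketch below simulates delayed stochastic gradient descent on a simple quadratic objective: workers compute noisy gradients at the current iterate, and the master only applies each batch of gradients a fixed number of rounds later, with diminishing step sizes proportional to $1/\sqrt{nt}$. The quadratic objective, the fixed delay, the averaging across workers, and the step-size choice are illustrative assumptions made for this sketch, not the paper's exact algorithm or analysis.

```python
# Minimal simulation of master-worker delayed stochastic gradient descent.
# All names, the quadratic objective, the fixed delay, and the 1/sqrt(n*t)
# step sizes are illustrative assumptions, not the paper's exact procedure.
import numpy as np
from collections import deque

rng = np.random.default_rng(0)

dim, n_workers, delay, T = 10, 4, 3, 5000
x_star = rng.normal(size=dim)          # minimizer of the expected objective

def stochastic_gradient(x):
    """Noisy gradient of f(x) = 0.5 * ||x - x_star||^2."""
    return (x - x_star) + 0.1 * rng.normal(size=dim)

x = np.zeros(dim)                      # master's parameter vector
pending = deque()                      # batches of gradients still "in flight"

for t in range(1, T + 1):
    # Each worker evaluates a stochastic gradient at the current iterate;
    # because of asynchrony, the master receives this batch `delay` rounds later.
    pending.append([stochastic_gradient(x) for _ in range(n_workers)])

    if len(pending) > delay:
        # Apply the stale batch computed `delay` rounds ago, averaged across
        # workers, with a diminishing step size.
        stale_batch = pending.popleft()
        step = 1.0 / np.sqrt(n_workers * t)
        x -= step * np.mean(stale_batch, axis=0)

print("final error:", np.linalg.norm(x - x_star))
```

Averaging the $n$ workers' gradients before the master's update mirrors the structure that, in the paper's setting, yields the $\mathcal{O}(1/\sqrt{nT})$ scaling despite stale information; here it is included only to reflect that structure, not to reproduce the theoretical guarantee.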
