Randomized Smoothing for Stochastic Optimization

Mathematics – Optimization and Control

Scientific paper


Details

39 pages, 3 figures


We analyze convergence rates of stochastic optimization procedures for non-smooth convex optimization problems. By combining randomized smoothing techniques with accelerated gradient methods, we obtain convergence rates, both in expectation and with high probability, that depend optimally on the variance of the gradient estimates. To the best of our knowledge, these are the first variance-based rates for non-smooth optimization. We give several applications of our results to statistical estimation problems and provide experimental results demonstrating the effectiveness of the proposed algorithms. We also describe how combining our algorithm with recent work on decentralized optimization yields an order-optimal distributed stochastic optimization algorithm.
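To illustrate the randomized smoothing idea from the abstract, the sketch below minimizes the non-smooth function f(x) = |x| by running plain stochastic gradient descent on its Gaussian-smoothed surrogate f_mu(x) = E[f(x + mu*Z)]. This is a minimal one-dimensional illustration under assumed parameter choices, not the paper's actual algorithm, which pairs such smoothed gradient estimates with accelerated gradient methods and treats general convex problems.

```python
import random

def smoothed_grad(f, x, mu, n_samples, rng):
    # Two-point Monte Carlo estimate of the gradient of the smoothed
    # surrogate f_mu(x) = E[f(x + mu*Z)], Z ~ N(0, 1), using the
    # score-function identity  grad f_mu(x) = E[(f(x + mu*Z) - f(x)) * Z / mu].
    # Averaging i.i.d. samples gives an unbiased stochastic gradient
    # whose variance shrinks as n_samples grows.
    total = 0.0
    for _ in range(n_samples):
        z = rng.gauss(0.0, 1.0)
        total += (f(x + mu * z) - f(x)) * z / mu
    return total / n_samples

def minimize_smoothed(f, x0, steps=300, mu=0.05, lr=0.05, seed=0):
    # Plain SGD on the smoothed surrogate; the paper instead combines
    # these gradient estimates with accelerated gradient updates.
    rng = random.Random(seed)
    x = x0
    for _ in range(steps):
        x -= lr * smoothed_grad(f, x, mu, 50, rng)
    return x

# Minimize the non-smooth f(x) = |x| starting from x0 = 3.0;
# the iterates settle near the minimizer x = 0.
x_star = minimize_smoothed(abs, x0=3.0)
```

In practice the smoothing parameter mu trades off bias against variance: a larger mu makes f_mu smoother (easier to optimize) but a worse approximation of f, which is exactly the tension the paper's rates quantify.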

