Towards Optimal One Pass Large Scale Learning with Averaged Stochastic Gradient Descent

Computer Science – Learning

Scientific paper

For large-scale learning problems, it is desirable to obtain the optimal model parameters in a single pass over the data. Polyak and Juditsky (1992) showed that, asymptotically, the test performance of the simple average of the parameters produced by stochastic gradient descent (SGD) is as good as that of the parameters that minimize the empirical cost. However, despite its optimal asymptotic convergence rate, averaged SGD (ASGD) has, to our knowledge, received little attention in recent research on large-scale learning. One possible reason is that, for most real problems, ASGD may need a prohibitively large number of training samples to reach its asymptotic region. In this paper, we present a finite-sample analysis of the method of Polyak and Juditsky (1992). Our analysis shows that, with an improperly chosen learning rate, ASGD indeed typically requires a huge number of samples to reach its asymptotic region. More importantly, based on our analysis, we propose a simple way to set the learning rate so that ASGD reaches its asymptotic region with a reasonable amount of data. We compare ASGD with our proposed learning rate against other well-known algorithms for training large-scale linear classifiers. The experiments clearly show the superiority of ASGD.
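To make the idea concrete, below is a minimal sketch of one-pass ASGD for an L2-regularized logistic-regression classifier. The Polyak-Juditsky averaging step is exactly as the abstract describes (report the running average of the SGD iterates rather than the last iterate); the specific decaying schedule eta_t = eta0 / (1 + lam * eta0 * t)^0.75 and all constants are illustrative assumptions in the spirit of commonly used ASGD rates, not the tuned rate proposed in the paper.

```python
import numpy as np

def asgd_logistic(X, y, eta0=1.0, lam=1e-4, power=0.75):
    """One pass of averaged SGD for L2-regularized logistic regression.

    X : (n, d) array of examples, y : (n,) array of labels in {-1, +1}.
    Returns the Polyak-Juditsky average of the SGD iterates.
    The schedule eta_t = eta0 / (1 + lam*eta0*t)**power is an illustrative
    choice, not the learning rate derived in the paper.
    """
    n, d = X.shape
    w = np.zeros(d)       # current SGD iterate
    w_bar = np.zeros(d)   # running average of all iterates so far
    for t in range(n):
        eta = eta0 / (1.0 + lam * eta0 * t) ** power
        margin = y[t] * X[t].dot(w)
        # gradient of log(1 + exp(-margin)) plus the L2 penalty lam/2 * ||w||^2
        grad = -y[t] * X[t] / (1.0 + np.exp(margin)) + lam * w
        w -= eta * grad
        # incremental Polyak-Juditsky average: w_bar = mean(w_1, ..., w_t)
        w_bar += (w - w_bar) / (t + 1)
    return w_bar

# Usage on synthetic data (hypothetical example):
rng = np.random.default_rng(0)
X = rng.normal(size=(10000, 20))
w_true = rng.normal(size=20)
y = np.sign(X.dot(w_true))
w_avg = asgd_logistic(X, y)
print("train accuracy:", np.mean(np.sign(X.dot(w_avg)) == y))
```

The averaged iterate w_bar, rather than the final w, is what Polyak and Juditsky showed to be asymptotically optimal; the abstract's point is that the learning-rate schedule determines how quickly that asymptotic regime is reached.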
