On Stochastic Gradient and Subgradient Methods with Adaptive Steplength Sequences

Mathematics – Optimization and Control

Scientific paper



The performance of standard stochastic approximation implementations can vary significantly with the choice of the steplength sequence, and in general little guidance is available about good choices. Motivated by this gap, in the first part of the paper we present two adaptive steplength schemes for strongly convex differentiable stochastic optimization problems, equipped with convergence theory. The first scheme, referred to as a recursive steplength stochastic approximation scheme, optimizes the error bounds to derive a rule that expresses the steplength at a given iteration as a simple function of the steplength at the previous iteration and certain problem parameters. This rule yields the optimal steplength sequence over a prescribed set of choices. The second scheme, termed a cascading steplength stochastic approximation scheme, maintains the steplength sequence as a piecewise-constant decreasing function, with the reduction in the steplength occurring when a suitable error threshold is met. In the second part of the paper, we allow the objective to be nondifferentiable and propose a local smoothing technique that leads to a differentiable approximation of the function. Assuming a uniform distribution on the local randomness, we establish a Lipschitzian property for the gradient of the approximation and prove that the resulting Lipschitz bound grows at a modest rate with problem size. This facilitates the development of an adaptive steplength stochastic approximation framework, which now requires sampling in the product space of the original measure and the artificially introduced distribution. The resulting adaptive steplength schemes are applied to three stochastic optimization problems. We observe that both schemes perform well in practice and display markedly less reliance on user-defined parameters.
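The two adaptive steplength ideas described above can be illustrated on a toy strongly convex problem. The sketch below is an assumption-laden illustration, not the paper's actual rules: the recursive update γ_{k+1} = γ_k(1 − μγ_k), the halving factor, and the error proxy (the size of the last move) are all hypothetical stand-ins chosen to mimic the spirit of a "steplength as a function of the previous steplength and problem parameters" rule and a "piecewise-constant, reduced at an error threshold" rule.

```python
import random

def sgd_adaptive(mu=2.0, gamma0=0.1, iters=2000, seed=0, rule="recursive"):
    """SGD on f(x) = E[(x - xi)^2], xi ~ U(-1, 1); f is strongly convex
    with modulus mu = 2 and minimizer x* = E[xi] = 0.

    Two illustrative steplength policies (hypothetical forms, not the
    paper's derived rules):
      - "recursive": gamma_{k+1} = gamma_k * (1 - mu * gamma_k), i.e. the
        next steplength is a simple function of the previous steplength
        and the strong-convexity constant.
      - "cascading": a piecewise-constant decreasing steplength, halved
        whenever a crude error proxy (the size of the last move) falls
        below a threshold that is itself tightened after each reduction.
    """
    rng = random.Random(seed)
    x, gamma = 5.0, gamma0
    threshold = 0.5
    for _ in range(iters):
        xi = rng.uniform(-1.0, 1.0)
        grad = 2.0 * (x - xi)      # stochastic gradient of (x - xi)^2
        step = gamma * grad
        x -= step
        if rule == "recursive":
            # recursive steplength update: next steplength from the
            # previous one and the strong-convexity parameter mu
            gamma = gamma * (1.0 - mu * gamma)
        elif abs(step) < threshold:
            gamma *= 0.5           # reduce the (piecewise-constant) steplength
            threshold *= 0.25      # tighten the error threshold for the next phase
    return x
```

With the recursive rule, the telescoping product of contraction factors drives the initial bias down at roughly the same rate as the steplength itself, so neither scheme requires hand-tuning of a decay schedule beyond the initial steplength.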


Profile ID: LFWR-SCP-O-22884
