Convergence Rates of Inexact Proximal-Gradient Methods for Convex Optimization

Computer Science – Learning

Scientific paper


Details

Neural Information Processing Systems (2011)

We consider the problem of optimizing the sum of a smooth convex function and a non-smooth convex function using proximal-gradient methods, where an error is present in the calculation of the gradient of the smooth term or in the proximity operator with respect to the non-smooth term. We show that both the basic proximal-gradient method and the accelerated proximal-gradient method achieve the same convergence rates as in the error-free case, provided that the errors decrease at appropriate rates. Using these rates, we perform as well as or better than a carefully chosen fixed error level on a set of structured sparsity problems.
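To make the setting concrete, here is a minimal sketch of the basic inexact proximal-gradient method applied to a lasso problem (0.5‖Ax − b‖² + λ‖x‖₁). The lasso objective, the simulated gradient error, and the O(1/k²) error schedule are illustrative assumptions, not taken from the paper itself; the point is only that the per-iteration error norm shrinks fast enough (a summable sequence) for the usual O(1/k) rate to survive.

```python
import numpy as np

def soft_threshold(z, t):
    # Exact proximity operator of t * ||.||_1 (soft-thresholding).
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def inexact_prox_grad(A, b, lam, steps=200, rng=None):
    """Basic proximal-gradient for 0.5||Ax - b||^2 + lam * ||x||_1,
    with a *simulated* gradient error whose norm shrinks like 1/k^2
    (a summable error sequence; illustrative choice)."""
    rng = np.random.default_rng(0) if rng is None else rng
    L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of the smooth gradient
    x = np.zeros(A.shape[1])
    for k in range(1, steps + 1):
        grad = A.T @ (A @ x - b)           # exact gradient of the smooth term
        noise = rng.standard_normal(x.size)
        noise *= (1.0 / k**2) / np.linalg.norm(noise)  # error of norm 1/k^2
        # Proximal step uses the *inexact* gradient.
        x = soft_threshold(x - (grad + noise) / L, lam / L)
    return x
```

Swapping the decaying schedule for a fixed noise level is the comparison the abstract alludes to: with a constant error, convergence stalls at a noise floor, whereas a decreasing error preserves the clean rate.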


