Computer Science – Learning
Scientific paper
2011-09-12
Neural Information Processing Systems (2011)
We consider the problem of optimizing the sum of a smooth convex function and a non-smooth convex function using proximal-gradient methods, where an error is present in the calculation of the gradient of the smooth term or in the proximity operator with respect to the non-smooth term. We show that both the basic proximal-gradient method and the accelerated proximal-gradient method achieve the same convergence rate as in the error-free case, provided that the errors decrease at appropriate rates. Using these rates, the inexact methods perform as well as or better than a carefully chosen fixed error level on a set of structured sparsity problems.
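The setting in the abstract can be illustrated with a small sketch (not the paper's actual experiments): a basic proximal-gradient iteration for the lasso, where the gradient of the smooth term is perturbed by noise whose norm decays like 1/k², a summable error sequence of the kind the convergence result requires. The problem sizes, the decay rate, and all variable names here are illustrative assumptions.

```python
# Sketch: inexact proximal-gradient on min_x 0.5*||Ax - b||^2 + lam*||x||_1.
# The gradient of the smooth term is corrupted by an error of norm 1/k^2,
# which decreases fast enough that convergence is preserved (per the paper's
# conditions); this is an illustration, not the paper's experimental setup.
import numpy as np

rng = np.random.default_rng(0)
n, d = 40, 20
A = rng.standard_normal((n, d))
b = rng.standard_normal(n)
lam = 0.1
L = np.linalg.eigvalsh(A.T @ A).max()  # Lipschitz constant of the smooth gradient

def soft_threshold(v, t):
    # Proximity operator of t * ||.||_1
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def objective(x):
    return 0.5 * np.sum((A @ x - b) ** 2) + lam * np.sum(np.abs(x))

# Inexact iteration: gradient error with norm 1/k^2
x = np.zeros(d)
for k in range(1, 501):
    grad = A.T @ (A @ x - b)
    noise = rng.standard_normal(d)
    grad += (1.0 / k**2) * noise / np.linalg.norm(noise)
    x = soft_threshold(x - grad / L, lam / L)

# Exact iteration, for comparison
x_exact = np.zeros(d)
for _ in range(500):
    g = A.T @ (A @ x_exact - b)
    x_exact = soft_threshold(x_exact - g / L, lam / L)

gap = abs(objective(x) - objective(x_exact))
print(gap)  # small: decaying errors do not change the attained objective much
```

With the error norm shrinking at this rate, the perturbed iterates reach essentially the same objective value as the exact method; a fixed error level, by contrast, would stall progress at a floor set by that level.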
Francis Bach
Nicolas Le Roux
Mark Schmidt