Hybrid Deterministic-Stochastic Methods for Data Fitting
Computer Science – Numerical Analysis
Scientific paper
2011-04-13
26 pages. Expanded Section 2.3 (Relaxing strong convexity)
Many structured data-fitting applications require the solution of an optimization problem involving a sum over a potentially large number of measurements. Incremental gradient algorithms offer inexpensive iterations by sampling a subset of the terms in the sum. These methods can make great progress initially, but often slow as they approach a solution. In contrast, full-gradient methods achieve steady convergence at the expense of evaluating the full objective and gradient on each iteration. We explore hybrid methods that exhibit the benefits of both approaches. Rate-of-convergence analysis shows that by controlling the sample size in an incremental gradient algorithm, it is possible to maintain the steady convergence rates of full-gradient methods. We detail a practical quasi-Newton implementation based on this approach. Numerical experiments illustrate its potential benefits.
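The sample-size control described in the abstract can be illustrated with a minimal sketch: a gradient method for a least-squares fit whose sample (batch) grows geometrically, so early iterations are cheap like incremental-gradient steps while later iterations approach full-gradient accuracy. The function name, growth schedule, and step size below are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def growing_sample_gradient(A, b, step=0.05, growth=1.5, iters=100, seed=0):
    """Minimize f(x) = (1/2m) ||A x - b||^2 with a gradient method whose
    sample size grows geometrically, an assumed sketch of the hybrid
    deterministic-stochastic idea (not the paper's quasi-Newton method)."""
    rng = np.random.default_rng(seed)
    m, n = A.shape
    x = np.zeros(n)
    batch = max(1.0, m / 20)                      # start with a small sample
    for _ in range(iters):
        k = min(m, int(batch))
        idx = rng.choice(m, size=k, replace=False)
        # Sampled gradient: unbiased estimate of the full gradient.
        g = A[idx].T @ (A[idx] @ x - b[idx]) / k
        x -= step * g
        batch *= growth                           # geometric sample growth
    return x

# Usage on a synthetic consistent least-squares problem.
rng = np.random.default_rng(1)
A = rng.standard_normal((200, 5))
x_true = rng.standard_normal(5)
b = A @ x_true
x_hat = growing_sample_gradient(A, b)
```

Once the sample covers all `m` terms, each step is an exact full-gradient step, which is one way to see why a controlled sample-size schedule can recover the steady convergence rate of a full-gradient method.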
Michael P. Friedlander
Mark Schmidt