Smooth Optimization with Approximate Gradient

Mathematics – Optimization and Control

Scientific paper

Details

Title changed from "Smooth Optimization for Sparse Semidefinite Programs". New figures and tests added. Final version.

We show that the optimal complexity of Nesterov's smooth first-order optimization algorithm is preserved when the gradient is computed only up to a small, uniformly bounded error. In applications of this method to semidefinite programs, this means that in some instances only a few leading eigenvalues of the current iterate need to be computed instead of a full matrix exponential, which significantly reduces the method's computational cost. It also allows sparse problems to be solved efficiently using sparse maximum-eigenvalue packages.
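The computational point is easy to illustrate. Below is a minimal Python sketch, not the paper's exact algorithm: it approximates the gradient of the standard smooth surrogate f_mu(X) = mu log Tr exp(X/mu) of the maximum-eigenvalue function from only the k leading eigenpairs (via scipy.sparse.linalg.eigsh) instead of a full matrix exponential, and plugs that inexact oracle into a generic accelerated-gradient loop of the Nesterov type. The names approx_smooth_grad and nesterov and the parameters mu, k, and L are illustrative assumptions, not taken from the paper.

```python
# Illustrative sketch: inexact gradient of the smoothed max-eigenvalue function.
# The smooth surrogate f_mu(X) = mu * log(Tr exp(X / mu)) has exact gradient
# exp(X/mu) / Tr exp(X/mu), which is dominated by the leading eigenvalues of X;
# truncating to k eigenpairs yields a gradient with a small, bounded error.
import numpy as np
from scipy.sparse.linalg import eigsh

def approx_smooth_grad(X, mu=0.1, k=5):
    """Approximate grad f_mu(X) from the k largest eigenpairs of symmetric X."""
    w, V = eigsh(X, k=k, which="LA")   # works for sparse or dense symmetric X
    e = np.exp((w - w.max()) / mu)     # shift before exponentiating for stability
    return (V * (e / e.sum())) @ V.T   # V diag(softmax(w/mu)) V^T, truncated

def nesterov(grad, x0, L, iters):
    """Generic accelerated-gradient loop driven by an (inexact) gradient oracle."""
    x, x_prev = x0.copy(), x0.copy()
    for t in range(iters):
        y = x + t / (t + 3.0) * (x - x_prev)   # momentum extrapolation
        x_prev, x = x, y - grad(y) / L         # gradient step at extrapolated point
    return x

# Quick check against the exact gradient (dense expm with a stabilizing shift):
from scipy.linalg import expm
rng = np.random.default_rng(0)
A = rng.standard_normal((200, 200)); A = (A + A.T) / 2
mu = 0.1
G = expm((A - np.linalg.eigvalsh(A).max() * np.eye(200)) / mu)
G /= np.trace(G)
print(np.linalg.norm(approx_smooth_grad(A, mu, k=5) - G, "fro"))
```

Under these assumptions the truncation error shrinks rapidly as k grows once mu is small relative to the gaps between the leading eigenvalues, which is what makes a partial eigendecomposition an adequate substitute for the full matrix exponential.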
