Regularized Risk Minimization by Nesterov's Accelerated Gradient Methods: Algorithmic Extensions and Empirical Studies
Computer Science – Learning
Scientific paper
2010-11-01
28 pages. Supplementary material for the NIPS 2010 paper "Lower Bounds on Rate of Convergence of Cutting Plane Methods" by the same authors.
Nesterov's accelerated gradient methods (AGM) have been successfully applied in many machine learning areas. However, their empirical performance on training max-margin models has been inferior to existing specialized solvers. In this paper, we first extend AGM to strongly convex and composite objective functions with Bregman-style prox-functions. Our unifying framework covers both the $\infty$-memory and 1-memory styles of AGM, tunes the Lipschitz constant adaptively, and bounds the duality gap. We then show how to apply this family of methods to a wide range of machine learning problems, with emphasis on their rates of convergence and on how to efficiently compute the gradient and optimize the models. The experimental results show that with our extensions, AGM outperforms state-of-the-art solvers on max-margin models.
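To make the abstract concrete: the 1-memory style of AGM for a composite objective f(x) + g(x), with the Lipschitz constant tuned adaptively by backtracking, is roughly a FISTA-type iteration. The sketch below is a generic illustration under that reading, not the paper's actual algorithm (which uses Bregman prox-functions and a duality-gap bound); all names (`accelerated_proximal_gradient`, `prox_g`) and the lasso demo are illustrative assumptions.

```python
import numpy as np

def accelerated_proximal_gradient(grad_f, prox_g, x0, L0=1.0, eta=2.0,
                                  max_iter=500, tol=1e-8, f=None):
    """Minimize f(x) + g(x), f smooth, g with a cheap proximal operator.

    A minimal FISTA-like sketch of 1-memory accelerated gradient with
    backtracking on the Lipschitz estimate L. If f is given, L grows by
    factor eta until the quadratic upper bound on f holds at the prox step.
    """
    x, y, t, L = x0.copy(), x0.copy(), 1.0, L0
    for _ in range(max_iter):
        g_y = grad_f(y)
        # Backtracking line search on the Lipschitz estimate L.
        while True:
            x_new = prox_g(y - g_y / L, 1.0 / L)
            d = x_new - y
            if f is None or f(x_new) <= f(y) + g_y @ d + 0.5 * L * (d @ d):
                break
            L *= eta
        # Nesterov momentum update on the auxiliary point y.
        t_new = 0.5 * (1.0 + np.sqrt(1.0 + 4.0 * t * t))
        y = x_new + ((t - 1.0) / t_new) * (x_new - x)
        if np.linalg.norm(x_new - x) <= tol:
            return x_new
        x, t = x_new, t_new
    return x

# Hypothetical usage: lasso, f(x) = 0.5*||Ax - b||^2, g(x) = lam*||x||_1,
# whose prox is soft-thresholding.
A, b, lam = np.random.randn(50, 20), np.random.randn(50), 0.1
f = lambda x: 0.5 * np.sum((A @ x - b) ** 2)
grad_f = lambda x: A.T @ (A @ x - b)
prox_g = lambda v, s: np.sign(v) * np.maximum(np.abs(v) - lam * s, 0.0)
x_star = accelerated_proximal_gradient(grad_f, prox_g, np.zeros(20), f=f)
```

This yields the usual O(1/k^2) rate for the smooth part without ever knowing the true Lipschitz constant, which is the adaptivity the abstract refers to; the paper's extensions additionally handle strong convexity and non-Euclidean (Bregman) geometry.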
Saha Ankan
Vishwanathan S. V. N.
Zhang Xinhua