Optimization with Sparsity-Inducing Penalties
Computer Science – Learning
Scientific paper
2011-08-03
Sparse estimation methods aim to use or obtain parsimonious representations of data or models. They were first applied to linear variable selection, but numerous extensions have since emerged, such as structured sparsity and kernel selection. Many of the related estimation problems can be cast as convex optimization problems by regularizing the empirical risk with appropriate non-smooth norms. The goal of this paper is to present, from a general perspective, optimization tools and techniques dedicated to such sparsity-inducing penalties. We cover proximal methods, block-coordinate descent, reweighted $\ell_2$-penalized techniques, working-set and homotopy methods, as well as non-convex formulations and extensions, and provide an extensive set of experiments comparing various algorithms from a computational point of view.
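To make the abstract's central idea concrete, here is a minimal sketch of one of the techniques it names: a proximal (ISTA-style) method for the $\ell_1$-regularized least-squares problem, where the non-smooth penalty is handled through its proximal operator, elementwise soft-thresholding. The function names, step-size choice, and iteration count are illustrative assumptions, not the paper's specific algorithms.

```python
import numpy as np

def soft_threshold(v, t):
    # Proximal operator of t * ||.||_1: elementwise soft-thresholding.
    # Sets small entries exactly to zero, which induces sparsity.
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def ista(X, y, lam, n_iter=500):
    # Minimize (1/2n) * ||y - X w||^2 + lam * ||w||_1 by proximal gradient:
    # a gradient step on the smooth loss, then the prox of the l1 penalty.
    n = X.shape[0]
    # Lipschitz constant of the gradient of the smooth part
    # (squared spectral norm of X, scaled by 1/n).
    L = np.linalg.norm(X, 2) ** 2 / n
    w = np.zeros(X.shape[1])
    for _ in range(n_iter):
        grad = X.T @ (X @ w - y) / n
        w = soft_threshold(w - grad / L, lam / L)
    return w
```

Because the soft-thresholding step zeroes coefficients exactly, the iterates themselves are sparse; the same prox-based template extends to the structured norms discussed in the paper whenever their proximal operator can be computed efficiently.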
Francis Bach
Rodolphe Jenatton
Julien Mairal
Guillaume Obozinski