Computer Science – Learning
Scientific paper
2011-05-26
In the 28th International Conference on Machine Learning (ICML), July 2011, Washington, USA
We propose Shotgun, a parallel coordinate descent algorithm for minimizing L1-regularized losses. Though coordinate descent seems inherently sequential, we prove convergence bounds for Shotgun which predict linear speedups, up to a problem-dependent limit. We present a comprehensive empirical study of Shotgun for Lasso and sparse logistic regression. Our theoretical predictions on the potential for parallelism closely match behavior on real data. Shotgun outperforms other published solvers on a range of large problems, making it one of the most scalable algorithms for L1-regularized loss minimization.
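The abstract describes the method only at a high level; below is a minimal NumPy sketch of Shotgun-style parallel coordinate descent for the Lasso objective min_x 0.5*||Ax - y||^2 + lam*||x||_1. The function names (shotgun_lasso, soft_threshold), the parallelism parameter P, and the sequential simulation of concurrent updates from a single residual snapshot are illustrative assumptions, not the authors' reference implementation.

```python
import numpy as np

def soft_threshold(z, t):
    """Soft-thresholding operator: S_t(z) = sign(z) * max(|z| - t, 0)."""
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def shotgun_lasso(A, y, lam, P=8, max_iter=1000, tol=1e-6, seed=0):
    """Shotgun-style parallel coordinate descent for the Lasso:
        min_x 0.5 * ||A x - y||^2 + lam * ||x||_1
    Each round picks P coordinates uniformly at random and applies their
    coordinate updates simultaneously (simulated here on one thread).
    """
    rng = np.random.default_rng(seed)
    d = A.shape[1]
    col_sq = (A ** 2).sum(axis=0)   # squared column norms ||a_j||^2
    x = np.zeros(d)
    r = y.copy()                    # residual r = y - A x
    for _ in range(max_iter):
        J = rng.choice(d, size=min(P, d), replace=False)
        # All P updates are computed from the same residual snapshot,
        # as if P processors updated their coordinates concurrently.
        g = -A[:, J].T @ r          # partial gradients along coordinates J
        x_new = soft_threshold(x[J] - g / col_sq[J], lam / col_sq[J])
        delta = x_new - x[J]
        if np.max(np.abs(delta)) < tol:
            break
        x[J] = x_new
        r -= A[:, J] @ delta        # keep the residual consistent with x
    return x
```

Computing all P updates from one residual snapshot mimics P processors acting concurrently without locking; the paper's analysis shows this remains convergent, with near-linear speedups, as long as P stays below a problem-dependent limit tied to the spectral radius of A^T A.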
Danny Bickson
Joseph K. Bradley
Carlos Guestrin
Aapo Kyrola