Generalized Boosting Algorithms for Convex Optimization
Computer Science – Learning
Scientific paper
2011-05-10
Extended version of the paper presented at the International Conference on Machine Learning (ICML), 2011; 9 pages plus an appendix with proofs.
Boosting is a popular way to derive powerful learners from simpler hypothesis classes. Following previous work (Mason et al., 1999; Friedman, 2000) on general boosting frameworks, we analyze gradient-based descent algorithms for boosting with respect to any convex objective, and we introduce into this setting a new measure of weak learner performance that generalizes existing work. We present weak-to-strong learning guarantees for existing gradient boosting algorithms under this new performance measure for strongly-smooth, strongly-convex objectives, and we demonstrate that these algorithms can fail for non-smooth objectives. To address this issue, we present new algorithms that extend the gradient boosting approach to arbitrary convex loss functions, together with corresponding weak-to-strong convergence results. Finally, we report experimental results that support our analysis and demonstrate the need for the new algorithms we present.
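As a concrete illustration of the functional gradient descent view of boosting referenced above (Mason et al., 1999; Friedman, 2000), the following Python sketch boosts regression stumps against a generic convex loss specified by its pointwise gradient. This is not the paper's algorithm: the stump weak learner, the fixed step size eta, and the logistic-loss example are all illustrative assumptions.

# Minimal sketch of functional gradient boosting for a generic convex loss.
# Assumptions (not from the paper): depth-1 regression stumps as the weak
# learner class, a fixed step size, and squared-error fitting of the
# negative functional gradient (the "pseudo-residuals").
import numpy as np

def fit_stump(X, r):
    """Fit a depth-1 regression stump to pseudo-residuals r, i.e. project
    the negative functional gradient onto the weak hypothesis class."""
    best = None
    for j in range(X.shape[1]):
        for t in np.unique(X[:, j]):
            left = X[:, j] <= t
            if left.all() or (~left).all():
                continue  # degenerate split
            lval, rval = r[left].mean(), r[~left].mean()
            err = np.sum((r - np.where(left, lval, rval)) ** 2)
            if best is None or err < best[0]:
                best = (err, j, t, lval, rval)
    _, j, t, lval, rval = best
    return lambda Z: np.where(Z[:, j] <= t, lval, rval)

def boost(X, y, loss_grad, n_rounds=100, eta=0.1):
    """Gradient boosting for a convex loss given by its pointwise gradient
    loss_grad(F, y) -> dL/dF, where F holds the ensemble's predictions."""
    F = np.zeros(len(y))          # current ensemble predictions
    learners = []
    for _ in range(n_rounds):
        r = -loss_grad(F, y)      # negative functional gradient
        h = fit_stump(X, r)       # weak learner approximating it
        learners.append(h)
        F += eta * h(X)           # fixed step; line search is also common
    return lambda Z: eta * sum(h(Z) for h in learners)

# Example: binary logistic loss log(1 + exp(-y F)) with labels y in {-1, +1}.
logistic_grad = lambda F, y: -y / (1.0 + np.exp(y * F))

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 2))
y = np.where(X[:, 0] + X[:, 1] > 0, 1.0, -1.0)
predict = boost(X, y, logistic_grad, n_rounds=50)
print("training accuracy:", np.mean(np.sign(predict(X)) == y))

Note that the logistic loss is strongly smooth, so plain gradient boosting of this kind converges; for non-smooth losses such as the hinge, the abstract's point is precisely that this scheme can fail and modified algorithms are needed.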
J. Andrew Bagnell
Alexander Grubb