Totally Corrective Boosting for Regularized Risk Minimization
Scientific paper – Computer Science – Artificial Intelligence
2010-08-30
This paper has been withdrawn by the author.
Considering the primal and dual problems together leads to important new insights into the characteristics of boosting algorithms. In this work, we propose a general framework that can be used to design new boosting algorithms. A wide variety of machine learning problems essentially minimize a regularized risk functional. We show that the proposed boosting framework, termed CGBoost, can accommodate various loss functions and different regularizers in a totally corrective optimization fashion. We show that, by solving the primal rather than the dual, a large body of totally corrective boosting algorithms can be solved efficiently, with no need for sophisticated convex optimization solvers. We also demonstrate that some boosting algorithms, such as AdaBoost, can be interpreted in our framework even though their optimization is not totally corrective. We empirically show that various boosting algorithms based on the proposed framework perform similarly on the UC Irvine machine learning datasets [1] used in our experiments.
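The sketch below illustrates the general idea of totally corrective boosting solved in the primal, as the abstract describes: at each round a weak learner is added (column generation) and then the regularized risk is re-minimized over all weak-learner weights, not just the newest one. This is a minimal illustration, not the paper's algorithm; the decision-stump weak learners, logistic loss, l1-style regularizer, and names such as fit_tc_boost are assumptions for the example only.

import numpy as np
from scipy.optimize import minimize

def stump_predict(X, feature, threshold, sign):
    """Decision stump: sign * (+1 if x[feature] > threshold else -1)."""
    return sign * np.where(X[:, feature] > threshold, 1.0, -1.0)

def best_stump(X, y, sample_weights):
    """Column generation step: pick the stump with the largest weighted edge."""
    best, best_edge = None, -np.inf
    for f in range(X.shape[1]):
        for t in np.unique(X[:, f]):
            for s in (+1.0, -1.0):
                edge = np.sum(sample_weights * y * stump_predict(X, f, t, s))
                if edge > best_edge:
                    best, best_edge = (f, t, s), edge
    return best

def fit_tc_boost(X, y, rounds=10, reg=0.1):
    """Totally corrective boosting sketch: logistic loss + l1-style regularizer (assumed)."""
    stumps, w = [], np.zeros(0)
    for _ in range(rounds):
        # Sample weights come from the loss gradient at the current margin.
        margin = (sum(wj * stump_predict(X, *s) for wj, s in zip(w, stumps))
                  if stumps else np.zeros(len(y)))
        u = 1.0 / (1.0 + np.exp(y * margin))  # logistic loss derivative magnitude
        stumps.append(best_stump(X, y, u))

        H = np.column_stack([stump_predict(X, *s) for s in stumps])

        def primal(v):
            # Regularized risk in the primal over ALL weak-learner weights.
            m = H @ v
            return np.mean(np.log1p(np.exp(-y * m))) + reg * np.sum(v)

        # Totally corrective step: re-optimize every weight, not just the new one.
        w0 = np.append(w, 0.0)
        res = minimize(primal, w0, bounds=[(0, None)] * len(w0), method="L-BFGS-B")
        w = res.x
    return stumps, w

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    X = rng.normal(size=(200, 5))
    y = np.sign(X[:, 0] + 0.5 * X[:, 1])
    stumps, w = fit_tc_boost(X, y, rounds=5)
    pred = np.sign(np.column_stack([stump_predict(X, *s) for s in stumps]) @ w)
    print("training accuracy:", np.mean(pred == y))

Swapping the loss or regularizer inside primal() changes the boosting variant while the column-generation loop stays the same, which is the flexibility the abstract attributes to the framework.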
Nick Barnes
Hanxi Li
Chunhua Shen