The Rate of Convergence of AdaBoost
Mathematics – Optimization and Control
Scientific paper
2011-06-29
A preliminary version will appear in COLT 2011
The AdaBoost algorithm was designed to combine many "weak" hypotheses that perform slightly better than random guessing into a "strong" hypothesis that has very low error. We study the rate at which AdaBoost iteratively converges to the minimum of the "exponential loss." Unlike previous work, our proofs do not require a weak-learning assumption, nor do they require that minimizers of the exponential loss are finite. Our first result shows that the exponential loss of AdaBoost's computed parameter vector will be at most $\epsilon$ more than that of any parameter vector of $\ell_1$-norm bounded by $B$ within a number of rounds that is at most a polynomial in $B$ and $1/\epsilon$. We also provide lower bounds showing that a polynomial dependence on these parameters is necessary. Our second result is that within $C/\epsilon$ iterations, AdaBoost achieves a value of the exponential loss that is at most $\epsilon$ more than the best possible value, where $C$ depends on the dataset. We show that this dependence of the rate on $\epsilon$ is optimal up to constant factors, i.e., at least $\Omega(1/\epsilon)$ rounds are necessary to achieve within $\epsilon$ of the optimal exponential loss.
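For readers unfamiliar with the objective being analyzed, the following is a minimal sketch, not the authors' code, of AdaBoost viewed as greedy coordinate descent on the exponential loss $\frac{1}{m}\sum_{i=1}^m e^{-y_i F(x_i)}$, using axis-aligned decision stumps as the weak hypotheses. The dataset, the stump pool, and all function names are illustrative assumptions; the printed values simply show the per-round decrease of the loss whose rate the paper bounds.

```python
# Illustrative sketch only (not the paper's implementation): AdaBoost as
# coordinate descent on the exponential loss, with decision stumps as the
# pool of weak hypotheses.
import numpy as np

def stump_predictions(X):
    """Enumerate threshold stumps h(x) = sign(x_j - theta) and their negations.
    Returns a (num_stumps, n) matrix of +/-1 predictions on the data."""
    preds = []
    for j in range(X.shape[1]):
        for theta in np.unique(X[:, j]):
            p = np.where(X[:, j] >= theta, 1.0, -1.0)
            preds.append(p)
            preds.append(-p)
    return np.array(preds)

def adaboost_exp_loss(X, y, num_rounds=20):
    """Run AdaBoost and record the exponential loss
    (1/n) * sum_i exp(-y_i F(x_i)) after each round."""
    n = len(y)
    H = stump_predictions(X)          # each row: one weak hypothesis on the data
    margins = np.zeros(n)             # y_i * F(x_i) for the current combined vote F
    losses = []
    for _ in range(num_rounds):
        w = np.exp(-margins)
        w /= w.sum()                  # distribution over training examples
        # Choose the weak hypothesis with the largest edge sum_i w_i y_i h(x_i);
        # negated stumps are in the pool, so flipped predictions are covered.
        edges = H @ (w * y)
        k = int(np.argmax(edges))
        edge = min(edges[k], 1.0 - 1e-12)   # guard against a perfect stump
        alpha = 0.5 * np.log((1 + edge) / (1 - edge))   # AdaBoost step size
        margins += alpha * y * H[k]
        losses.append(np.exp(-margins).mean())
    return losses

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    X = rng.normal(size=(200, 5))
    y = np.sign(X[:, 0] + 0.5 * X[:, 1] + 0.1 * rng.normal(size=200))
    for t, loss in enumerate(adaboost_exp_loss(X, y), 1):
        print(f"round {t:2d}: exponential loss = {loss:.4f}")
```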
Indraneel Mukherjee
Cynthia Rudin
Robert E. Schapire