Recursive Aggregation of Estimators by Mirror Descent Algorithm with Averaging
Mathematics – Statistics Theory
Scientific paper
2005-05-16
Mathematics
Statistics Theory
29 pages; May 2005
We consider a recursive algorithm to construct an aggregated estimator from a finite number of base decision rules in the classification problem. The estimator approximately minimizes a convex risk functional under an $\ell_1$-constraint. It is defined by a stochastic version of the mirror descent algorithm (i.e., the method that performs gradient descent in the dual space) with an additional averaging. The main result of the paper is an upper bound on the expected accuracy of the proposed estimator. This bound is of the order $\sqrt{(\log M)/t}$ with an explicit and small constant factor, where $M$ is the dimension of the problem and $t$ is the sample size. A similar bound is proved for a more general setting that covers, in particular, the regression model with squared loss.
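The abstract does not spell out the mirror map, the surrogate loss, or the averaging weights, so the following is only a minimal sketch under common assumptions: an entropic proxy function on the simplex (so the dual gradient step yields Gibbs, i.e. exponentiated-gradient, weights over the $M$ base rules), a logistic surrogate loss, and iterates averaged with weights equal to the step sizes. The function name aggregate_by_mirror_descent, the temperature parameter beta, and the step-size schedule below are illustrative, not taken from the paper.

```python
import numpy as np

def aggregate_by_mirror_descent(base_preds, labels, loss_grad, step_sizes, beta=1.0):
    """Stochastic mirror descent with averaging over the simplex of base rules.

    base_preds : (t, M) array, base_preds[i, j] = h_j(x_i), prediction of the
                 j-th base rule on the i-th observation (values in [-1, 1]).
    labels     : (t,) array of labels in {-1, +1}.
    loss_grad  : derivative phi' of the convex surrogate loss phi, e.g.
                 lambda z: -1.0 / (1.0 + np.exp(z)) for the logistic loss.
    step_sizes : (t,) array of step sizes gamma_i.
    beta       : temperature of the entropic mirror map (assumed, not from the paper).
    Returns the averaged weight vector theta_bar on the simplex.
    """
    t, M = base_preds.shape
    zeta = np.zeros(M)               # dual variable accumulating gradients
    theta_sum = np.zeros(M)
    weight_sum = 0.0
    for i in range(t):
        # Primal point: entropic mirror map (stabilized Gibbs weights).
        a = -zeta / beta
        w = np.exp(a - np.max(a))
        theta = w / w.sum()
        # Additional averaging: iterates weighted by the step sizes.
        theta_sum += step_sizes[i] * theta
        weight_sum += step_sizes[i]
        # Stochastic gradient of the convex surrogate risk at theta.
        margin = labels[i] * (base_preds[i] @ theta)
        grad = loss_grad(margin) * labels[i] * base_preds[i]
        # Gradient step performed in the dual space.
        zeta += step_sizes[i] * grad
    return theta_sum / weight_sum

# Hypothetical usage: M = 3 base rules, t = 1000 observations, logistic loss,
# step sizes gamma_i proportional to 1/sqrt(i).
rng = np.random.default_rng(0)
t, M = 1000, 3
base_preds = rng.choice([-1.0, 1.0], size=(t, M))
labels = np.sign(base_preds[:, 0] + 0.1 * rng.standard_normal(t))  # rule 0 is informative
gammas = 1.0 / np.sqrt(np.arange(1, t + 1))
theta_bar = aggregate_by_mirror_descent(
    base_preds, labels, lambda z: -1.0 / (1.0 + np.exp(z)), gammas)
```

Under these assumptions the returned theta_bar concentrates on the informative base rule, in line with the $\sqrt{(\log M)/t}$ behaviour of the excess risk bound stated above.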
Juditsky Anatoli
Nazin Alexander
Tsybakov Alexandre
Vayatis Nicolas