Mathematics – Statistics Theory
Scientific paper
2011-01-30
31 pages, 3 figures
We derive an asymptotic expansion for the excess risk (regret) of a weighted nearest-neighbour classifier. This allows us to find the asymptotically optimal vector of non-negative weights, which has a rather simple form. We show that the ratio of the regret of this classifier to that of an unweighted $k$-nearest neighbour classifier depends asymptotically only on the dimension $d$ of the feature vectors, and not on the underlying populations. The improvement is greatest when $d=4$, but thereafter decreases as $d \rightarrow \infty$. The popular bagged nearest neighbour classifier can also be regarded as a weighted nearest neighbour classifier, and we show that its corresponding weights are somewhat suboptimal when $d$ is small (in particular, worse than those of the unweighted $k$-nearest neighbour classifier when $d=1$), but are close to optimal when $d$ is large. Finally, we argue that improvements in the rate of convergence are possible under stronger smoothness assumptions, provided we allow negative weights.
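As a concrete illustration of the rule described in the abstract, below is a minimal Python sketch of a weighted nearest-neighbour classifier for two classes. The closed-form expression in `optimal_weights` is one candidate for the "rather simple form" of the asymptotically optimal non-negative weights; the exact formula, the choice of `k_star`, and the use of the Euclidean metric are assumptions for illustration, not details stated in the abstract.

```python
import numpy as np

def optimal_weights(k_star, d):
    """Candidate weight vector for the first k_star neighbours in dimension d.

    Assumed form (not given in the abstract): the i-th nearest neighbour
    receives weight proportional to
        1 + d/2 - (d / (2 * k_star**(2/d))) * (i**(1+2/d) - (i-1)**(1+2/d)),
    and all neighbours beyond k_star receive weight zero.
    """
    i = np.arange(1, k_star + 1)
    w = (1.0 / k_star) * (
        1.0 + d / 2.0
        - (d / (2.0 * k_star ** (2.0 / d)))
        * (i ** (1.0 + 2.0 / d) - (i - 1) ** (1.0 + 2.0 / d))
    )
    return w / w.sum()  # sums to one by construction; renormalise for safety

def weighted_nn_classify(X_train, y_train, x, weights):
    """Classify a single point x with a weighted nearest-neighbour rule.

    y_train holds labels in {0, 1}; the i-th nearest neighbour (Euclidean
    distance) gets weight weights[i-1], and we predict class 1 when the
    weighted vote for class 1 is at least 1/2.
    """
    dists = np.linalg.norm(X_train - x, axis=1)
    order = np.argsort(dists)[: len(weights)]
    vote = np.dot(weights, y_train[order])
    return int(vote >= 0.5)

if __name__ == "__main__":
    # Toy usage on synthetic data; k_star = 20 is an arbitrary choice here.
    rng = np.random.default_rng(0)
    X = rng.normal(size=(200, 4))
    y = (X[:, 0] + 0.5 * rng.normal(size=200) > 0).astype(int)
    w = optimal_weights(k_star=20, d=X.shape[1])
    print(weighted_nn_classify(X, y, np.zeros(4), w))
```

Setting all entries of the weight vector to 1/k recovers the unweighted k-nearest neighbour classifier, which is the baseline against which the abstract measures the regret ratio; a bagged nearest-neighbour classifier corresponds to a different (geometrically decaying) weight vector in the same framework.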