Mathematics – Statistics Theory
Scientific paper
2009-09-10
Advances in Neural Information Processing Systems (NIPS 2009), Vancouver : Canada (2009)
This paper tackles the problem of selecting among several linear estimators in non-parametric regression; this includes model selection for linear regression, the choice of a regularization parameter in kernel ridge regression, spline smoothing, or locally weighted regression, and the choice of a kernel in multiple kernel learning. We propose a new algorithm which first consistently estimates the variance of the noise, based upon the concept of minimal penalty, previously introduced in the context of model selection. Plugging our variance estimate into Mallows' $C_L$ penalty is then proved to yield an algorithm satisfying an oracle inequality. Simulation experiments with kernel ridge regression and multiple kernel learning show that the proposed algorithm often significantly improves on existing calibration procedures such as generalized cross-validation.
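The two-step procedure described in the abstract can be sketched for kernel ridge regression. The sketch below is illustrative only: the kernel bandwidth, the grid of regularization parameters, and the toy data are all assumptions, and the simple first-difference variance estimator stands in for the paper's minimal-penalty estimator (which instead locates the jump of a minimally penalized criterion). The second step is standard Mallows' $C_L$ selection with the plug-in variance.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: a smooth signal plus Gaussian noise (the noise level is
# treated as unknown by the selection procedure).
n = 200
x = np.linspace(0.0, 1.0, n)
y = np.sin(4 * np.pi * x) + 0.3 * rng.standard_normal(n)

# Family of linear estimators: kernel ridge regression with a Gaussian
# kernel (bandwidth 0.05 is an arbitrary choice), indexed by the
# regularization parameter lam. Each estimator is y_hat = A(lam) @ y.
K = np.exp(-((x[:, None] - x[None, :]) ** 2) / (2 * 0.05**2))
lams = np.logspace(-6, 1, 30)

def smoother(lam):
    # Smoothing matrix A(lam) = K (K + n*lam*I)^{-1}.
    return K @ np.linalg.solve(K + n * lam * np.eye(n), np.eye(n))

# Step 1: estimate the noise variance. Here, a crude first-difference
# estimator replaces the paper's minimal-penalty estimator: successive
# differences of y cancel most of the smooth signal, leaving noise of
# variance 2*sigma^2.
sigma2_hat = np.mean(np.diff(y) ** 2) / 2

# Step 2: select lam by minimizing Mallows' C_L with the plug-in
# variance: crit(lam) = ||y - A(lam) y||^2 + 2 * sigma2_hat * tr(A(lam)).
crits = np.array([
    float((y - smoother(lam) @ y) @ (y - smoother(lam) @ y)
          + 2 * sigma2_hat * np.trace(smoother(lam)))
    for lam in lams
])
lam_star = lams[int(np.argmin(crits))]
```

With the true noise standard deviation set to 0.3, the first-difference estimate of the variance should land near 0.09, and the $C_L$ criterion then selects a regularization parameter trading off residual fit against the effective degrees of freedom $\operatorname{tr}(A(\lambda))$.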
Sylvain Arlot
Francis Bach