Model Selection with the Loss Rank Principle

Computer Science – Learning

Scientific paper


Details

31 LaTeX pages, 1 figure


A key issue in statistics and machine learning is to automatically select the "right" model complexity, e.g., the number of neighbors to average over in k-nearest-neighbor (kNN) regression or the polynomial degree in polynomial regression. We suggest a novel principle - the Loss Rank Principle (LoRP) - for model selection in regression and classification. It is based on the loss rank, which counts how many other (fictitious) data sets would be fitted better. LoRP selects the model with minimal loss rank. Unlike most penalized maximum-likelihood variants (AIC, BIC, MDL), LoRP depends only on the regression functions and the loss function. It works without a stochastic noise model and is directly applicable to any non-parametric regressor, such as kNN.
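As a rough illustration of the idea (not code from the paper), the loss rank of a model can be approximated by Monte Carlo: draw fictitious target vectors y' and count how often the regressor fits them at least as well as the real data. The sketch below applies this to kNN regression with squared loss; the uniform distribution over fictitious targets and all function names are assumptions made here for illustration only.

```python
import numpy as np

def knn_fit_predict(x, y, k):
    # Predict each y_i as the mean of y over the k nearest x-neighbors
    # (including the point itself), as in standard kNN regression.
    preds = np.empty(len(x))
    for i in range(len(x)):
        idx = np.argsort(np.abs(x - x[i]))[:k]
        preds[i] = y[idx].mean()
    return preds

def empirical_loss(x, y, k):
    # Squared loss of the kNN regressor fitted to (x, y) on its own data.
    return np.sum((y - knn_fit_predict(x, y, k)) ** 2)

def loss_rank(x, y, k, n_samples=500, seed=1):
    # Monte Carlo estimate of the loss rank: the fraction of fictitious
    # target vectors y' (drawn uniformly over the observed range, an
    # illustrative choice) that the model fits with loss no larger than
    # the real y. Flexible models fit almost any y' well -> high rank.
    rng = np.random.default_rng(seed)
    lo, hi = y.min(), y.max()
    obs = empirical_loss(x, y, k)
    count = 0
    for _ in range(n_samples):
        y_fict = rng.uniform(lo, hi, size=len(y))
        if empirical_loss(x, y_fict, k) <= obs:
            count += 1
    return count / n_samples

# Toy data: noisy sine curve.
rng = np.random.default_rng(0)
x = np.sort(rng.uniform(0, 2 * np.pi, 40))
y = np.sin(x) + 0.3 * rng.normal(size=40)

# LoRP selects the k with minimal loss rank.
ranks = {k: loss_rank(x, y, k) for k in (1, 3, 5, 10, 20)}
best_k = min(ranks, key=ranks.get)
print(ranks, best_k)
```

Note how k=1 (which interpolates the data, so its empirical loss is zero for every target vector) receives the maximal rank of 1 and is therefore rejected, mirroring how the loss rank penalizes overly flexible models without any explicit noise model.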



Profile ID: LFWR-SCP-O-663597
