Theoretical Analyses of Cross-Validation Error and Voting in Instance-Based Learning
Computer Science – Learning
Scientific paper
2002-12-11
Journal of Experimental and Theoretical Artificial Intelligence, 6 (1994), 331-360
48 pages
This paper begins with a general theory of error in cross-validation testing of algorithms for supervised learning from examples. It is assumed that the examples are described by attribute-value pairs, where the values are symbolic. Cross-validation requires a set of training examples and a set of testing examples. The value of the attribute that is to be predicted is known to the learner in the training set, but unknown in the testing set. The theory demonstrates that cross-validation error has two components: error on the training set (inaccuracy) and sensitivity to noise (instability). This general theory is then applied to voting in instance-based learning. Given an example in the testing set, a typical instance-based learning algorithm predicts the designated attribute by voting among the k nearest neighbors (the k most similar examples) to the testing example in the training set. Voting is intended to increase the stability (resistance to noise) of instance-based learning, but a theoretical analysis shows that there are circumstances in which voting can be destabilizing. The theory suggests ways to minimize cross-validation error, by ensuring that voting is stable and does not adversely affect accuracy.
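The k-nearest-neighbor voting scheme the abstract describes can be sketched as follows. This is a minimal illustration, not the paper's algorithm: the Hamming distance over symbolic attribute values, the toy fruit data, and the function names are all assumptions made for the example.

```python
from collections import Counter

def hamming_distance(a, b):
    # Number of attributes on which two symbolic examples disagree
    # (a simple similarity measure when values are symbolic, as assumed here).
    return sum(x != y for x, y in zip(a, b))

def knn_predict(training, query, k):
    # training: list of (attributes, label) pairs; attributes are tuples of symbols.
    # Find the k most similar training examples, then vote on the predicted label.
    neighbors = sorted(training, key=lambda ex: hamming_distance(ex[0], query))[:k]
    votes = Counter(label for _, label in neighbors)
    return votes.most_common(1)[0][0]

# Hypothetical training set: (colour, shape) -> class
train = [
    (("red", "round"), "apple"),
    (("red", "long"), "chili"),
    (("green", "round"), "apple"),
    (("yellow", "long"), "banana"),
    (("green", "long"), "banana"),
]
print(knn_predict(train, ("red", "round"), k=3))  # majority of 3 nearest: "apple"
```

With k=3 the single "chili" neighbor is outvoted by two "apple" neighbors, illustrating how voting is meant to add stability; the paper's analysis concerns the conditions under which this averaging helps or, counter-intuitively, hurts.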