Noise Tolerance under Risk Minimization
Computer Science – Learning
Scientific paper
2011-09-24
In this paper we explore the problem of noise tolerant learning of classifiers. We formulate the problem as follows. We assume that there is an unobservable training set which is noise-free. The actual training set given to the learning algorithm is obtained from this ideal data set by corrupting the class label of each example, where the probability that the class label of an example is corrupted is a function of the feature vector of that example. This model accounts for almost all kinds of noisy data one may encounter in practice. We say that a learning method is noise tolerant if the classifiers learnt with the ideal noise-free data and with the noisy data have the same classification accuracy on the noise-free data. In this paper we analyze the noise tolerance properties of risk minimization, which is a generic method for learning classifiers. We consider different loss functions such as the 0-1 loss, hinge loss, exponential loss, and squared error loss. We show that risk minimization under the 0-1 loss function has very interesting noise tolerance properties, and that under the squared error loss it is noise tolerant only when the noise is uniform. Risk minimization under the other loss functions is not noise tolerant. We conclude the paper with a discussion of the implications of these theoretical results for noise-robust classifier design.
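As a rough formal sketch of the setting described above (the symbols $\eta(x)$, $D$, $D^{\eta}$, $f_{D}$ and $f_{D^{\eta}}$ are illustrative notation introduced here, not necessarily the paper's), the label-noise model can be written as
\[
  P\big(\tilde{y} = -y \,\big|\, x\big) \;=\; \eta(x), \qquad y \in \{+1,-1\},
\]
where $\tilde{y}$ is the corrupted label and the flip probability $\eta(x)$ may depend on the feature vector $x$; uniform noise is the special case $\eta(x) \equiv \eta$ for all $x$. If $f_{D}$ denotes the classifier obtained by risk minimization on the noise-free distribution $D$ and $f_{D^{\eta}}$ the one obtained on the corresponding noisy distribution $D^{\eta}$, then noise tolerance of the method amounts to
\[
  P_{(x,y)\sim D}\big[f_{D^{\eta}}(x) \neq y\big] \;=\; P_{(x,y)\sim D}\big[f_{D}(x) \neq y\big],
\]
i.e., both classifiers have the same classification accuracy on the noise-free data.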
Naresh Manwani
P. S. Sastry