Domain Adaptation: Overfitting and Small Sample Statistics

Computer Science – Learning

Scientific paper

Details

11 pages

We study the prevalent problem in which the test distribution differs from the training distribution. We consider a setting where the training set consists of a small number of sample domains but contains many samples per domain, and the goal is to generalize to a new domain. For example, we may want to learn a similarity function using only certain classes of objects, yet have that similarity function apply to object classes not present in the training sample (e.g., learning that "dogs are similar to dogs" even though images of dogs were absent from the training set). Our theoretical analysis shows that, by exploiting data-dependent variance properties, one can select many more features than domains while still avoiding overfitting. We present a greedy feature selection algorithm based on T-statistics. Our experiments validate this theory, showing that T-statistic based greedy feature selection is more robust against overfitting than the classical greedy procedure.
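The abstract does not spell out the statistic, but a natural reading is a one-sample T-statistic computed across domains: score each feature within each training domain, then divide the mean score across domains by its standard error, so that only features useful consistently across domains survive. The sketch below is a minimal illustration under that assumption. It uses per-domain feature-label Pearson correlation as the per-domain score and ranks features by |T|; both choices are hypothetical, and the paper's actual greedy procedure may differ.

import numpy as np


def t_statistic_scores(domain_scores):
    # domain_scores: (n_domains, n_features); each row holds a per-domain
    # measure of how useful each feature is. The T-statistic is the mean
    # usefulness across domains divided by its standard error, so features
    # that help consistently across domains score high, while features that
    # help only in one or two domains (a recipe for overfitting when the
    # number of training domains is small) score low.
    n_domains = domain_scores.shape[0]
    mean = domain_scores.mean(axis=0)
    sem = domain_scores.std(axis=0, ddof=1) / np.sqrt(n_domains)
    return mean / (sem + 1e-12)


def greedy_t_select(X_by_domain, y_by_domain, k):
    # X_by_domain: list of (n_i, n_features) arrays, one per training domain.
    # y_by_domain: list of matching (n_i,) label arrays.
    # Per-domain score: Pearson correlation between each feature and the
    # label within that domain (a hypothetical choice of statistic).
    per_domain = []
    for X, y in zip(X_by_domain, y_by_domain):
        Xc = X - X.mean(axis=0)
        yc = y - y.mean()
        denom = Xc.std(axis=0) * yc.std() * len(y)
        per_domain.append((Xc * yc[:, None]).sum(axis=0) / (denom + 1e-12))
    t = t_statistic_scores(np.vstack(per_domain))
    # Pick the k features with the largest |T|.
    return np.argsort(-np.abs(t))[:k]


# Toy usage: feature 0 drives the label in every domain, so it should be
# selected first.
rng = np.random.default_rng(0)
Xs = [rng.normal(size=(200, 50)) for _ in range(5)]
ys = [(X[:, 0] + 0.1 * rng.normal(size=200) > 0).astype(float) for X in Xs]
print(greedy_t_select(Xs, ys, k=5))

Note that with fixed per-feature scores, greedy selection reduces to ranking by |T|, as above; a faithful greedy procedure would presumably re-score the remaining features after each selection, conditioning on the features already chosen.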
