Domain Adaptation: Learning Bounds and Algorithms
Computer Science – Learning
Scientific paper
2009-02-19
12 pages, 4 figures
This paper addresses the general problem of domain adaptation, which arises in a variety of applications where the distribution of the available labeled sample somewhat differs from that of the test data. Building on previous work by Ben-David et al. (2007), we introduce a novel distance between distributions, the discrepancy distance, that is tailored to adaptation problems with arbitrary loss functions. We give Rademacher complexity bounds for estimating the discrepancy distance from finite samples for different loss functions. Using this distance, we derive novel generalization bounds for domain adaptation for a wide family of loss functions. We also present a series of novel adaptation bounds, based on the empirical discrepancy, for large classes of regularization-based algorithms, including support vector machines and kernel ridge regression. This motivates our analysis of the problem of minimizing the empirical discrepancy for various loss functions, for which we also give novel algorithms. We report the results of preliminary experiments that demonstrate the benefits of our discrepancy minimization algorithms for domain adaptation.
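For context, the abstract does not spell out the quantity it introduces; the following is a sketch of the standard formulation of the discrepancy distance, with notation reconstructed rather than quoted from the paper. Given a hypothesis set H, a loss function L, and two distributions P and Q over the input space, it measures the largest gap in expected loss that any pair of hypotheses can exhibit between the two distributions:

\[
  \mathrm{disc}_L(P, Q) \;=\; \max_{h, h' \in H}
    \bigl|\, \mathcal{L}_P(h', h) - \mathcal{L}_Q(h', h) \,\bigr|,
  \qquad
  \mathcal{L}_D(h', h) \;=\; \mathbb{E}_{x \sim D}\bigl[L\bigl(h'(x), h(x)\bigr)\bigr].
\]

To illustrate what estimating the empirical discrepancy involves, here is a minimal Python sketch (our own illustration, not the paper's algorithm; the function name is hypothetical) that computes it exactly in the simple case of the 0-1 loss and one-dimensional threshold classifiers:

import numpy as np

def empirical_discrepancy_thresholds(sample_p, sample_q):
    """Empirical discrepancy for 0-1 loss over 1-D thresholds
    h_t(x) = 1[x >= t].  Two thresholds t <= t' disagree exactly on
    the interval [t, t'), so the discrepancy equals the largest
    difference in empirical mass the two samples assign to any such
    interval."""
    # Candidate cut points: all observed values plus +/- infinity,
    # so that the constant hypotheses (t = +/- inf) are covered.
    cuts = np.unique(np.concatenate([sample_p, sample_q, [-np.inf, np.inf]]))
    # Empirical CDF of each sample evaluated just below each cut.
    cdf_p = np.searchsorted(np.sort(sample_p), cuts, side="left") / len(sample_p)
    cdf_q = np.searchsorted(np.sort(sample_q), cuts, side="left") / len(sample_q)
    diff = cdf_p - cdf_q
    # The mass of [t, t') is CDF(t') - CDF(t), so the maximum absolute
    # difference over all intervals is max(diff) - min(diff).
    return float(np.max(diff) - np.min(diff))

For two well-separated samples, e.g. empirical_discrepancy_thresholds(np.random.normal(0, 1, 500), np.random.normal(3, 1, 500)), the value approaches 1; for two samples from the same distribution it tends to 0 as the sample size grows.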
Yishay Mansour
Mehryar Mohri
Afshin Rostamizadeh