Multi-Task Output Space Regularization
Scientific paper
Statistics – Machine Learning
2011-07-21
Note: minor changes made; added the vector case and fixed up proofs in the appendices.
We investigate multi-task learning from an output space regularization perspective. Most multi-task approaches tie related tasks together by constraining them to share input spaces and function classes. In contrast, we propose a multi-task paradigm, which we call output space regularization, in which the only constraint is that the output spaces of the multiple tasks are related. We focus on a specific instance of output space regularization, multi-task averaging, that is both widely applicable and amenable to analysis. The multi-task averaging estimator improves on the single-task sample average under certain conditions, which we detail. Our analysis shows that for a simple case the optimal similarity depends on the ratio of the task variance to the task differences, but that for more complicated cases the optimal similarity behaves non-linearly. Further, we show that the resulting estimates are a convex combination of the tasks' sample averages. We also discuss the Bayesian viewpoint. Three applications of multi-task output space regularization are presented: multi-task kernel density estimation, multi-task-regularized empirical moment constraints in similarity discriminant analysis, and multi-task local linear regression. Experiments on real data sets show statistically significant gains.
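The abstract's claim that multi-task averaging produces a convex combination of the single-task sample averages can be made concrete with a small sketch. The objective below, a quadratic fit term plus a pairwise-difference penalty weighted by a task-similarity matrix A, is an assumed form consistent with the abstract; the function name, the regularization weight gamma, and the example data are illustrative, not the paper's exact formulation.

```python
# Hedged sketch of multi-task averaging: jointly estimate T task means by
# shrinking the single-task sample averages toward each other according to
# a task-similarity matrix A. Assumed objective (not necessarily the
# paper's exact one):
#
#   min_mu  sum_t (ybar_t - mu_t)^2 / s2_t
#           + (gamma / (2T)) * sum_{r,s} A[r, s] * (mu_r - mu_s)^2
#
import numpy as np

def multi_task_average(samples, A, gamma=1.0):
    """samples: list of T 1-D arrays; A: (T, T) symmetric similarity matrix."""
    T = len(samples)
    ybar = np.array([y.mean() for y in samples])              # single-task sample averages
    s2 = np.array([y.var(ddof=1) / len(y) for y in samples])  # variance of each average
    L = np.diag(A.sum(axis=1)) - A                            # graph Laplacian of A
    # Closed-form minimizer: mu = (I + (gamma/T) * diag(s2) @ L)^{-1} @ ybar.
    M = np.eye(T) + (gamma / T) * np.diag(s2) @ L
    return np.linalg.solve(M, ybar)

rng = np.random.default_rng(0)
samples = [rng.normal(m, 1.0, size=30) for m in (0.0, 0.1, 0.2)]
A = np.ones((3, 3)) - np.eye(3)   # all task pairs equally similar (an assumption)
print(multi_task_average(samples, A, gamma=1.0))
```

Because the Laplacian has zero row sums, the smoothing matrix maps the all-ones vector to itself, so its rows sum to one; that is the property behind the convex-combination claim in the abstract.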
Luca Cazzanti
Sergey Feldman
Bela A. Frigyik
Maya R. Gupta
Peter Sadowski