When is there a representer theorem? Vector versus matrix regularizers
Scientific paper
Computer Science – Learning
2008-09-09
22 pages, 2 figures
We consider a general class of regularization methods which learn a vector of parameters on the basis of linear measurements. It is well known that if the regularizer is a nondecreasing function of the inner product of the vector with itself, then the learned vector is a linear combination of the input data. This result, known as the {\em representer theorem}, is at the basis of kernel-based methods in machine learning. In this paper, we prove the necessity of the above condition, thereby completing the characterization of kernel methods based on regularization. We further extend our analysis to regularization methods which learn a matrix, a problem motivated by the application to multi-task learning. In this context, we study a more general representer theorem, which holds for a larger class of regularizers. We provide a necessary and sufficient condition for this class of matrix regularizers and illustrate it with some concrete examples of practical importance. Our analysis uses basic principles from matrix theory, in particular the notion of a matrix nondecreasing function.
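As an illustrative aside (not part of the paper), the vector setting can be written as minimizing an error in the linear measurements $\langle w, x_1\rangle, \dots, \langle w, x_m\rangle$ plus a regularizer $h(\langle w, w\rangle)$ with $h$ nondecreasing; the representer theorem then asserts $w = \sum_{i=1}^m c_i x_i$. The NumPy sketch below checks this numerically for ridge regression, the special case $h(t) = \lambda t$. All variable names are ours, chosen for illustration, not taken from the paper.

import numpy as np

# Sketch: verify the vector representer theorem for ridge regression,
# the special case where the regularizer h(<w, w>) is lam * <w, w>
# with lam > 0 (h nondecreasing).
rng = np.random.default_rng(0)
m, d = 5, 20                      # few measurements, high dimension
X = rng.standard_normal((m, d))   # rows x_1, ..., x_m: the input data
y = rng.standard_normal(m)        # the linear measurements to fit
lam = 0.1

# Primal closed form: w = argmin ||Xw - y||^2 + lam * <w, w>
w = np.linalg.solve(X.T @ X + lam * np.eye(d), X.T @ y)

# Representer theorem: w is a linear combination of the input data,
# i.e. w = X.T @ c for some coefficients c (the kernel/dual form).
c = np.linalg.solve(X @ X.T + lam * np.eye(m), y)
print(np.allclose(w, X.T @ c))    # True: w lies in the span of the data

The dual step only involves the $m \times m$ Gram matrix $X X^\top$, which is exactly what makes kernel-based methods possible: the data enter only through inner products.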
Andreas Argyriou, Charles Micchelli, Massimiliano Pontil