New Generalization Bounds for Learning Kernels

Computer Science – Artificial Intelligence

Scientific paper


This paper presents several novel generalization bounds for the problem of learning kernels, based on an analysis of the Rademacher complexity of the corresponding hypothesis sets. Our bound for learning kernels with a convex combination of p base kernels grows only logarithmically with the number of base kernels p, which is considerably more favorable than the previous best bound for the same problem. We also give a novel bound for learning with a linear combination of p base kernels under an L_2 regularization, whose dependency on p is only p^{1/4}.
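The hypothesis sets analyzed in the abstract are built from a combined kernel K = sum_k mu_k K_k over p base kernels, with the weights mu_k constrained (e.g., nonnegative and summing to one for the convex-combination case). The following is a minimal sketch of forming such a convex combination of Gaussian base kernels; the function names, the choice of RBF base kernels, and the width parameters are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def gaussian_kernel(X, gamma):
    # Gram matrix of a Gaussian (RBF) base kernel with width parameter gamma
    # (one illustrative choice of base kernel; the bounds apply to any PDS kernels).
    sq = np.sum(X**2, axis=1)
    d2 = sq[:, None] + sq[None, :] - 2.0 * X @ X.T
    return np.exp(-gamma * d2)

def combined_kernel(X, gammas, mu):
    # Convex combination K = sum_k mu_k K_k with mu_k >= 0 and sum_k mu_k = 1,
    # the parameterization underlying the convex-combination hypothesis set.
    mu = np.asarray(mu, dtype=float)
    assert np.all(mu >= 0) and np.isclose(mu.sum(), 1.0)
    return sum(m * gaussian_kernel(X, g) for m, g in zip(mu, gammas))

rng = np.random.default_rng(0)
X = rng.normal(size=(5, 3))
# p = 3 base kernels; in learning-kernel algorithms mu itself is optimized.
K = combined_kernel(X, gammas=[0.1, 1.0, 10.0], mu=[0.5, 0.3, 0.2])
# K is symmetric positive semidefinite, being a convex combination of PSD Gram matrices.
```

A convex combination of positive semidefinite Gram matrices is again positive semidefinite, so K is itself a valid kernel matrix; learning-kernel methods optimize the weights mu jointly with the predictor.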
