Non-Sparse Regularization for Multiple Kernel Learning

Computer Science – Learning

Scientific paper

Learning linear combinations of multiple kernels is an appealing strategy when the right choice of features is unknown. Previous approaches to multiple kernel learning (MKL) promote sparse kernel combinations to support interpretability and scalability. Unfortunately, this $\ell_1$-norm MKL is rarely observed to outperform trivial baselines in practical applications. To allow for robust kernel mixtures, we generalize MKL to arbitrary norms. We derive new insights on the connection between several existing MKL formulations and develop two efficient interleaved optimization strategies for arbitrary norms, such as $\ell_p$-norms with p > 1. Empirically, we demonstrate that the interleaved optimization strategies are much faster than the commonly used wrapper approaches. A theoretical analysis and an experiment on controlled artificial data shed light on the appropriateness of sparse, non-sparse, and $\ell_\infty$-norm MKL in various scenarios. Empirical applications of $\ell_p$-norm MKL to three real-world problems from computational biology show that non-sparse MKL achieves accuracies beyond the state of the art.
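For intuition, here is a minimal sketch of a wrapper-style $\ell_p$-norm MKL loop, assuming precomputed kernel matrices and labels in {-1, +1}. The function name lp_mkl and the alternating SVM/weight-update scheme are illustrative assumptions; they mimic the simple wrapper baseline the abstract mentions, not the paper's interleaved solver, which updates the kernel weights inside the SVM optimization itself.

```python
# Sketch of wrapper-style lp-norm MKL (illustrative, not the paper's interleaved solver).
# Assumes: kernels = list of n x n Gram matrices, y = labels in {-1, +1}.
import numpy as np
from sklearn.svm import SVC

def lp_mkl(kernels, y, p=2.0, C=1.0, n_iter=20):
    """Alternate between (1) an SVM trained on the weighted kernel sum and
    (2) an analytic update of the kernel weights beta with ||beta||_p = 1."""
    M = len(kernels)
    beta = np.full(M, M ** (-1.0 / p))            # uniform start, ||beta||_p = 1
    for _ in range(n_iter):
        K = sum(b * Km for b, Km in zip(beta, kernels))
        svm = SVC(C=C, kernel="precomputed").fit(K, y)
        alpha = np.zeros(len(y))
        alpha[svm.support_] = np.abs(svm.dual_coef_.ravel())
        ay = alpha * y
        # squared block norms: ||w_m||^2 = beta_m^2 * (alpha*y)^T K_m (alpha*y)
        w2 = np.array([b**2 * ay @ Km @ ay for b, Km in zip(beta, kernels)])
        beta = w2 ** (1.0 / (p + 1))              # closed-form lp-norm update
        beta /= np.linalg.norm(beta, ord=p)       # renormalize to ||beta||_p = 1
    return beta, svm
```

Large p approaches the $\ell_\infty$-norm (nearly uniform weights), while p close to 1 recovers sparse kernel selection, which is the trade-off the abstract studies.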
