Exploring Large Feature Spaces with Hierarchical Multiple Kernel Learning

Computer Science – Learning

Scientific paper

Details

For supervised and unsupervised learning, positive definite kernels make it possible to work with large, potentially infinite-dimensional feature spaces at a computational cost that depends only on the number of observations. This is usually done by penalizing predictor functions with Euclidean or Hilbertian norms. In this paper, we explore penalization by sparsity-inducing norms such as the l1-norm or the block l1-norm. We assume that the kernel decomposes into a large sum of individual basis kernels that can be embedded in a directed acyclic graph; we show that it is then possible to perform kernel selection through a hierarchical multiple kernel learning framework, in polynomial time in the number of selected kernels. This framework applies naturally to nonlinear variable selection; our extensive simulations on synthetic datasets and datasets from the UCI repository show that efficiently exploring the large feature space through sparsity-inducing norms leads to state-of-the-art predictive performance.
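
For illustration only, and not the algorithm of this paper: the sketch below shows how a block l1-norm (group-lasso) penalty performs nonlinear variable selection when each input variable is expanded into its own block of nonlinear features, which is the role the individual basis kernels play in the abstract above. The polynomial feature expansion, the proximal-gradient solver, the toy data, and all names are assumptions made for the example.

# A minimal sketch (not the paper's hierarchical MKL algorithm): block l1-norm
# (group-lasso) penalization for nonlinear variable selection. Each input
# variable is expanded into a small block of nonlinear features (low-degree
# polynomials standing in for a basis kernel's feature map); a proximal
# gradient loop then shrinks whole blocks to exactly zero, selecting variables.
# The data, penalty level, and expansion degree are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)
n, p, degree = 200, 6, 3             # observations, variables, expansion degree

X = rng.standard_normal((n, p))
y = np.sin(X[:, 0]) + X[:, 1] ** 2 + 0.1 * rng.standard_normal(n)  # depends on 2 variables

# Explicit nonlinear feature blocks, one block per input variable.
blocks = [np.column_stack([X[:, j] ** d for d in range(1, degree + 1)]) for j in range(p)]
Phi = np.hstack(blocks)               # n x (p * degree) design matrix
Phi = (Phi - Phi.mean(0)) / Phi.std(0)
group = np.repeat(np.arange(p), degree)

lam = 20.0                            # strength of the block l1 penalty
L = np.linalg.norm(Phi, 2) ** 2       # Lipschitz constant of the squared-loss gradient
w = np.zeros(Phi.shape[1])

for _ in range(500):                  # proximal gradient (ISTA) iterations
    grad = Phi.T @ (Phi @ w - y)
    z = w - grad / L
    for j in range(p):                # block soft-thresholding: prox of the group norm
        idx = group == j
        norm_j = np.linalg.norm(z[idx])
        w[idx] = 0.0 if norm_j <= lam / L else (1 - lam / (L * norm_j)) * z[idx]

selected = [j for j in range(p) if np.linalg.norm(w[group == j]) > 1e-8]
print("selected variables:", selected)   # typically recovers variables 0 and 1

Blocks whose coefficients are driven exactly to zero correspond to discarded variables (basis kernels); the hierarchical framework described in the abstract achieves an analogous selection over a directed acyclic graph of kernels without ever forming the full feature space explicitly.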
