Learning Kernel-Based Halfspaces with the Zero-One Loss

Computer Science – Learning

Scientific paper


Details

This is a full version of the paper appearing in the 23rd International Conference on Learning Theory (COLT 2010). Compared to


We describe and analyze a new algorithm for agnostically learning kernel-based halfspaces with respect to the \emph{zero-one} loss function. Unlike most previous formulations, which rely on surrogate convex loss functions (e.g., hinge loss in SVMs and log-loss in logistic regression), we provide finite time/sample guarantees with respect to the more natural zero-one loss function. The proposed algorithm can learn kernel-based halfspaces in worst-case time $\mathrm{poly}(\exp(L\log(L/\epsilon)))$, for \emph{any} distribution, where $L$ is a Lipschitz constant (which can be thought of as the reciprocal of the margin), and the error of the learned classifier exceeds that of the optimal halfspace by at most $\epsilon$. We also prove a hardness result, showing that under a certain cryptographic assumption, no algorithm can learn kernel-based halfspaces in time polynomial in $L$.
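The contrast between the zero-one loss and a convex surrogate is the heart of the result, and a small numerical example may make it concrete. The following is a minimal Python sketch, \emph{not} the paper's algorithm: it evaluates a kernel-based halfspace $h(x) = \mathrm{sign}(\sum_i \alpha_i k(x_i, x))$ under both losses on toy data. The RBF kernel, the synthetic data, and the coefficient vector $\alpha$ are all illustrative assumptions.

    import numpy as np

    # A minimal illustration (not the paper's algorithm): comparing the
    # zero-one loss with the hinge surrogate for a kernel-based halfspace
    # h(x) = sign(sum_i alpha_i * k(x_i, x)). The RBF kernel, the toy data,
    # and the coefficients alpha are all illustrative assumptions.

    def rbf_kernel(A, B, gamma=1.0):
        """Gaussian RBF kernel matrix between the rows of A and the rows of B."""
        sq_dists = ((A[:, None, :] - B[None, :, :]) ** 2).sum(axis=-1)
        return np.exp(-gamma * sq_dists)

    def zero_one_loss(y, margins):
        """The loss the paper targets directly: the fraction of sign errors."""
        return np.mean(np.sign(margins) != y)

    def hinge_loss(y, margins):
        """The convex surrogate used by SVMs, shown for contrast."""
        return np.mean(np.maximum(0.0, 1.0 - y * margins))

    rng = np.random.default_rng(0)
    X = rng.normal(size=(200, 2))
    y = np.sign(X[:, 0] + 0.3 * rng.normal(size=200))  # noisy labels: the agnostic setting
    alpha = 0.1 * rng.normal(size=200)                  # an arbitrary candidate classifier

    margins = rbf_kernel(X, X) @ alpha
    print("zero-one loss:", zero_one_loss(y, margins))
    print("hinge loss   :", hinge_loss(y, margins))

The hinge loss is convex in $\alpha$ and therefore easy to minimize, but it can differ substantially from the zero-one error it stands in for. The paper's guarantee is stated directly for the zero-one loss, of the form $\mathrm{err}(\hat{h}) \le \min_h \mathrm{err}(h) + \epsilon$, at the price of the $\exp(L\log(L/\epsilon))$ dependence on the Lipschitz constant $L$.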

