Approximate Stochastic Subgradient Estimation Training for Support Vector Machines

Computer Science – Learning

Scientific paper


Details

An extended version of the ICPRAM 2012 paper

Subgradient algorithms for training support vector machines have been quite successful at solving large-scale and online learning problems. However, they have been restricted to linear kernels and strongly convex formulations. This paper describes efficient subgradient approaches without such limitations. Our approaches make use of randomized low-dimensional approximations to nonlinear kernels and minimize a reduced primal formulation using an algorithm based on robust stochastic approximation, which does not require strong convexity. Experiments illustrate that our approaches produce solutions with prediction accuracy comparable to that of solutions obtained from existing SVM solvers, but often in much less time. We also suggest efficient prediction schemes whose cost depends only on the dimension of the kernel approximation, not on the number of support vectors.
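The abstract names three ingredients: a randomized low-dimensional kernel approximation, stochastic subgradient minimization of the resulting reduced primal without a strong-convexity requirement, and prediction whose cost depends only on the approximation dimension. The sketch below is an illustrative reconstruction, not the paper's implementation: it assumes random Fourier features (Rahimi and Recht's method, one standard randomized approximation of the Gaussian kernel) and a hinge-loss subgradient method with O(1/sqrt(t)) step sizes and iterate averaging in the spirit of robust stochastic approximation. All function names and parameter values here are assumptions.

```python
import numpy as np


def random_fourier_features(X, D, gamma, rng):
    """Map X (n x d) into D random Fourier features approximating the
    Gaussian kernel k(x, y) = exp(-gamma * ||x - y||^2)."""
    d = X.shape[1]
    W = rng.normal(scale=np.sqrt(2.0 * gamma), size=(d, D))
    b = rng.uniform(0.0, 2.0 * np.pi, size=D)
    return np.sqrt(2.0 / D) * np.cos(X @ W + b), (W, b)


def train_reduced_primal(Z, y, lam=1e-4, epochs=5, rng=None):
    """Stochastic subgradient descent on the reduced primal
        lam/2 * ||w||^2 + (1/n) * sum_i max(0, 1 - y_i <w, z_i>)
    with O(1/sqrt(t)) steps and iterate averaging, as in robust
    stochastic approximation; the averaged iterate converges without
    a strong-convexity assumption on the objective."""
    rng = rng or np.random.default_rng(0)
    n, D = Z.shape
    w, w_avg, t = np.zeros(D), np.zeros(D), 0
    for _ in range(epochs):
        for i in rng.permutation(n):
            t += 1
            eta = 0.1 / np.sqrt(t)           # step size (constant factor assumed)
            g = lam * w
            if y[i] * (Z[i] @ w) < 1.0:      # hinge subgradient is active
                g = g - y[i] * Z[i]
            w -= eta * g
            w_avg += (w - w_avg) / t         # running average of iterates
    return w_avg


def predict(X_new, feature_params, w, D):
    """Prediction uses only the D-dimensional feature map, so its cost
    depends on D, not on a support-vector count."""
    W, b = feature_params
    Z_new = np.sqrt(2.0 / D) * np.cos(X_new @ W + b)
    return np.sign(Z_new @ w)


# Toy usage on synthetic data.
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 10))
y = np.where(X[:, 0] + X[:, 1] > 0, 1.0, -1.0)
Z, params = random_fourier_features(X, D=200, gamma=0.5, rng=rng)
w = train_reduced_primal(Z, y, rng=rng)
print("training accuracy:", (predict(X, params, w, D=200) == y).mean())
```

Note that predict touches only the D-dimensional feature map, so its cost is independent of how many training points would be support vectors in a standard kernel SVM; this is the property the abstract's last sentence refers to.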
