Rapid Learning with Stochastic Focus of Attention

Computer Science – Learning

Scientific paper


We present a method to stop the evaluation of a decision-making process when the result of the full evaluation is obvious. This trait is highly desirable for online margin-based machine learning algorithms, where a classifier traditionally evaluates all the features of every example. We observe that some examples are easier to classify than others, a phenomenon characterized by the event in which most of the features agree on the class of an example. By stopping the feature evaluation upon encountering an easy-to-classify example, the learning algorithm can achieve substantial gains in computation. Our method provides a natural attention mechanism for learning algorithms. By modifying Pegasos, a margin-based online learning algorithm, to include our attentive method, we lower the number of features evaluated per example from $n$ to $O(\sqrt{n})$ on average, without loss in prediction accuracy. We demonstrate the effectiveness of Attentive Pegasos on MNIST data.
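The core idea of early stopping on easy examples can be sketched as follows. This is an illustrative sketch, not the paper's exact procedure: the function name, the stopping rule (comparing the rescaled partial margin against a $\sqrt{k}$ noise term), and the constants are all assumptions for the sake of a runnable example.

```python
import math
import random

def attentive_margin(w, x, threshold=2.0, min_features=10, seed=None):
    """Estimate the sign of <w, x> by evaluating features in random order,
    stopping early once the running partial sum is decisively signed.

    Hypothetical sketch of the early-stopping idea; the paper's actual
    stopping criterion and guarantees differ in detail.
    """
    n = len(w)
    order = list(range(n))
    random.Random(seed).shuffle(order)  # stochastic focus: random feature order
    partial = 0.0
    for k, i in enumerate(order, start=1):
        partial += w[i] * x[i]
        # Assumed confidence test: extrapolate the partial margin to all n
        # features and require it to clear a sqrt(k)-scaled noise threshold.
        if k >= min_features and abs(partial) * n / k > threshold * math.sqrt(k):
            return (1.0 if partial > 0 else -1.0), k  # early decision after k features
    return (1.0 if partial > 0 else -1.0), n  # hard example: full evaluation
```

On an easy example (most features agreeing on the sign), the loop halts after a small number of features; on a hard example near the margin, it falls through to the full evaluation, so computation concentrates where the decision is genuinely uncertain.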

