The Generalization Ability of Online Algorithms for Dependent Data

Statistics – Machine Learning

Scientific paper

Details

25 pages, 1 figure

We study the generalization performance of arbitrary online learning algorithms trained on samples drawn from a dependent source of data. We show that the generalization error of any stable online algorithm concentrates around its regret, an easily computable statistic of the algorithm's online performance, when the underlying ergodic process is $\beta$- or $\phi$-mixing. We prove high-probability error bounds when the loss function is convex, and we establish sharp convergence rates and deviation bounds for strongly convex losses and for several linear prediction problems, such as linear and logistic regression, least-squares SVM, and boosting on dependent data. In addition, our results apply directly to stochastic optimization with dependent data, and our analysis requires only martingale convergence arguments; we need not rely on more powerful statistical tools such as empirical process theory.
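To make the regret statistic concrete, the display below (illustrative notation, not taken from the paper itself) gives the standard definition of regret for an online algorithm producing hypotheses $h_1, \ldots, h_n$ on an observed sample $z_1, \ldots, z_n$ with loss $\ell$ over a hypothesis class $\mathcal{H}$, together with the schematic form of the online-to-batch bounds described above:

\[
R_n \;=\; \sum_{t=1}^{n} \ell(h_t, z_t) \;-\; \min_{h \in \mathcal{H}} \sum_{t=1}^{n} \ell(h, z_t),
\qquad
\mathbb{E}\bigl[\ell(\bar h, Z)\bigr] \;\le\; \frac{R_n}{n} + \epsilon(n, \delta),
\]

where $\bar h$ is, for convex losses, the average of the iterates $h_1, \ldots, h_n$, and $\epsilon(n, \delta)$ is a deviation term holding with probability at least $1 - \delta$, whose size depends on the $\beta$- or $\phi$-mixing coefficients of the data-generating process.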


