Prediction with Expert Advice by Following the Perturbed Leader for General Weights
Computer Science – Learning
Scientific paper
2004-05-12
Proc. 15th International Conf. on Algorithmic Learning Theory (ALT-2004), pages 279-293
16 LaTeX pages
When applying aggregating strategies to Prediction with Expert Advice, the learning rate must be adaptively tuned. The natural choice of sqrt(complexity/current loss) renders the analysis of Weighted Majority derivatives quite complicated. In particular, no results have been proven so far for arbitrary weights. The analysis of the alternative "Follow the Perturbed Leader" (FPL) algorithm from Kalai (2003) (based on Hannan's algorithm) is easier. We derive loss bounds for adaptive learning rate and both finite expert classes with uniform weights and countable expert classes with arbitrary weights. For the former setup, our loss bounds match the best known results so far, while for the latter our results are (to our knowledge) new.
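To illustrate the idea behind FPL, the following is a minimal sketch of one prediction round, not the paper's exact algorithm: each expert's cumulative loss is offset by its complexity k_i = -ln w_i minus an exponentially distributed perturbation q_i, scaled by an adaptive learning rate, and the minimizer is followed. The simple rate eta_t ~ sqrt(1/t) used here is a hypothetical stand-in for the sqrt(complexity/current loss) tuning discussed above.

```python
import math
import random

def fpl_choose(cum_losses, weights, t, rng):
    """One round of Follow the Perturbed Leader (illustrative sketch).

    cum_losses[i] -- cumulative loss of expert i before round t
    weights[i]    -- prior weight w_i; complexity is k_i = -ln w_i
    t             -- current round, used for the adaptive rate
    rng           -- a random.Random instance for reproducibility
    """
    # Hypothetical simple adaptive rate; the paper tunes eta_t
    # as sqrt(complexity / current loss).
    eta_t = math.sqrt(1.0 / max(t, 1))
    best_i, best_score = 0, float("inf")
    for i, (loss, w) in enumerate(zip(cum_losses, weights)):
        k_i = -math.log(w)           # complexity penalty of expert i
        q_i = rng.expovariate(1.0)   # exponential perturbation
        score = loss + (k_i - q_i) / eta_t
        if score < best_score:
            best_i, best_score = i, score
    return best_i

# Usage: with one clearly leading expert, FPL follows it most rounds.
rng = random.Random(0)
losses = [10.0, 0.0, 10.0]
weights = [1 / 3, 1 / 3, 1 / 3]
picks = [fpl_choose(losses, weights, 4, rng) for _ in range(50)]
```

The exponential perturbation is what makes the regret analysis tractable compared with Weighted Majority-style aggregation, since the leader changes only when a perturbation outweighs the loss gap.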
Marcus Hutter
Jan Poland