Computer Science – Learning
Scientific paper
2011-05-11
Data-Distributed Weighted Majority and Online Mirror Descent
In this paper, we study the extent to which online learning can benefit from distributed computing. We consider the setting in which $N$ agents learn cooperatively online, where each agent has access only to its own data. We propose a generic data-distributed online learning meta-algorithm, and introduce the Distributed Weighted Majority and Distributed Online Mirror Descent algorithms as special cases. We show, through both theoretical analysis and experiments, that compared to a single agent: given the same computation time, these distributed algorithms achieve smaller generalization error; and given the same generalization error, they can be $N$ times faster.
Alexander Gray
Hua Ouyang
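To make the distributed setting described in the abstract concrete, below is a minimal Python sketch of a data-distributed weighted-majority scheme. It is an illustration only, not the paper's algorithm: the round-robin split of the data stream across agents, the multiplicative penalty `beta`, and the periodic weight-averaging synchronization rule (`sync_every`) are all assumptions made for this sketch, and the paper's actual communication and combination scheme may differ.

```python
import numpy as np

def distributed_weighted_majority(expert_preds, labels, n_agents, beta=0.5, sync_every=10):
    """Hedged sketch: N agents run weighted majority on disjoint shares of the data
    and periodically average their expert weights (assumed synchronization rule).

    expert_preds: (T, K) array of each expert's {0,1} prediction per round
    labels:       (T,) array of true {0,1} labels
    """
    T, K = expert_preds.shape
    weights = np.ones((n_agents, K))          # each agent keeps its own expert weights
    mistakes = 0

    for t in range(T):
        i = t % n_agents                      # assumed round-robin data split: agent i sees round t
        w = weights[i]
        # weighted-majority vote of the experts, as seen by agent i
        vote = (w @ expert_preds[t]) >= (w.sum() / 2)
        mistakes += int(vote != labels[t])
        # multiplicative penalty for experts that erred on this example
        wrong = expert_preds[t] != labels[t]
        weights[i, wrong] *= beta
        # periodic synchronization: agents average their weight vectors (assumption)
        if (t + 1) % sync_every == 0:
            weights[:] = weights.mean(axis=0)

    return mistakes, weights

# toy usage: 3 experts, one of which is always correct
rng = np.random.default_rng(0)
labels = rng.integers(0, 2, size=200)
preds = np.stack([labels,                          # perfect expert
                  rng.integers(0, 2, size=200),    # random expert
                  1 - labels], axis=1)             # adversarial expert
print(distributed_weighted_majority(preds, labels, n_agents=4)[0])
```

In this sketch the agents share work by partitioning the stream, which is what allows the wall-clock speedup the abstract refers to; the synchronization step keeps their expert weights from diverging.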