Fast Distributed Gradient Methods
Computer Science – Information Theory
Scientific paper
2011-12-13
32 pages, journal, submitted Nov 30, 2011
The paper proposes new fast distributed gradient methods and proves convergence to the exact solution at rate O(\log k/k), much faster than the O(1/\sqrt{k}) rate of existing distributed (sub)gradient methods, while incurring practically no additional communication or computation overhead per iteration. We achieve this for convex, coercive, three times differentiable private cost functions with Lipschitz continuous first derivatives, at least one of which is strongly convex. Our work recovers, for distributed optimization, convergence rate gains similar to those obtained by the centralized Nesterov gradient method and the fast iterative shrinkage-thresholding algorithm (FISTA) over ordinary centralized gradient methods. We also present a constant step size distributed fast gradient algorithm for composite non-differentiable costs. A simulation illustrates the effectiveness of our distributed methods.
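The abstract does not spell out the update equations, but the kind of method it describes combines neighborhood averaging (consensus) with a Nesterov-accelerated local gradient step. Below is a minimal numerical sketch of such a consensus-plus-momentum iteration; the ring topology, the Metropolis mixing matrix W, the quadratic private costs f_i(x) = (a_i/2)(x - b_i)^2, the diminishing step size c/(k+1), and the momentum weight k/(k+3) are all illustrative assumptions, not the paper's own specification.

    import numpy as np

    # N agents on a ring; each agent i privately holds f_i(x) = 0.5*a_i*(x - b_i)^2.
    # All parameters below (topology, weights, step size, momentum) are assumptions
    # chosen for illustration, not taken from the paper's abstract.
    N = 10
    rng = np.random.default_rng(0)
    a = rng.uniform(1.0, 2.0, size=N)   # curvatures of the private quadratics
    b = rng.uniform(-1.0, 1.0, size=N)  # minimizers of the private quadratics
    x_star = np.sum(a * b) / np.sum(a)  # exact minimizer of the aggregate cost sum_i f_i

    # Metropolis weights on a ring (each agent communicates with its two neighbors);
    # the resulting W is symmetric and doubly stochastic.
    W = np.zeros((N, N))
    for i in range(N):
        for j in ((i - 1) % N, (i + 1) % N):
            W[i, j] = 1.0 / 3.0
        W[i, i] = 1.0 - W[i].sum()

    def grad(y):
        """Stacked private gradients: grad f_i evaluated at agent i's point y_i."""
        return a * (y - b)

    x = np.zeros(N)       # primal iterates, one scalar per agent
    y = x.copy()          # auxiliary (momentum) iterates
    x_prev = x.copy()
    c = 0.5 / a.max()     # step-size constant (assumed; tied to the Lipschitz constants)

    for k in range(200):
        alpha = c / (k + 1)                        # diminishing step size
        x_prev, x = x, W @ y - alpha * grad(y)     # consensus mixing + local gradient step
        y = x + (k / (k + 3.0)) * (x - x_prev)     # Nesterov-style momentum

    print("max agent error:", np.max(np.abs(x - x_star)))

With a doubly stochastic W, each agent's iterate converges to the minimizer of the aggregate cost sum_i f_i rather than of its private cost alone, so the reported error shrinks toward zero as the iterations proceed.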
Dusan Jakovetic
Jose M. F. Moura
Joao Xavier