Distributed Optimization Based on Gradient Tracking Revisited: Enhancing Convergence Rate via Surrogation
We study distributed multiagent optimization over graphs. We consider the minimization of F+G subject to convex constraints, where F is the smooth, strongly convex sum of the agents' losses and G is a nonsmooth convex function. We build on the SONATA algorithm: it employs surrogate objective functions in the agents' subproblems (thus going beyond linearization, as in proximal-gradient methods), coupled with a perturbed consensus mechanism that locally tracks the gradient of F. SONATA achieves precision ϵ>0 on the objective value in O(κ_g log(1/ϵ)) gradient computations at each node and Õ(κ_g (1−ρ)^(−1/2) log(1/ϵ)) communication steps, where κ_g is the condition number of F and ρ characterizes the connectivity of the network. This is the first linear-rate result for distributed composite optimization; it also improves on existing (nonaccelerated) schemes that minimize F alone, whose rates depend on quantities much larger than κ_g. When the agents' loss functions are similar, due to statistical data similarity or otherwise, SONATA employing high-order surrogates achieves precision ϵ>0 in O((β/μ) log(1/ϵ)) iterations and Õ((β/μ)(1−ρ)^(−1/2) log(1/ϵ)) communication steps, where β measures the degree of similarity of the agents' losses and μ is the strong convexity constant of F. Therefore, when β/μ
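To illustrate the perturbed consensus mechanism the abstract refers to, here is a minimal sketch of gradient tracking on a toy scalar problem. This is an illustrative assumption, not the paper's SONATA method: SONATA solves surrogate subproblems at each agent, while this sketch replaces that step with a plain gradient step, and the problem, step size, and ring network below are all invented for the example.

```python
import numpy as np

# Toy setup: n agents, each with a local quadratic loss f_i(x) = 0.5*(x - b[i])**2,
# so the global minimizer of F = (1/n) * sum_i f_i is the mean of b.
n = 5
rng = np.random.default_rng(0)
b = rng.normal(size=n)
x_star = b.mean()

# Doubly stochastic mixing matrix W for a ring graph (uniform weights).
W = np.zeros((n, n))
for i in range(n):
    W[i, (i - 1) % n] = W[i, (i + 1) % n] = 1 / 3
    W[i, i] = 1 / 3

grad = lambda x: x - b   # stacked local gradients, one entry per agent

x = np.zeros(n)          # local iterates
y = grad(x)              # gradient trackers, initialized to local gradients
alpha = 0.1              # step size (chosen ad hoc for this toy problem)

for _ in range(200):
    x_new = W @ (x - alpha * y)            # consensus + local descent step
    y = W @ y + grad(x_new) - grad(x)      # perturbed consensus: y tracks the
    x = x_new                              # network-average gradient of F

print(np.max(np.abs(x - x_star)))          # small: all agents near the minimizer
```

The key invariant is that the average of the trackers y always equals the average of the local gradients (since W is doubly stochastic), so each agent descends along an estimate of the gradient of F rather than of its own loss only.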
| Barcode | Collection Type | Call Number | Location | Status |
|---|---|---|---|---|
| art142432 | null | Artikel | Gdg9-Lt3 | Available but not for loan (No Loan) |
No other version available.