ADMMLIB: A Library of Communication-Efficient AD-ADMM for Distributed Machine Learning
Conference paper, 2019

Abstract

Alternating direction method of multipliers (ADMM) has recently been identified as a compelling approach for solving large-scale machine learning problems in the cluster setting. To reduce the synchronization overhead in a distributed environment, asynchronous distributed ADMM (AD-ADMM) was proposed. However, due to the high communication overhead of the master-slave architecture, AD-ADMM still does not scale well. To address this challenge, this paper proposes ADMMLIB, a library of AD-ADMM for distributed machine learning that employs a set of network optimization techniques. First, a hierarchical communication architecture is adopted. Second, ring-based allreduce and mixed-precision training are integrated into ADMMLIB to further reduce the inter-node communication cost. Evaluation on a large dataset demonstrates that ADMMLIB achieves a speedup of up to 2x over the original AD-ADMM implementation, while reducing the overall communication cost by 83%.
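
For context, the consensus-form ADMM iteration that AD-ADMM parallelizes can be sketched as follows. This is a minimal, synchronous illustration assuming a simple least-squares objective; the function and variable names are ours and do not reflect ADMMLIB's API. The global averaging step is the communication that the hierarchical architecture and ring-based allreduce described in the paper aim to make cheaper.

import numpy as np

def consensus_admm(local_data, rho=1.0, n_iters=50):
    # Each worker i holds (A_i, b_i) and the group jointly solves
    #   min_x  sum_i 0.5 * ||A_i x - b_i||^2
    # via consensus ADMM: local x_i updates, a global averaging (z) step,
    # and dual (y_i) updates. AD-ADMM performs the global step
    # asynchronously at the master; here it is shown synchronously.
    d = local_data[0][0].shape[1]
    z = np.zeros(d)                          # global consensus variable
    x = [np.zeros(d) for _ in local_data]    # per-worker primal variables
    y = [np.zeros(d) for _ in local_data]    # per-worker (scaled) dual variables

    for _ in range(n_iters):
        # Worker-side update:
        # argmin_x 0.5*||A_i x - b_i||^2 + (rho/2)*||x - z + y_i||^2
        for i, (A, b) in enumerate(local_data):
            x[i] = np.linalg.solve(A.T @ A + rho * np.eye(d),
                                   A.T @ b + rho * (z - y[i]))
        # Master-side update: average the workers' results
        # (realized as an allreduce-style exchange in a distributed setting)
        z = np.mean([x[i] + y[i] for i in range(len(local_data))], axis=0)
        # Dual ascent
        for i in range(len(local_data)):
            y[i] += x[i] - z
    return z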

Dates and versions

hal-03770546, version 1 (06-09-2022)

Identifiers

HAL Id : hal-03770546
DOI : 10.1007/978-3-030-30709-7_27

Cite

Jinyang Xie, Yongmei Lei. ADMMLIB: A Library of Communication-Efficient AD-ADMM for Distributed Machine Learning. 16th IFIP International Conference on Network and Parallel Computing (NPC), Aug 2019, Hohhot, China. pp.322-326, ⟨10.1007/978-3-030-30709-7_27⟩. ⟨hal-03770546⟩