Publication

Elastic distributed training with fast convergence and efficient resource utilization...

by Guojing Cong
Publication Type: Conference Paper
Book Title: 2021 20th IEEE International Conference on Machine Learning and Applications (ICMLA)
Publication Date:
Page Numbers: 1 to 8
Publisher Location: New Jersey, United States of America
Conference Name: International Conference on Machine Learning and Applications (ICMLA)
Conference Location: Remote, California, United States of America
Conference Sponsor: IEEE
Conference Date: -

Distributed learning is now routinely conducted in the cloud as well as on dedicated clusters. Training with elastic resources brings new challenges and design choices. Prior studies focus on runtime performance and assume static algorithmic behavior. In this work, by analyzing the impact of resource scaling on convergence, we introduce schedules for synchronous stochastic gradient descent that proactively adapt the number of learners to reduce training time and improve convergence. Our approach no longer assumes a constant number of processors throughout training. In our experiments, distributed stochastic gradient descent with dynamic schedules and momentum reduction achieves better convergence and significant speedups over prior static schedules. Many distributed training jobs running in the cloud may benefit from our approach.
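To illustrate the idea of a learner-count schedule with momentum adjustment, the sketch below simulates synchronous data-parallel SGD on a toy least-squares problem in plain Python/NumPy. It is not the paper's implementation: the schedule values, the momentum-damping rule, and all names are assumptions made only for illustration.

# Minimal sketch (assumed, not the paper's code): synchronous data-parallel SGD
# on a toy least-squares problem. A hypothetical schedule grows the number of
# learners during training, and momentum is damped whenever learners are added.
import numpy as np

rng = np.random.default_rng(0)
d = 20
X = rng.normal(size=(4096, d))
w_true = rng.normal(size=d)
y = X @ w_true + 0.1 * rng.normal(size=len(X))

def grad(w, idx):
    """Mini-batch gradient of the mean squared error 0.5 * ||X w - y||^2 / batch."""
    xb, yb = X[idx], y[idx]
    return xb.T @ (xb @ w - yb) / len(idx)

# Hypothetical schedule: (epoch at which to switch, number of learners).
schedule = [(0, 2), (5, 4), (10, 8)]

def learners_at(epoch):
    n = schedule[0][1]
    for start, count in schedule:
        if epoch >= start:
            n = count
    return n

w = np.zeros(d)
velocity = np.zeros(d)
lr, base_momentum, batch_size = 0.05, 0.9, 32
prev_learners = learners_at(0)

for epoch in range(15):
    n_learners = learners_at(epoch)
    # Assumed heuristic: damp momentum when the learner count grows, so the
    # stale velocity does not overshoot with the larger effective batch.
    if n_learners > prev_learners:
        momentum = base_momentum * prev_learners / n_learners
    else:
        momentum = base_momentum
    prev_learners = n_learners
    for _ in range(len(X) // (batch_size * n_learners)):
        # Synchronous step: each learner computes a gradient on its own
        # mini-batch; the average stands in for the all-reduced gradient.
        grads = [grad(w, rng.integers(0, len(X), size=batch_size))
                 for _ in range(n_learners)]
        g = np.mean(grads, axis=0)
        velocity = momentum * velocity + g
        w -= lr * velocity
    loss = 0.5 * np.mean((X @ w - y) ** 2)
    print(f"epoch {epoch:2d}  learners {n_learners}  momentum {momentum:.2f}  loss {loss:.4f}")

In this toy setting, adding learners mid-training enlarges the effective batch per synchronous step; the schedule and damping rule above merely stand in for the convergence-aware schedules the paper derives.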