
Anderson Acceleration for Distributed Training of Deep Learning Models...

by Massimiliano Lupo Pasini, Junqi Yin, Viktor Reshniak, Miroslav K. Stoyanov
Publication Type: Conference Paper
Book Title: SoutheastCon 2022
Publication Date: -
Page Numbers: 289–295
Publisher Location: United States of America
Conference Name: IEEE SoutheastCon 2022
Conference Location: Mobile, Alabama, United States of America
Conference Sponsor: IEEE
Conference Date: -

Anderson acceleration (AA) is an extrapolation technique that has recently gained interest in the deep learning (DL) community as a way to speed up the sequential training of DL models. However, when performed at large scale, DL training is exposed to a higher risk of getting trapped in steep local minima of the training loss function, and standard AA does not provide sufficient acceleration to escape from them. This results in poor generalizability and renders AA ineffective. To restore AA's ability to speed up the training of DL models on large-scale computing platforms, we combine AA with an adaptive moving average procedure that helps the training escape from steep local minima. By monitoring the relative standard deviation between consecutive iterations, we also introduce a criterion that automatically assesses whether the moving average is needed. We applied the method to the following DL instantiations for image classification: (i) ResNet50 trained on the open-source CIFAR100 dataset and (ii) ResNet50 trained on the open-source ImageNet1k dataset. Numerical results obtained using up to 1,536 NVIDIA V100 GPUs on the OLCF supercomputer Summit showed the stabilizing effect of the moving average on AA for all the problems above.
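
To make the two ingredients of the abstract concrete, the Python sketch below pairs a standard (Type-II) Anderson acceleration update with a moving average that is switched on only when the relative standard deviation of recent iterates exceeds a threshold. This is an illustrative sketch, not the authors' implementation: it operates on a generic fixed-point map g rather than a distributed DL training loop, and the names depth, window, and rsd_tol are assumptions introduced for the example.

```python
# Illustrative sketch (not the paper's code): Anderson acceleration on a
# generic fixed-point map g(x), stabilized by a moving average that is
# activated when recent iterates fluctuate strongly.
import numpy as np

def anderson_accelerated_iteration(g, x0, depth=5, window=5, rsd_tol=0.1,
                                   max_iter=100, tol=1e-8):
    """Type-II Anderson acceleration with an adaptive moving average."""
    x = np.asarray(x0, dtype=float)
    X_hist, F_hist = [], []          # recent iterates and residuals used by AA
    recent = [x.copy()]              # window of iterates for the moving average

    for _ in range(max_iter):
        fx = g(x) - x                # fixed-point residual f(x) = g(x) - x
        if np.linalg.norm(fx) < tol:
            break
        X_hist.append(x.copy())
        F_hist.append(fx.copy())
        if len(X_hist) > depth:      # keep only the last `depth` pairs
            X_hist.pop(0)
            F_hist.pop(0)

        m = len(F_hist)
        if m == 1:
            x_new = x + fx           # plain fixed-point step on the first pass
        else:
            # Least-squares AA step: minimize || dF @ gamma - f(x) || over gamma,
            # using differences of consecutive residuals and iterates.
            dF = np.column_stack([F_hist[i + 1] - F_hist[i] for i in range(m - 1)])
            dX = np.column_stack([X_hist[i + 1] - X_hist[i] for i in range(m - 1)])
            gamma, *_ = np.linalg.lstsq(dF, fx, rcond=None)
            x_new = x + fx - (dX + dF) @ gamma

        # Adaptive moving average: if the relative standard deviation of the
        # last few iterates exceeds rsd_tol, replace the accelerated iterate
        # with the window average to damp the oscillation.
        recent.append(x_new.copy())
        if len(recent) > window:
            recent.pop(0)
        stacked = np.stack(recent)
        rsd = (np.linalg.norm(stacked.std(axis=0))
               / (np.linalg.norm(stacked.mean(axis=0)) + 1e-12))
        x = stacked.mean(axis=0) if rsd > rsd_tol else x_new

    return x
```

In this sketch the rsd_tol threshold plays the role of the automatic criterion described in the abstract: when consecutive iterates are stable the accelerated step is kept unchanged, and the averaging only intervenes when the iteration starts to oscillate.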