Publication

Strategies to Deploy and Scale Deep Learning on the Summit Supercomputer

Publication Type
Conference Paper
Book Title
2019 IEEE/ACM Third Workshop on Deep Learning on Supercomputers (DLS)
Publication Date
Page Numbers
84–94
Publisher Location
New York, United States of America
Conference Name
Third Workshop on Deep Learning on Supercomputers (DLS), SC19
Conference Location
Denver, Colorado, United States of America
Conference Sponsor
IEEE TCHPC & ACM
Conference Date
-

The rapid growth and wide applicability of Deep Learning (DL) frameworks pose challenges for computing centers, which must deploy and support the software, and for domain scientists, who must keep up with the system environment and scale up scientific exploration through DL. We offer recommendations for deploying and scaling DL frameworks on the Summit supercomputer, currently at the top of the Top500 list, at the Oak Ridge National Laboratory Leadership Computing Facility (OLCF). We discuss DL software deployment in the form of containers and compare the performance of natively built frameworks against containerized deployments. Software containers show no noticeable negative performance impact, exhibit faster Python loading times, and promise easier maintenance. To explore strategies for scaling up DL model training campaigns, we assess DL compute kernel performance, discuss and recommend I/O data formats and staging, and identify the communication needs of scalable message exchange for DL runs at scale. As best practice, we recommend that users take a step-wise tuning approach, beginning with algorithmic kernel choice, then node I/O configuration, and finally communications tuning. We present a baseline example with a scaling efficiency of 87% for a ResNet50 training run on 1024 nodes (6144 V100 GPUs).
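
The abstract's claim about faster Python loading times from containers can be checked with a simple timing of the framework import. The micro-benchmark below is not from the paper; the framework name is an assumption and would be swapped for whatever stack is deployed.

```python
# Hypothetical micro-benchmark of framework import ("Python loading") time.
# The abstract reports faster loading from containers but gives no script;
# the choice of "tensorflow" here is an assumption for illustration only.
import importlib
import time

start = time.perf_counter()
importlib.import_module("tensorflow")  # or "torch", depending on the deployed stack
elapsed = time.perf_counter() - start
print(f"framework import took {elapsed:.2f} s")
```

Run once inside the container and once against the native build to compare load times on the shared parallel file system versus the container image.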
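
The abstract does not name the exact software stack used for the 1024-node ResNet50 run; the sketch below assumes TensorFlow/Keras with Horovod-style allreduce, one common way to express such a data-parallel run, purely as an illustration of the communication pattern the abstract refers to.

```python
# Minimal data-parallel ResNet50 sketch (TensorFlow/Keras + Horovod), assumed stack.
import tensorflow as tf
import horovod.tensorflow.keras as hvd

hvd.init()

# Pin each rank to one local GPU (Summit nodes have 6 V100s, i.e. 6 ranks per node).
gpus = tf.config.experimental.list_physical_devices("GPU")
if gpus:
    tf.config.experimental.set_visible_devices(gpus[hvd.local_rank()], "GPU")

model = tf.keras.applications.ResNet50(weights=None)

# Scale the learning rate by the worker count, a common data-parallel heuristic.
opt = tf.keras.optimizers.SGD(learning_rate=0.1 * hvd.size(), momentum=0.9)
opt = hvd.DistributedOptimizer(opt)  # wraps the gradient allreduce

model.compile(optimizer=opt,
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

# Hypothetical synthetic input pipeline; a real run would stage ImageNet-style
# data to node-local storage, as the abstract's I/O recommendations suggest.
dataset = tf.data.Dataset.from_tensor_slices(
    (tf.random.uniform([32, 224, 224, 3]),
     tf.random.uniform([32], maxval=1000, dtype=tf.int64))
).batch(8).repeat()

callbacks = [hvd.callbacks.BroadcastGlobalVariablesCallback(0)]  # sync initial weights
model.fit(dataset, steps_per_epoch=10, epochs=1, callbacks=callbacks,
          verbose=1 if hvd.rank() == 0 else 0)
```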
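
For context, the reported scaling efficiency compares measured throughput at scale against ideal linear scaling of a single-node baseline. The per-node throughput value below is hypothetical; only the 87% figure and the 1024-node / 6144-GPU configuration come from the abstract.

```python
def scaling_efficiency(throughput_at_n_nodes: float,
                       throughput_single_node: float,
                       n_nodes: int) -> float:
    """Measured throughput divided by ideal linear scaling of the 1-node baseline."""
    return throughput_at_n_nodes / (n_nodes * throughput_single_node)

# Hypothetical numbers chosen to reproduce the reported 87% on 1024 nodes (6144 V100s).
single_node = 2200.0                      # images/s on one node (hypothetical)
measured = 0.87 * 1024 * single_node      # images/s observed at 1024 nodes (implied)
print(f"{scaling_efficiency(measured, single_node, 1024):.0%}")  # -> 87%
```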