
Efficient Distributed Sequence Parallelism for Transformer-Based Image Segmentation

Publication Type
Conference Paper
Journal Name
Electronic Imaging
Volume
36
Issue
12
Conference Name
Electronic Imaging 2024
Conference Location
San Francisco, California, United States of America
Conference Sponsor
IS&T

We introduce an efficient distributed sequence parallel approach for training transformer-based deep learning image segmentation models. The neural network models combine a Vision Transformer encoder with a convolutional decoder to produce image segmentation maps. The distributed sequence parallel approach is especially useful when the tokenized embedding representation of the image data is too large to fit into the memory of standard computing hardware. To demonstrate the performance and characteristics of models trained in sequence parallel fashion relative to standard models, we evaluate our approach on a 3D MRI brain tumor segmentation dataset. We show that sequence parallel training matches standard sequential model training in terms of convergence. Furthermore, we show that our sequence parallel approach can support training of models that would not be possible on standard computing resources.
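The core idea of sequence parallelism — sharding the token sequence across workers while keeping attention exact — can be illustrated with a minimal single-process sketch. This is not the paper's implementation: the function names, shapes, and the all-gather-keys/values strategy are illustrative assumptions, and real distributed training would use collective communication (e.g. NCCL all-gather) instead of in-process concatenation.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def attention(q, k, v):
    # Standard scaled dot-product attention over a full sequence.
    scale = 1.0 / np.sqrt(q.shape[-1])
    return softmax(q @ k.T * scale) @ v

def sequence_parallel_attention(x, wq, wk, wv, n_workers):
    # Each "worker" owns a contiguous shard of the token sequence
    # (here simulated by array_split; in practice each shard lives
    # on a separate device).
    shards = np.array_split(x, n_workers, axis=0)
    # Simulated all-gather: every worker needs the full keys and
    # values to attend over the entire sequence.
    k_full = np.concatenate([s @ wk for s in shards], axis=0)
    v_full = np.concatenate([s @ wv for s in shards], axis=0)
    # Each worker computes attention only for its local queries,
    # so activation memory per worker scales with the shard length.
    outs = [attention(s @ wq, k_full, v_full) for s in shards]
    return np.concatenate(outs, axis=0)
```

Because each worker attends over the gathered keys and values, concatenating the per-shard outputs reproduces the result of ordinary full-sequence attention exactly, which is why convergence can match sequential training.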