Publication

A Scalable Pipeline for Gigapixel Whole Slide Imaging Analysis on Leadership Class HPC Systems...

Publication Type
Conference Paper
Book Title
IEEE International Symposium on Parallel and Distributed Processing Workshops and PhD Forum (IPDPSW)
Publication Date
Page Numbers
1266 to 1274
Publisher Location
New Jersey, United States of America
Conference Name
ExSAIS 2022: Workshop on Extreme Scaling of AI for Science
Conference Location
Virtual, Tennessee, United States of America
Conference Sponsor
IEEE
Conference Date
-

Whole Slide Imaging (WSI) captures microscopic details of a patient's histopathological features at multiple resolutions organized across different levels. Images produced by WSI are gigapixel-sized, and loading a single image into memory requires several gigabytes, memory that is already scarce because a complex model can occupy tens of gigabytes. Even performing a simple metric operation on these large images is expensive. High-performance computing (HPC) can help analyze such large images quickly using distributed training of complex deep learning models. One popular approach is to divide a WSI image into smaller tiles (patches) and then train a simpler model on the resulting large number of reduced-size patches. However, pursuing this patch-based approach requires efficiently solving three pre-processing challenges. 1) Creating small patches from a high-resolution image can produce a very large number of patches (hundreds of thousands per image), and storing and processing them is challenging because of the large number of I/O and arithmetic operations; an optimal balance between the size and number of patches is needed to reduce I/O and memory accesses. 2) WSI images may contain tiny annotated regions of cancer tissue alongside a significant portion of normal and fatty tissue, so correct patch sampling must avoid dataset imbalance. 3) Storing and retrieving many patches to and from disk storage can incur I/O latency while training a deep learning model, so an efficient distributed data loader must reduce I/O latency during the training and inference steps. This paper explores these three challenges and provides empirical and algorithmic solutions deployed on the Summit supercomputer hosted at the Oak Ridge Leadership Computing Facility.
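
The paper itself does not include code in this abstract; as a rough illustration of the patch-extraction step described above, the following Python sketch tiles a slide with the openslide-python library and skips mostly-background patches. The file name, patch size, and saturation-based tissue heuristic are assumptions for illustration, not the authors' implementation.

# Illustrative sketch only (not the authors' pipeline): tile a WSI into
# fixed-size patches and discard patches that are mostly glass background.
# Assumes openslide-python, numpy, and Pillow are installed; the slide path,
# patch size, and thresholds below are arbitrary example values.
import numpy as np
import openslide

PATCH_SIZE = 512          # patch edge length in pixels (assumed value)
LEVEL = 0                 # highest-resolution pyramid level
TISSUE_FRACTION = 0.10    # keep patches with >10% "tissue-like" pixels

def iter_tissue_patches(slide_path):
    slide = openslide.OpenSlide(slide_path)
    width, height = slide.level_dimensions[LEVEL]
    for y in range(0, height - PATCH_SIZE + 1, PATCH_SIZE):
        for x in range(0, width - PATCH_SIZE + 1, PATCH_SIZE):
            # read_region takes level-0 coordinates and returns an RGBA PIL image
            patch = slide.read_region((x, y), LEVEL,
                                      (PATCH_SIZE, PATCH_SIZE)).convert("RGB")
            rgb = np.asarray(patch, dtype=np.float32)
            # crude tissue check: colorful (saturated) pixels are likely tissue,
            # near-white pixels are likely background
            saturation = rgb.max(axis=2) - rgb.min(axis=2)
            if (saturation > 20).mean() >= TISSUE_FRACTION:
                yield (x, y), patch
    slide.close()

if __name__ == "__main__":
    # Hypothetical slide file; replace with an actual .svs / .tiff path.
    for (x, y), patch in iter_tissue_patches("example_slide.svs"):
        patch.save(f"patch_{x}_{y}.png")

In a full pipeline of the kind the paper describes, such a tiler would be combined with class-aware sampling of annotated regions and a distributed data loader so that patch I/O overlaps with training rather than stalling it.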