Publication

A Heterogeneity-Aware Task Scheduler for Spark...

by Luna Xu, Ali Butt, Seung-hwan Lim, Ramakrishnan Kannan
Publication Type
Conference Paper
Journal Name
IEEE International Conference on Cluster Computing (IEEE CLUSTER)
Publication Date
2018
Conference Name
IEEE International Conference on Cluster Computing
Conference Location
Belfast, United Kingdom
Conference Sponsor
IEEE
Conference Date
-

Big data processing systems such as Spark are employed in an increasing number of diverse applications—such as machine learning, graph computation, and scientific computing—each with dynamic and different resource needs. These applications increasingly run on heterogeneous hardware, e.g., with out-of-core accelerators. However, big data platforms do not factor in the multi-dimensional heterogeneity of applications and hardware. This leads to a fundamental mismatch between the application and hardware characteristics, and the resource scheduling adopted in big data platforms. For example, Hadoop and Spark consider only data locality when assigning tasks to nodes, and typically disregard the hardware capabilities and suitability to specific application requirements.

In this paper, we present RUPAM, a heterogeneity-aware task scheduling system for big data platforms, which considers both task-level resource characteristics and underlying hardware characteristics, as well as preserves data locality. RUPAM adopts a simple yet effective heuristic to decide the dominant scheduling factor (e.g., CPU, memory, or I/O), given a task in a particular stage. Our experiments show that RUPAM is able to improve the performance of representative applications by up to 62.3% compared to the standard Spark scheduler.
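The abstract describes RUPAM's core idea: a heuristic that identifies the dominant resource demand of a task (CPU, memory, or I/O) and matches it against heterogeneous node capabilities while still preferring data-local placement. A minimal sketch of such a dominant-factor heuristic is shown below; all names, data structures, and numbers are illustrative assumptions, not the paper's actual implementation.

```python
# Hypothetical sketch of a dominant-resource-factor scheduling heuristic in
# the spirit of RUPAM: pick the resource dimension with the highest
# normalized demand, then prefer data-local nodes that are strong in that
# dimension. Purely illustrative; not the paper's code.

FACTORS = ("cpu", "mem", "io")

def dominant_factor(task_demand, cluster_capacity):
    """Return the dimension with the highest demand-to-capacity ratio."""
    return max(FACTORS, key=lambda f: task_demand[f] / cluster_capacity[f])

def pick_node(task, nodes, cluster_capacity):
    """Choose a node for the task: data-local nodes first, then the node
    with the most free capacity in the task's dominant dimension."""
    factor = dominant_factor(task["demand"], cluster_capacity)
    local = [n for n in nodes if task["data"] in n["blocks"]] or nodes
    best = max(local, key=lambda n: n["free"][factor])
    return best, factor

if __name__ == "__main__":
    capacity = {"cpu": 32, "mem": 128, "io": 10}  # aggregate cluster capacity
    nodes = [
        {"name": "compute-node", "blocks": {"b1"},
         "free": {"cpu": 8, "mem": 16, "io": 2}},
        {"name": "fat-mem-node", "blocks": {"b2"},
         "free": {"cpu": 4, "mem": 64, "io": 2}},
    ]
    # A memory-heavy task whose input block b2 lives on fat-mem-node.
    task = {"demand": {"cpu": 2, "mem": 48, "io": 1}, "data": "b2"}
    node, factor = pick_node(task, nodes, capacity)
    print(factor, node["name"])  # memory dominates; data-local node wins
```

In this toy example the task's memory demand (48/128) outweighs its CPU (2/32) and I/O (1/10) ratios, so memory becomes the scheduling factor and the data-local, memory-rich node is selected, mirroring the abstract's claim that scheduling should weigh hardware suitability alongside locality.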