Abstract
Big data generated by large-scale scientific and industrial applications must be transferred between geographically distributed sites for remote storage, processing, and analysis. High-speed dedicated connections provisioned in high-performance networks (HPNs) are increasingly used to carry out such big data transfers. HPN management relies heavily on the ability to predict transfer performance (mainly throughput), so as to reserve sufficient bandwidth while avoiding over-provisioning that wastes resources. This capability is critical to improving the utilization of dedicated connections (mainly bandwidth) and meeting diverse user requests for data transfer. Conventional methods predict performance by fitting previously observed transfer history with predefined loss functions, without accounting for unobservable latent factors such as competing loads on end hosts. Such latent factors also have a significant impact on application-level data transfer performance, and ignoring them may yield an inaccurate prediction model. In this paper, we first investigate the impact of latent factors and propose a clustering-based method to eliminate their negative effect on performance prediction. We then develop a robust machine learning-based performance predictor by: i) incorporating the proposed latent-factor elimination method into data preprocessing, and ii) adopting a customized, domain-guided loss function. Extensive experimental results show that our predictor achieves significantly higher prediction accuracy than several state-of-the-art methods.