Publication

Efficient Parallel Sparse Symmetric Tucker Decomposition for High-Order Tensors

by Shruti Shivakumar, Jiajia Li, Ramakrishnan Kannan, Srinivas Aluru
Publication Type
Conference Paper
Book Title
Proceedings of the 2021 SIAM Conference on Applied and Computational Discrete Algorithms (ACDA21)
Publication Date
2021
Page Numbers
193–204
Publisher Location
Pennsylvania, United States of America
Conference Name
SIAM Conference on Applied and Computational Discrete Algorithms (ACDA)
Conference Location
Spokane, Washington, United States of America
Conference Sponsor
SIAM

Tensor-based methods have received renewed attention in recent years due to their prevalence in diverse real-world applications. There is considerable literature on tensor representations and algorithms for tensor decompositions, for both dense and sparse tensors. Many applications in hypergraph analytics, machine learning, psychometry, and signal processing produce tensors that are both sparse and symmetric, making them an important class for further study. Like the critical Tensor Times Matrix chain operation (TTMc) on general sparse tensors, the Sparse Symmetric Tensor Times Same Matrix chain (S3TTMc) operation is compute- and memory-intensive due to the high tensor order and the associated factorial explosion in the number of non-zeros. In this work, we present CSS, a novel compressed storage format for sparse symmetric tensors, along with an efficient parallel algorithm for the S3TTMc operation. We theoretically establish that S3TTMc on CSS achieves a better memory versus run-time trade-off than state-of-the-art implementations. We present experimental results that confirm this analysis and achieve up to 2.9× speedup on synthetic and real datasets.
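To make the S3TTMc operation concrete, the sketch below shows a naive reference computation (not the paper's CSS-based algorithm) for a third-order symmetric sparse tensor in Python with NumPy. It illustrates the key idea the abstract mentions: a symmetric tensor can store each class of equal non-zeros once (as a sorted index tuple) and expand the permutations on the fly, which is where the "factorial explosion" in non-zeros comes from. All names here (`s3ttmc_naive`, the dict-of-tuples storage) are illustrative assumptions, not the CSS format itself.

```python
import itertools
import numpy as np

def s3ttmc_naive(nz, n, U):
    """Naive S3TTMc for a sparse symmetric 3rd-order tensor.

    nz: dict mapping sorted index tuples (i <= j <= k) to values;
        each symmetric class of non-zeros is stored exactly once.
    n:  dimension of every mode (symmetric tensors are cubical).
    U:  n x r factor matrix, multiplied in every mode but the first
        (the "same matrix" in S3TTMc).

    Returns Y of shape (n, r, r) with
        Y[i, a, b] = sum_{j,k} X[i, j, k] * U[j, a] * U[k, b].
    """
    r = U.shape[1]
    Y = np.zeros((n, r, r))
    for idx, val in nz.items():
        # Expand the stored class to its distinct index permutations;
        # for order d this can be up to d! copies per stored non-zero.
        for (i, j, k) in set(itertools.permutations(idx)):
            Y[i] += val * np.outer(U[j], U[k])
    return Y
```

A compressed format like CSS aims to avoid materializing those permutations explicitly; this dense-output version is only a correctness baseline one could check an optimized implementation against.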