Optimizing Sparse Tensor Train Decomposition to Solve Linear Systems of Equations
C. Bakos (TU Delft - Electrical Engineering, Mathematics and Computer Science)
Rob Kooij – Graduation committee member (TU Delft - Quantum & Computer Engineering)
Huijuan Wang – Graduation committee member (TU Delft - Multimedia Computing)
Xiaohan Li – Graduation committee member (Deltares)
Didrik Meijer – Mentor (Deltares)
Abstract
This thesis investigates the potential of solving large-scale linear systems of equations (LSEs) in the Tensor Train (TT) format. We first study when this approach can outperform traditional solvers such as Conjugate Gradient (CG) and Gaussian Elimination (GE). To this end, we examine the time complexity of the TT-solve technique when applied to ill-conditioned, sparse, and symmetric positive definite (SPD) matrices, which allows us to assess the scenarios in which TT-solve may offer significant advantages.
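For context, the following is a minimal Python sketch of the two baseline solvers mentioned above, applied to a small sparse, SPD, ill-conditioned test matrix; the 1D Laplacian used here is an illustrative stand-in, not necessarily one of the thesis's benchmark problems.

import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import cg, spsolve

n = 1024
# 1D Laplacian: sparse, SPD, and increasingly ill-conditioned as n grows
A = sp.diags([-1.0, 2.0, -1.0], [-1, 0, 1], shape=(n, n), format="csc")
b = np.ones(n)

x_ge = spsolve(A, b)   # direct sparse solve (sparse LU, i.e. Gaussian elimination with fill-in)
x_cg, info = cg(A, b)  # iterative Conjugate Gradient (info == 0 signals convergence)
print(info, np.linalg.norm(x_ge - x_cg) / np.linalg.norm(x_ge))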
During this exploration, we also identify several research gaps and propose solutions that could further optimize the TT-solve process. These include challenges related to matrix sizes that factorize poorly, as well as the need for a deeper understanding of the trade-off between TT-matrix (TTM) rank and maximal mode size during the decomposition process. Additionally, we examine the potential of altering the coefficient matrix, through techniques such as variable reordering and partial Gaussian elimination (PG), to achieve lower TTM-ranks without changing the solution of the LSE.
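To make the notions of TTM-rank and mode size concrete, the sketch below reshapes a 2^d x 2^d matrix into a d-dimensional tensor with mode size 4 (the quantized TT layout) and compresses it with a plain TT-SVD; the truncation threshold, the Laplacian test matrix, and the mode grouping are illustrative assumptions rather than the exact pipeline used in the thesis.

import numpy as np

def tt_svd(tensor, eps=1e-10):
    # Plain TT-SVD: a sweep of truncated SVDs over successive unfoldings of the tensor.
    d, dims = tensor.ndim, tensor.shape
    delta = eps * np.linalg.norm(tensor.ravel()) / np.sqrt(max(d - 1, 1))
    cores, r_prev, C = [], 1, tensor
    for k in range(d - 1):
        C = C.reshape(r_prev * dims[k], -1)
        U, s, Vt = np.linalg.svd(C, full_matrices=False)
        r = s.size
        while r > 1 and np.sum(s[r - 1:] ** 2) <= delta ** 2:
            r -= 1                                   # drop negligible singular values
        cores.append(U[:, :r].reshape(r_prev, dims[k], r))
        C, r_prev = np.diag(s[:r]) @ Vt[:r], r
    cores.append(C.reshape(r_prev, dims[-1], 1))
    return cores

d = 6
n = 2 ** d
A = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)   # dense copy of the 1D Laplacian
T = A.reshape([2] * (2 * d))                           # split row and column indices into bits
perm = [i for pair in zip(range(d), range(d, 2 * d)) for i in pair]
T = T.transpose(perm).reshape([4] * d)                 # interleave bits -> mode size 4 per core
cores = tt_svd(T)
print([c.shape for c in cores])                        # TTM-ranks remain small for this operator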