A Hybrid Framework for Accelerating Linear Solvers for Partial Differential Equations

Master Thesis (2025)
Author(s)

Y. Wu (TU Delft - Electrical Engineering, Mathematics and Computer Science)

Contributor(s)

A. Heinlein – Mentor (TU Delft - Numerical Analysis)

V. Dolean – Mentor (University of Strathclyde)

Publication Year
2025
Language
English
Graduation Date
26-08-2025
Awarding Institution
Delft University of Technology
Programme
Computer Simulations for Science and Engineering (COSSE)
Faculty
Electrical Engineering, Mathematics and Computer Science
Reuse Rights

Other than for strictly personal use, it is not permitted to download, forward or distribute the text or part of it, without the consent of the author(s) and/or copyright holder(s), unless the work is under an open content license such as Creative Commons.

Abstract

Solving large-scale linear systems arising from the discretization of partial differential equations (PDEs) is a central problem in scientific computing. Classical stationary iterative methods are effective at eliminating high-frequency components of the error but struggle with low-frequency components. Deep-learning-based solvers such as the Deep Operator Network (DeepONet) exhibit the opposite behavior: owing to spectral bias, they learn low-frequency functions well but struggle with high-frequency ones. The Hybrid Iterative Numerical Transferable Solver (HINTS) framework was recently proposed to combine these complementary strengths. However, the original HINTS framework suffers a significant convergence slowdown in later iterations. This thesis shows that the slowdown is primarily caused by two limitations: (1) the accumulation of mid-frequency error components due to the mismatched spectral preferences of classical stationary methods and the DeepONet, and (2) a distribution shift between the low-frequency-dominated training data and the mid-frequency-dominated residuals encountered during the iterative process in the HINTS framework.
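
To make this division of labor concrete, the sketch below shows a minimal single-level HINTS-style loop in Python/NumPy: a damped-Jacobi smoother handles the high-frequency error, and a learned correction (a stand-in for a trained DeepONet, passed in as a callable) is applied every k-th iteration to target the smooth, low-frequency error. The function names (hints_solve, deeponet_correct), the choice of smoother, and the fixed schedule are illustrative assumptions, not the implementation from the thesis.

import numpy as np

def hints_solve(A, b, deeponet_correct, n_iters=100, omega=2.0/3.0, k=5):
    # Illustrative single-level HINTS-style iteration (not the thesis code):
    # damped Jacobi interleaved with a learned correction every k-th step.
    x = np.zeros_like(b, dtype=float)
    D_inv = 1.0 / np.diag(A)          # inverse diagonal used by Jacobi
    for it in range(n_iters):
        r = b - A @ x                 # current residual
        if (it + 1) % k == 0:
            # Learned step: targets the smooth, low-frequency error modes
            # (deeponet_correct stands in for a trained DeepONet).
            x = x + deeponet_correct(r)
        else:
            # Damped Jacobi step: damps the high-frequency error modes.
            x = x + omega * D_inv * r
    return x

Here deeponet_correct could be a trained network's forward pass mapping the residual field to an error estimate; replacing it with an exact coarse solve would instead recover a classical two-level method.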

To address these limitations, this thesis introduces two enhancement strategies. First, we propose Gradient-Enhanced HINTS (GE-HINTS), a method that incorporates first-order derivative information into the DeepONet's loss function. Motivated by the anti-frequency principle, this approach mitigates the model's spectral bias and thus improves the performance of HINTS. Second, we develop "HINTS-in-the-loop" training strategies, which make the DeepONet model aware of the true residual distributions it will encounter during inference. This is achieved through both an offline data-augmentation strategy and an online, end-to-end differentiable training loop that optimizes the solver's multi-step performance.
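
As an illustration of the first strategy, the sketch below shows one way a gradient-enhanced DeepONet loss could be written in PyTorch: the standard output-matching term plus a first-derivative matching term, with the derivative of the prediction obtained by automatic differentiation with respect to the trunk coordinates. All names (gradient_enhanced_loss, model, lam) and the exact form of the penalty are assumptions for illustration, not necessarily the thesis's formulation.

import torch

def gradient_enhanced_loss(model, f_batch, y, u_true, du_true, lam=0.1):
    # Enable autograd with respect to the evaluation coordinates y.
    y = y.clone().requires_grad_(True)
    # DeepONet forward pass: branch input f_batch, trunk input y.
    u_pred = model(f_batch, y)
    # First derivative of the prediction w.r.t. y via automatic differentiation.
    du_pred = torch.autograd.grad(
        u_pred, y, grad_outputs=torch.ones_like(u_pred), create_graph=True
    )[0]
    data_term = torch.mean((u_pred - u_true) ** 2)    # match solution values
    grad_term = torch.mean((du_pred - du_true) ** 2)  # match first derivatives
    return data_term + lam * grad_term

The derivative term weights high-frequency discrepancies more heavily (differentiation scales each Fourier mode by its frequency), which is why such a penalty can counteract spectral bias.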

Numerical experiments on benchmark problems demonstrate the effectiveness of the proposed methods: both GE-HINTS and the HINTS-in-the-loop strategies significantly accelerate the convergence of the single-level HINTS solver. Overall, this thesis provides both a mechanistic understanding of the HINTS framework and practical strategies for accelerating it. We hope these insights will aid researchers seeking effective hybrid iterative solvers and contribute to further progress in this area.

Files

Yuhan_Wu_Master_Thesis.pdf
(pdf | 3.49 MB)
License info not available