Machine learning in adaptive domain decomposition methods - Predicting the geometric location of constraints

Abstract

Domain decomposition methods are robust and parallel scalable, preconditioned iterative algorithms for the solution of the large linear systems arising from the discretization of elliptic partial differential equations by finite elements. The convergence rate of these methods is generally determined by the eigenvalues of the preconditioned system. For second-order elliptic partial differential equations, coefficient discontinuities with a large contrast can lead to a deterioration of the convergence rate. A remedy can be obtained by enhancing the coarse space with additional elements, often called constraints, which are computed by solving small eigenvalue problems on portions of the interface of the domain decomposition, i.e., edges in two dimensions or faces and edges in three dimensions. In the present work, without loss of generality, the focus is on two dimensions. In general, it is difficult to predict on which edges constraints have to be added and, therefore, for which edges the corresponding local eigenvalue problems have to be solved. Here, a machine learning-based strategy using neural networks is proposed to predict the geometric location of these edges in a preprocessing step. This reduces the number of eigenvalue problems that have to be solved before the iteration. Numerical experiments for model problems and realistic microsections, using regular decompositions as well as decompositions from graph partitioners, are provided, showing very promising results.
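
The following is a minimal, illustrative sketch (not the authors' implementation) of the classification idea described above: a small neural network is trained to predict, from coefficient samples taken in the neighborhood of an edge, whether that edge is critical, i.e., whether adaptive constraints and thus a local eigenvalue problem are needed. The feature layout, sampling resolution, synthetic labels, and network architecture are all assumptions made for the sake of a runnable example.

```python
# Minimal sketch (illustrative assumptions, not the paper's setup): classify
# edges as "critical" (local eigenvalue problem needed) or not, based on
# sampled coefficient values near the edge.
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)

n_edges = 1000           # number of training edges (assumed)
n_samples_per_edge = 64  # coefficient samples taken around each edge (assumed)

# Synthetic stand-in for sampled coefficient functions: a high-contrast
# coefficient taking values in {1, 1e6} on the sampling points.
X = np.where(rng.random((n_edges, n_samples_per_edge)) < 0.3, 1e6, 1.0)

# Heuristic stand-in for the true labels: an edge is labeled critical if both
# coefficient values occur along it (a coefficient jump crosses the edge).
y = ((X.max(axis=1) > 1.0) & (X.min(axis=1) == 1.0)).astype(int)

# Log-scaled features tame the large coefficient contrast.
X_log = np.log10(X)

# Dense feed-forward network; the architecture is an illustrative choice.
clf = MLPClassifier(hidden_layer_sizes=(30, 30), activation="relu",
                    max_iter=500, random_state=0)
clf.fit(X_log[:800], y[:800])

# In a preprocessing step, only edges predicted as critical would be passed to
# the local eigenvalue solver; the eigenvalue problem is skipped elsewhere.
predicted_critical = clf.predict(X_log[800:])
print("edges flagged for eigenvalue problems:", int(predicted_critical.sum()))
```

In this sketch, the savings come from solving local eigenvalue problems only on the edges the classifier flags, rather than on all edges of the decomposition; the exact sampling strategy and network design in the paper may differ.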