# Can You Parallelize Time?

### Moving through the complexity of time


## Abstract

This Bachelor End Project thesis develops parallel variants of time-dependent numerical methods. To achieve this, the time-dependent problem is formulated as one large system of linear equations that is solved iteratively. This linear system is derived from the one-dimensional time-dependent linear heat equation, which is the model problem for all of the analysis. The experiments are run for different combinations of boundary conditions, namely mixtures of Dirichlet, Neumann, and Robin conditions. Three time integration methods are used: Forward Euler, Backward Euler, and the θ-rule for θ = 0.2, 0.4, 0.6, and 0.8. Since time is continuous and moves only in the positive direction, a time integration method needs the previous solution in order to compute the current one; executing such a calculation in parallel over time is therefore inherently difficult.

This research considers two iterative methods: the block Jacobi method and the block Gauss-Seidel method. The two have much in common, but the main difference lies in which part of the matrix each uses. Where the Jacobi method uses only the diagonal to approximate the inverse, the Gauss-Seidel method also uses the lower (or upper) triangular part of the matrix. Creating a parallel variant of the block Jacobi method is therefore straightforward, but for the block Gauss-Seidel method it is not. To reach a valid conclusion, the stability of each time integration method is also considered. The block Jacobi method converges slowly to the solution, since it uses only the diagonal at every step instead of the whole matrix. Therefore, in this research, combinations of the block Gauss-Seidel method have been created and tested. Three variants are discussed. The first is the step-wise combination method, which performs a block Jacobi iteration at every chosen step size and sequential Gauss-Seidel time steps for all other iterations.
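To illustrate the difference between the two sweeps, here is a minimal Python sketch, not the thesis's Matlab code. It assumes the all-at-once system is block lower bidiagonal with a constant diagonal block `D` (e.g. `I - Δt·A` for Backward Euler) and a coupling block `L` to the previous time step; all names are illustrative assumptions:

```python
import numpy as np

def block_jacobi_sweep(D, L, b, U):
    """One block Jacobi sweep on the block lower bidiagonal system.

    Every time step n uses the PREVIOUS iterate U[n-1], so all block
    solves in a sweep are independent and could run in parallel.
    """
    U_new = [np.linalg.solve(D, b[0])]
    for n in range(1, len(b)):
        U_new.append(np.linalg.solve(D, b[n] + L @ U[n - 1]))
    return U_new

def block_gauss_seidel_sweep(D, L, b, U):
    """One block Gauss-Seidel sweep: step n uses the CURRENT iterate
    U_new[n-1], so the sweep is sequential, but for this block lower
    bidiagonal system it reaches the exact solution in a single pass.
    (U is accepted only for symmetry with the Jacobi signature.)
    """
    U_new = [np.linalg.solve(D, b[0])]
    for n in range(1, len(b)):
        U_new.append(np.linalg.solve(D, b[n] + L @ U_new[n - 1]))
    return U_new
```

Starting from a zero guess, one Gauss-Seidel sweep reproduces sequential time stepping exactly, while block Jacobi needs up to nt sweeps, because sweep k only fixes time step k.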
The second variant, Red-Black Ordering, is known for its alternating odd and even iterations: all odd-numbered iterations are performed in parallel, and likewise all even-numbered ones. Since the equation only needs its previous solution, this method lends itself well to parallelism. The last variant uses the number of processors to cap the maximum number of iterations, so the method converges either earlier or at exactly the chosen number of processors.

What can be concluded from the results is that how a method converges depends entirely on how many Jacobi iterations take place during the computation. The Gauss-Seidel calculation takes only one iteration, so the time until the solution converges is determined by the number of Jacobi iterations the method performs. A computer with more processors than the number of Jacobi iterations performed will also speed up the computation. Our predictions show, in particular, that the speed-up can really be achieved only with implicit time integration methods. For the block Jacobi method, the maximum number of iterations equals the number of time steps nt. The Gauss-Seidel variants are all more efficient, since fewer Jacobi iterations are used. For variant 1, the step-wise combination method, the maximum number of iterations equals nt/step. For the Red-Black Ordering method, a Jacobi step is used every other iteration, so the maximum number of iterations equals nt/2. The last variant, the processor-wise combination method, is a special case of the first: its maximum number of iterations equals the chosen number of processors np, which corresponds, in terms of the first variant, to a 'step' size of nt/np. This project was conducted in Matlab, which did not fully cooperate when using the parallel time function.
In order to really test, rather than predict, the parallel behaviour, the obvious next step is to implement this program in another programming language and then test it on the DelftBlue supercomputer. The predictions should still hold, but the results will be visualised more clearly.
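The predicted worst-case iteration counts for the four methods can be collected in a small sketch. This is a hypothetical helper: the function and variant names, and the ceiling rounding for non-divisible step sizes, are assumptions and not taken from the thesis:

```python
import math

def max_iterations(nt, variant, step=1, n_proc=1):
    """Predicted upper bound on iterations until convergence.

    nt      -- number of time steps
    variant -- which method is used (names are illustrative)
    """
    if variant == "block_jacobi":    # one time step fixed per sweep
        return nt
    if variant == "step_wise":       # a Jacobi iteration every `step` steps
        return math.ceil(nt / step)
    if variant == "red_black":       # a Jacobi step every other iteration
        return math.ceil(nt / 2)
    if variant == "processor_wise":  # capped at the number of processors
        return n_proc
    raise ValueError(f"unknown variant: {variant}")
```

For example, with nt = 100: block Jacobi gives 100 iterations, the step-wise method with step 5 gives 20, Red-Black Ordering gives 50, and the processor-wise method with np = 8 gives 8.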