1 

Estimating the extreme value index for imprecise data
In extreme value theory the focus is on the tails of a distribution. The aim of this thesis is to estimate the tail distribution of a rounded data set. To estimate this tail distribution the extreme value index must be estimated, but due to the rounding this estimate oscillates heavily, so a reliable estimate cannot be obtained directly. By adding a small uniform random variable the rounded data can be smoothed out, and in this way the oscillation can be cancelled.
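The smoothing idea described above can be sketched as follows, assuming the Hill estimator (the abstract does not name the estimator used) and a hypothetical rounding width of 0.1; the jitter is uniform over one rounding cell:

```python
import numpy as np

rng = np.random.default_rng(0)

def hill_estimator(x, k):
    """Hill estimator of the extreme value index from the k largest observations."""
    xs = np.sort(x)[::-1]                     # order statistics, descending
    return np.mean(np.log(xs[:k])) - np.log(xs[k])

# heavy-tailed Pareto sample with true extreme value index 1/2
x = rng.pareto(2.0, 20_000) + 1.0

x_rounded = np.round(x, 1)                    # rounding to one decimal creates ties
# uniform jitter spanning one rounding cell smooths the ties out
x_smoothed = x_rounded + rng.uniform(-0.05, 0.05, x.size)

k = 500
est_rounded = hill_estimator(x_rounded, k)
est_smoothed = hill_estimator(x_smoothed, k)
```

With the jitter added, the estimate is close to the true index 1/2 and no longer depends on how the ties among rounded order statistics happen to be broken.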

[PDF]
[Abstract]

2 

Investigation of Different Solvers for Radiotherapy Treatment Planning Problems
Radiotherapy treatment planning involves solving inequality constrained minimization problems. The currently used interior point solver performs well, but is considered relatively slow. In this thesis we investigate two alternative solvers, based on the logarithmic barrier method and Sequential Quadratic Programming (SQP) respectively. We argue that the behaviour of the logarithmic barrier solver is uncertain, making it generally unreliable in this context. In addition we substantiate that the performance of the SQP solver is solid, but that it lacks efficiency in computing the minimizers of its quadratic subproblems.
We conclude that without substantial improvements, none of the investigated solvers is faster than the currently used interior point optimizer.

[PDF]
[Abstract]

3 

Optimal boundary point control for linear elliptic equations
This master thesis is concerned with optimal boundary point and function control problems for linear elliptic equations subject to control constraints. An elliptic partial differential equation with a Robin boundary condition is considered. In the point control problem the control is chosen as a linear combination of Dirac delta functions. The weak formulation and optimality conditions are obtained for the function control problem. The main goal is to examine the existence of the weak solution and to derive optimality conditions for boundary point control. Introducing suitable discretization methods, namely finite volume and finite element methods, we obtain a finite-dimensional problem.
We apply efficient numerical methods, including the primal-dual active set strategy, projected gradient and conjugate gradient methods. Test examples are presented to clarify the performance of these methods. The conjugate gradient method is compared to the unconstrained MATLAB function QUADPROG, and the primal-dual active set strategy is compared to the constrained QUADPROG. The projected conjugate gradient method is applied to improve the projected gradient method. Due to their importance, sparse point and function control problems are also studied. We apply the primal-dual active set strategy with different choices of the sparsity parameter f. All results are presented for both point and function control problems.

[PDF]
[Abstract]

5 

Optimal distributed point control of linear elliptic equations
In this master thesis, we consider optimal function and point control problems governed by linear elliptic partial differential equations with bilateral control constraints. The aim of this work is to choose a control function as a linear combination of Dirac delta functions and to solve the optimal distributed point control problem. The optimality systems of the point and function control problems are derived via the Lagrangian principle and the reduced functional respectively. The optimality system is discretized by the finite element method (FEM) and the finite volume method (FVM). We apply a semismooth Newton (SSN) method, which is equivalent to a primal-dual active set strategy (PDASS), to solve the discretized optimality system. As a second solution method, we propose the projected gradient (PG) method for the same problem. For each method, we compare the results of FEM and FVM and state a preference. To determine the best solution method, we compare the results of the PDASS and PG methods to those of the MATLAB function QUADPROG. We have to solve two partial differential equations in every iteration, namely the state and the adjoint equations. Therefore, we develop a multigrid preconditioned conjugate gradient (MGPCG) method to solve the discretized optimality system as fast as possible. Finally, we consider linear elliptic optimal control problems with L1 norms in the cost functional, which results in a sparse control problem. Due to the L1 norm, the objective functional becomes nondifferentiable and the optimal controls are identically zero on large parts of the control domain. Using an appropriate smoothing of the nondifferentiable terms in the cost functional, we solve the optimal sparse control problems theoretically and numerically.

[PDF]
[Abstract]

6 

Online learning algorithms: methods and applications
In this research we study several online learning algorithms in the online convex optimization framework. Online learning algorithms make sequential decisions without any information about future events. To measure the success of a decision there is a (bounded) cost for each decision, which is revealed after every time step. The goal of these algorithms is to perform roughly as well as the best fixed decision in hindsight; this is measured by a quantity called the regret.
The algorithms we studied are the Multiplicative Weights Method, the Weighted Majority Algorithm, the Online Gradient Descent Method and the Online Newton Step Algorithm. All of these methods have their own applications and regret bounds, which are also studied; in particular, we give proofs of the regret bounds for the Multiplicative Weights Method and the Weighted Majority Algorithm.
Furthermore, we provide an implementation of the Online Gradient Descent Method for a portfolio management problem with four stocks, where we seek the investment strategy that yields the largest profit. This led to a wealth increase by a factor of 1.37 after four years of investing, whereas the best fixed distribution over the stocks in hindsight leads to a wealth increase by a factor of 1.68. Further analysis shows that the implementation works better than we could have expected beforehand.
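A minimal sketch of such a portfolio experiment, using hypothetical simulated price relatives rather than the stock data used in the thesis, and an assumed step size; the loss per round is the negative log-return of the portfolio, and the weights are kept on the probability simplex by Euclidean projection:

```python
import numpy as np

def project_simplex(v):
    """Euclidean projection of v onto the probability simplex."""
    u = np.sort(v)[::-1]
    css = np.cumsum(u)
    rho = np.nonzero(u + (1 - css) / (np.arange(len(v)) + 1) > 0)[0][-1]
    theta = (1 - css[rho]) / (rho + 1)
    return np.maximum(v + theta, 0)

rng = np.random.default_rng(1)
T, n = 1000, 4                        # rounds and number of stocks (illustrative)
# hypothetical daily price relatives for the four stocks
returns = 1 + rng.normal(0.0005, 0.01, size=(T, n))

w = np.full(n, 1 / n)                 # start from the uniform portfolio
wealth = 1.0
eta = 0.05                            # assumed step size
for r in returns:
    wealth *= w @ r                   # incur this round's return first
    grad = -r / (w @ r)               # gradient of the log-loss -log(w . r)
    w = project_simplex(w - eta * grad)
```

After the loop, `wealth` is the factor by which the initial capital has grown under the online strategy; comparing it to the best fixed weight vector in hindsight gives the empirical regret.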

[PDF]
[Abstract]

7 

Efficiency Improvement of Panel Codes
Panel codes are used by the Maritime Research Institute Netherlands (MARIN) to compute flows around ships and propellers. These codes are based on Boundary Element Methods (BEM). A known drawback of BEM is that it leads to dense linear systems of equations that have to be solved. By improving the efficiency of the dense linear solver, the computational time required by panel codes can be significantly reduced. Since applications of panel codes at MARIN include automatic optimization, where a large number of hull forms or propeller geometries have to be evaluated, this reduction of computational time is important.
Four strategies were explored to improve the performance of the dense linear solver: first, replacing the current GMRES solver with IDR(s); second, updating the fixed-size block Jacobi preconditioner to a variable-size block Jacobi preconditioner; third, using a hierarchical matrix-vector multiplication in the solver instead of a dense matrix-vector multiplication; and lastly, replacing the block Jacobi preconditioner with a hierarchical LU preconditioner. Of the four strategies, the hierarchical LU preconditioner was found to speed up the dense linear solver substantially, especially for large systems. The use of IDR(s) instead of GMRES is also recommended, as it removes the problems introduced by the need to restart.
This report discusses the theory, implementation and test results of the four strategies above. As a result of this project, the combination of IDR(s) with a hierarchical LU preconditioner is recommended for implementation in the panel codes.
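The block Jacobi idea can be illustrated as follows; this is not MARIN's solver but a sketch on a random diagonally dominant stand-in matrix, using a plain preconditioned fixed-point iteration instead of GMRES or IDR(s):

```python
import numpy as np

rng = np.random.default_rng(2)
n, b = 120, 12                        # system size and block size (illustrative)
# dense, diagonally dominant matrix standing in for a BEM influence matrix
A = rng.standard_normal((n, n)) + n * np.eye(n)
rhs = rng.standard_normal(n)

# block Jacobi preconditioner: invert the b-by-b blocks on the diagonal of A
inv_blocks = [np.linalg.inv(A[i:i + b, i:i + b]) for i in range(0, n, b)]

def precondition(v):
    """Apply the block-diagonal inverse to a vector, block by block."""
    return np.concatenate([Bi @ v[k * b:(k + 1) * b]
                           for k, Bi in enumerate(inv_blocks)])

# preconditioned fixed-point iteration: x <- x + M^{-1}(rhs - A x)
x = np.zeros(n)
for it in range(200):
    r = rhs - A @ x
    if np.linalg.norm(r) < 1e-10 * np.linalg.norm(rhs):
        break
    x += precondition(r)
```

Because the preconditioner captures the dominant diagonal blocks, the iteration converges in a handful of steps; a Krylov solver such as GMRES or IDR(s) would use the same `precondition` operator.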

[PDF]
[Abstract]

8 

A mathematical model of cell migration and deformation in various situations
A mathematical model of the migration and deformation of cells at the cellular scale is simulated. Each cell is represented by a boundary and a nucleus. The boundary and the nucleus are each divided into a number of points, and the boundary points as well as the nucleus points are connected to each other by series of springs. The model considers one or more cells (with different shapes), which are attracted by a single source, by several sources, or by other cells. Finally, the deformation of the cells in these different situations is discussed by comparing the so-called Cell Shape Index.
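One common definition of such a shape index is the dimensionless ratio 4*pi*A/P**2, which equals 1 for a circle and decreases as a shape deforms; the thesis's exact definition may differ. A sketch for a cell boundary given as discrete points:

```python
import numpy as np

def cell_shape_index(pts):
    """Shape index 4*pi*A/P**2 of a closed polygon given as an (n, 2) array of
    boundary points: 1 for a circle, smaller for deformed shapes."""
    x, y = pts[:, 0], pts[:, 1]
    xs, ys = np.roll(x, -1), np.roll(y, -1)
    area = 0.5 * abs(np.sum(x * ys - xs * y))       # shoelace formula
    perimeter = np.sum(np.hypot(xs - x, ys - y))
    return 4 * np.pi * area / perimeter**2

theta = np.linspace(0, 2 * np.pi, 200, endpoint=False)
circle = np.column_stack([np.cos(theta), np.sin(theta)])
ellipse = np.column_stack([2 * np.cos(theta), 0.5 * np.sin(theta)])

csi_circle = cell_shape_index(circle)
csi_ellipse = cell_shape_index(ellipse)
```

The circular boundary scores close to 1, while the elongated ellipse scores noticeably lower, quantifying its deformation.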

[PDF]
[Abstract]

9 

Variable Selection
In this thesis several methods for variable selection in statistical models are examined. Specific attention is paid to variable selection in so-called "dependent data", in which there is strong correlation between the independent variables.

[PDF]
[Abstract]

10 

Solving the mTSP for fresh food delivery
A start-up company intends to deliver fresh baby food to home addresses in the area of Zwolle, the Netherlands. The limited delivery time windows motivate dividing the addresses over several shifts. Finding optimal routes over these shifts is shown to be an instance of the multiple traveling salesman problem (mTSP). Three problems are posed, all relating in one way or another to solving the mTSP:
1. Solve the mTSP for the specified delivery area;
2. Find an efficient method for solving an mTSP in which certain nodes may only be visited by certain salesmen;
3. Estimate the largest number of customers that can be serviced per four-hour shift.
A number of methods to solve the mTSP are presented, namely branch and bound, greedy search, simulated annealing and neural networks. The first three of these methods are also modified to handle customer availability on specific shifts. The methods are tested and compared within the context of the three problems.
The results of these tests are discussed and solutions to the three problems are suggested.
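A minimal single-salesman sketch of the simulated annealing approach listed above, on hypothetical random addresses rather than the actual delivery area; candidate tours are produced by reversing a segment (a 2-opt style move) and accepted with the usual Metropolis rule under a geometric cooling schedule:

```python
import math
import random

random.seed(3)

# hypothetical delivery addresses as random points in the unit square
cities = [(random.random(), random.random()) for _ in range(30)]

def tour_length(tour):
    """Total length of the closed tour visiting the cities in the given order."""
    return sum(math.dist(cities[tour[i]], cities[tour[(i + 1) % len(tour)]])
               for i in range(len(tour)))

tour = list(range(len(cities)))
best_tour, best_len = tour[:], tour_length(tour)
T = 1.0
for _ in range(20_000):
    i, j = sorted(random.sample(range(len(cities)), 2))
    cand = tour[:i] + tour[i:j + 1][::-1] + tour[j + 1:]   # reverse a segment
    delta = tour_length(cand) - tour_length(tour)
    if delta < 0 or random.random() < math.exp(-delta / T):
        tour = cand
        if tour_length(tour) < best_len:
            best_tour, best_len = tour[:], tour_length(tour)
    T *= 0.9995                                            # geometric cooling
```

The multiple-salesman variant additionally assigns cities to salesmen (shifts), but the neighbourhood-move-plus-cooling structure is the same.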

[PDF]
[Abstract]

11 

Heuristics of heavy-tailed distributions and the Obesity index
In this thesis a new metric for the heavy-tailedness of a dataset is proposed.
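Assuming the metric is the Obesity index as defined in related work on heavy-tailed data, Ob(X) = P(X1 + X4 > X2 + X3) where X1 <= X2 <= X3 <= X4 are four independent draws sorted increasingly, it can be estimated by Monte Carlo:

```python
import numpy as np

rng = np.random.default_rng(5)

def obesity_index(draw, n=200_000):
    """Monte Carlo estimate of Ob(X) = P(X1 + X4 > X2 + X3) for four
    independent draws sorted in increasing order."""
    x = np.sort(draw((n, 4)), axis=1)
    return np.mean(x[:, 0] + x[:, 3] > x[:, 1] + x[:, 2])

# symmetric distributions give exactly 1/2; heavier tails push the index up
ob_uniform = obesity_index(lambda s: rng.uniform(size=s))
ob_exponential = obesity_index(lambda s: rng.exponential(size=s))
```

For the uniform distribution the index is exactly 1/2 (by symmetry), while for the exponential it is 3/4, reflecting its heavier tail.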

[PDF]
[Abstract]

12 

The implied tree and the skewness of Black-Scholes

[PDF]

13 

Curvature-driven grain growth

[PDF]

14 

Stochastic Modeling of Order Book Dynamics
In this project the order book model proposed by Cont et al. [10] is used as a starting point for modelling order book dynamics. This model combines three desirable properties from earlier studies: it is easy to calibrate, it reproduces statistical properties of the order book, and it allows analytical computations on the order book. The model is studied, calibrated and tested on real-time data from the London Stock Exchange. Possible improvements to the model are discussed and tested. A method to compute probabilities in the model is presented: recovering densities by inverting continued fraction representations of Laplace transforms. This method is also implemented and evaluated.

[PDF]
[Abstract]

15 

Translocation of Heterogeneous Polymers through a Nanopore
Translocating a chain of different beads through a very small pore can serve as a first step in modelling a DNA chain passing through a nanopore. This translocation process offers a variety of possibilities in chemical and biological processes, for instance rapid DNA sequencing. In this thesis the chain is modelled as a polymer with different types of monomers as beads. The translocation dynamics of heterogeneous polymers through nanopores are modelled using the Lennard-Jones (LJ) and FENE potentials and different interaction strengths between the monomers of the polymer and the pore. The translocation time gives important information about the chain sequence, depending on the length of the polymer. The waiting time is defined as the time a specific monomer stays inside the pore. This waiting time in particular gives useful results concerning the chain sequence. Simulations reveal that the waiting time of the last monomer can identify the type of monomer under consideration: monomers with a strong interaction with the pore stay inside considerably longer. We found that from the average waiting time it is possible to retrieve the original sequence of the beads constituting the chain.

[PDF]
[Abstract]

16 

Symbolic dynamics and automata theory
The equivalence of streams under transducers, as introduced by Klop (2011), is investigated. In the process, some morphic properties of the Toeplitz words first described by Keane are discovered.

[PDF]
[Abstract]

17 

Validating a short term financial risk model
This thesis project considers validation methods for an existing solvency model for pension funds. The solvency model produces forecasts of the development of financial markets, fund investments, liabilities and, most importantly, the solvency of the fund. Since the model is stochastic, statistical inference is used to compare model outcomes with realized quantities. Several known methods for this model validation are studied and described in this thesis, and applied to the solvency model. A testing procedure for risk driver forecasts is implemented and evaluated. Since a lot of data is needed to obtain a reliable outcome of the validation process, more data from inside the model must be used and combined to obtain a better risk model.

[PDF]
[Abstract]

18 

The Ruijsenaars Function Transform

[PDF]

19 

Unit Root Testing for AR(1) Processes
The purpose of this study is to investigate the asymptotics of a first-order autoregressive process with a unit root, AR(1). The goal is to determine which tests can be used to test for the presence of a unit root in a first-order autoregressive process. A unit root is present when the root of the characteristic equation of the process equals unity. In order to test for the presence of a unit root, we develop an understanding of the characteristics of the AR(1) process, so that the difference between a trend stationary process and a unit root process is clear.
The first test that is examined is the Dickey-Fuller test. The estimator of this test is based on ordinary least squares regression and a t-test statistic, which is why we compute an ordinary least squares estimator and the corresponding test statistic to test for the presence of a unit root in the first-order autoregressive process. Furthermore, we examine the consistency of this estimator and its asymptotic properties. The limiting distribution of the test statistic is known as the Dickey-Fuller distribution. With a Monte Carlo approach, we implemented the Dickey-Fuller test statistic in MATLAB and computed the (asymptotic) power of this test. Under the assumption of Gaussian innovations (or shocks) the limiting distribution of the unit root process is the same as without the normality assumption. When there is reason to assume Gaussianity of the innovations, the likelihood ratio test can be used to test for a unit root.
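The Monte Carlo computation of the Dickey-Fuller distribution described above can be sketched as follows (in Python rather than the MATLAB used in the thesis; sample size and replication count are illustrative):

```python
import numpy as np

rng = np.random.default_rng(4)

def df_tstat(y):
    """Dickey-Fuller t-statistic: regress the differences on the lagged level
    (no constant, no trend) and return the t-ratio of the slope."""
    dy, ylag = np.diff(y), y[:-1]
    rho = (ylag @ dy) / (ylag @ ylag)             # OLS slope estimate
    resid = dy - rho * ylag
    s2 = (resid @ resid) / (len(dy) - 1)          # residual variance
    return rho / np.sqrt(s2 / (ylag @ ylag))

# Monte Carlo draws from the Dickey-Fuller distribution under the
# unit-root null y_t = y_{t-1} + e_t (a random walk)
T, reps = 250, 2000
stats = np.array([df_tstat(np.cumsum(rng.standard_normal(T)))
                  for _ in range(reps)])
crit_5pct = np.quantile(stats, 0.05)              # left-tail 5% critical value
```

The empirical 5% quantile lands near the tabulated Dickey-Fuller critical value of about -1.95 for this test variant, and the distribution is visibly shifted left of a standard normal.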
The asymptotic power envelope is obtained with the help of the likelihood ratio test, since the Neyman-Pearson lemma states that the likelihood ratio test is the point optimal test for simple hypotheses. By calculating the likelihood functions the test statistic was obtained, so that an explicit formula for the power envelope was found. Since each fixed alternative results in a different critical value, and thus in a different unit root test, no uniformly most powerful test is available. Instead we are interested in asymptotically point optimal tests, and we analyze which of these point optimal tests is the overall best performing test. By comparing the asymptotic power curve to the asymptotic power envelope for each fixed alternative, we can conclude which fixed alternative results in the overall best performing test.
On the basis of the results of this research, it can be concluded that there does not exist a uniformly most powerful test; nonetheless, we can define an overall best performing test.

[PDF]
[Abstract]

20 

Multicriteria Optimization for Radiotherapy
Radiotherapy is one of the main treatments for cancer, and a multidisciplinary field of research, mostly involving medicine, physics and mathematics. The focus of this thesis lies on improvements to treatment planning, which is a multicriteria process. We construct a new method to form the Pareto front for given objectives. In a Pareto optimal plan, one objective cannot be improved without worsening another.
The main question is which plan the patient should be treated with. To compare treatments we have developed a new method to inspect other optimal solutions on the Pareto front around a given solution. This is done by a fast application of the reference point method. The main advantage is that, when a clinician is not satisfied with a certain objective in a given solution, it is possible to optimize that objective further.
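A toy illustration of tracing a Pareto front, using weighted-sum scalarization over two hypothetical convex objectives rather than the reference point method the thesis develops:

```python
import numpy as np

# two hypothetical convex planning objectives over a one-dimensional plan variable
def f1(x):
    return (x - 1.0) ** 2      # e.g. deviation from the prescribed tumour dose

def f2(x):
    return (x + 1.0) ** 2      # e.g. dose burden on surrounding healthy tissue

# weighted-sum scalarization: for convex objectives, each weight w yields
# one Pareto optimal plan, and sweeping w traces the front
xs = np.linspace(-2.0, 2.0, 2001)
front = []
for w in np.linspace(0.01, 0.99, 25):
    x_opt = xs[np.argmin(w * f1(xs) + (1 - w) * f2(xs))]
    front.append((f1(x_opt), f2(x_opt)))
```

Moving along `front`, one objective improves only at the expense of the other, which is exactly the trade-off a planner navigates when choosing a treatment plan.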

[PDF]
[Abstract]
