Parallel Approach to Derivative-Free Optimization

Implementing the DONE Algorithm on a GPU

Abstract

Researchers at Delft University of Technology have recently developed an algorithm for optimizing noisy, expensive, and possibly nonconvex objective functions for which no derivatives are available. The Data-based Online Nonlinear Extremum-seeker (DONE) was originally developed for sensorless wavefront aberration correction in optical coherence tomography (OCT) and for tuning optical beam forming networks (OBFNs). To make the DONE algorithm suitable for large-scale problems, a parallel implementation on a graphics processing unit (GPU) is considered. This master thesis aims to develop such a parallel implementation that performs faster than the existing sequential implementation without much change in obtained accuracy. Since OBFN tuning is a problem that may involve a large number of parameters, an OBFN simulation is used to compare the parallel implementation to the sequential one. The key step of the DONE algorithm is solving a regularized linear least-squares problem to construct a smooth, low-cost surrogate function that does provide derivatives and can be optimized fairly easily. This thesis first discusses the basics of parallel computing, after which several linear least-squares methods and several numerical optimization methods are investigated. These methods are compared, and the most suitable ones for parallel computing are implemented and tested for increasing problem dimensions. The final parallel DONE implementation combines the recursive least-squares (RLS) method with the Broyden-Fletcher-Goldfarb-Shanno (BFGS) method and optimizes the large-scale OBFN simulation almost twice as fast as the sequential DONE implementation, without much change in obtained accuracy.
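The core loop described above can be illustrated with a minimal sketch: a regularized recursive least-squares (RLS) update fits the coefficients of a smooth surrogate from noisy samples, after which BFGS minimizes the surrogate. The objective function, the random-cosine basis, and all parameter values below are illustrative assumptions, not the thesis implementation.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)

# Hypothetical noisy 1-D objective for illustration (not from the thesis);
# its true minimum lies at x = 1.5.
def objective(x):
    return (x - 1.5) ** 2 + 0.01 * rng.standard_normal()

# Random cosine features serve as the surrogate basis here (an assumption
# chosen so the surrogate is smooth and cheap to evaluate).
D = 50
omega = rng.standard_normal(D)
beta = rng.uniform(0.0, 2.0 * np.pi, D)

def features(x):
    return np.cos(omega * x + beta)

# Regularized RLS: P starts as (1/lam) * I; each new sample (x, y)
# updates the coefficient vector c without refactoring the full system.
lam = 1e-3
P = np.eye(D) / lam
c = np.zeros(D)

for _ in range(200):
    x = rng.uniform(-3.0, 3.0)
    a = features(x)
    y = objective(x)
    Pa = P @ a
    k = Pa / (1.0 + a @ Pa)   # gain vector
    c += k * (y - a @ c)      # coefficient update
    P -= np.outer(k, Pa)      # inverse-covariance update

# The fitted surrogate is smooth and differentiable, so a quasi-Newton
# method such as BFGS can minimize it directly.
def surrogate(x):
    return features(x[0]) @ c

res = minimize(surrogate, x0=np.array([0.0]), method="BFGS")
print(res.x)
```

In the full DONE algorithm each new measurement of the expensive objective triggers one such RLS update followed by a surrogate minimization; the GPU implementation of the thesis parallelizes the linear-algebra work inside these two steps.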