Option Pricing Techniques Using Neural Networks

Master Thesis (2022)
Author(s)

L.C.F. Van Mieghem (TU Delft - Electrical Engineering, Mathematics and Computer Science)

Contributor(s)

A. Papapantoleon – Mentor (TU Delft - Applied Probability)

F. Fang – Graduation committee member (TU Delft - Numerical Analysis)

Faculty
Electrical Engineering, Mathematics and Computer Science
Publication Year
2022
Language
English
Copyright
© 2022 Laurens Van Mieghem
Graduation Date
27-07-2022
Awarding Institution
Delft University of Technology
Reuse Rights

Other than for strictly personal use, it is not permitted to download, forward or distribute the text or part of it, without the consent of the author(s) and/or copyright holder(s), unless the work is under an open content license such as Creative Commons.

Abstract

With the emergence of more complex option pricing models, the demand for fast and accurate numerical pricing techniques is increasing. Owing to the growing amount of accessible computational power, neural networks have become a feasible numerical method for approximating solutions to these pricing models. This work concentrates on analysing various neural network architectures on option pricing optimisation problems in a supervised and a semi-supervised learning setting. We compare the mean-squared error (MSE) and computational training time of a multilayer perceptron (MLP), a highway architecture, and the recently developed DGM network (Sirignano et al., 2018), along with slight variations of these, on the Black-Scholes and Heston European call option pricing problems as well as the implied volatility problem. We find that on nearly all the supervised learning problems, the generalised highway architecture outperforms its counterparts in terms of MSE relative to computation time. On the Black-Scholes problem, the generalised highway network achieves a 9.8% reduction in MSE while using 96.2% fewer parameters than the MLP considered in (Liu et al., 2019).
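To make the highway architecture mentioned above concrete, the following is a minimal PyTorch sketch of a single highway layer in the sense of Srivastava et al. (2015): a sigmoid gate T(x) interpolates between a transformed signal H(x) and the identity. The layer width, activation, and class name here are illustrative assumptions and do not reproduce the thesis's exact generalised highway network.

```python
import torch
import torch.nn as nn

class HighwayLayer(nn.Module):
    """One highway layer: y = T(x) * H(x) + (1 - T(x)) * x,
    where T(x) is a learned sigmoid gate."""

    def __init__(self, dim: int):
        super().__init__()
        self.transform = nn.Linear(dim, dim)  # H(x): transformed signal
        self.gate = nn.Linear(dim, dim)       # T(x): gating signal

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        h = torch.relu(self.transform(x))
        t = torch.sigmoid(self.gate(x))
        return t * h + (1.0 - t) * x  # gated mix of transform and identity

# Supervised use (illustrative): map option inputs, e.g. (S, K, tau, r, sigma),
# to a price and train against reference prices with an MSE loss.
net = nn.Sequential(nn.Linear(5, 64), HighwayLayer(64),
                    HighwayLayer(64), nn.Linear(64, 1))
```

Because the gate can pass the input through unchanged, such layers can be stacked deeply with relatively few parameters, which is in line with the parameter reduction reported above.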

On the semi-supervised learning problem, where we directly optimise the neural network to fit the partial differential equation (PDE) and the boundary/initial conditions, we conclude that the network architecture of the DGM allows for optimisation of both the interior condition and the non-smooth terminal condition. As this was not the case for the MLP and highway networks, the DGM network turns out to be the best performing architecture on the semi-supervised learning problems. Additionally, we find indications that on the semi-supervised learning problem the performance of the DGM network remains consistent as the dimensionality of the problem increases.
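As a rough illustration of this semi-supervised objective, the sketch below builds a loss from the Black-Scholes PDE residual at sampled interior points plus the mismatch with the non-smooth European call payoff at maturity, using automatic differentiation for the derivatives. The network interface, the sampling of collocation points, and the parameter values (r, sigma, strike, maturity) are hypothetical assumptions; the DGM's specific layer structure is not reproduced here.

```python
import torch

def bs_pde_residual(net, t, s, r=0.05, sigma=0.2):
    """Black-Scholes residual V_t + 0.5*sigma^2*S^2*V_SS + r*S*V_S - r*V
    at interior points (t, s); t and s must have requires_grad=True."""
    v = net(torch.stack([t, s], dim=-1)).squeeze(-1)
    v_t, v_s = torch.autograd.grad(v.sum(), (t, s), create_graph=True)
    v_ss = torch.autograd.grad(v_s.sum(), s, create_graph=True)[0]
    return v_t + 0.5 * sigma**2 * s**2 * v_ss + r * s * v_s - r * v

def semi_supervised_loss(net, t_in, s_in, s_term, strike=1.0, maturity=1.0):
    """Interior PDE residual plus terminal-condition mismatch for a
    European call; the non-smooth payoff enters through the data term."""
    interior = bs_pde_residual(net, t_in, s_in).pow(2).mean()
    t_term = torch.full_like(s_term, maturity)
    v_term = net(torch.stack([t_term, s_term], dim=-1)).squeeze(-1)
    payoff = torch.clamp(s_term - strike, min=0.0)  # max(S - K, 0)
    terminal = (v_term - payoff).pow(2).mean()
    return interior + terminal
```

The same construction extends to higher-dimensional problems by sampling collocation points in more variables, which is where the dimensional consistency noted above becomes relevant.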
