
A. Papapantoleon

20 records found

Importance Sampling and Quantile Estimation for Concentration Credit Risk

Efficient algorithms for assessing concentration in credit portfolios

This thesis investigates efficient Monte Carlo methods for estimating the 99.9% Value-at-Risk of
concentrated credit portfolios modelled through a normal copula framework. Crude Monte Carlo
simulation is inefficient when estimating extreme loss levels. To address this ine ...
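The variance-reduction idea behind importance sampling can be illustrated with a minimal one-dimensional sketch (a hypothetical toy problem, not the thesis's copula portfolio model): to estimate a small tail probability, sample from a distribution shifted into the tail and correct each sample by the likelihood ratio.

```python
import math
import numpy as np

rng = np.random.default_rng(0)
n = 100_000
c = 4.0                      # rare-event threshold: P(Z > 4) is about 3.17e-5

# Crude Monte Carlo: almost no samples land in the tail.
z = rng.standard_normal(n)
crude = np.mean(z > c)

# Importance sampling: draw from N(c, 1) so the tail is hit often,
# then reweight by the likelihood ratio phi(y) / phi(y - c).
y = rng.normal(loc=c, scale=1.0, size=n)
lr = np.exp(-c * y + 0.5 * c**2)
is_est = np.mean((y > c) * lr)

exact = 0.5 * math.erfc(c / math.sqrt(2))
```

With the same sample budget, the crude estimator sees only a handful of tail events, while the shifted sampler places roughly half of its samples past the threshold and recovers the probability with a small relative error.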
This study explores the application of risk-sensitive Reinforcement Learning (RL) in portfolio optimization, aiming to integrate asset pricing and portfolio construction into a unified, end-to-end RL framework. While RL has shown promise in various domains, its traditional risk-n ...

This thesis addresses the portfolio allocation problem within a financial market featuring one riskless asset and a risky asset exhibiting rough Bergomi volatility. The objective is to maximize the expected utility of terminal wealth with respect to power utility. The volati ...
Blockchain-based payment systems typically assume a synchronous communication network and a limited workload to confirm transactions within a bounded timeframe. These assumptions make such systems less effective in scenarios where reliable network access is not guaranteed.
Of ...
In this thesis, we aim to improve the application of deep reinforcement learning in portfolio optimization. Reinforcement learning has in recent years been applied to a wide range of problems, from games to control systems in the physical world and also to finance. While reinfo ...
Implied volatility surfaces (IVS) are essential for option pricing and risk management. Recently, generative deep learning models, such as the Denoising Diffusion Probabilistic Model (DDPM), have gained popularity for generating IVS. However, these machine-generated surfaces do n ...

Multi Target XGBoost Cash Flow Prediction

An Efficient Machine Learning Algorithm For Future Liability Projections

Insurers are required to have buffers to be able to meet financial obligations that result from their portfolios, which are determined using a cash flow model. The input of such a cash flow model consists, among other things, of two mortality tables and the portfolio of an insurer. M ...
The computation of multivariate expectations is a common task in various fields related to probability theory. This thesis aims to develop a generic and efficient solver for multivariate expectation problems, with a focus on its application in the field of quantitative finance, s ...
In this research, a new method for pricing continuous arithmetic-average Asian options is proposed. The computation is based on the Fourier-cosine expansion, namely the COS method. To this end, we derive the characteristic function of integrated Geometric Brownian Motion based on Bouge ...
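The core step of the COS method — recovering a density from a known characteristic function by a truncated Fourier-cosine expansion — can be sketched as follows. The example uses the standard normal, whose characteristic function is available in closed form; the truncation interval [a, b] and the number of terms N are illustrative choices, not the settings of the thesis.

```python
import numpy as np

def cos_density(phi, x, a=-10.0, b=10.0, N=128):
    """Recover a density on [a, b] from its characteristic function
    phi via the truncated Fourier-cosine (COS) expansion."""
    u = np.arange(N) * np.pi / (b - a)          # cosine frequencies
    F = 2.0 / (b - a) * (phi(u) * np.exp(-1j * u * a)).real
    F[0] *= 0.5                                 # first term is halved
    return F @ np.cos(np.outer(u, x - a))

# Sanity check against the standard normal, phi(u) = exp(-u^2 / 2).
f = cos_density(lambda u: np.exp(-0.5 * u**2), np.array([0.0, 1.0]))
```

Because the cosine coefficients decay exponentially for smooth densities, a modest N already reproduces the Gaussian density to high accuracy.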
This thesis investigates the estimation of option-implied probability density functions for inflation using inflation options, focusing not only on the expected value but also on the whole distribution. The aim is to identify the most effective method for measuring the market expectation ...

Electrical energy storage scheduling

Short-term scheduling for the intraday market using stochastic programming

The global push for renewable energy faces challenges due to the unpredictable and inconsistent nature of wind and solar sources. These inherent characteristics of renewable energy sources add volatility to the electricity markets. In response, electrical energy storage (EES) eme ...
Barrier options, although highly liquid financial derivatives, present notable pricing challenges. In this thesis, we present a novel pricing approach for valuing continuously-monitored knock-out barrier options within the framework of stochastic volatility models.

The u ...
The Exposure at Default (EAD) metric is widely used in the calculations for the capital requirements concerning Counterparty Credit Risk (CCR). In this thesis we compare several methods for calculating this EAD. Basel III gives us two methods, the Standardized Approach for CCR (SA-CCR) and the Interna ...
American option pricing has been an active research area in financial engineering over the past few decades. Since no analytic closed-form solution exists, various numerical approaches have been developed. Among all proposed methods, the least squares Monte Carlo (LSMC) approach is ...
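The regress-then-compare backbone of LSMC can be sketched for an American put under Black-Scholes dynamics. The parameters below are the standard textbook example rather than the thesis's setting, and a simple cubic polynomial stands in for the regression basis:

```python
import numpy as np

def lsmc_american_put(S0=36.0, K=40.0, r=0.06, sigma=0.2,
                      T=1.0, steps=50, paths=50_000, seed=0):
    """Longstaff-Schwartz LSMC price of an American put under GBM,
    using a cubic polynomial as the regression basis."""
    rng = np.random.default_rng(seed)
    dt = T / steps
    disc = np.exp(-r * dt)
    # Simulate GBM paths on the exercise grid dt, 2*dt, ..., T.
    z = rng.standard_normal((steps, paths))
    S = S0 * np.exp(np.cumsum((r - 0.5 * sigma**2) * dt
                              + sigma * np.sqrt(dt) * z, axis=0))
    cash = np.maximum(K - S[-1], 0.0)          # payoff at maturity
    for t in range(steps - 2, -1, -1):         # backward induction
        cash *= disc
        itm = K - S[t] > 0                     # regress only in the money
        if itm.any():
            coeff = np.polyfit(S[t, itm], cash[itm], 3)
            cont = np.polyval(coeff, S[t, itm])   # continuation value
            exercise = K - S[t, itm]
            cash[itm] = np.where(exercise > cont, exercise, cash[itm])
    return disc * cash.mean()

price = lsmc_american_put()
```

For these parameters the finite-difference benchmark is roughly 4.48, and the in-the-money-only regression follows the original Longstaff-Schwartz recipe; in practice the basis functions and their degree are the main tuning choices.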

Energy Study of Drying

Using Machine Learning to Predict the Energy Consumption of an Industrial Powder Drying Process

In this thesis, we use data science / statistical techniques to better understand the energy consumption behind a powder drying facility located in Zwolle, as part of Abbott's initiative to better manage its energy consumption. As powder drying is by far the facility's most energ ...

Improving data quality is of the utmost importance for any data-driven company, as data quality is unmistakably tied to business analytics and processes. One method to improve upon data quality is to restore missing and wrong data entries.

The goal of this research is to construct an algorithm that can restore missing and wrong data entries while making use of a human-adaptive framework. This algorithm has been constructed in a modular fashion and consists of three main modules: Data Transformation, Data Structure Analysis and Model Selection. Data Transformation concerns itself with the conversion of raw data into data types and forms the other modules can use.

Data Structure Analysis has been designed to deal with correctly missing data and dichotomy in the target feature by making use of three clustering algorithms: DBSCAN, K-Means and Diffusion Maps. DBSCAN is used to determine the necessity of clustering as well as the initialisation of the K-Means algorithm. K-Means and Diffusion Maps have been used as clustering methods in the one-dimensional target feature and the two-dimensional input-target feature pairs, respectively. Data Structure Analysis has further been designed to perform feature selection through three filter methods: CorrCoef, FCBF and Treelet.
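The DBSCAN-then-K-Means pipeline described above can be sketched on synthetic one-dimensional data. The `eps`/`min_samples` values and the scikit-learn implementation are illustrative assumptions, not the thesis's configuration:

```python
import numpy as np
from sklearn.cluster import DBSCAN, KMeans

rng = np.random.default_rng(0)
# Toy one-dimensional target feature with two well-separated groups.
x = np.concatenate([rng.normal(0.0, 0.1, 50),
                    rng.normal(5.0, 0.1, 50)]).reshape(-1, 1)

# Step 1: DBSCAN decides whether clustering is warranted at all
# and how many clusters exist (noise points get the label -1).
labels = DBSCAN(eps=0.5, min_samples=5).fit(x).labels_
n_clusters = len(set(labels)) - (1 if -1 in labels else 0)

# Step 2: if DBSCAN found structure, use its cluster means to
# initialise K-Means instead of a random initialisation.
if n_clusters >= 2:
    centers = np.array([x[labels == c].mean(axis=0)
                        for c in range(n_clusters)])
    km = KMeans(n_clusters=n_clusters, init=centers, n_init=1).fit(x)
```

Using DBSCAN first answers both questions at once: whether clustering is needed (more than one cluster found) and where K-Means should start, removing its sensitivity to random initialisation.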

Model Selection proposes a novel approach to selecting the best model from a candidate set, through the optimisation of a conditional model-ranking strategy based on the prior construction of theoretical tests. Our candidate set consisted of Expectation Maximisation, K-Means, Multi-Layer Perceptron, Nearest Neighbor, Random Forest, Linear Regression, Polynomial Regression and ElasticNet Regression.
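A simplified version of ranking a candidate model set — here by plain cross-validated score rather than the thesis's conditional ranking strategy with theoretical testing — might look like the following; the data and the particular model choices are purely illustrative:

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.linear_model import LinearRegression, ElasticNet
from sklearn.ensemble import RandomForestRegressor
from sklearn.neighbors import KNeighborsRegressor

rng = np.random.default_rng(0)
X = rng.uniform(-2, 2, size=(200, 2))
y = 3 * X[:, 0] - X[:, 1] + rng.normal(0, 0.1, 200)  # linear ground truth

candidates = {
    "linear": LinearRegression(),
    "elasticnet": ElasticNet(alpha=0.1),
    "forest": RandomForestRegressor(n_estimators=50, random_state=0),
    "knn": KNeighborsRegressor(n_neighbors=5),
}
# Rank every candidate by its mean cross-validated R^2, keep the winner.
scores = {name: cross_val_score(model, X, y, cv=5).mean()
          for name, model in candidates.items()}
best = max(scores, key=scores.get)
```

On a linear target the unregularised linear model wins; a conditional ranking strategy, as in the thesis, would additionally restrict which candidates are even compared, based on properties of the data detected beforehand.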

In terms of restorability, it was shown that the optimal configuration of the Cleansing Algorithm for the restoration of missing data was obtained by opting not to use clustering, using a custom alteration of the Treelet algorithm for feature selection, and making use of the model selection strategy. This not only led to the greatest restorability of 56.90% on Aegon data sets, an improvement of 44.83% compared to not using the Cleansing Algorithm, but also to a more than fourfold reduction in computation time. A more realistic restorability, accounting for the presence of correctly missing data, was given by the same configuration combined with one-dimensional output clustering, resulting in a restorability of 43.10% on Aegon data sets. As such, it was deemed possible to restore missing data on Aegon data sets.

With respect to the human-adaptive framework, the algorithm was constructed to be modular, in the sense that any alternative feature selection or clustering approach can be implemented with ease. Furthermore, the model selection module allows the theoretical testing and the choice of regression or classification models for the restoration of missing data to be customised. In doing so, the algorithm has laid the foundations for human adaptivity of the Cleansing Algorithm.

A wide range of practical problems involve computing multi-dimensional integrals. However, in most cases it is hard to find analytical solutions to these integrals, and their numerical solutions suffer from the `curse of dimensionality', which means the com ...
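The contrast this abstract alludes to can be made concrete: a tensor grid with m points per dimension costs m^d function evaluations, while plain Monte Carlo converges at a dimension-independent O(n^{-1/2}) rate. A minimal sketch with a hypothetical separable integrand:

```python
import numpy as np

def mc_integrate(f, d, n=200_000, seed=0):
    """Monte Carlo estimate of the integral of f over [0, 1]^d.

    The O(n^{-1/2}) error rate does not depend on d, whereas a
    tensor grid with m points per axis needs m**d evaluations."""
    rng = np.random.default_rng(seed)
    x = rng.random((n, d))           # uniform samples in the unit cube
    return f(x).mean()

# Separable test integrand: the exact value is (e - 1)**d.
d = 10
est = mc_integrate(lambda x: np.exp(x).prod(axis=1), d)
exact = (np.e - 1) ** d
```

Even in ten dimensions — where a grid of just ten points per axis would already need 10^10 evaluations — 200,000 random samples recover the integral to within a fraction of a percent.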
Interbank offered rates play a critical role in the hedging processes of banks, hedge funds and institutional investors. However, the Financial Stability Board recommended replacing these rates with alternative risk-free rates at the end of 2021. The new rates will be backward-look ...
The VIX index, which measures the expected volatility of the S&P 500 index over the next 30 days, is of interest to many investors in the US financial market. Allowing the volatility of the financial market to be used as a trading tool gives rise to interesting investment opportunities, s ...
Computing portfolio credit losses and associated risk sensitivities is crucial for the financial industry to help guard against unexpected events. Quantitative models play an instrumental role to this end. As a direct consequence of their probabilistic nature, portfolio losses ar ...