Batch Bayesian Learning of Large-Scale LS-SVMs Based on Low-rank Tensor Networks

Master Thesis (2021)
Author(s)

C. WANG (TU Delft - Mechanical Engineering)

Contributor(s)

Kim Batselier – Mentor (TU Delft - Team Kim Batselier)

Sander Wahls – Graduation committee member (TU Delft - Team Sander Wahls)

F. Wesel – Graduation committee member (TU Delft - Team Kim Batselier)

J.F.P. Kooij – Graduation committee member (TU Delft - Intelligent Vehicles)

Faculty
Mechanical Engineering
Copyright
© 2021 CHENXU WANG
Publication Year
2021
Language
English
Graduation Date
07-07-2021
Awarding Institution
Delft University of Technology
Programme
Mechanical Engineering | Systems and Control
Reuse Rights

Other than for strictly personal use, it is not permitted to download, forward or distribute the text or part of it, without the consent of the author(s) and/or copyright holder(s), unless the work is under an open content license such as Creative Commons.

Abstract

Least Squares Support Vector Machines (LS-SVMs) are state-of-the-art learning algorithms that have been widely used for pattern recognition. The solution of an LS-SVM is found by solving a system of linear equations, which has a computational complexity of O(N^3). As datasets grow larger, solving LS-SVM problems with standard methods becomes burdensome or even infeasible. The Tensor Train (TT) decomposition provides a way to represent data in highly compressed formats without loss of accuracy. By converting vectors and matrices into the TT format, the storage and computational requirements can be greatly reduced. In this thesis, we develop a Bayesian learning method in the TT format to solve large-scale LS-SVM problems, which involves the computation of a matrix inverse. This method allows us to incorporate prior knowledge about the model parameters through the prior distribution. As a result, we obtain a probability distribution over the parameters, which enables us to construct confidence levels for the predictions. In numerical experiments, we show that the developed method performs competitively with current methods.
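
To illustrate the linear system mentioned in the abstract, below is a minimal Python sketch of the standard LS-SVM dual formulation solved directly with a dense solver. This is not the thesis' TT-based Bayesian method; it only shows the O(N^3) baseline that motivates it. The RBF kernel, the hyperparameters gamma and sigma, and the toy data are illustrative assumptions.

import numpy as np

def rbf_kernel(X1, X2, sigma=1.0):
    # Pairwise squared Euclidean distances -> Gaussian (RBF) kernel matrix.
    d2 = np.sum(X1**2, 1)[:, None] + np.sum(X2**2, 1)[None, :] - 2 * X1 @ X2.T
    return np.exp(-d2 / (2 * sigma**2))

def lssvm_fit(X, y, gamma=10.0, sigma=1.0):
    # Solve the (N+1)x(N+1) KKT system of LS-SVM regression:
    #   [ 0      1^T        ] [ b     ]   [ 0 ]
    #   [ 1   K + I/gamma   ] [ alpha ] = [ y ]
    # A dense solve costs O(N^3), which becomes infeasible for large N
    # and motivates the low-rank tensor-network approach of the thesis.
    N = X.shape[0]
    K = rbf_kernel(X, X, sigma)
    A = np.zeros((N + 1, N + 1))
    A[0, 1:] = 1.0
    A[1:, 0] = 1.0
    A[1:, 1:] = K + np.eye(N) / gamma
    rhs = np.concatenate(([0.0], y))
    sol = np.linalg.solve(A, rhs)
    return sol[0], sol[1:]  # bias b, dual weights alpha

def lssvm_predict(X_train, X_new, alpha, b, sigma=1.0):
    # Prediction is a kernel expansion over the training points.
    return rbf_kernel(X_new, X_train, sigma) @ alpha + b

# Toy usage: fit a noisy sine and predict on a grid.
rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, (200, 1))
y = np.sin(X[:, 0]) + 0.1 * rng.standard_normal(200)
b, alpha = lssvm_fit(X, y)
y_hat = lssvm_predict(X, np.linspace(-3, 3, 50)[:, None], alpha, b)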

Files

MSc_Thesis_Chenxu_Wang.pdf
(pdf | 3.44 Mb)
License info not available