Searched for: author:"Batselier, K."
(1 - 16 of 16)
Batselier, K. (author), Ko, Ching Yun (author), Wong, Ngai (author)
This article reformulates the multiple-input-multiple-output Volterra system identification problem as an extended Kalman filtering problem. This reformulation has two advantages. First, it results in a simpler solution than the Tensor Network Kalman filter, as no tensor filtering equations are required anymore. The second...
conference paper 2019
Chen, Cong (author), Batselier, K. (author), Ko, Ching Yun (author), Wong, Ngai (author)
A restricted Boltzmann machine (RBM) learns a probability distribution over its input samples and has numerous uses such as dimensionality reduction, classification and generative modeling. Conventional RBMs accept vectorized data, discarding potentially important structural information in the original tensor (multi-way) input. Matrix-variate...
conference paper 2019
Chen, Cong (author), Batselier, K. (author), Ko, Ching Yun (author), Wong, Ngai (author)
There has been growing interest in extending traditional vector-based machine learning techniques to their tensor forms. The support tensor machine (STM) and support Tucker machine (STuM) are two typical tensor generalizations of the conventional support vector machine (SVM). However, the expressive power of STM is restricted due to its rank-one...
conference paper 2019
Gedon, Daniel (author), Piscaer, P.J. (author), Batselier, K. (author), Smith, C.S. (author), Verhaegen, M.H.G. (author)
An extension of the Tensor Network (TN) Kalman filter [2], [3] for large-scale LTI systems is presented in this paper. The TN Kalman filter can handle exponentially large state vectors without constructing them explicitly. In order to have efficient algebraic operations, a low TN rank is required. We exploit the possibility to approximate the...
conference paper 2019
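For reference, the sketch below shows one predict/update cycle of a standard dense-state Kalman filter in NumPy. The TN Kalman filter described above performs this same recursion with the state mean and covariance kept in tensor train format, so these dense objects are never formed explicitly; this is background only, not the paper's algorithm, and all names are illustrative.

import numpy as np

def kalman_step(x, P, A, C, Q, R, y):
    """One predict/update cycle of a standard (dense) Kalman filter."""
    # Predict: propagate mean and covariance through the LTI dynamics.
    x_pred = A @ x
    P_pred = A @ P @ A.T + Q
    # Update: correct the prediction with the measurement y.
    S = C @ P_pred @ C.T + R              # innovation covariance
    K = P_pred @ C.T @ np.linalg.inv(S)   # Kalman gain
    x_new = x_pred + K @ (y - C @ x_pred)
    P_new = (np.eye(len(x_new)) - K @ C) @ P_pred
    return x_new, P_new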
Ko, Ching Yun (author), Batselier, K. (author), Daniel, Luca (author), Yu, Wenjian (author), Wong, Ngai (author)
We propose a new tensor completion method based on tensor trains. The to-be-completed tensor is modeled as a low-rank tensor train, where we use the known tensor entries and their coordinates to update the tensor train. A novel tensor train initialization procedure is proposed specifically for image and video completion, which is demonstrated...
journal article 2020
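As a rough illustration of why the tensor train format makes completion tractable, the sketch below evaluates a single entry of a TT-represented tensor from its cores; completion methods fit such cores to the known entries and their coordinates. Shapes and names are hypothetical, not code from the paper.

import numpy as np

def tt_entry(cores, idx):
    """Evaluate one entry of a tensor stored in tensor-train form.

    cores[k] has shape (r_{k-1}, n_k, r_k) with r_0 = r_d = 1, so one
    entry is a product of small matrices -- the full tensor is never
    formed.
    """
    v = cores[0][:, idx[0], :]            # shape (1, r_1)
    for core, i in zip(cores[1:], idx[1:]):
        v = v @ core[:, i, :]             # contract along the TT rank
    return v.item()

# A random rank-2 TT for a 4x4x4 tensor; a completion method would fit
# such cores to the observed entries.
rng = np.random.default_rng(0)
cores = [rng.standard_normal(s) for s in [(1, 4, 2), (2, 4, 2), (2, 4, 1)]]
print(tt_entry(cores, (1, 2, 3)))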
Ko, Ching Yun (author), Chen, Cong (author), He, Zhuolun (author), Zhang, Yuke (author), Batselier, K. (author), Wong, Ngai (author)
Sum-product networks (SPNs) constitute an emerging class of neural networks with clear probabilistic semantics and superior inference speed over other graphical models. This brief reveals an important connection between SPNs and tensor trains (TTs), leading to a new canonical form which we call tensor SPNs (tSPNs). Specifically, we...
journal article 2020
Karagoz, Ridvan (author), Batselier, K. (author)
This article introduces the Tensor Network B-spline (TNBS) model for the regularized identification of nonlinear systems using a nonlinear autoregressive exogenous (NARX) approach. Tensor network theory is used to alleviate the curse of dimensionality of multivariate B-splines by representing the high-dimensional weight tensor as a low-rank...
journal article 2020
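The curse of dimensionality mentioned above can be made concrete with a quick storage count; the figures below are illustrative, not taken from the paper.

# A multivariate B-spline with n basis functions per input and d inputs
# has a weight tensor with n**d entries; a rank-r tensor train needs
# only about d * n * r**2 numbers.
n, d, r = 10, 8, 5
full_params = n ** d        # exponential in the number of inputs
tt_params = d * n * r ** 2  # linear in the number of inputs
print(f"full weight tensor: {full_params:.2e} parameters")
print(f"rank-{r} tensor train: {tt_params} parameters")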
Batselier, K. (author)
The estimation of an exponential number of model parameters in a truncated Volterra model can be circumvented by using a low-rank tensor decomposition approach. This low-rank property of the tensor decomposition can be interpreted as the assumption that all Volterra parameters are structured. In this article, we investigate whether it is...
journal article 2021
Batselier, K. (author), Cichocki, Andrzej (author), Wong, Ngai (author)
In this article, two new algorithms are presented that convert a given data tensor train into either a Tucker decomposition with orthogonal matrix factors or a multi-scale entanglement renormalization ansatz (MERA). The Tucker core tensor is never explicitly computed but stored as a tensor train instead, resulting in both computationally and...
journal article 2021
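As background on the target format, the sketch below computes a Tucker decomposition with orthogonal factor matrices via a plain HOSVD on a dense tensor. The paper's contribution is to obtain such factors directly from a tensor train without ever materializing the core, which this dense sketch does not do.

import numpy as np

def hosvd(T):
    """Tucker decomposition with orthogonal factor matrices (plain HOSVD)."""
    factors = []
    for k in range(T.ndim):
        # Left singular vectors of the mode-k unfolding give an
        # orthogonal factor for that mode.
        unfold = np.moveaxis(T, k, 0).reshape(T.shape[k], -1)
        U, _, _ = np.linalg.svd(unfold, full_matrices=False)
        factors.append(U)
    core = T
    for U in factors:
        # Contracting the leading axis each time cycles the modes back
        # into their original order after T.ndim steps.
        core = np.tensordot(core, U, axes=(0, 0))
    return core, factors

rng = np.random.default_rng(0)
T = rng.standard_normal((3, 4, 5))
core, factors = hosvd(T)
# Reconstruct and verify: T == core x_1 U1 x_2 U2 x_3 U3.
R = core
for U in factors:
    R = np.tensordot(R, U, axes=(0, 1))
print(np.allclose(R, T))  # True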
Batselier, K. (author)
Nonlinear parametric system identification is the estimation of nonlinear models of dynamical systems from measured data. Nonlinear models are parameterized, and it is exactly these parameters that must be estimated. Extending familiar linear models to their nonlinear counterparts quickly leads to practical problems. For example, the...
journal article 2022
Li, Lingjie (author), Yu, Wenjian (author), Batselier, K. (author)
In recent years, the application of tensors has become more widespread in fields that involve data analytics and numerical computation. Due to the explosive growth of data, low-rank tensor decompositions have become a powerful tool for taming the notorious curse of dimensionality. The main forms of tensor decomposition include CP...
journal article 2022
Chen, Cong (author), Batselier, K. (author), Yu, Wenjian (author), Wong, Ngai (author)
Tensors, as multi-dimensional data structures, have recently been exploited in the machine learning community. Traditional machine learning approaches are vector- or matrix-based and cannot handle tensorial data directly. In this paper, we propose a tensor train (TT)-based kernel technique for the first time, and apply it to the conventional...
journal article 2022
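One generic way a tensor train structure can enter a kernel is through inner products computed core by core, as sketched below; this is a standard TT contraction, not necessarily the paper's exact construction.

import numpy as np

def tt_inner(a_cores, b_cores):
    """Inner product of two tensors given in tensor-train format.

    Contracting core by core costs time polynomial in the TT ranks,
    whereas forming the full tensors would be exponential in the order.
    A Gaussian kernel between TT-format data can be built on top, e.g.
    k(X, Y) = exp(-g * (<X,X> - 2<X,Y> + <Y,Y>)).
    """
    M = np.tensordot(a_cores[0], b_cores[0], axes=([0, 1], [0, 1]))
    for A, B in zip(a_cores[1:], b_cores[1:]):
        T = np.tensordot(M, A, axes=(0, 0))            # (r_b, n, r_a')
        M = np.tensordot(T, B, axes=([0, 1], [0, 1]))  # (r_a', r_b')
    return M.item()

rng = np.random.default_rng(0)
shapes = [(1, 4, 3), (3, 4, 3), (3, 4, 1)]
a = [rng.standard_normal(s) for s in shapes]
b = [rng.standard_normal(s) for s in shapes]
print(tt_inner(a, b))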
Menzen, C.M. (author), Kok, M. (author), Batselier, K. (author)
Multiway data often naturally occurs in a tensorial format which can be approximately represented by a low-rank tensor decomposition. This is useful because it significantly reduces complexity and facilitates the treatment of large-scale data sets. In this paper, we find a low-rank representation for a given tensor by solving a...
journal article 2022
de Rooij, S.J.S. (author), Batselier, K. (author), Hunyadi, Borbala (author)
Recent advancements in wearable EEG devices have highlighted the importance of accurate seizure detection algorithms, yet the ever-increasing size of the generated datasets poses a significant challenge to existing seizure detection methods based on kernel machines. Typically, this problem is mitigated by significantly undersampling the...
conference paper 2023
Wesel, F. (author), Batselier, K. (author)
Kernel machines are one of the most studied families of methods in machine learning. In the exact setting, training requires instantiating the kernel matrix, thereby prohibiting their application to large-sampled data. One popular kernel approximation strategy which allows one to tackle large-sampled data consists of interpolating product kernels...
journal article 2023
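The scaling problem described above is easy to see: an exact kernel machine stores an N-by-N kernel matrix, quadratic in the number of samples, while a product kernel factorizes over input dimensions. A small illustration with a generic Gaussian kernel (not the paper's method; all numbers illustrative):

import numpy as np

# Exact setting: the full kernel matrix needs O(N^2) memory.
N, D = 5_000, 3
X = np.random.default_rng(0).standard_normal((N, D))
print(f"kernel matrix: {N * N * 8 / 1e9:.1f} GB at float64")

# Product structure: a Gaussian kernel factorizes over the D input
# dimensions, k(x, y) = prod_d exp(-(x_d - y_d)**2), which is what
# interpolation-based approximations of product kernels exploit.
x, y = X[0], X[1]
k_joint = np.exp(-np.sum((x - y) ** 2))
k_prod = np.prod(np.exp(-(x - y) ** 2))
print(np.isclose(k_joint, k_prod))  # True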
Wesel, F. (author), Batselier, K. (author)
In the context of kernel machines, polynomial and Fourier features are commonly used to provide a nonlinear extension to linear models by mapping the data to a higher-dimensional space. Unless one considers the dual formulation of the learning problem, which renders exact large-scale learning infeasible, the exponential increase of model...
journal article 2024
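The exponential increase the abstract refers to is visible when tensor-product features are formed explicitly: with m basis functions per input dimension, a D-dimensional input yields m**D features. A small Kronecker-product illustration (illustrative numbers only):

import numpy as np
from functools import reduce

def poly_features(x, m=3):
    """Per-dimension polynomial features [1, x, x**2, ...] (m of them)."""
    return np.array([x ** p for p in range(m)])

x = np.array([0.5, -1.0, 2.0, 0.1])           # D = 4 inputs
per_dim = [poly_features(xi) for xi in x]
# The tensor-product feature map is the Kronecker product of the
# per-dimension features: its length is m**D = 3**4 = 81 and grows
# exponentially with D, which is what low-rank-constrained models avoid.
phi = reduce(np.kron, per_dim)
print(phi.shape)  # (81,)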