Volterra Models
A Finite-Sample Complexity Bound
A.L. de Ruijter (TU Delft - Mechanical Engineering)
M. Khosravi – Mentor (TU Delft - Team Khosravi)
M.A. Sharifi Kolarijani – Graduation committee member (TU Delft - Team Amin Sharifi Kolarijani)
D. Liu – Graduation committee member (TU Delft - Team Khosravi)
Other than for strictly personal use, it is not permitted to download, forward or distribute the text or part of it, without the consent of the author(s) and/or copyright holder(s), unless the work is under an open content license such as Creative Commons.
Abstract
This thesis addresses a significant gap in our understanding of the finite-sample performance of algorithms for learning nonlinear systems, specifically Volterra series, where the existing literature relies predominantly on asymptotic analysis. We develop a novel framework, termed Directional Probabilistic Excitation (dPE), that provides rigorous performance guarantees for linear-in-parameter models under mild assumptions on input excitation and system stability. Explicit, non-asymptotic complexity bounds are derived for learning Volterra series via Ordinary Least Squares, revealing that the minimal sample size scales linearly with the combinatorial model dimension $D$, while the estimation error decays at a rate of $\mathcal{O}(\sqrt{D/N})$ under sub-Gaussian noise conditions. Furthermore, we show that the framework extends to polynomial NARMAX models via regularized least squares, quantifying the additional statistical cost imposed by feedback loops and dependent noise. Numerical simulations validate the theoretical bounds, illustrating the critical influence of input excitation, noise robustness, and the curse of dimensionality on convergence rates. Ultimately, this work bridges the sharp finite-sample theory of linear systems with the expressive power of nonlinear Volterra models, offering a foundational statistical framework for learning fading-memory nonlinear dynamical systems.
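The abstract's central claim, that OLS estimation error for a truncated Volterra series decays roughly like $\sqrt{D/N}$, can be illustrated with a small simulation. The sketch below is not the thesis's method; it is a minimal toy example under assumed choices (memory length $M = 3$, second-order kernels only, white Gaussian input, hypothetical coefficients $\theta$), where $D = M + M(M+1)/2$ counts the linear and symmetric quadratic monomials of the lagged inputs.

```python
import numpy as np

rng = np.random.default_rng(0)

M = 3                        # memory length (illustrative assumption)
D = M + M * (M + 1) // 2     # linear + symmetric quadratic terms

# Hypothetical "true" Volterra coefficients for the toy system
theta_true = rng.standard_normal(D)

def regressors(u, t):
    """Linear and upper-triangular quadratic monomials of M lagged inputs."""
    lags = u[t : t + M]
    quad = np.outer(lags, lags)[np.triu_indices(M)]
    return np.concatenate([lags, quad])

def ols_error(N, noise_std=0.1):
    """Simulate N samples of the toy Volterra system and fit OLS;
    return the Euclidean parameter-estimation error."""
    u = rng.standard_normal(N + M)               # exciting white input
    Phi = np.array([regressors(u, t) for t in range(N)])
    y = Phi @ theta_true + noise_std * rng.standard_normal(N)
    theta_hat, *_ = np.linalg.lstsq(Phi, y, rcond=None)
    return np.linalg.norm(theta_hat - theta_true)

# Error should shrink as N grows, roughly like sqrt(D/N)
e_small, e_large = ols_error(500), ols_error(50_000)
print(e_small, e_large)
```

Increasing the Volterra order or memory length inflates $D$ combinatorially, which is the "curse of dimensionality" effect the abstract refers to: the sample size must grow at least linearly with $D$ before the error bound takes hold.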
Files
File under embargo until 27-02-2028