Representer Theorem for Learning Koopman Operators

Publication Year
2023
Language
English
Copyright
© 2023 M. Khosravi
Research Group
Team Khosravi
Issue number
5
Volume number
68
Pages (from-to)
2995-3010
Reuse Rights

Other than for strictly personal use, it is not permitted to download, forward or distribute the text or part of it, without the consent of the author(s) and/or copyright holder(s), unless the work is under an open content license such as Creative Commons.

Abstract

In this work, we consider the problem of learning the Koopman operator for discrete-time autonomous systems. The learning problem is formulated as a generic constrained regularized empirical loss minimization in the infinite-dimensional space of linear operators. We show that a representer theorem holds for this learning problem under certain general conditions, which allows a convex reformulation of the problem in a specific finite-dimensional space without any approximation or loss of precision. We discuss the inclusion of various forms of regularization and constraints in the learning problem, such as the operator norm, the Frobenius norm, the operator rank, the nuclear norm, and stability, and we derive the corresponding equivalent finite-dimensional problems. Furthermore, we demonstrate the connection between the proposed formulation and extended dynamic mode decomposition. We present several numerical examples to illustrate the theoretical results and to verify the performance of regularized learning of the Koopman operator.
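To make the finite-dimensional reformulation mentioned above concrete, the following is a minimal sketch, assuming a Python/NumPy setting, of an EDMD-style estimate of the Koopman matrix from snapshot pairs with Frobenius-norm (ridge) regularization. The function and dictionary names (koopman_edmd_ridge, monomials) and the regularization weight lam are illustrative assumptions for this sketch, not the paper's notation or its exact algorithm.

```python
import numpy as np

def koopman_edmd_ridge(X, Y, dictionary, lam=1e-3):
    """Estimate a finite-dimensional Koopman matrix from snapshot pairs (x_k, x_{k+1}).

    X, Y       : (n_samples, n_states) arrays of consecutive states, Y[k] = f(X[k]).
    dictionary : callable mapping a (n_samples, n_states) array to a
                 (n_samples, n_features) array of observables.
    lam        : Frobenius-norm (ridge) regularization weight.
    """
    Phi_X = dictionary(X)   # observables evaluated at current states
    Phi_Y = dictionary(Y)   # observables evaluated at successor states
    n = Phi_X.shape[1]
    # Regularized least squares: min_K ||Phi_Y - Phi_X K||_F^2 + lam ||K||_F^2
    G = Phi_X.T @ Phi_X + lam * np.eye(n)   # regularized Gram matrix
    A = Phi_X.T @ Phi_Y
    K = np.linalg.solve(G, A)               # finite-dimensional Koopman estimate
    return K

# Example: a monomial dictionary for a 2-D system (illustrative choice)
def monomials(X):
    x1, x2 = X[:, 0], X[:, 1]
    return np.column_stack([np.ones_like(x1), x1, x2, x1 * x2, x1**2, x2**2])
```

Other regularizers and constraints discussed in the abstract (e.g., nuclear norm, rank, or stability constraints) would replace the closed-form ridge solve with a finite-dimensional convex program over K.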
