LMI-based Stability Analysis for Learning Control

Deep Neural Networks and Locally Weighted Learning


Abstract

Learning capabilities are a key requisite for an autonomous agent operating in dynamically changing and complex environments, where pre-programming is no longer possible. Furthermore, it is essential to guarantee that the learning agent acts safely, which requires analysing the stability properties of what it has learned. In this thesis, novel conditions are proposed for examining the stability of learned dynamics for two important model classes: Rectified Linear Unit (ReLU) Deep Neural Networks (DNNs) and Locally Weighted Learning (LWL). For the former, a theoretical and computational framework is developed by establishing an equivalence between ReLU DNN models and Piecewise Affine (PWA) systems. This makes it possible to leverage well-known tools from PWA system analysis and, consequently, to compute and characterize equilibria and determine their regions of attraction for ReLU DNNs. Due to the increased complexity of LWL methods, a structured search for appropriate stability conditions was performed until an optimal trade-off between conservativeness and computational efficiency was obtained. These stability conditions are given as Linear Matrix Inequality (LMI) problems, and they constitute the first stability results in the literature for these two model classes. Their efficacy is assessed on numerical and real-world dynamical systems, and it is shown that the proposed LMIs are not unreasonably conservative, as they accurately evaluate the stability properties of these two representations. Finally, this work demonstrates how to formulate appropriate stability conditions for learning methods in a principled manner.
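The two ideas at the core of the abstract can be illustrated with a minimal numerical sketch: within one activation region, a ReLU network is exactly an affine map (the PWA equivalence), and the linear part of that map can be tested for local stability through a discrete-time Lyapunov condition, the equation form of the LMI A^T P A - P < 0. The network weights below are arbitrary illustrative values, not taken from the thesis, and the single-region check is only a simplified stand-in for the full LMI-based analysis.

```python
import numpy as np
from scipy.linalg import solve_discrete_lyapunov

rng = np.random.default_rng(0)

# Hypothetical one-hidden-layer ReLU network modelling discrete-time
# dynamics x_{k+1} = f(x_k); all weights are illustrative placeholders.
W1 = 0.25 * rng.normal(size=(4, 2))
b1 = 0.1 * rng.normal(size=4)
W2 = 0.25 * rng.normal(size=(2, 4))
b2 = np.zeros(2)

def f(x):
    return W2 @ np.maximum(W1 @ x + b1, 0.0) + b2

def local_affine(x):
    """Within one activation region, the ReLU net equals A x + c."""
    d = (W1 @ x + b1 > 0).astype(float)   # fixed activation pattern
    A = W2 @ (d[:, None] * W1)            # region-wise linear part
    c = W2 @ (d * b1) + b2
    return A, c

x0 = np.array([0.3, -0.2])
A, c = local_affine(x0)
# PWA equivalence: the affine map reproduces the network exactly at x0
# (and throughout the activation region containing x0).
print(np.allclose(f(x0), A @ x0 + c))

# Lyapunov test for the region's linear part: A is Schur stable iff
# there exists P > 0 with A^T P A - P < 0.  Solving the corresponding
# equation A^T P A - P = -I and checking P > 0 certifies stability.
P = solve_discrete_lyapunov(A.T, np.eye(2))
print("smallest eigenvalue of P:", np.linalg.eigvalsh(P).min())
```

A positive smallest eigenvalue of P certifies local exponential stability of the affine dynamics in that region; the thesis-level conditions instead handle all regions of the PWA partition jointly, which is what makes the full problem an LMI feasibility question rather than a single Lyapunov equation.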