Adaptive Event-Triggered Output Synchronization of Heterogeneous Multiagent Systems
A Model-Free Reinforcement Learning Approach
Wenfeng Hu (Central South University China)
Xuan Wang (Central South University China)
Meichen Guo (TU Delft)
Biao Luo (Central South University China)
Tingwen Huang (Shenzhen University of Advanced Technology)
Abstract
This paper proposes a reinforcement learning approach to the output synchronization problem for heterogeneous leader-follower multiagent systems in which the dynamics of all agents are completely unknown. First, to address the challenge posed by the leader's unknown dynamics, we develop an experience-replay learning method that estimates the leader's dynamics using only its past state and output data for training. Second, based on the estimated leader dynamics, we design an event-triggered observer for each follower to estimate the leader's state and output. Furthermore, the experience-replay learning method and the event-triggered leader observer are co-designed so as to guarantee convergence and exclude Zeno behavior. Subsequently, to free the followers from reliance on knowledge of their own dynamics, a data-driven adaptive dynamic programming (ADP) method is presented to derive the optimal control gains iteratively, and on this basis we design a policy iteration (PI) algorithm for output synchronization. Finally, the performance of the proposed algorithm is validated through simulation.
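The first step, estimating the leader's dynamics from recorded data, can be illustrated with a minimal least-squares sketch. Assuming a linear leader model x_{k+1} = S x_k, y_k = R x_k (a common exosystem form; the specific matrices, buffer size, and data below are illustrative placeholders, not taken from the paper), replaying stored past samples is enough to recover S and R:

```python
# Minimal sketch: estimating an unknown linear leader model x_{k+1} = S x_k,
# y_k = R x_k by least squares over a replay buffer of past samples.
# S_true, R_true, and the buffer length are illustrative assumptions.
import numpy as np

S_true = np.array([[0.0, 1.0], [-1.0, 0.0]]) * 0.1 + np.eye(2)  # slowly rotating leader
R_true = np.array([[1.0, 0.0]])

# Replay buffer: store past (state, next state, output) samples.
xs, xs_next, ys = [], [], []
x = np.array([1.0, 0.0])
for _ in range(200):
    x_next = S_true @ x
    xs.append(x); xs_next.append(x_next); ys.append(R_true @ x)
    x = x_next

X = np.stack(xs)        # (N, n): past states
Xn = np.stack(xs_next)  # (N, n): successor states
Y = np.stack(ys)        # (N, p): past outputs

# Batch least-squares estimates, replayed over the whole buffer:
S_hat = np.linalg.lstsq(X, Xn, rcond=None)[0].T
R_hat = np.linalg.lstsq(X, Y, rcond=None)[0].T
print(np.linalg.norm(S_hat - S_true), np.linalg.norm(R_hat - R_true))
```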
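The event-triggered observer can likewise be sketched generically: between events each follower propagates a model-based prediction, and a fresh leader broadcast is triggered only when the prediction error crosses a threshold. The trigger law, matrices, and mismatch below are assumptions chosen for illustration, not the paper's design:

```python
# Minimal sketch of event-triggered leader-state broadcasting: the follower
# predicts with an imperfect estimated model between events; a broadcast is
# triggered only when the prediction drifts past a (assumed) threshold.
import numpy as np

S_true = np.array([[1.0, 0.02], [-0.02, 1.0]])  # leader's actual matrix
S_hat = S_true + 1e-3 * np.ones((2, 2))         # follower's imperfect estimate
x = np.array([1.0, 0.0])  # true leader state
eta = x.copy()            # follower's observer state (synced at t = 0)
events = 0

for k in range(300):
    x = S_true @ x        # leader evolves with its true dynamics
    eta = S_hat @ eta     # follower predicts with the estimated model
    # Trigger: broadcast when the open-loop prediction error is too large.
    if np.linalg.norm(x - eta) > 0.05 * np.linalg.norm(x) + 1e-3:
        eta = x.copy()    # event: leader broadcasts, observer resets
        events += 1
print(f"{events} broadcasts over 300 steps")
```

Because communication happens only at events, the broadcast count stays far below the step count while the observer error remains bounded by the threshold.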
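For the control-gain step, the sketch below shows the classic model-based policy iteration (Kleinman's algorithm) that data-driven ADP emulates: each iteration evaluates the current gain by solving a Lyapunov equation and then improves it. In the model-free version the abstract describes, the evaluation step would instead be fit from measured trajectories; the follower model and weights here are placeholders:

```python
# Minimal sketch of policy iteration for a continuous-time LQR gain, the
# model-based counterpart of data-driven ADP. A, B, Q, Rw are illustrative.
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

A = np.array([[0.0, 1.0], [-2.0, -3.0]])  # Hurwitz, so K = 0 is stabilizing
B = np.array([[0.0], [1.0]])
Q = np.eye(2)
Rw = np.eye(1)

K = np.zeros((1, 2))  # initial stabilizing gain
for _ in range(20):
    Ak = A - B @ K
    # Policy evaluation: solve A_k^T P + P A_k + Q + K^T Rw K = 0.
    P = solve_continuous_lyapunov(Ak.T, -(Q + K.T @ Rw @ K))
    # Policy improvement: K <- Rw^{-1} B^T P.
    K = np.linalg.solve(Rw, B.T @ P)
print("PI gain:", K)
```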