This paper proposes a reinforcement learning approach to the output synchronization problem for heterogeneous leader-follower multi-agent systems in which the dynamics of all agents are completely unknown. First, to address the challenge posed by the leader's unknown dynamics, we develop an experience-replay learning method that estimates the leader's dynamics using only its past state and output information as training data. Second, based on the estimated leader dynamics, we design an event-triggered observer for each follower to estimate the leader's state and output. Furthermore, the experience-replay learning method and the event-triggered leader observer are co-designed, which guarantees convergence and excludes Zeno behavior. Subsequently, to free the followers from reliance on knowledge of their own dynamics, a data-driven adaptive dynamic programming (ADP) method is presented that iteratively derives the optimal control gains, on which a policy iteration (PI) algorithm for output synchronization is built. Finally, the performance of the proposed algorithm is validated through a simulation example.
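For intuition, below is a minimal sketch of the two learning components named above. All matrices, dimensions, and the sampling scheme are illustrative assumptions, not the paper's actual formulation: it assumes a hypothetical leader model v_dot = S v with S unknown, and fits an estimate of S by least squares over a replay stack of recorded state samples.

```python
import numpy as np

# Sketch: experience-replay estimation of the leader's dynamics matrix.
# Hypothetical setup: the replay stack holds past samples (v(t_k), v_dot(t_k));
# S_hat is fit by least squares over the whole stack at once.
rng = np.random.default_rng(0)
S_true = np.array([[0.0, 1.0],
                   [-1.0, 0.0]])          # unknown to the algorithm

V = rng.standard_normal((50, 2))          # replay stack of recorded states
V_dot = V @ S_true.T                      # corresponding state derivatives

# Solve min_S || V_dot - V S^T ||_F, i.e. V S^T = V_dot in least squares.
S_hat = np.linalg.lstsq(V, V_dot, rcond=None)[0].T
print("estimation error:", np.linalg.norm(S_hat - S_true))
```

The PI step can likewise be sketched through its model-based counterpart, the Kleinman recursion; the paper's ADP method is data-driven, replacing the Lyapunov solve below with a least-squares problem built from measured input-state data, so A and B here are stand-ins used only to illustrate the iteration.

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

# Illustrative follower model x_dot = A x + B u (A, B assumed for the sketch).
A = np.array([[0.0, 1.0],
              [0.0, -0.5]])
B = np.array([[0.0],
              [1.0]])
Q, R = np.eye(2), np.eye(1)

K = np.array([[1.0, 1.0]])                # initial stabilizing gain
for _ in range(20):
    Ak = A - B @ K
    # Policy evaluation: solve Ak^T P + P Ak + Q + K^T R K = 0 for P
    P = solve_continuous_lyapunov(Ak.T, -(Q + K.T @ R @ K))
    # Policy improvement: K <- R^{-1} B^T P
    K = np.linalg.solve(R, B.T @ P)
print("PI gain K:", K)
```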