Distributed Actor-Critic Algorithms for Multiagent Reinforcement Learning Over Directed Graphs

Abstract

Actor-critic (AC) cooperative multiagent reinforcement learning (MARL) over directed graphs is studied in this article. The goal of the agents in MARL is to maximize the globally averaged return in a distributed way, i.e., each agent can exchange information only with its neighboring agents. AC methods proposed in the literature require the communication graphs to be undirected and the weight matrices to be doubly stochastic (more precisely, the weight matrices are row stochastic and their expectations are column stochastic). Unlike these methods, we propose a distributed AC algorithm for MARL over a directed graph with fixed topology that only requires the weight matrix to be row stochastic. We then also study MARL over directed graphs (possibly not connected) with changing topologies, proposing a different distributed AC algorithm based on the push-sum protocol that only requires the weight matrices to be column stochastic. Convergence of the proposed algorithms is proven for linear function approximation of the action value function. Simulations are presented to demonstrate the effectiveness of the proposed algorithms.
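To illustrate the push-sum mechanism underlying the second algorithm, here is a minimal sketch of push-sum (ratio) consensus for computing a global average over a directed graph with column-stochastic weights. The graph, weights, and local values below are illustrative assumptions, not taken from the paper: each agent pushes equal shares of a numerator and a weight to its out-neighbors, and the ratio of the two converges to the global average.

```python
# Push-sum (ratio) consensus sketch on a fixed, strongly connected digraph.
# Assumptions (illustrative, not from the paper): 4 agents, self-loops for
# aperiodicity, and uniform out-degree shares, which make the implied weight
# matrix column stochastic, as the abstract requires for this setting.

# out_neighbors[j] lists the agents that receive from agent j (including j).
out_neighbors = {
    0: [0, 1],
    1: [1, 2],
    2: [2, 3, 0],
    3: [3, 0],
}

n = len(out_neighbors)
values = [3.0, -1.0, 7.0, 5.0]   # local quantities to average
target = sum(values) / n         # globally averaged value: 3.5

x = list(values)                 # push-sum numerators
w = [1.0] * n                    # push-sum weights (denominators)

for _ in range(200):
    new_x = [0.0] * n
    new_w = [0.0] * n
    for j in range(n):
        share = 1.0 / len(out_neighbors[j])   # column-stochastic weights
        for i in out_neighbors[j]:
            new_x[i] += share * x[j]
            new_w[i] += share * w[j]
    x, w = new_x, new_w

estimates = [xi / wi for xi, wi in zip(x, w)]
print(estimates)  # each agent's ratio approaches the global average 3.5
```

Note that the column-stochastic condition only asks each agent to know how many out-neighbors it sends to, not who sends to it; this is what makes push-sum suitable for directed, possibly time-varying topologies.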

Files

- Distributed_Actor_Critic_Algor... (pdf, 1.98 MB): unknown license; download not available
- Distributed_ActorCritic_Algori... (pdf, 1.74 MB): unknown license; embargo expired on 11-07-2022