Friedkin-Johnsen Model Is Distributed Gradient Descent
Orhan Eren Akgün (Harvard University)
Aron Vékássy (Harvard University)
Luca Ballotta (TU Delft - Team Riccardo Ferrari)
Michal Yemini (Bar-Ilan University)
Stephanie Gil (Harvard University)
Abstract
The Friedkin-Johnsen (FJ) model describes how agents adjust their opinions through repeated interactions while accounting for the influence of agents who are partially stubborn. In this work, we demonstrate that the FJ model is stepwise equivalent to solving the average consensus problem via distributed gradient descent. This perspective provides a unifying framework that bridges opinion dynamics and optimization, enabling the application of well-established results from the optimization literature. To illustrate this, we examine the recently proposed FJ model with diminishing stubbornness and extend prior results that were concerned with fixed communication graphs to time-varying and jointly connected communication graphs. We derive convergence guarantees and analyze convergence rates under these relaxed assumptions. Finally, we present numerical experiments on random graphs to showcase the impact of diminishing stubbornness dynamics on convergence in both static and time-varying settings.
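To make the setting concrete, the following is a minimal sketch of the standard FJ update, x(t+1) = Λ W x(t) + (I − Λ) u, where W is a row-stochastic influence matrix, Λ holds each agent's susceptibility λᵢ ∈ (0, 1) (so 1 − λᵢ is its stubbornness), and u is the vector of innate opinions. All numerical values here (graph size, susceptibility range, seed) are illustrative assumptions, not taken from the paper. When every λᵢ < 1, the iteration is a contraction and converges to the closed-form fixed point x* = (I − ΛW)⁻¹(I − Λ)u, which the sketch verifies:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5

# Row-stochastic influence matrix W on a hypothetical dense random graph
W = rng.random((n, n))
W /= W.sum(axis=1, keepdims=True)

# Susceptibilities lambda_i in (0.3, 0.9); stubbornness is 1 - lambda_i
lam = rng.uniform(0.3, 0.9, n)
L = np.diag(lam)

u = rng.uniform(-1.0, 1.0, n)  # innate (initial) opinions

# Iterate the FJ update: x(t+1) = Lam W x(t) + (I - Lam) u
x = u.copy()
for _ in range(2000):
    x = L @ W @ x + (np.eye(n) - L) @ u

# Closed-form fixed point: x* = (I - Lam W)^{-1} (I - Lam) u
x_star = np.linalg.solve(np.eye(n) - L @ W, (np.eye(n) - L) @ u)
print(np.allclose(x, x_star, atol=1e-10))  # fixed point reached
```

Since ‖ΛW‖∞ ≤ max λᵢ < 1 here, each step contracts the distance to x* by at least that factor, which is why a few thousand iterations suffice in this sketch.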
Files
File under embargo until 2025-12-20.