Graph convolution reinforcement learning for active wake control in windfarms

Application of a multi-agent reinforcement learning algorithm


Abstract

Wind energy, generated by windfarms, plays an increasingly critical role in meeting current and future energy demands. Windfarms, however, suffer from wake-induced power losses when turbines are located in close proximity. Wakes, regions of turbulence and reduced wind speed created as air passes through the rotors, lower the efficiency of downstream turbines. Wake losses can be reduced by yawing upstream turbines to steer their wakes away from downstream turbines. Although yawing a turbine off-wind costs it some power, the gains at downstream turbines can outweigh this loss. Yawing turbines to increase overall power output is known as Active Wake Control, and the literature shows that single-agent reinforcement learning algorithms can learn such control policies. These approaches, however, are limited to a small number of turbines and scale poorly to larger windfarms. Multi-agent reinforcement learning algorithms do scale to larger windfarms, and this paper investigates the application of the DGN algorithm to windfarm active wake control. DGN is a fully cooperative algorithm that uses a graph representation of the agents to encourage collaboration among neighboring turbines. DGN is particularly well suited to this setting because windfarms naturally have a topological structure and, depending on the modeling choices, graphs can capture a significant amount of this information. This paper demonstrates that DGN can learn useful wake-control policies. Although it does not outperform the single-agent DQN algorithm on small windfarms, its advantage becomes apparent in larger windfarms, where its performance remains consistent while DQN's deteriorates.
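To make the graph representation mentioned above concrete, the sketch below shows one plausible way to derive the agent graph DGN operates on from a windfarm layout: each turbine becomes a node connected to its k nearest neighbors. This is an illustrative assumption, not the thesis's actual construction; the function name, the k-nearest-neighbor rule, and the example layout are all hypothetical.

```python
import numpy as np

def neighbor_adjacency(positions, k=2):
    """Adjacency matrix linking each turbine to its k nearest neighbors.

    positions: (n, 2) array of turbine (x, y) coordinates in meters.
    Returns an (n, n) 0/1 matrix that includes self-loops, since
    DGN-style graph convolutions typically aggregate features over
    each agent together with its neighbors.
    """
    positions = np.asarray(positions, dtype=float)
    n = len(positions)
    # Pairwise Euclidean distances between all turbines.
    dists = np.linalg.norm(positions[:, None, :] - positions[None, :, :], axis=-1)
    adj = np.zeros((n, n), dtype=int)
    for i in range(n):
        # argsort places the turbine itself (distance 0) first,
        # so taking k + 1 indices yields the node plus k neighbors.
        nearest = np.argsort(dists[i])[: k + 1]
        adj[i, nearest] = 1
    return adj

# Hypothetical 2x2 farm on a regular grid with 500 m spacing.
layout = [(0, 0), (500, 0), (0, 500), (500, 500)]
A = neighbor_adjacency(layout, k=2)
```

With k=2 on this square layout, each turbine connects to itself and the two turbines 500 m away, while the diagonally opposite turbine (about 707 m away) is excluded; a graph convolution layer would then exchange observations only along these edges.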