Deep Reinforcement Learning for Active Wake Control


Abstract

Wind farms suffer from so-called wake effects: when turbines are located in the wind shadows of other turbines, their power output is substantially reduced. These losses can be partially mitigated by actively deviating turbine yaws from their individually optimal directions. Most existing wake control techniques have two major limitations: they use simplified wake models to optimize the control strategy, and they assume that the atmospheric conditions remain stable. In this paper, we address these limitations by applying reinforcement learning (RL). RL forgoes the wake model entirely and learns an optimal control strategy based on the observed atmospheric conditions and a reward signal, in this case the power output of the farm. It also accounts for random transitions in the observations, such as turbulent fluctuations in the wind. To evaluate RL for active wake control, we provide a simulator based on the state-of-the-art FLORIS model in the OpenAI Gym format. Next, we propose three different state-action representations of the active wake control problem and investigate their effect on the performance of RL-based wake control. Finally, we compare RL to a state-of-the-art wake control strategy based on FLORIS and show that RL is less sensitive to changes in unobservable data.
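
To make the Gym-style setup concrete, the sketch below shows a minimal interaction loop of the kind the abstract describes: an agent observes atmospheric conditions, issues yaw actions, and receives the farm's power output as reward. The environment id `WindFarm-v0`, the episode length, and the random placeholder policy are illustrative assumptions, not the paper's actual configuration.

```python
import gym

# Hypothetical environment id; the paper's FLORIS-based simulator is
# assumed here to follow the standard OpenAI Gym interface.
env = gym.make("WindFarm-v0")

obs = env.reset()
total_power = 0.0
for t in range(1000):
    # Placeholder policy: sample random yaw adjustments from the action
    # space. A trained RL agent would instead map the observed atmospheric
    # conditions (e.g., wind speed, direction, turbulence) to yaw actions.
    action = env.action_space.sample()
    obs, reward, done, info = env.step(action)  # reward: farm power output
    total_power += reward
    if done:
        obs = env.reset()
env.close()
```

Because the reward is simply the measured power of the farm, any standard RL algorithm that fits this observe-act-reward loop can be trained against the simulator without access to an explicit wake model.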