Deep reinforcement learning for active flow control in a turbulent separation bubble

Journal Article (2025)
Author(s)

Bernat Font (TU Delft - Ship Hydromechanics)

Francisco Alcántara-Ávila (KTH Royal Institute of Technology)

Jean Rabault (Independent researcher)

Ricardo Vinuesa (KTH Royal Institute of Technology)

Oriol Lehmkuhl (Barcelona Supercomputing Center)

Research Group
Ship Hydromechanics
DOI (related publication)
https://doi.org/10.1038/s41467-025-56408-6
Publication Year
2025
Language
English
Issue number
1
Volume number
16
Reuse Rights

Other than for strictly personal use, it is not permitted to download, forward or distribute the text or part of it, without the consent of the author(s) and/or copyright holder(s), unless the work is under an open content license such as Creative Commons.

Abstract

The control efficacy of deep reinforcement learning (DRL) compared with classical periodic forcing is numerically assessed for a turbulent separation bubble (TSB). We show that a control strategy learned on a coarse grid also works on a fine grid, as long as the coarse grid captures the main flow features. This makes it possible to significantly reduce the computational cost of DRL training in a turbulent-flow environment. On the fine grid, the periodic control reduces the TSB area by 6.8%, while the DRL-based control achieves a 9.0% reduction. Furthermore, the DRL agent provides a smoother control strategy while conserving momentum instantaneously. A physical analysis of the DRL control strategy reveals the production of large-scale counter-rotating vortices by adjacent actuator pairs. It is shown that the DRL agent acts on a wide range of frequencies to sustain these vortices in time. Finally, we introduce our open-source computational fluid dynamics and DRL framework, suited for the next generation of exascale computing machines.
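
As a rough illustration of the instantaneous momentum conservation mentioned in the abstract, a zero-net-flux constraint can be imposed by shifting the agent's raw outputs so that adjacent actuators cancel each other at every time step. The following is a minimal Python sketch under that assumption; the function name and actuator layout are hypothetical and are not taken from the paper's framework.

import numpy as np

def zero_net_flux(raw_actions: np.ndarray) -> np.ndarray:
    # Shift the raw jet amplitudes so they sum to zero: mass/momentum
    # injected by some actuators is removed by the others at the same
    # instant (a zero-net-mass-flux actuation constraint).
    return raw_actions - raw_actions.mean()

# Example with three adjacent actuators: the shifted actions sum to ~0,
# i.e. momentum is conserved instantaneously at every control step.
raw = np.array([0.4, -0.1, 0.3])
actions = zero_net_flux(raw)
print(actions, actions.sum())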