RL-Guided MPC for Autonomous Greenhouse Control

Journal Article (2025)
Author(s)

Salim Msaad (TU Delft - Team Koty McAllister)

Murray Harraway (Student TU Delft)

Robert D. Mcallister (TU Delft - Team Koty McAllister)

Research Group
Team Koty McAllister
DOI related publication
https://doi.org/10.1016/j.ifacol.2025.11.829
Publication Year
2025
Language
English
Issue number
23
Volume number
59
Pages (from-to)
449-454
Reuse Rights

Other than for strictly personal use, it is not permitted to download, forward or distribute the text or part of it, without the consent of the author(s) and/or copyright holder(s), unless the work is under an open content license such as Creative Commons.

Abstract

The efficient operation of greenhouses is essential for enhancing crop yield while minimizing energy costs. This paper investigates a control strategy that integrates Reinforcement Learning (RL) and Model Predictive Control (MPC) to optimize economic benefits in autonomous greenhouses. Previous research has explored the use of RL and MPC for greenhouse control individually, or has used MPC as the function approximator for the RL agent. This study introduces the RL-Guided MPC framework, in which an RL policy is trained and then used to construct a terminal cost and terminal region constraint for the MPC optimization problem. This approach combines RL's ability to handle uncertainty with MPC's online optimization to improve overall control performance. The RL-Guided MPC framework is compared with both MPC and RL via numerical simulations. Two scenarios are considered: a deterministic environment and an uncertain environment. Simulation results demonstrate that, in both environments, RL-Guided MPC outperforms both RL and standard MPC with shorter prediction horizons.
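The core idea described in the abstract, using a trained RL value function as the MPC terminal cost and restricting the terminal state to a region where that value function is trusted, can be illustrated with a minimal sketch. Everything below is a hypothetical stand-in, not the paper's implementation: a 1-D linear system, a quadratic stage cost, a quadratic surrogate `V_rl` in place of a learned RL value function, and an arbitrary terminal bound `x_term_bound`.

```python
import numpy as np
from scipy.optimize import minimize

# Toy stand-ins (assumptions, not from the paper): scalar dynamics
# x+ = a*x + b*u with a quadratic stage cost.
a, b = 0.9, 0.5

def stage_cost(x, u):
    return x**2 + 0.1 * u**2

def V_rl(x):
    # Hypothetical RL-derived terminal cost: in the paper this role is
    # played by a value function learned by the RL agent; here it is a
    # simple quadratic surrogate.
    return 2.0 * x**2

def rollout(x0, u_seq):
    """Simulate the model forward and return (total stage cost, terminal state)."""
    x, J = x0, 0.0
    for u in u_seq:
        J += stage_cost(x, u)
        x = a * x + b * u
    return J, x

def mpc_objective(u_seq, x0):
    J, x_N = rollout(x0, u_seq)
    return J + V_rl(x_N)  # RL-guided terminal cost closes the horizon

def solve_mpc(x0, N=5, x_term_bound=0.5):
    # Terminal region constraint |x_N| <= x_term_bound, a simple
    # stand-in for the RL-derived terminal region in the framework.
    cons = {"type": "ineq",
            "fun": lambda u: x_term_bound**2 - rollout(x0, u)[1]**2}
    res = minimize(mpc_objective, np.zeros(N), args=(x0,),
                   constraints=[cons], method="SLSQP")
    return res.x[0]  # apply only the first input (receding horizon)

u0 = solve_mpc(x0=1.0)
```

With the terminal cost and terminal constraint in place, even a short prediction horizon inherits long-horizon information from the (here simulated) learned value function, which is the mechanism the abstract credits for outperforming short-horizon MPC.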