Accelerating process synthesis with reinforcement learning

Transfer learning from multi-fidelity simulations and variational autoencoders

Journal Article (2025)
Author(s)

Qinghe Gao (TU Delft - ChemE/Process Systems Engineering)

Haoyu Yang (Student TU Delft)

Maximilian F. Theisen (Student TU Delft)

A.M. Schweidtmann (TU Delft - ChemE/Process Systems Engineering)

Research Group
ChemE/Process Systems Engineering
DOI of related publication
https://doi.org/10.1016/j.compchemeng.2025.109192
Publication Year
2025
Language
English
Volume number
201
Reuse Rights

Other than for strictly personal use, it is not permitted to download, forward or distribute the text or part of it, without the consent of the author(s) and/or copyright holder(s), unless the work is under an open content license such as Creative Commons.

Abstract

Reinforcement learning has shown some success in automating process design by integrating data-driven models that interact with process simulators to learn to build process flowsheets iteratively. However, a major challenge is that the reinforcement learning agent requires numerous simulations in rigorous process simulators, leading to long simulation times and high computational cost. We propose employing transfer learning to enhance the reinforcement learning process in process design. This study examines two transfer learning strategies: (i) transferring knowledge from shortcut process simulators to rigorous simulators, and (ii) transferring knowledge from process variational autoencoders (VAEs). Our findings reveal that appropriate transfer learning can significantly improve both learning efficiency and convergence scores. However, transfer learning can also negatively impact the learning process when there are substantial discrepancies in the decision range and reward function between the pre-training and fine-tuning tasks. This suggests that pre-trained process data should match the complexity of the fine-tuning task.
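To illustrate the first strategy described above, the sketch below shows a generic multi-fidelity transfer-learning loop: a policy is first trained against a cheap "shortcut"-style environment and its weights are then reused to warm-start training against a more expensive "rigorous" environment. This is a minimal, hypothetical example and not the authors' implementation; the class and function names (ToyFlowsheetEnv, make_policy, train) and the plain REINFORCE update are illustrative placeholders.

```python
# Minimal sketch (not the paper's code): pre-train an RL policy on a cheap
# shortcut-style simulator, then fine-tune the same weights on a "rigorous" one.
import torch
import torch.nn as nn
import torch.optim as optim


class ToyFlowsheetEnv:
    """Toy stand-in for a process simulator: mock states and mock economic reward."""

    def __init__(self, fidelity_penalty: float):
        self.fidelity_penalty = fidelity_penalty  # crude proxy for model fidelity

    def reset(self):
        self.t = 0
        return torch.zeros(4)

    def step(self, action: int):
        self.t += 1
        # Mock reward: favours action 1, penalised more strongly at higher "fidelity".
        reward = (1.0 if action == 1 else -0.5) - self.fidelity_penalty * self.t
        done = self.t >= 5
        return torch.randn(4) * 0.1, reward, done


def make_policy() -> nn.Module:
    return nn.Sequential(nn.Linear(4, 32), nn.Tanh(), nn.Linear(32, 2))


def train(policy: nn.Module, env: ToyFlowsheetEnv, episodes: int) -> None:
    """Plain REINFORCE loop, used for both pre-training and fine-tuning."""
    opt = optim.Adam(policy.parameters(), lr=1e-3)
    for _ in range(episodes):
        state, done = env.reset(), False
        log_probs, rewards = [], []
        while not done:
            dist = torch.distributions.Categorical(logits=policy(state))
            action = dist.sample()
            state, reward, done = env.step(action.item())
            log_probs.append(dist.log_prob(action))
            rewards.append(reward)
        episode_return = sum(rewards)  # undiscounted return of the episode
        loss = -torch.stack(log_probs).sum() * episode_return
        opt.zero_grad()
        loss.backward()
        opt.step()


if __name__ == "__main__":
    policy = make_policy()
    # (i) Pre-train on the cheap, shortcut-style environment (many cheap episodes).
    train(policy, ToyFlowsheetEnv(fidelity_penalty=0.0), episodes=200)
    # (ii) Transfer: reuse the same weights and fine-tune on the "rigorous" environment
    #      with far fewer (expensive) episodes.
    train(policy, ToyFlowsheetEnv(fidelity_penalty=0.05), episodes=50)
```

The key design choice, consistent with the abstract's caveat, is that transfer only helps if the pre-training environment's action space and reward structure resemble those of the fine-tuning task; otherwise the warm start can bias the agent toward a poor region of the policy space.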