Replanning Strategies for Synchromodal Resilience

A Deep Reinforcement Learning Approach to Real-Time Logistics Optimization

Master Thesis (2025)
Author(s)

T.C. Schoonderbeek (TU Delft - Civil Engineering & Geosciences)

Contributor(s)

M. Saeednia – Mentor (TU Delft - Transport, Mobility and Logistics)

B. Atasoy – Mentor (TU Delft - Transport Engineering and Logistics)

Faculty
Civil Engineering & Geosciences
Publication Year
2025
Language
English
Graduation Date
08-08-2025
Awarding Institution
Delft University of Technology
Programme
Transport, Infrastructure and Logistics
Reuse Rights

Other than for strictly personal use, it is not permitted to download, forward or distribute the text or part of it, without the consent of the author(s) and/or copyright holder(s), unless the work is under an open content license such as Creative Commons.

Abstract

This study develops a synchromodal transport model with real-time replanning using a Double-Dueling Deep Q-Network (D3QN) agent. The port-hinterland leg dominates supply-chain costs and remains vulnerable to disruptions. We simulate realistic disruptions with varying occurrence and severity levels and train a D3QN agent to decide whether an affected shipment should wait or be reassigned to another mode. Model performance is evaluated in a Rhine-Alpine corridor case study against two baseline strategies (Always Wait, Always Reassign) and a tabular Q-learning benchmark. The results show that the D3QN policy achieves the lowest costs on a combined set of disruptions, outperforming the alternatives overall. In particular, D3QN excels under high-occurrence, low-severity disruptions but appears less effective under rare, high-severity events. Integrating the D3QN with parallel simulation greatly sped up training (roughly 4 times faster convergence) without degrading policy quality. We discuss how classifying disruptions by occurrence and severity improves tractability, and show that the D3QN learns to reassign shipments more effectively. The findings indicate that deep reinforcement learning can improve synchromodal resilience, particularly in large-scale settings where conventional RL or heuristics fail.
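To make the abstract's core mechanism concrete, the following is a minimal sketch (not the thesis implementation) of the two D3QN ingredients it names: a dueling value/advantage decomposition over the binary wait-vs-reassign action, and a double-DQN target where the online network selects the next action and a separate target network evaluates it. The state features, layer sizes, and reward value are illustrative assumptions.

```python
import numpy as np

STATE_DIM = 6   # hypothetical shipment features, e.g. delay, disruption severity, mode capacities
N_ACTIONS = 2   # action 0 = wait, action 1 = reassign to another mode
GAMMA = 0.99    # discount factor

def init_params(seed):
    # Tiny linear-tanh network with separate value and advantage heads.
    r = np.random.default_rng(seed)
    return {
        "W":   r.normal(scale=0.1, size=(STATE_DIM, 8)),   # shared hidden layer
        "w_v": r.normal(scale=0.1, size=(8, 1)),           # value stream V(s)
        "w_a": r.normal(scale=0.1, size=(8, N_ACTIONS)),   # advantage stream A(s, a)
    }

def q_values(params, s):
    h = np.tanh(s @ params["W"])
    v = h @ params["w_v"]          # scalar value of the state
    a = h @ params["w_a"]          # per-action advantages
    # Dueling aggregation: Q(s, a) = V(s) + A(s, a) - mean_a A(s, a)
    return (v + a - a.mean()).ravel()

online, target = init_params(1), init_params(2)

rng = np.random.default_rng(0)
s, s_next = rng.normal(size=STATE_DIM), rng.normal(size=STATE_DIM)
reward, done = -3.0, False         # illustrative per-step cost signal

# Double-DQN target: online net picks the greedy next action,
# target net supplies its value estimate (reduces overestimation bias).
best_next = int(np.argmax(q_values(online, s_next)))
td_target = reward + (0.0 if done else GAMMA * q_values(target, s_next)[best_next])

greedy_action = int(np.argmax(q_values(online, s)))  # 0 = wait, 1 = reassign
```

Subtracting the mean advantage makes the decomposition identifiable (the per-action mean of Q equals V), and the online/target split is what distinguishes double from vanilla DQN.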
