Deep Deterministic Policy Gradient for High-Speed Train Trajectory Optimization

Abstract

This paper proposes a novel train trajectory optimization (TTO) approach for high-speed railways. We restrict our attention to single-train operation scenarios with different scheduled/rescheduled running times, aiming to generate optimal recommended train trajectories in real time that ensure punctual and energy-efficient operation. A learning-based approach, deep deterministic policy gradient (DDPG), is designed to generate optimal train trajectories through offline training on interactions between the agent and a trajectory simulation environment. An allocating running time and selecting operation modes (ARTSOM) algorithm is proposed to improve train punctuality and to produce a series of discrete operation modes (full traction, cruising, coasting, full braking), thereby providing a feasible training set for DDPG and speeding up the training process. Numerical experiments show that DDPG can generate an optimized speed profile within seconds on a realistic railway line. In addition, the results demonstrate the generalization ability of the trained DDPG model in solving TTO problems with different running times and line conditions.
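To make the four discrete operation modes concrete, the following is a minimal sketch of a point-mass train simulation step under each mode. All coefficients, function names, and parameter values here are illustrative assumptions, not taken from the paper; real TTO environments would also model gradients, curves, and speed limits.

```python
# Hypothetical point-mass train model illustrating the four operation
# modes named in the abstract: full traction, cruising, coasting, full braking.
# All numeric coefficients below are illustrative assumptions.

def davis_resistance(v, a=0.1, b=0.003, c=0.00006):
    """Running resistance per unit mass (Davis-type equation, m/s^2);
    coefficients are illustrative, not from the paper."""
    return a + b * v + c * v ** 2

def step(position, speed, mode, dt=1.0, max_traction=0.5, max_braking=1.0):
    """Advance the train state (m, m/s) by one time step under the chosen mode."""
    r = davis_resistance(speed)
    if mode == "full_traction":
        accel = max_traction - r          # maximum tractive effort minus resistance
    elif mode == "cruising":
        accel = 0.0                       # traction exactly balances resistance
    elif mode == "coasting":
        accel = -r                        # no traction: decelerate under resistance only
    elif mode == "full_braking":
        accel = -max_braking - r          # maximum brake plus resistance
    else:
        raise ValueError(f"unknown mode: {mode}")
    new_speed = max(speed + accel * dt, 0.0)          # speed cannot go negative
    new_position = position + 0.5 * (speed + new_speed) * dt  # trapezoidal update
    return new_position, new_speed
```

In a DDPG setting, such a simulator would serve as the environment the agent interacts with during offline training, with the ARTSOM-selected mode sequence providing feasible trajectories for the initial training set.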