Cooperative lane-changing in mixed traffic: a deep reinforcement learning approach
X. Yao (TU Delft - Transport and Planning)
Zhaocheng Du (McGill University)
Zhanbo Sun (Southwest Jiaotong University)
Simeon C. Calvert (TU Delft - Transport and Planning)
Ang Ji (Southwest Jiaotong University)
Abstract
Deep Reinforcement Learning (DRL) has made remarkable progress in autonomous vehicle decision-making and execution control, improving traffic performance. This paper introduces a DRL-based mechanism for cooperative lane changing in mixed traffic (CLCMT) for connected and automated vehicles (CAVs). The uncertainty of human-driven vehicles (HVs) and the microscopic interactions between HVs and CAVs are explicitly modelled, and different leader-follower compositions are considered in CLCMT, providing a high-fidelity DRL learning environment. A feedback module is established to enable interactions between the decision-making layer and the manoeuvre control layer. Simulation results show that increasing CAV penetration leads to safer, more comfortable, and more eco-friendly lane-changing behaviour. A CAV-CAV lane-changing scenario can enhance safety by 24.5%–35.8%, improve comfort by 8%–9%, and reduce fuel consumption and emissions by 5.2%–12.9%. The proposed CLCMT promises advantages in the lateral decision-making and motion control of CAVs.