Integrating MPC and RL for Efficient Control of Autonomous Vehicles
Abstract
Autonomous vehicles offer significant potential for improving traffic efficiency and reducing
fuel consumption, with Model Predictive Control (MPC) being widely used due to its ability
to guarantee constraint satisfaction and safety while providing optimal control performance.
However, the car models traditionally used in MPC approaches for vehicle control often overlook
discrete dynamics such as gear changes, which are critical for optimizing vehicle fuel consumption. Recent advancements have incorporated these discrete dynamics into MPC, resulting in a hybrid model that captures both continuous and discrete dynamics. The incorporation of
the fuel model, along with these discrete dynamics, significantly increases the computational
complexity of the MPC problem, making real-time implementation challenging. To address
this issue, Reinforcement Learning (RL) can be leveraged to learn policies that determine key discrete decisions, such as gear selection. This leaves the MPC controller with a simpler continuous optimization problem, reducing the computational burden and enabling real-time control. This research proposes a new approach that integrates RL and MPC for vehicle control, where RL manages gear transitions and MPC controls the continuous vehicle dynamics, offering a computationally efficient solution that achieves near-optimal performance comparable to that of the conventional MPC approach.
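
To illustrate the decomposition described above, the sketch below pairs a stand-in gear policy with a small receding-horizon controller: the policy fixes the discrete gear sequence, after which the MPC step optimizes only the continuous throttle input. Everything here is a minimal, assumed toy setup, not the thesis implementation: `gear_policy` is a heuristic in place of a trained RL network, and `vehicle_step`, `fuel_rate`, the horizon `N`, and the gear ratios are all hypothetical placeholders.

```python
# Minimal sketch of the RL + MPC split (illustrative only; all models
# and parameters below are assumptions, not the thesis implementation).
import numpy as np
from scipy.optimize import minimize

N = 10          # prediction horizon (assumed)
DT = 0.5        # step length in seconds (assumed)
GEAR_RATIOS = [3.5, 2.1, 1.4, 1.0, 0.8]  # example gear ratios

def gear_policy(state):
    """Stand-in for a trained RL policy mapping state -> gear index.
    A simple speed-based heuristic replaces the learned network here."""
    v = state[1]
    return int(np.clip(v // 8, 0, len(GEAR_RATIOS) - 1))

def vehicle_step(state, throttle, gear):
    """Toy longitudinal model: position/velocity update with a
    gear-dependent traction term and quadratic drag (assumed)."""
    p, v = state
    accel = 4.0 * throttle / GEAR_RATIOS[gear] - 0.001 * v**2 - 0.1
    return np.array([p + v * DT, max(v + accel * DT, 0.0)])

def fuel_rate(throttle, v, gear):
    """Toy fuel model: consumption grows with engine load (assumed)."""
    return 0.1 + 0.8 * throttle * (v * GEAR_RATIOS[gear] + 1.0)

def mpc_cost(u, state, gears, v_ref):
    """Fuel plus speed-tracking cost over the horizon. The gears are
    held fixed, so only the continuous throttle sequence u is free."""
    cost, x = 0.0, state.copy()
    for k in range(N):
        cost += fuel_rate(u[k], x[1], gears[k]) + 0.5 * (x[1] - v_ref)**2
        x = vehicle_step(x, u[k], gears[k])
    return cost

def control_step(state, v_ref=20.0):
    """One closed-loop step: RL fixes the discrete gear sequence,
    then MPC solves the remaining smooth, continuous problem."""
    gears = [gear_policy(state)] * N          # RL decides the discrete part
    res = minimize(mpc_cost, x0=0.5 * np.ones(N),
                   args=(state, gears, v_ref),
                   bounds=[(0.0, 1.0)] * N)   # throttle bounded in [0, 1]
    return res.x[0], gears[0]                 # apply first move, then re-plan

state = np.array([0.0, 15.0])                 # [position, velocity]
for _ in range(5):
    throttle, gear = control_step(state)
    state = vehicle_step(state, throttle, gear)
    print(f"v = {state[1]:5.2f} m/s, gear = {gear}, throttle = {throttle:.2f}")
```

The point of the sketch is the structure, not the models: once the gear sequence is supplied externally, the MPC problem loses its integer variables and reduces to a smooth bounded optimization, which is what makes real-time solution feasible.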