Towards Efficient Personalized Driver Behavior Modeling with Machine Unlearning

Abstract

Driver Behavior Modeling (DBM) aims to predict and model human driving behaviors and is typically incorporated into Advanced Driver Assistance Systems (ADAS) to enhance transportation safety and improve the driving experience. Inverse reinforcement learning (IRL) is a prevailing DBM technique that models the driving policy by recovering an unknown internal reward function from human driver demonstrations. However, the latest IRL-based designs are inefficient due to laborious manual feature engineering, and the recovered reward function typically suffers increased prediction errors when deployed on unseen vehicles. In this paper, we propose a novel deep learning-based reward function for IRL-based DBM, together with efficient model personalization via machine unlearning. We evaluate our approach on a highway simulation constructed from the realistic human driving dataset NGSIM, and deploy it on both a server GPU and an embedded GPU. The evaluation results show that our approach achieves higher prediction accuracy than the latest IRL-based DBM approach, which uses a weighted sum of trajectory features as the reward function, and that our model personalization method obtains the highest accuracy and lowest latency among the baselines.
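To make the contrast in the abstract concrete, the following is a minimal sketch of the two reward-function styles it compares: a classic IRL reward that is a weighted sum of hand-engineered trajectory features, versus a small learned network that scores raw per-step states directly. The specific features (speed deviation, headway, lateral offset), weights, and network sizes are illustrative assumptions, not values from the paper.

```python
import numpy as np

# Hypothetical hand-engineered trajectory features (illustrative only, not the
# paper's feature set): mean speed deviation, mean headway, mean lateral offset.
# Each trajectory is an array of shape (T, 3): [speed m/s, headway m, lateral m].
def trajectory_features(traj):
    speed_dev = np.mean(np.abs(traj[:, 0] - 25.0))  # deviation from a 25 m/s target
    headway = np.mean(traj[:, 1])
    lateral = np.mean(np.abs(traj[:, 2]))
    return np.array([speed_dev, headway, lateral])

def linear_reward(traj, w):
    """Classic IRL reward: a weighted sum of hand-crafted trajectory features."""
    return float(w @ trajectory_features(traj))

class MLPReward:
    """Tiny feed-forward reward network over raw per-step states, sketching how a
    learned (deep) reward can replace manual feature engineering in IRL."""
    def __init__(self, state_dim, hidden=16, seed=0):
        rng = np.random.default_rng(seed)
        self.W1 = rng.normal(0.0, 0.1, (state_dim, hidden))
        self.b1 = np.zeros(hidden)
        self.W2 = rng.normal(0.0, 0.1, (hidden, 1))
        self.b2 = np.zeros(1)

    def __call__(self, traj):
        h = np.tanh(traj @ self.W1 + self.b1)  # per-step hidden activations
        r = h @ self.W2 + self.b2              # per-step reward
        return float(r.sum())                  # trajectory return

# A toy 50-step trajectory: steady 24 m/s, headway closing from 30 m to 28 m.
traj = np.column_stack([
    np.full(50, 24.0),
    np.linspace(30.0, 28.0, 50),
    np.zeros(50),
])
w = np.array([-1.0, 0.1, -0.5])  # illustrative feature weights
print(linear_reward(traj, w))
print(MLPReward(state_dim=3)(traj))
```

In a full IRL pipeline, the weights `w` (or the network parameters) would be fit so that demonstrated human trajectories score higher than alternatives; the sketch only shows the two reward parameterizations being compared.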