Explanatory Generative Trajectory Prediction via Weak Preference Supervision
Abstract
For trajectory prediction in autonomous vehicle planning and control, conditional variational autoencoders (CVAEs) have shown promise in modeling agent behaviors accurately and diversely. Beyond accuracy, explainability is crucial for the safety and acceptance of learning-based autonomous systems, especially in autonomous driving. However, the latent distributions learned by CVAE models are typically implicit and therefore offer little explainability. To address this, we propose a semi-supervised generative modeling framework, \textbf{\textit{PrefCVAE}}, which uses partially and weakly labelled preference pairs to imbue the CVAE's latent representation with semantic meaning. This approach enables the model to estimate measurable attributes of the agents and to generate manipulable predictions within the CVAE framework. Results show that incorporating our preference loss allows a CVAE-based model to make conditional predictions using the average velocity of the predicted trajectory as a semantic factor, without significantly degrading baseline prediction accuracy. We further show that the latent values learned with PrefCVAE better represent the semantic information contained in the data. Finally, we discuss how this loss design may extend to machine learning applications beyond trajectory prediction, along with practical considerations for adapting it to human labeling. We hope our empirical study offers the broader representation learning community a fresh perspective on inductive biases for disentangled and explainable latent representations in deep generative models. In particular, we demonstrate that preference pair supervision, a simple and cost-effective approach, can effectively aid in learning semantically meaningful latents for sampling-based generative models such as the CVAE.
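As a minimal sketch of how such weak preference supervision could be formulated (an illustrative assumption, not necessarily the exact objective used in the paper), consider a Bradley-Terry-style logistic loss on a latent dimension $z_s$ designated as the semantic factor. Given a weakly labelled pair $(x_a, x_b)$ in which $x_a$ is labelled as having the higher attribute value (e.g., higher average velocity), and writing $\mu_s(\cdot)$ for the posterior mean along that dimension, one could train with

\[
\mathcal{L}_{\text{pref}} = -\,\mathbb{E}_{(x_a, x_b)}\!\left[ \log \sigma\!\big( \mu_s(x_a) - \mu_s(x_b) \big) \right],
\qquad
\mathcal{L} = \mathcal{L}_{\text{ELBO}} + \lambda\, \mathcal{L}_{\text{pref}},
\]

where $\sigma$ is the logistic sigmoid and $\lambda$ weights the preference term against the standard CVAE evidence lower bound. The symbols $z_s$, $\mu_s$, and $\lambda$ are hypothetical names introduced here for illustration only.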