This paper studies road user trajectory prediction in mixed traffic, i.e., traffic where vehicles and Vulnerable Road Users (VRUs: pedestrians, cyclists, and other riders) closely share a common road space. We investigate whether typical prediction components (scene graph representation, scene encoding, waypoint prediction, motion dynamics) should be specific to each road user class. Using the recent VRU-heavy View-of-Delft Prediction (VoD-P) dataset, we study several directions for improving the performance of state-of-the-art map-based prediction models (PGP, TNT) in urban settings. First, we consider the use of class-specific map representations. Second, we investigate whether the weights of different model components should be shared across classes or separated by class. Finally, we augment the VoD-P training data with trajectories automatically extracted from the recording vehicle's 360-degree LiDAR scans. This data is made publicly available. We find that pre-training the model on these auto-labels and making it class-specific reduces minADE (K = 10 samples) by up to 22.2%, 20.0%, and 18.2% for pedestrians, cyclists, and vehicles, respectively.
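For readers unfamiliar with the reported metric, the following is a minimal illustrative sketch of minADE over K sampled trajectories: the average Euclidean displacement between each predicted trajectory and the ground truth, minimized over the K samples. The function name and array shapes are our own choices for illustration, not from the paper.

```python
import numpy as np

def min_ade(pred, gt):
    """minADE over K sampled trajectories.

    pred: (K, T, 2) array of K predicted trajectories with T waypoints each.
    gt:   (T, 2) ground-truth trajectory.
    Returns the lowest average displacement error among the K samples.
    """
    # Per-sample average Euclidean displacement over all T waypoints
    ade = np.linalg.norm(pred - gt[None], axis=-1).mean(axis=-1)
    return ade.min()

# Toy example: K = 2 samples, T = 3 waypoints
gt = np.zeros((3, 2))
pred = np.stack([np.ones((3, 2)),    # constant offset of (1, 1) -> ADE = sqrt(2)
                 np.zeros((3, 2))])  # exact match -> ADE = 0
print(min_ade(pred, gt))  # 0.0, since the best of the K samples is scored
```

With K > 1 the metric rewards a model whose sample set covers the true future, which is why results are reported at a fixed K.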