A Hybrid Spatial-temporal Sequence-to-one Neural Network Model for Lane Detection


Abstract

Reliable and accurate lane detection is vital for the safe operation of Lane Keeping Assistance and Lane Departure Warning systems. However, in certain challenging circumstances (e.g., marking degradation, severe vehicle occlusion), it is difficult to detect lane markings accurately from a single image, which is the setting most often considered in the current literature. Since lane markings are continuous lines on the road, lanes that are hard to detect accurately in the current frame can potentially be better inferred by incorporating information from previous frames. To tackle such challenging scenes, we propose a novel hybrid spatial-temporal sequence-to-one deep learning architecture that makes full use of the spatial-temporal information in multiple frames of a continuous image sequence to detect lane markings in the last (current) frame. Specifically, the hybrid model integrates the spatial convolutional neural network (SCNN), which is powerful in extracting spatial features and relationships within a single image, with the convolutional long short-term memory (ConvLSTM) network, which captures the (spatial-)temporal correlations and time dependencies across the image sequence. In this way, the advantages of both SCNN and ConvLSTM are combined and the spatial-temporal information is fully exploited. Treating lane detection as an image segmentation problem, we apply an encoder-decoder structure so that the model works in an end-to-end way. Extensive experiments on two large-scale datasets show that the proposed model, with the corresponding training strategy, effectively handles challenging driving scenes and outperforms previous methods.
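The sequence-to-one idea in the abstract can be illustrated with a minimal sketch: a ConvLSTM cell consumes a sequence of frame feature maps one at a time, and only the hidden state after the last frame is passed on (in the full model, to the decoder that segments the current frame). This is a hypothetical toy implementation in plain numpy, not the authors' architecture; the gate layout, kernel size, and initialization are illustrative assumptions, and the SCNN encoder and decoder are omitted.

```python
import numpy as np

def conv2d_same(x, w):
    # x: (C_in, H, W); w: (C_out, C_in, k, k), k odd.
    # Zero-padded "same" correlation, written with explicit loops for clarity.
    c_out, c_in, k, _ = w.shape
    p = k // 2
    xp = np.pad(x, ((0, 0), (p, p), (p, p)))
    H, W = x.shape[1:]
    out = np.zeros((c_out, H, W))
    for o in range(c_out):
        for i in range(c_in):
            for di in range(k):
                for dj in range(k):
                    out[o] += w[o, i, di, dj] * xp[i, di:di + H, dj:dj + W]
    return out

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

class ConvLSTMCell:
    """Toy ConvLSTM cell (assumed structure, not the paper's exact one)."""

    def __init__(self, c_in, c_hidden, k=3, rng=None):
        rng = rng or np.random.default_rng(0)
        self.c_hidden = c_hidden
        # One stacked kernel produces all four gates (i, f, o, g) from [x; h].
        self.w = 0.1 * rng.standard_normal((4 * c_hidden, c_in + c_hidden, k, k))
        self.b = np.zeros((4 * c_hidden, 1, 1))

    def step(self, x, h, c):
        z = conv2d_same(np.concatenate([x, h], axis=0), self.w) + self.b
        i, f, o, g = np.split(z, 4, axis=0)
        c_new = sigmoid(f) * c + sigmoid(i) * np.tanh(g)   # update cell state
        h_new = sigmoid(o) * np.tanh(c_new)                # gated hidden state
        return h_new, c_new

def sequence_to_one(cell, frames):
    # frames: (T, C, H, W) feature maps of T consecutive images.
    # Only the hidden state after the final (current) frame is returned.
    T, C, H, W = frames.shape
    h = np.zeros((cell.c_hidden, H, W))
    c = np.zeros_like(h)
    for t in range(T):
        h, c = cell.step(frames[t], h, c)
    return h
```

In the full model, `frames` would be the encoder features of the image sequence and the returned hidden state would feed the decoder, so the segmentation of the current frame is conditioned on all previous frames.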