Autonomous planetary rovers require onboard obstacle detection to navigate hazardous terrain without Earth-based intervention, yet the detection methods deployed on current rovers are limited. This research develops and validates a complete deep learning system for autonomous rock detection on a resource-constrained planetary rover.
We present a lightweight MobileNetV2-based U-Net architecture with dual attention mechanisms, optimized for edge deployment with only 0.31 million parameters. A new dataset, MarsTanYard, was created via a semi-automated annotation pipeline, enabling efficient labelling of Mars-analogue terrain imagery. The network was trained on this dataset and integrated with a ROS2 navigation stack through a modular architecture that transforms segmentation masks into 2D occupancy grids for rover path planning. The network achieves 77% intersection-over-union (IoU) for rock segmentation, and physical validation on a test rover in a Mars-analogue environment demonstrates a 94% detection rate for large rocks at close range. Model optimization techniques yield a 4.49 ms inference time on the target rover hardware. The system maintains reliable operation across varying lighting conditions, with less than 15% performance degradation.
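The mask-to-grid step mentioned above can be sketched as follows. This is an illustrative assumption, not the authors' implementation: the function name, cell size, and the any-pixel-occupied rule are hypothetical, with occupancy values following the ROS convention (0 = free, 100 = occupied).

```python
# Hypothetical sketch: downsample a binary rock-segmentation mask into a
# coarse 2D occupancy grid suitable for a path planner.
# All names and parameters here are illustrative assumptions.

def mask_to_occupancy_grid(mask, cell_size=4):
    """Convert a binary mask (list of lists of 0/1) into occupancy cells.

    A cell is marked occupied (100) if any mask pixel inside it is 1,
    otherwise free (0).
    """
    rows, cols = len(mask), len(mask[0])
    grid = []
    for r in range(0, rows, cell_size):
        row_cells = []
        for c in range(0, cols, cell_size):
            occupied = any(
                mask[rr][cc]
                for rr in range(r, min(r + cell_size, rows))
                for cc in range(c, min(c + cell_size, cols))
            )
            row_cells.append(100 if occupied else 0)
        grid.append(row_cells)
    return grid

# Example: an 8x8 mask with one small rock blob in the upper-left quadrant.
mask = [[0] * 8 for _ in range(8)]
for r in range(1, 3):
    for c in range(1, 3):
        mask[r][c] = 1
print(mask_to_occupancy_grid(mask, cell_size=4))  # -> [[100, 0], [0, 0]]
```

In a real deployment the grid would also carry a resolution and origin so planner coordinates map back to the rover frame; those details are omitted here.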
Results show a theoretical collision probability of 7.8 × 10^-7 per rock encounter, sufficient for months of autonomous operation in typical planetary missions. This work provides end-to-end validation of a deep learning obstacle detection system, establishing a foundation for enhanced rover autonomy in future Mars exploration missions.
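To make the per-encounter figure concrete, the cumulative risk over a mission can be estimated as 1 - (1 - p)^n for n independent encounters. The encounter rate and mission length below are hypothetical assumptions chosen only to illustrate the arithmetic; only the 7.8 × 10^-7 figure comes from the results above.

```python
# Illustrative arithmetic only: the per-encounter probability is from the
# abstract; the encounter rate and mission duration are assumed values.
p = 7.8e-7                 # collision probability per rock encounter (reported)
encounters_per_day = 500   # hypothetical assumption
days = 180                 # roughly six months of operation (assumed)

n = encounters_per_day * days
p_collision = 1 - (1 - p) ** n  # probability of at least one collision
print(f"{p_collision:.3f}")     # -> 0.068
```

Even under these deliberately pessimistic assumptions, the cumulative collision probability over six months stays below 7%, consistent with the claim that months of autonomous operation are feasible.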