Explainable Neural Networks for Incipient Slip Sensing in Robot Tactile Learning


Abstract

Incipient slip detection plays an important role in both human and robotic grasping. With the growing use of deep learning in vision-based tactile sensing, the black-box nature of deep neural networks (DNNs) makes it difficult to analyze, debug, and validate their behavior and learned patterns. To fill this gap, eXplainable AI (XAI) methods have been introduced to shed light on the DNN’s reasoning regarding incipient slip detection. These methods generate saliency maps that highlight the regions of the input tactile image most responsible for the predicted degree of incipient slip. Temporal difference images are used to enhance the visualization of incipient slip and make the saliency maps easier for human viewers to interpret. In addition, this research evaluates several XAI methods against three criteria: high resolution, smoothness, and faithfulness. The experiments examined 42 samples from the ChromaTouch tactile dataset, focusing on contact interactions with a flat object. The results show that Poly-CAM satisfies all three criteria, accurately highlighting the markers while conveying their relative importance in the DNN’s decision-making process. Overall, visual analysis of the saliency maps confirms that the DNNs have learned to localize the deformation features that are crucial for detecting incipient slip.
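The abstract describes two ingredients: temporal difference images computed from consecutive tactile frames, and CAM-family saliency maps over those inputs. The following minimal Python sketch illustrates both ideas under stated assumptions; it is not the authors' code. It uses Grad-CAM as a generic stand-in for the evaluated CAM-style methods (such as Poly-CAM), a ResNet-18 as a placeholder for the slip-detection DNN, and random tensors in place of ChromaTouch frames.

```python
# Minimal sketch (not the paper's implementation): temporal difference images
# from consecutive tactile frames, plus a Grad-CAM-style saliency map for a
# placeholder CNN classifier. Model, layer choice, and inputs are hypothetical.
import torch
import torch.nn.functional as F
import torchvision.models as models


def temporal_difference(frame_t, frame_prev):
    """Absolute per-pixel difference between consecutive tactile frames,
    which emphasizes marker displacement caused by incipient slip."""
    return (frame_t - frame_prev).abs()


def grad_cam(model, target_layer, x, class_idx=None):
    """Grad-CAM saliency: gradient-weighted sum of the target layer's
    activations, ReLU'd, upsampled to input resolution, and normalized."""
    acts, grads = [], []
    h_fwd = target_layer.register_forward_hook(lambda m, i, o: acts.append(o))
    h_bwd = target_layer.register_full_backward_hook(
        lambda m, gin, gout: grads.append(gout[0]))
    try:
        logits = model(x)
        if class_idx is None:
            class_idx = logits.argmax(dim=1)
        score = logits.gather(1, class_idx.view(-1, 1)).sum()
        model.zero_grad()
        score.backward()
        # Channel weights = spatially averaged gradients (global average pooling).
        weights = grads[0].mean(dim=(2, 3), keepdim=True)
        cam = F.relu((weights * acts[0]).sum(dim=1, keepdim=True))
        cam = F.interpolate(cam, size=x.shape[2:], mode="bilinear",
                            align_corners=False)
        cam = (cam - cam.amin()) / (cam.amax() - cam.amin() + 1e-8)
    finally:
        h_fwd.remove()
        h_bwd.remove()
    return cam


# Usage with placeholder data: two consecutive "tactile frames".
model = models.resnet18(weights=None).eval()
target_layer = model.layer4[-1]
frames = torch.rand(2, 3, 224, 224)
diff = temporal_difference(frames[1:], frames[:1])
saliency = grad_cam(model, target_layer, diff)   # shape (1, 1, 224, 224)
```

Feeding the difference image rather than a raw frame is what makes the resulting saliency map easier to read: regions of marker motion dominate the input, so the highlighted areas can be compared directly against the deformation pattern.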

Files

Explainable_Neural_Networks_fo... (pdf)
Unknown license

File under embargo until 31-08-2025