A Data-driven approach to softness estimation using tactile sensing

Abstract

The accurate prediction of object softness is crucial in many fields, from agriculture to medical care. Vision-based tactile sensors, which capture high-resolution images of contact interactions, have shown great potential in determining this material property. However, many existing approaches, particularly those using end-to-end models, suffer from a 'black-box' problem: it is difficult to understand which features the models use to make their predictions, and this lack of transparency makes it challenging to diagnose and correct errors. To overcome this, this paper presents a data-driven method that decodes acquired tactile images to extract pressure distributions and assign softness levels to different objects. A novel approach is explored, integrating a Convolutional Neural Network (CNN) to predict the pressure distribution and a Long Short-Term Memory (LSTM) network to assess material softness. It is demonstrated that the CNN effectively learns the necessary features from the tactile images, enabling precise pressure distribution predictions, while the LSTM analyzes temporal sequences of tactile data, accurately predicting material softness and differentiating ripe from overripe fruits. By exploiting the spatiotemporal pressure distribution, this method improves on existing approaches by making more efficient use of tactile data and by providing intermediate information that can be used to further refine the model. This work can serve as a stepping stone toward a more complex system in which robotic control is implemented based on the sensed material properties, enabling better control-loop mechanisms and expanding the applications of tactile sensing technologies.
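
To make the two-stage architecture described above concrete, the following is a minimal sketch of how a CNN-to-LSTM pipeline of this kind could be wired together. The abstract does not specify implementation details, so everything here is an assumption: the framework (PyTorch), the class names (PressureCNN, SoftnessLSTM), the 16x16 pressure-map resolution, the 10-frame sequence length, and the three softness classes are all hypothetical choices made for illustration, not the authors' actual model.

```python
import torch
import torch.nn as nn

class PressureCNN(nn.Module):
    """Maps a single tactile image to a coarse pressure-distribution map.
    (Hypothetical architecture; the paper's actual CNN is not specified.)"""
    def __init__(self, in_channels=3, map_size=16):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(in_channels, 32, kernel_size=5, stride=2, padding=2),
            nn.ReLU(),
            nn.Conv2d(32, 64, kernel_size=3, stride=2, padding=1),
            nn.ReLU(),
            nn.AdaptiveAvgPool2d((map_size, map_size)),
            nn.Conv2d(64, 1, kernel_size=1),  # one pressure value per cell
        )

    def forward(self, img):            # img: (B, C, H, W)
        return self.encoder(img)       # (B, 1, map_size, map_size)

class SoftnessLSTM(nn.Module):
    """Classifies softness from a temporal sequence of pressure maps."""
    def __init__(self, map_size=16, hidden=128, n_classes=3):
        super().__init__()
        self.lstm = nn.LSTM(input_size=map_size * map_size,
                            hidden_size=hidden, batch_first=True)
        self.head = nn.Linear(hidden, n_classes)

    def forward(self, maps):           # maps: (B, T, map_size, map_size)
        b, t = maps.shape[:2]
        seq = maps.reshape(b, t, -1)   # flatten each map into a feature vector
        out, _ = self.lstm(seq)
        return self.head(out[:, -1])   # predict from the final time step

# Example: a sequence of 10 tactile frames captured during one indentation
cnn, lstm = PressureCNN(), SoftnessLSTM()
frames = torch.randn(1, 10, 3, 64, 64)             # (B, T, C, H, W), dummy data
maps = torch.stack([cnn(frames[:, t]).squeeze(1)   # per-frame pressure maps
                    for t in range(frames.shape[1])], dim=1)
logits = lstm(maps)                                 # (1, n_classes) softness scores
```

One design point this sketch reflects from the abstract: the intermediate pressure maps are an explicit, inspectable quantity rather than an opaque embedding, which is what allows errors to be diagnosed at the pressure-prediction stage independently of the softness classifier.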