We present a low-cost, camera-based tactile sensor that leverages the photoelastic effect (stress-induced interference fringes) to estimate contact force, position, and shape. Fringe images are recorded at 50 Hz and processed by a multi-task neural network that predicts (i) the normal force Fz, (ii) the 2D contact location (x, y), and (iii) the shape class of the contacting object. Two sensor variants were developed: Sensor 1, a layered design with fewer visible fringes, and Sensor 2, an integrated structure with improved fringe clarity. Both were evaluated with a ResNet-18 and a lightweight custom CNN under three augmentation pipelines: grayscale images with 10 noisy augmented samples each, RGB images with 3 noisy augmentations, and RGB images with 3 clean (noise-free) augmentations. The base dataset comprises nearly 15,000 synchronised samples of high-frequency fringe images and force signals; augmentation expands this to roughly 45,000 or 150,000 samples, depending on the pipeline. The best results were achieved with Sensor 1 and ResNet-18 trained on grayscale images with 10 augmentations per input image, yielding a force MSE of 0.0213 N², a contact-point RMSE of 0.4462 mm, and 96.24% shape classification accuracy. Notably, RGB images with only three augmentations per sample reached similar performance. These findings show that full-colour input with lightweight augmentation remains effective for accurate, scalable tactile sensing. Our modular learning pipeline generalises across sensor variants and data regimes, enabling robust, high-frequency tactile inference suitable for real-world deployment.
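To make the multi-task formulation concrete, the sketch below shows one plausible shape for the lightweight custom CNN mentioned in the abstract: a shared convolutional backbone feeding three heads for Fz regression, (x, y) regression, and shape classification. The layer sizes, the single-channel (grayscale) input, and the number of shape classes (`num_shapes=4`) are illustrative assumptions, not the paper's actual architecture.

```python
import torch
import torch.nn as nn


class TactileMultiTaskCNN(nn.Module):
    """Hypothetical lightweight multi-task CNN for fringe images.

    Shared backbone + three heads:
      - force_head: normal force Fz (1 scalar, regression)
      - point_head: 2D contact location (x, y) (regression)
      - shape_head: shape class logits (classification)
    All layer widths and num_shapes are illustrative assumptions.
    """

    def __init__(self, in_channels: int = 1, num_shapes: int = 4):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Conv2d(in_channels, 16, kernel_size=3, stride=2, padding=1),
            nn.ReLU(),
            nn.Conv2d(16, 32, kernel_size=3, stride=2, padding=1),
            nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),  # global pooling -> fixed-size feature
            nn.Flatten(),
        )
        self.force_head = nn.Linear(32, 1)           # Fz
        self.point_head = nn.Linear(32, 2)           # (x, y)
        self.shape_head = nn.Linear(32, num_shapes)  # class logits

    def forward(self, x: torch.Tensor):
        feats = self.backbone(x)
        return self.force_head(feats), self.point_head(feats), self.shape_head(feats)


# Minimal usage: a batch of 8 grayscale 64x64 fringe images.
model = TactileMultiTaskCNN()
fz, xy, logits = model(torch.randn(8, 1, 64, 64))
print(fz.shape, xy.shape, logits.shape)
```

A joint loss would then combine MSE on `fz` and `xy` with cross-entropy on `logits`, weighted per task; the abstract reports force MSE in N² and contact-point RMSE in mm, consistent with such regression targets.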