DreamTexture: Latent Diffusion Model for Psychophysical Feature-to-Texture Generation
Q.V. Begelinger (TU Delft - Mechanical Engineering)
Yasemin Vardar – Mentor (TU Delft - Human-Robot Interaction)
Christian Pek – Graduation committee member (TU Delft - Robot Dynamics)
Abstract
Generative AI has revolutionized domains such as language, vision, and audio, yet its application to haptics, specifically to signals for friction modulation devices, remains largely unexplored. A generative model could alleviate the issues associated with recording friction-based texture signals, such as the expense of recording equipment and the restriction to laboratory environments, which significantly constrain the diversity of texture signals that can be rendered on friction modulation haptic devices. We propose a generative latent diffusion model called DreamTexture. The model is conditioned on a feature vector derived from a psychophysical perceptual space, where each dimension corresponds to an adjective pair (e.g., Rough–Smooth, Sticky–Slippery). We investigate whether DreamTexture can synthesize friction signals that align with users' perceptual expectations, despite the subjective nature of tactile experience, which is influenced by individual skin properties and linguistic interpretation. Moreover, DreamTexture is optimized for real-time inference on commercially available hardware, making haptic content creation more scalable and accessible. Our findings indicate that the diffusion process lends itself well to the efficient generation of realistic one-dimensional friction signals, but exhibits limitations in fully capturing the variability inherent in the input space.
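
To make the conditioning scheme concrete, the sketch below shows one way a one-dimensional denoiser could be conditioned on a perceptual feature vector and sampled with standard DDPM ancestral steps. It is a minimal illustration under stated assumptions, not the thesis implementation: the adjective pairs, network shape, signal length, and noise schedule are all placeholders, and DreamTexture itself operates in a learned latent space rather than directly on the raw signal as done here.

# A minimal sketch (not the authors' implementation) of conditioning a 1-D
# denoising diffusion model on a psychophysical feature vector. All names
# (FEATURE_PAIRS, CondDenoiser1D, signal length, schedule) are illustrative
# assumptions; for brevity this denoises the raw signal instead of a latent.
import torch
import torch.nn as nn

# Hypothetical perceptual axes: each entry is one adjective pair,
# scored in [-1, 1] (e.g., -1 = Rough, +1 = Smooth).
FEATURE_PAIRS = ["rough_smooth", "sticky_slippery", "hard_soft", "flat_bumpy"]

class CondDenoiser1D(nn.Module):
    """Predicts the noise added to a 1-D friction signal, conditioned on
    a diffusion timestep and a perceptual feature vector."""
    def __init__(self, signal_len=1024, cond_dim=len(FEATURE_PAIRS), hidden=256):
        super().__init__()
        self.cond_proj = nn.Linear(cond_dim + 1, hidden)  # features + timestep
        self.net = nn.Sequential(
            nn.Linear(signal_len + hidden, hidden),
            nn.SiLU(),
            nn.Linear(hidden, signal_len),
        )

    def forward(self, x_t, t, cond):
        # Embed the timestep (scaled to [0, 1]) jointly with the feature vector.
        c = self.cond_proj(torch.cat([cond, t.float().unsqueeze(-1) / 1000], dim=-1))
        return self.net(torch.cat([x_t, c], dim=-1))

@torch.no_grad()
def sample(model, cond, steps=50, signal_len=1024):
    """Plain DDPM ancestral sampling with an assumed linear beta schedule."""
    betas = torch.linspace(1e-4, 0.02, steps)
    alphas = 1.0 - betas
    alpha_bars = torch.cumprod(alphas, dim=0)
    x = torch.randn(cond.shape[0], signal_len)  # start from pure noise
    for i in reversed(range(steps)):
        t = torch.full((cond.shape[0],), i)
        eps = model(x, t, cond)  # predicted noise at this step
        # Posterior mean of x_{t-1} given the noise prediction.
        mean = (x - betas[i] / (1 - alpha_bars[i]).sqrt() * eps) / alphas[i].sqrt()
        x = mean + betas[i].sqrt() * torch.randn_like(x) if i > 0 else mean
    return x

# Usage: generate a signal intended to feel quite rough and slightly sticky.
model = CondDenoiser1D()
cond = torch.tensor([[-0.8, -0.3, 0.0, 0.2]])  # one value per adjective pair
friction_signal = sample(model, cond)
print(friction_signal.shape)  # torch.Size([1, 1024])

In practice the denoiser would be trained by adding noise to recorded friction signals at random timesteps and regressing the noise, with the paired perceptual ratings supplied as the conditioning vector; the small network and short schedule above stand in for whatever architecture makes real-time inference feasible.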