Neural inverse procedural modeling of knitting yarns from images

Journal Article (2024)
Author(s)

Elena Trunz (Universität Bonn)

Jonathan Klein (King Abdullah University of Science and Technology, Universität Bonn)

Jan Müller (Universität Bonn)

Lukas Bode (Universität Bonn)

Ralf Sarlette (Universität Bonn)

M. Weinmann (TU Delft - Computer Graphics and Visualisation)

Reinhard Klein (Universität Bonn)

Research Group
Computer Graphics and Visualisation
Copyright
© 2024 Elena Trunz, Jonathan Klein, Jan Müller, Lukas Bode, Ralf Sarlette, M. Weinmann, Reinhard Klein
DOI related publication
https://doi.org/10.1016/j.cag.2023.12.013
Publication Year
2024
Language
English
Volume number
118
Pages (from-to)
161-172
Reuse Rights

Other than for strictly personal use, it is not permitted to download, forward or distribute the text or part of it, without the consent of the author(s) and/or copyright holder(s), unless the work is under an open content license such as Creative Commons.

Abstract

We investigate the capability of neural inverse procedural modeling to infer high-quality procedural yarn models with fiber-level details from single images of yarn samples. While directly inferring all parameters of the underlying yarn model with a single neural network may seem the intuitive choice, we show that the complexity of yarn structures, in terms of the twisting and migration characteristics of the involved fibers, is better handled by ensembles of networks that each focus on individual characteristics. We analyze the effect of different loss functions, including a parameter loss that penalizes the deviation of inferred parameters from ground-truth annotations, a reconstruction loss that enforces similar statistics between the image generated from the estimated parameters and the training images, and an additional regularization term that explicitly penalizes deviations between the latent codes of synthetic images and the average latent code of real images in the encoder's latent space. We demonstrate that the combination of a carefully designed parametric, procedural yarn model with the respective network ensembles and loss functions allows robust parameter inference even when trained solely on synthetic data. Since our approach relies on the availability of a yarn database with parameter annotations, and we are not aware of any such dataset, we additionally provide, to the best of our knowledge, the first dataset of yarn images annotated with the respective yarn parameters. For this purpose, we use a novel yarn generator that improves the realism of the produced results over previous approaches.
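To make the structure of the training objective concrete, the following is a minimal sketch of how the three loss terms described in the abstract could be combined. It assumes mean-squared formulations for all terms and illustrative weights; the paper's actual statistics, weightings, and network details are not specified here, and all function names are hypothetical.

```python
import numpy as np

def parameter_loss(pred_params, gt_params):
    # Penalize deviation of inferred yarn parameters from ground-truth
    # annotations (MSE is an assumption for this sketch).
    return float(np.mean((np.asarray(pred_params) - np.asarray(gt_params)) ** 2))

def reconstruction_loss(stats_rendered, stats_target):
    # Compare statistics of the image rendered from the estimated parameters
    # with statistics of the training image; the choice of statistic is
    # left open here.
    return float(np.mean((np.asarray(stats_rendered) - np.asarray(stats_target)) ** 2))

def latent_regularization(z_synthetic, z_real_mean):
    # Pull latent codes of synthetic images toward the average latent code
    # of real images in the encoder's latent space.
    return float(np.mean((np.asarray(z_synthetic) - np.asarray(z_real_mean)) ** 2))

def total_loss(pred_params, gt_params, stats_rendered, stats_target,
               z_synthetic, z_real_mean, w_param=1.0, w_rec=1.0, w_reg=0.1):
    # Weighted sum of the three terms; the weights are illustrative only.
    return (w_param * parameter_loss(pred_params, gt_params)
            + w_rec * reconstruction_loss(stats_rendered, stats_target)
            + w_reg * latent_regularization(z_synthetic, z_real_mean))
```

In the ensemble setting described above, each network in the ensemble would be trained with such an objective restricted to the yarn characteristics (e.g. twist or fiber migration) it is responsible for.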