A New Baseline for Feature Description on Multimodal Scans of Paintings

Bachelor Thesis (2022)
Author(s)

J. van der Toorn (TU Delft - Electrical Engineering, Mathematics and Computer Science)

Contributor(s)

R.T. Wiersma – Mentor (TU Delft - Computer Graphics and Visualisation)

R. Marroquim – Mentor (TU Delft - Computer Graphics and Visualisation)

E. Eisemann – Mentor (TU Delft - Computer Graphics and Visualisation)

C. Lofi – Graduation committee member (TU Delft - Web Information Systems)

Publication Year
2022
Language
English
Copyright
© 2022 Jules van der Toorn
Graduation Date
23-06-2022
Awarding Institution
Delft University of Technology
Project
CSE3000 Research Project
Programme
Computer Science and Engineering
Faculty
Electrical Engineering, Mathematics and Computer Science
Reuse Rights

Other than for strictly personal use, it is not permitted to download, forward or distribute the text or part of it, without the consent of the author(s) and/or copyright holder(s), unless the work is under an open content license such as Creative Commons.

Abstract

Multimodal imaging is used by conservators and scientists to study the composition of paintings. To aid the combined analysis of these scans, the images must first be aligned. Rather than proposing a new domain-specific descriptor, we explore and evaluate how existing feature descriptors from related fields can improve the performance of feature-based painting scan registration. We benchmark these descriptors on pixel-precise, manually aligned scans of “Girl with a Pearl Earring” by Johannes Vermeer (c. 1665, Mauritshuis) and of “18th Century Portrait of a Woman”. As a baseline we compare against the well-established classical SIFT descriptor. We consider two recent descriptors: the handcrafted multimodal MFD descriptor and the learned unimodal SuperPoint descriptor. Experiments show that SuperPoint increases descriptor matching accuracy by 40% for modalities with few modality-specific artefacts. Furthermore, performing craquelure segmentation and using the MFD descriptor yields significant improvements in descriptor matching accuracy for modalities with many modality-specific artefacts.
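To illustrate the kind of descriptor benchmarking the abstract describes, below is a minimal sketch of matching SIFT descriptors between two pre-aligned modalities and scoring the matches, assuming OpenCV's SIFT implementation, a brute-force matcher with Lowe's ratio test, and placeholder image paths. The thesis' actual keypoints, descriptors (MFD, SuperPoint), and evaluation protocol may differ.

```python
# Sketch of a descriptor-matching benchmark between two aligned scans.
# Assumptions: OpenCV (cv2) with SIFT, Lowe's ratio test, and a simple
# pixel-distance criterion for "correct" matches; not the thesis pipeline.
import cv2
import numpy as np

def match_accuracy(img_a, img_b, pixel_tolerance=3.0):
    """Fraction of ratio-test matches whose keypoints coincide
    (within pixel_tolerance) on the pre-aligned scans."""
    sift = cv2.SIFT_create()
    kp_a, desc_a = sift.detectAndCompute(img_a, None)
    kp_b, desc_b = sift.detectAndCompute(img_b, None)

    matcher = cv2.BFMatcher(cv2.NORM_L2)
    knn = matcher.knnMatch(desc_a, desc_b, k=2)

    # Lowe's ratio test keeps only distinctive matches.
    good = [m[0] for m in knn
            if len(m) == 2 and m[0].distance < 0.75 * m[1].distance]
    if not good:
        return 0.0

    # Because the scans are pixel-precise aligned, a correct match should
    # map to (almost) the same coordinates in both images.
    correct = 0
    for m in good:
        pa = np.array(kp_a[m.queryIdx].pt)
        pb = np.array(kp_b[m.trainIdx].pt)
        if np.linalg.norm(pa - pb) <= pixel_tolerance:
            correct += 1
    return correct / len(good)

if __name__ == "__main__":
    vis = cv2.imread("visible.png", cv2.IMREAD_GRAYSCALE)   # placeholder path
    irr = cv2.imread("infrared.png", cv2.IMREAD_GRAYSCALE)  # placeholder path
    print(f"SIFT matching accuracy: {match_accuracy(vis, irr):.2%}")
```

The same scoring function could be reused for other descriptors by swapping out the detection and description step, which is the spirit of the comparison against SuperPoint and MFD.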
