Multimodal markers for technology-independent integration of augmented reality devices and surgical navigation systems

Journal Article (2022)
Author(s)

Mohamed Benmahdjoub (Erasmus MC)

Wiro J. Niessen (TU Delft - ImPhys/Medical Imaging, TU Delft - ImPhys/Computational Imaging, Erasmus MC)

E. B. Wolvius (Erasmus MC)

Theo Van Walsum (Erasmus MC)

Research Group
ImPhys/Computational Imaging
Copyright
© 2022 Mohamed Benmahdjoub, W.J. Niessen, Eppo B. Wolvius, T. van Walsum
DOI related publication
https://doi.org/10.1007/s10055-022-00653-3
Publication Year
2022
Language
English
Issue number
4
Volume number
26
Pages (from-to)
1637-1650
Reuse Rights

Other than for strictly personal use, it is not permitted to download, forward or distribute the text or part of it, without the consent of the author(s) and/or copyright holder(s), unless the work is under an open content license such as Creative Commons.

Abstract

Augmented reality (AR) permits the visualization of pre-operative data in the surgeon's field of view. This requires aligning the AR device's coordinate system with that of the navigation/tracking system in use. We propose a multimodal marker approach to align an AR device with a tracking system, in our implementation an electromagnetic tracking system (EMTS). The solution uses a calibration method that determines the spatial relationship between a 2D pattern detected by an RGB camera and an electromagnetic sensor of the EMTS. This allowed the projection of a 3D skull model onto its physical counterpart. The projection was evaluated using a monocular camera and an optical see-through device (HoloLens 2, https://www.microsoft.com/en-us/hololens/), achieving an accuracy of less than 2.5 mm in the image plane of the HoloLens 2 (HL2). Additionally, 10 volunteers participated in a user study consisting of an alignment task of a pointer with 25 projections viewed through the HL2. The participants achieved mean errors of 2.7 ± 1.3 mm and 2.9 ± 2.9° in position and orientation, respectively. This study demonstrates the feasibility of the approach, evaluates the alignment accuracy, and discusses its advantages and limitations.
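
The alignment described in the abstract can be viewed as composing a short chain of rigid transforms through the multimodal marker: the AR camera sees the 2D pattern, the EMTS tracks the attached sensor, and a one-time calibration links pattern and sensor. The sketch below is a minimal, illustrative Python example of that chain; the poses and the pattern-to-sensor calibration transform are placeholder assumptions, not the authors' implementation.

```python
# Illustrative sketch (assumed values, not the paper's code): chaining rigid
# transforms through a multimodal marker that couples a 2D visual pattern
# with an electromagnetic (EM) sensor.
import numpy as np

def make_transform(R, t):
    """Build a 4x4 homogeneous transform from a 3x3 rotation and a 3-vector."""
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = t
    return T

# Placeholder poses: the pattern pose in the AR camera frame (from marker
# detection) and the EM sensor pose in the EMTS frame (from tracking).
T_cam_pattern = make_transform(np.eye(3), np.array([0.0, 0.0, 0.30]))
T_emts_sensor = make_transform(np.eye(3), np.array([0.10, 0.05, 0.20]))

# Hypothetical one-time calibration result: the fixed rigid offset between
# the 2D pattern and the EM sensor on the multimodal marker.
T_pattern_sensor = make_transform(np.eye(3), np.array([0.02, 0.0, 0.0]))

# Compose the chain camera <- pattern <- sensor <- EMTS, which expresses the
# EMTS frame in the AR camera frame so EM-tracked data (e.g. a pointer or a
# planned 3D model) can be rendered in the AR view.
T_cam_emts = T_cam_pattern @ T_pattern_sensor @ np.linalg.inv(T_emts_sensor)

# Map a point known in EMTS coordinates into the AR camera frame.
p_emts = np.array([0.12, 0.07, 0.22, 1.0])
p_cam = T_cam_emts @ p_emts
print(p_cam[:3])
```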
