Depth-based registration of 3D preoperative models to intraoperative patient anatomy using the HoloLens 2

Master Thesis (2023)
Author(s)

E. Kerkhof (TU Delft - Mechanical Engineering)

Contributor(s)

Theo van Walsum – Graduation committee member (Erasmus MC)

Tessa van Ginhoven – Graduation committee member (Erasmus MC)

Denise E. Hilling – Graduation committee member (Leiden University Medical Center)

Abdullah Thabit – Mentor (Erasmus MC)

Faculty
Mechanical Engineering
Copyright
© 2023 Enzo Kerkhof
Publication Year
2023
Language
English
Graduation Date
11-07-2023
Awarding Institution
Delft University of Technology, Universiteit Leiden, Erasmus Universiteit Rotterdam
Programme
Technical Medicine | Imaging and Intervention
Reuse Rights

Other than for strictly personal use, it is not permitted to download, forward or distribute the text or part of it, without the consent of the author(s) and/or copyright holder(s), unless the work is under an open content license such as Creative Commons.

Abstract

Purpose: Image-guided surgery can decrease the incidence of perioperative complications. Augmented reality (AR) with head-mounted displays (HMDs) provides an accessible way to visualize 3D preoperative models intraoperatively, which could give the surgeon an easy-to-use intraoperative image-guided surgery system. Image-to-patient registration, the other key step of image-guided surgery, can still be intricate and time-consuming with traditional systems. This research explores the feasibility of using the depth sensors of the HoloLens 2, a state-of-the-art AR HMD, for depth-based image-to-patient registration, and thereby contributes to the development of less complex and more efficient image-guided surgical techniques.
Methods: To achieve these objectives, three experiments were conducted using a pilot system based on the HoloLens 2's depth sensors. The first experiment quantitatively evaluated the accuracy of the depth sensors. The second experiment compared four registration initialization methods, including manual and automated approaches; the accuracy and success rate of alignment were assessed against a multi-modal ground truth. Finally, a qualitative assessment of the pilot system was performed on various objects and materials to evaluate its performance and usability in real-world scenarios.
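Depth-based image-to-patient registration of this kind typically refines an initial alignment of the preoperative model to the sensed depth point cloud with the iterative closest point (ICP) algorithm. As an illustrative sketch only (not the thesis implementation, which runs on HoloLens 2 depth streams), a minimal point-to-point ICP in NumPy:

```python
import numpy as np

def icp(source, target, iters=30):
    """Minimal point-to-point ICP: rigidly aligns source (N,3) to target (M,3).

    Returns the accumulated 4x4 rigid transform. Illustrative sketch;
    assumes a reasonable initial alignment, as in the thesis's
    initialization experiments.
    """
    src = source.copy()
    T = np.eye(4)
    for _ in range(iters):
        # Brute-force nearest-neighbour correspondences
        d = np.linalg.norm(src[:, None, :] - target[None, :, :], axis=2)
        corr = target[np.argmin(d, axis=1)]
        # Closed-form rigid fit of src to corr (Kabsch / SVD)
        mu_s, mu_t = src.mean(0), corr.mean(0)
        H = (src - mu_s).T @ (corr - mu_t)
        U, _, Vt = np.linalg.svd(H)
        R = Vt.T @ U.T
        if np.linalg.det(R) < 0:  # guard against a reflection solution
            Vt[-1] *= -1
            R = Vt.T @ U.T
        t = mu_t - R @ mu_s
        src = src @ R.T + t
        step = np.eye(4)
        step[:3, :3], step[:3, 3] = R, t
        T = step @ T
    return T
```

In practice a library implementation (e.g. Open3D's ICP) with a k-d tree for correspondence search would be used instead of the brute-force matching above.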
Results: The depth accuracy experiment showed that the AHAT and LT sensors had mean overestimation errors of 5.7 mm and 9.0 mm, respectively. In the registration experiment, the two manual initialization methods consistently achieved successful registration (100%), while the two automatic methods had varying success rates (23.3% and 50%). Three of the four depth registration methods completed the registration within 5 seconds. The mean translation errors ranged from 12.6 to 14.7 mm, and the mean rotation errors from 1.5 to 1.8 degrees. The minimum observed translation and rotation errors were 6.9 mm and 0.5 degrees, respectively, and the maximum errors were 18.8 mm and 3.2 degrees.
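Translation and rotation errors such as those reported above are standard rigid-registration metrics, derived from the residual transform between an estimated pose and the ground-truth pose. A minimal sketch of one common definition (the thesis's exact error computation may differ):

```python
import numpy as np

def registration_error(T_est, T_gt):
    """Translation error (same units as the transforms) and rotation error
    (degrees) between an estimated and a ground-truth 4x4 rigid transform."""
    # Residual transform mapping the estimate onto the ground truth
    T_delta = np.linalg.inv(T_gt) @ T_est
    trans_err = np.linalg.norm(T_delta[:3, 3])
    # Rotation angle recovered from the trace of the residual rotation
    cos_theta = np.clip((np.trace(T_delta[:3, :3]) - 1.0) / 2.0, -1.0, 1.0)
    rot_err = np.degrees(np.arccos(cos_theta))
    return trans_err, rot_err
```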
Conclusion: The results suggest that sub-10 mm registration accuracy within 5 seconds is achievable with depth-based image-to-patient registration. This offers a fast and convenient alternative to tracking systems that require invasive fiducial markers and time-consuming calibration steps, although the system's current accuracy level imposes some limitations. Nonetheless, the developed system holds promise for the wide range of surgical procedures that currently forgo image guidance because of its complexity. By making image guidance faster and more accessible, depth-based registration has the potential to improve surgical outcomes, for example by improving tumor resection margins and avoiding vulnerable tissues.
