VolCam

Context-Aware Intuitive Touchless Interaction For Medical Volume Data

Master Thesis (2017)
Author(s)

R. Alashrafov (TU Delft - Electrical Engineering, Mathematics and Computer Science)

Contributor(s)

E. Eisemann – Mentor

Anna Vilanova Bartroli – Mentor

Ioannis Katramados – Mentor

Faculty
Electrical Engineering, Mathematics and Computer Science
Copyright
© 2017 Rustam Alashrafov
Publication Year
2017
Language
English
Graduation Date
22-08-2017
Awarding Institution
Delft University of Technology
Project
3JECTOR
Reuse Rights

Other than for strictly personal use, it is not permitted to download, forward or distribute the text or part of it, without the consent of the author(s) and/or copyright holder(s), unless the work is under an open content license such as Creative Commons.

Abstract

Touchless interaction has recently gained considerable attention from both researchers and industry, and many domains are interested in incorporating this technology into their solutions. Medical visualization has a special interest in it because of the sterile conditions required in operating rooms. Exploration and detailed inspection of scanned objects are among the most common interactions performed by professionals, and these operations become more challenging when combined with touchless input. Context-aware methods exist that facilitate navigation, but they are designed for meshes rather than volume renderings. Hence the research question: can these methods be extended to volume renderings, and how well do they perform with touchless interaction metaphors? This work presents such a metaphor and the underlying VolCam algorithm. The metaphor allows users to perform exploration and inspection tasks on medical volume data using a touchless input device, the Leap Motion. VolCam, an extension of the ShellCam algorithm, automatically maps user input to distinct camera movements based on the current scene view by sampling the visible part of the volume. Interactive frame rates are achieved by performing the computations on the GPU. No pre-processing or specialized data structures are required, which makes the technique directly applicable to a wide range of volume datasets.
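To make the abstract's core idea concrete, the sketch below illustrates one plausible reading of "sampling the visible part of the volume" to drive a context-aware camera: back-project the non-background pixels of a depth buffer into camera space, take their centroid as an orbit pivot, and rotate the camera around it. All names here (`estimate_pivot`, `orbit`, the depth-buffer convention) are illustrative assumptions, not the thesis's actual VolCam implementation, which runs on the GPU.

```python
import numpy as np

def estimate_pivot(depth, fov_y=np.deg2rad(60.0), near=0.1, far=100.0):
    """Estimate an orbit pivot from the visible surface (illustrative sketch).

    depth: (H, W) array of normalized depths in [0, 1]; 1.0 marks background.
    Returns the centroid of the back-projected visible samples in camera space.
    """
    h, w = depth.shape
    ys, xs = np.nonzero(depth < 1.0)            # visible (non-background) pixels
    if len(xs) == 0:
        return np.zeros(3)                      # nothing visible: fall back to origin
    z = near + depth[ys, xs] * (far - near)     # assumed linear depth mapping
    aspect = w / h
    # Pixel centers to normalized device coordinates in [-1, 1]
    ndc_x = (xs + 0.5) / w * 2.0 - 1.0
    ndc_y = 1.0 - (ys + 0.5) / h * 2.0
    tan_half = np.tan(fov_y / 2.0)
    cam_x = ndc_x * tan_half * aspect * z
    cam_y = ndc_y * tan_half * z
    pts = np.stack([cam_x, cam_y, -z], axis=1)  # camera looks down -Z
    return pts.mean(axis=0)

def orbit(eye, pivot, angle):
    """Rotate the camera position about the world Y axis around the pivot."""
    c, s = np.cos(angle), np.sin(angle)
    rot = np.array([[c, 0.0, s],
                    [0.0, 1.0, 0.0],
                    [-s, 0.0, c]])
    return pivot + rot @ (eye - pivot)
```

Because the pivot follows whatever is currently on screen, the same orbit gesture adapts from coarse exploration (whole object visible) to close-up inspection (small region visible), which is the behavior the abstract attributes to the context-aware mapping.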

Files

RustamMScThesisVolCam.pdf
(pdf | 23.2 MB)
License info not available