Repository hosted by TU Delft Library


Pattern recognition in hyperspectral data acquired during surgical procedures: differentiation between nerve and adipose tissue

Publication files not online.

Authors: Schols, R.M. · Laan, M. ter · Stassen, L.P.S. · Bouvy, N.D. · Wieringa, F.P. · Alic, L.
Type: article
Date: 2016
Source:Proceedings MLDAS, Third Machine Learning and Data Analytics Symposium, 14-15 March 2016, Doha, Qatar
Identifier: 534791
Keywords: Electronics · Diffuse reflectance spectroscopy · Tissue spectral analysis · Nerve classification · Recurrent laryngeal nerve · Median nerve · Adipose tissue · Biomedical Innovation · Healthy Living · Nano Technology · OPT - Optics · TS - Technical Sciences

Abstract

Intraoperative nerve localization is extremely important during surgery, especially laparoscopy. It is particularly challenging when nerves visually resemble the surrounding tissue. An example of such a delicate procedure is thyroid and parathyroid surgery, where iatrogenic injury of the recurrent laryngeal nerve can result in transient or permanent vocal problems. A camera system enabling nerve-specific image enhancement would be useful in preventing such complications, and hyperspectral camera technology has the potential to provide such enhancement. As a first step towards such a dedicated camera system, we evaluated the availability of useful spectral tissue signatures by diffuse reflectance spectroscopy using silicon (Si) and indium gallium arsenide (InGaAs) sensors. Spectral signatures from the combined Si & InGaAs bandwidth, spanning 350–1,830 nm at 1 nm spectral resolution, were used to develop a classifier. To build the classifier, 36 heuristic features were extracted from spectral signatures collected during carpal tunnel release (CTR) surgery as well as thyroid and parathyroid (T&P) surgery. As the larger median nerve (exposed during T&P surgery) offered a lower probability of partial volume effects, these data (15 tissue spots) were used to train the classifier. For validation, 40 tissue spots acquired during CTR surgery were used. Differentiation between nerve tissue and the visually quite similar adipose tissue yielded good results. Using one feature, we reached an accuracy of 93.3% on the training set and 85% on the independent validation set. Using two features, we reached 100% accuracy on the training set (26 feature pairs) and a maximum accuracy of 92.5% (11 feature pairs) on the independent validation set. Using three features, we reached 100% accuracy on the training set (410 feature triplets) and 100% on the independent validation set (37 feature triplets).
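The feature-subset evaluation described above can be sketched as an exhaustive search over 1-, 2-, and 3-feature combinations, scoring each subset with a simple classifier. This is a minimal illustrative sketch only: the synthetic feature values, the number of features, and the nearest-centroid classifier are all assumptions (the abstract does not specify which classifier or heuristic features were used).

```python
# Hypothetical sketch of an exhaustive feature-subset search, as described in
# the abstract. Synthetic "heuristic feature" values and a nearest-centroid
# classifier are illustrative assumptions, not the authors' actual method.
import itertools
import random

random.seed(0)
N_FEATURES = 8  # the abstract uses 36 heuristic features; fewer here for speed

def make_samples(n, means):
    """Generate n synthetic tissue spots, one feature value per mean."""
    return [[random.gauss(mu, 0.5) for mu in means] for _ in range(n)]

# Two tissue classes, separable on features 0 and 3 only (by construction)
nerve   = make_samples(15, [2.0, 0.0, 0.0, 2.0] + [0.0] * (N_FEATURES - 4))
adipose = make_samples(15, [0.0] * N_FEATURES)

def centroid(rows, idx):
    """Per-class mean over the selected feature indices."""
    return [sum(r[i] for r in rows) / len(rows) for i in idx]

def classify(x, idx, c_nerve, c_adipose):
    """Assign the class whose centroid is nearest in the selected subspace."""
    dist = lambda c: sum((x[i] - cv) ** 2 for i, cv in zip(idx, c))
    return "nerve" if dist(c_nerve) < dist(c_adipose) else "adipose"

def accuracy(idx):
    """Training-set accuracy of the centroid classifier on feature subset idx."""
    cn, ca = centroid(nerve, idx), centroid(adipose, idx)
    hits = sum(classify(x, idx, cn, ca) == "nerve" for x in nerve)
    hits += sum(classify(x, idx, cn, ca) == "adipose" for x in adipose)
    return hits / (len(nerve) + len(adipose))

# Exhaustively score every 1-, 2-, and 3-feature subset
for k in (1, 2, 3):
    perfect = [s for s in itertools.combinations(range(N_FEATURES), k)
               if accuracy(list(s)) == 1.0]
    print(f"{k}-feature subsets with 100% training accuracy: {len(perfect)}")
```

In the study, each subset's accuracy would additionally be checked on the independent validation set (the CTR spots), which is why fewer subsets survive validation than training.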