Deep learning for surgical phase recognition using endoscopic videos
Annetje C.P. Guédon (Spaarne Gasthuis, Hoofddorp)
S.E.P. Meij (TU Delft - Medical Instruments & Bio-Inspired Technology)
Karim N.M.M.H. Osman (Student TU Delft)
Helena A. Kloosterman (Cosmonio)
Karlijn J. van Stralen (Spaarne Gasthuis, Hoofddorp)
Matthijs C.M. Grimbergen (Amsterdam UMC)
Quirijn A.J. Eijsbouts (Spaarne Gasthuis, Hoofddorp)
J.J. van Den Dobbelsteen (TU Delft - Medical Instruments & Bio-Inspired Technology)
Andru P. Twinanda (Cosmonio)
Abstract
Operating room planning is a complex task, as pre-operative estimates of
procedure duration have limited accuracy due to large variations in the
course of procedures. Therefore, information about the
progress of procedures is essential to adapt the daily operating room
schedule accordingly. This information should ideally be objective,
automatically retrievable, and available in real time. Recordings made during
endoscopic surgeries are a potential source of progress information. A
trained observer is able to recognize the ongoing surgical phase from
watching these videos. The introduction of deep learning techniques has
opened up opportunities to automatically retrieve information from
surgical videos. The aim of this study was to apply state-of-the-art
deep learning techniques on a new set of endoscopic videos to
automatically recognize the progress of a procedure, and to assess the
feasibility of the approach in terms of performance, scalability and
practical considerations.