Automated Personnel Activities Observation in the Catheterization Laboratory


Abstract

This thesis presents a method for observing personnel activities, i.e., 3D human pose estimation and tracking, in a Catheterization Laboratory (Cath Lab). We mount five cameras at different angles in the Cath Lab, where surgeons and assistants wear similar clothing during surgery. Accurate 3D human pose estimation is the cornerstone of our method. Most previous 3D pose estimation methods train their models directly on a 3D pose dataset. However, these methods are not suitable for our task: 1) we do not have enough 3D pose data for training, owing to privacy constraints and the specificity of the setting; 2) the model would need to be retrained whenever it is deployed in a different operating room or the camera calibration changes. To address these problems, we decompose the 3D human pose estimation task into two stages, avoiding the need for large amounts of 3D pose data and for retraining. In the first stage, we apply YOLOX and HRNet for 2D human detection and 2D pose estimation. Simultaneously, the 2D object tracking network ByteTrack tracks person identities based on the detection results. We then use a matching algorithm to associate corresponding 2D poses across the multiple views and reconstruct 3D poses. Given the 3D poses and tracking identities, we finally introduce a hybrid tracking algorithm. By feeding the 2D tracking results into the matching and tracking algorithm, we improve accuracy in scenes where people wear similar clothing. We fine-tune and test our method on an operating room dataset. Finally, we validate the method on data from the Cath Lab.
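As an illustration of the reconstruction step described above, the following is a minimal sketch (not the thesis implementation) of triangulating a 3D pose from 2D poses that have already been matched across calibrated views, using the standard direct linear transform (DLT). The function names, array shapes, and projection matrices are hypothetical placeholders.

```python
import numpy as np

def triangulate_joint(points_2d, proj_mats):
    """Triangulate one 3D joint from its 2D observations in V views.

    points_2d : (V, 2) pixel coordinates of the joint in each view
    proj_mats : (V, 3, 4) camera projection matrices for those views
    """
    A = []
    for (x, y), P in zip(points_2d, proj_mats):
        # Each view contributes two linear constraints on the homogeneous 3D point.
        A.append(x * P[2] - P[0])
        A.append(y * P[2] - P[1])
    A = np.stack(A)
    # The solution is the right singular vector with the smallest singular value.
    _, _, vt = np.linalg.svd(A)
    X = vt[-1]
    return X[:3] / X[3]  # de-homogenize

def triangulate_pose(pose_2d_per_view, proj_mats):
    """Reconstruct a 3D pose (J joints) from matched 2D poses in V views."""
    pose_2d = np.asarray(pose_2d_per_view)           # (V, J, 2)
    return np.stack([
        triangulate_joint(pose_2d[:, j], proj_mats)  # (3,) per joint
        for j in range(pose_2d.shape[1])
    ])                                               # (J, 3)
```

In practice, the matching stage would also carry the ByteTrack identities along with the 2D poses, so each reconstructed 3D pose inherits a consistent person identity across frames.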
