Institutional Repository


Advanced Path Planning for a Neurosurgical Flexible Catheter: Improving the performance of sampling-based motion planning


These file attachments were under embargo; they were made available to the public when the embargo was lifted on 4 October 2012.

Author: Falatehan, K.
Mentor: Langendoen, K.G. · Rodriguez y Baena, F.
Faculty: Electrical Engineering, Mathematics and Computer Science
Department: Embedded Software
Programme: Embedded Systems
Type: Master thesis
Embargo lifted: 2012-10-04
Keywords: STING · Catheter · Neurosurgery · Imperial College London · Mechatronics in Medicine Laboratory · Path Planning
Rights: (c) 2012 Falatehan, K.


At the Mechatronics in Medicine (MiM) Laboratory of Imperial College London, a neurosurgical steerable flexible probe (STING) is being developed to access deep brain lesions through curved trajectories. This research project focuses on trajectory planning for the flexible probe, i.e. on how to increase the efficiency and performance of the trajectory planning. Experiments were conducted to measure the performance of a well-known sampling-based path planning method, the Reachability-Guided Rapidly-exploring Random Tree (RG-RRT).
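The core idea of RG-RRT can be illustrated with a small sketch. This is a toy 2D model, not the thesis implementation: the forward model, steering cone, workspace bounds, and all parameter values below are illustrative assumptions. The RG-RRT-specific step is that nearest-neighbor selection is done against each node's *reachable set*, and samples that no node can usefully grow toward are discarded.

```python
import math
import random

def dist(a, b):
    """Euclidean distance on the (x, y) components."""
    return math.hypot(a[0] - b[0], a[1] - b[1])

def reachable_set(node, step=0.5):
    # Toy forward model (assumption): the probe advances a fixed step and
    # can steer only within a limited cone, so each node has a small
    # discrete set of reachable successor states (x, y, heading).
    x, y, heading = node
    return [(x + step * math.cos(heading + d),
             y + step * math.sin(heading + d),
             heading + d)
            for d in (-0.6, -0.3, 0.0, 0.3, 0.6)]

def rg_rrt(start, goal, goal_tol=0.8, iters=3000, seed=7):
    rng = random.Random(seed)
    parent = {start: None}                  # node -> parent node
    reach = {start: reachable_set(start)}   # node -> reachable successors
    for _ in range(iters):
        q = (rng.uniform(0.0, 6.0), rng.uniform(0.0, 6.0))
        # Reachability guidance: pick the node whose reachable set is
        # closest to the sample, and reject samples that lie closer to a
        # node than to anything that node can actually reach.
        best, best_d = None, float("inf")
        for node, rs in reach.items():
            d_rs = min(dist(r, q) for r in rs)
            if d_rs < dist(node, q) and d_rs < best_d:
                best, best_d = node, d_rs
        if best is None:
            continue  # sample discarded: no node can grow toward it
        new = min(reach[best], key=lambda r: dist(r, q))
        if new in parent:
            continue
        parent[new] = best
        reach[new] = reachable_set(new)
        if dist(new, goal) < goal_tol:
            path = [new]                    # walk back through the tree
            while parent[path[-1]] is not None:
                path.append(parent[path[-1]])
            return path[::-1]               # the curved trajectory
    return None

path = rg_rrt((1.0, 1.0, 0.0), (5.0, 5.0))
```

The reachability filter is what keeps the tree from repeatedly trying to extend a node toward samples that its constrained kinematics cannot approach, which is why RG-RRT suits a steerable probe with a bounded turning rate.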

The first step towards better performance was to migrate the implementation from MATLAB to Python-C++, which yielded a 12-13x speedup. Beyond these implementation-level improvements, the second step was to improve the algorithm itself by implementing a waypoint cache and by exploiting parallelization techniques. The parallelization techniques cover the multi-core CPU (OR parallel, AND parallel, OR+AND parallel, and Manager-Worker) and GPGPU approaches.
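The waypoint-cache idea can be sketched with a toy experiment. This is a stand-in, not the thesis code: the fixed "every 4th sample" bias schedule, the workspace bounds, and the tolerance are all assumptions; the point is only that reusing waypoints recorded on an earlier successful query lets a sampling-based planner hit the goal region with far fewer samples than uniform sampling alone.

```python
import math
import random

def sampling_effort(goal, cache=(), goal_tol=0.1, max_iters=100000, seed=11):
    """Count how many samples a toy planner draws before one lands within
    goal_tol of the goal. Every 4th sample is taken from the waypoint
    cache (assumed bias schedule), mimicking a planner that reuses
    waypoints from a previous successful query."""
    rng = random.Random(seed)
    for i in range(1, max_iters + 1):
        if cache and i % 4 == 0:
            q = rng.choice(cache)                       # reuse a cached waypoint
        else:
            q = (rng.uniform(0.0, 10.0), rng.uniform(0.0, 10.0))
        if math.hypot(q[0] - goal[0], q[1] - goal[1]) < goal_tol:
            return i
    return max_iters

goal = (8.0, 8.0)
# Waypoints recorded on an earlier, similar query (illustrative values).
previous_path = [(2.0, 2.0), (4.5, 3.5), (6.5, 6.0), (8.0, 8.0)]

cold = sampling_effort(goal)                       # uniform sampling only
warm = sampling_effort(goal, cache=previous_path)  # cache-biased sampling
```

With the cache, one of the biased draws lands on a stored waypoint near the goal within a handful of iterations, whereas uniform sampling must wait for a random point to fall inside the small goal region.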

At the end of the research project, RG-RRT with a waypoint cache experimentally achieved a 4x speedup, while among the multi-core CPU techniques the AND parallel variant showed the most significant result with approximately a 5x speedup. Parallelization on an NVIDIA CUDA-enabled GPU obtained a 10x speedup. Despite this higher speedup, the GPGPU technique was later shown to suffer the most from inefficiency, due to the I/O bottleneck caused by device-host memory transfers.
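The transfer bottleneck can be made concrete with Amdahl-style accounting. The numbers below are illustrative assumptions, not measurements from the thesis: even when the GPU kernel itself is very fast, the per-iteration device-host copy is paid at full cost, which caps the end-to-end speedup well below the raw kernel speedup.

```python
def end_to_end_speedup(t_cpu, kernel_speedup, t_transfer):
    """Effective speedup once per-iteration device-host transfer time is
    counted against the raw kernel speedup (Amdahl-style accounting).
    All times are in the same unit (e.g. milliseconds)."""
    t_kernel = t_cpu / kernel_speedup
    return t_cpu / (t_kernel + t_transfer)

# Illustrative numbers (assumptions): a 10 ms CPU step, a kernel that is
# 50x faster than the CPU, and 0.8 ms of device-host transfer per step.
raw = 50.0
effective = end_to_end_speedup(10.0, raw, 0.8)  # -> 10.0
```

Here a 50x kernel collapses to a 10x end-to-end gain: the 0.8 ms copy dominates the 0.2 ms kernel, so further kernel optimization barely helps until the transfers themselves are reduced or batched.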
