With significant progress being made toward improved endoscope technology such as capsule endoscopy and robotic endoscopy, developing advanced strategies for manipulating and controlling these devices, and more generally easing their accessibility for physicians, is an important next step. This article presents an autonomous navigation strategy for endoscopy that uses a state-dependent region estimation approach to enable multimodal control design. The region estimator is evaluated for its accuracy in predicting the yaw angle of the camera relative to the lumen center and in localizing the camera based on overall haustra morphology within the colon. To assess the utility of this estimator, multimodal control is used for autonomous navigation of the Endoculus, a robotic capsule endoscope, within a benchtop, to-scale colon simulator. The estimation approach is presented and tested, demonstrating successful tracking of fixed-velocity rotations at speeds up to 40°/s and enabling curve anticipation approximately 10 cm before entering a curved section of the simulator. Finally, the multimodal control strategy built on this estimator is tested within the simulator over a variety of anatomic configurations. The strategy navigates successfully in both straight sections and tightly curved sections with radii of curvature as small as 8 cm, with average velocities reaching 2.61 cm/s in straight sections and 0.99 cm/s in curved sections.
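To illustrate the idea of region-based multimodal control described above, the following Python sketch switches between assumed control modes according to a hypothetical region estimate and yaw error. This is not the authors' implementation: the region labels, speeds, gains, and control laws are all illustrative assumptions.

```python
# Hypothetical sketch of region-based multimodal control for a robotic
# capsule endoscope. All names, thresholds, and control laws are
# illustrative assumptions, not the method reported in the article.

from dataclasses import dataclass
from enum import Enum, auto


class Region(Enum):
    STRAIGHT = auto()     # lumen ahead estimated as straight
    CURVE_AHEAD = auto()  # a curved section is anticipated ahead
    IN_CURVE = auto()     # capsule estimated to be inside a curve


@dataclass
class Estimate:
    region: Region    # output of the (hypothetical) region estimator
    yaw_error: float  # camera yaw relative to lumen center [rad]


def multimodal_command(est: Estimate,
                       v_straight: float = 0.025,  # m/s, assumed cruise speed
                       v_curve: float = 0.010,     # m/s, assumed curve speed
                       k_yaw: float = 1.5):        # assumed steering gain
    """Pick a forward speed and steering rate from the estimated region.

    Straight sections use a higher forward speed with proportional yaw
    correction; when a curve is anticipated the speed is reduced before
    entry; inside a curve the steering correction dominates.
    """
    if est.region is Region.STRAIGHT:
        v = v_straight
        omega = -k_yaw * est.yaw_error
    elif est.region is Region.CURVE_AHEAD:
        v = 0.5 * (v_straight + v_curve)      # slow down before the curve
        omega = -k_yaw * est.yaw_error
    else:  # Region.IN_CURVE
        v = v_curve
        omega = -2.0 * k_yaw * est.yaw_error  # steer more aggressively
    return v, omega


if __name__ == "__main__":
    # Example: a curve is anticipated and the camera is 0.2 rad off center.
    v, omega = multimodal_command(Estimate(Region.CURVE_AHEAD, 0.2))
    print(f"forward speed: {v:.3f} m/s, steering rate: {omega:.3f} rad/s")
```

The sketch only conveys the structure implied by the abstract: a discrete region estimate selects among control behaviors, while the yaw estimate supplies the continuous correction toward the lumen center.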