Title: Natural User Interface in Augmented Reality to Control Spot: A Large-Scale User Study on Speech and Gesture Control of Robots With the Microsoft HoloLens
Author: van der Linden, Jesse (TU Delft Mechanical, Maritime and Materials Engineering)
Contributor: Eisma, Y.B. (mentor)
Degree granting institution: Delft University of Technology
Programme: Mechanical Engineering | Vehicle Engineering | Cognitive Robotics
Date: 2024-02-26

Abstract: The increasing presence of robots calls for a more seamless and information-rich communication method between humans and robots. This paper explores how natural user interface (NUI) modalities, particularly speech and gesture control, can be used through augmented reality (AR) to operate robots. This growing presence also calls for proper methods of evaluating how AR can be used to operate mobile robots. The study uses the Microsoft HoloLens and the robot Spot, from Boston Dynamics, as its primary technologies. The research comprises a user study with 218 participants, one of the largest participant pools in this field to date. In the experiment, participants walked the robot along a trajectory in discrete steps, either following the robot or standing at a predetermined stationary point. To support control of the robot, visual information and feedback were displayed in the HoloLens. Speech control showed the best time performance in the experiment, regardless of the perspective condition. Conversely, the majority of errors during the trials occurred in the speech condition, because delays in speech recognition caused participants to repeat commands. The walking condition gave participants the impression that control commands were more intuitively mapped to the robot's motion.
Overall, participants preferred the speech control method while walking with the robot, and the least preferred method was gesture control from a stationary perspective. Although speech was the preferred control method and perspective-taking was preferred by participants, these findings apply only to the experiment and task discussed in this paper. Both control methods have distinct characteristics that make them favorable for specific tasks. Speech and gestures can be used for different tasks when operating a robot with augmented reality glasses; preference will depend on the task at hand and the design of the control method.

Subject: HoloLens 2; Robotics; Augmented Reality; Human-Robot Interaction; Speech Recognition; Gesture Recognition; Teleoperation; Natural User Interface
To reference this document use: http://resolver.tudelft.nl/uuid:36c2f0bd-f1b6-4f72-aabe-211518115aed
Part of collection: Student theses
Document type: master thesis
Rights: © 2024 Jesse van der Linden
Files: PDF, J_vd_Linden_TU_Delft_Thes ... botics.pdf (4.04 MB)