Reinforcement Learning for Orientation Estimation Using Inertial Sensors with Performance Guarantee

Conference Paper (2021)
Author(s)

Liang Hu (University of Essex)

Y. Tang (TU Delft - Robot Dynamics)

Zhipeng Zhou (TU Delft - Robot Dynamics)

Wei Pan (TU Delft - Robot Dynamics)

Research Group
Robot Dynamics
Copyright
© 2021 Liang Hu, Y. Tang, Z. Zhou, W. Pan
DOI related publication
https://doi.org/10.1109/ICRA48506.2021.9561440
Publication Year
2021
Language
English
Pages (from-to)
10243-10249
ISBN (print)
978-1-7281-9078-5
ISBN (electronic)
978-1-7281-9077-8
Reuse Rights

Other than for strictly personal use, it is not permitted to download, forward or distribute the text or part of it, without the consent of the author(s) and/or copyright holder(s), unless the work is under an open content license such as Creative Commons.

Abstract

This paper presents a deep reinforcement learning (DRL) algorithm for orientation estimation using inertial sensors combined with a magnetometer. Lyapunov’s method from control theory is employed to prove the convergence of the orientation estimation errors. Based on these theoretical results, the estimator gains and a Lyapunov function are parametrised by deep neural networks and learned from samples. The DRL estimator is compared with three well-known orientation estimation methods in both numerical simulations and on a real dataset collected from commercially available sensors. The results show that the proposed algorithm is superior under arbitrary estimation initialisation and can adapt to drastic angular velocity profiles for which the other algorithms are hardly applicable. To the best of our knowledge, this is the first DRL-based orientation estimation method with a guarantee of estimation error boundedness.
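The role of the learned estimator gains can be illustrated with a classical fixed-gain complementary filter, the kind of hand-tuned baseline whose gain the paper's DRL estimator would replace with a learned, state-dependent one. This is a minimal sketch and not the paper's algorithm; the function name and the fixed `alpha` gain are illustrative assumptions.

```python
import numpy as np

def complementary_filter(gyro, accel, dt, alpha=0.98):
    """Estimate roll angle from a gyro rate and accelerometer gravity direction.

    gyro:  (N,) angular rate about the x-axis [rad/s]
    accel: (N, 2) gravity components (a_y, a_z) in the body frame
    alpha: fixed blending gain -- a hand-tuned stand-in for the
           state-dependent gain a learned estimator would provide.
    """
    roll = 0.0
    estimates = []
    for w, (ay, az) in zip(gyro, accel):
        roll_gyro = roll + w * dt              # propagate with the gyro rate
        roll_acc = np.arctan2(ay, az)          # gravity-based roll measurement
        roll = alpha * roll_gyro + (1.0 - alpha) * roll_acc  # blend the two
        estimates.append(roll)
    return np.array(estimates)

# Simulated constant rotation with a noisy gyro
np.random.seed(0)
dt, n = 0.01, 500
true_roll = 0.5 * np.arange(n) * dt            # 0.5 rad/s rotation
gyro = 0.5 + 0.01 * np.random.randn(n)         # noisy rate measurements
accel = np.stack([np.sin(true_roll), np.cos(true_roll)], axis=1)
est = complementary_filter(gyro, accel, dt)
```

In this sketch the accelerometer correction keeps the gyro integration from drifting; the paper's contribution is to learn such correction gains with DRL while a jointly learned Lyapunov function certifies that the estimation error stays bounded.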
