Online robot guidance and navigation in non-stationary environment with hybrid Hierarchical Reinforcement Learning

Journal Article (2022)
Author(s)

Y. Zhou (Universiti Sains Malaysia, TU Delft - Control & Simulation)

H.W. Ho (TU Delft - Control & Simulation, Universiti Sains Malaysia)

Research Group
Control & Simulation
Copyright
© 2022 Y. Zhou, H.W. Ho
DOI related publication
https://doi.org/10.1016/j.engappai.2022.105152
Publication Year
2022
Language
English
Volume number
114
Reuse Rights

Other than for strictly personal use, it is not permitted to download, forward or distribute the text or part of it, without the consent of the author(s) and/or copyright holder(s), unless the work is under an open content license such as Creative Commons.

Abstract

Hierarchical Reinforcement Learning (HRL) offers a way to solve complex guidance and navigation problems with high-dimensional spaces, multiple objectives, and large numbers of states and actions. Current HRL methods often use the same or similar reinforcement learning methods within one application so that multiple objectives can be combined easily. Since no single learning method benefits all objectives, hybrid Hierarchical Reinforcement Learning (hHRL) was proposed to combine different methods, optimizing the learning for different types of information and objectives within one application. The previous hHRL method, however, requires manual task-specific designs, which reflect engineers' preferences and may impede its transfer learning ability. This paper therefore proposes a systematic online guidance and navigation method under the hHRL framework that generalizes training samples with a function approximator and decomposes the state space automatically, and thus requires no task-specific designs. Simulation results indicate that the proposed method is superior to the previous hHRL method, which requires manual decomposition, in terms of both the convergence rate and the learnt policy. It is also shown that the method applies to non-stationary environments that change over episodes and over time, without loss of efficiency even with noisy state information.
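To make the two-level structure described in the abstract concrete, below is a minimal sketch of a hybrid HRL loop in Python: a tabular Q-learner selects subgoals at the high level while a low-level learner with linear function approximation navigates toward them, illustrating how two different learning methods can be mixed in one agent. The grid world, subgoal set, feature vector, and learning rates are all illustrative assumptions; this is not the authors' implementation and it does not reproduce their automatic state-space decomposition.

```python
# Hypothetical two-level hybrid HRL sketch on a toy grid world.
# High level: tabular Q-learning over (state, subgoal).
# Low level: Q-learning with linear function approximation.
import numpy as np

SIZE = 8                                       # toy grid of (row, col) states
GOAL = (7, 7)
SUBGOALS = [(3, 3), (3, 7), (7, 3), GOAL]      # hypothetical option set
ACTIONS = [(-1, 0), (1, 0), (0, -1), (0, 1)]   # up, down, left, right

def step(s, a):
    """Deterministic move, clipped to the grid."""
    r = min(max(s[0] + a[0], 0), SIZE - 1)
    c = min(max(s[1] + a[1], 0), SIZE - 1)
    return (r, c)

def features(s, g, a):
    """Hand-crafted features (an assumption) that generalize over states:
    bias plus normalized next-state distance to the current subgoal."""
    ns = step(s, a)
    return np.array([1.0,
                     abs(ns[0] - g[0]) / SIZE,
                     abs(ns[1] - g[1]) / SIZE])

Q_hi = np.zeros((SIZE, SIZE, len(SUBGOALS)))   # tabular high-level values
w_lo = np.zeros(3)                             # low-level linear weights
alpha_hi, alpha_lo, gamma, eps = 0.1, 0.05, 0.95, 0.1
rng = np.random.default_rng(0)

for episode in range(300):
    s = (0, 0)
    for _ in range(50):                        # cap options per episode
        # High-level epsilon-greedy choice of a subgoal.
        if rng.random() < eps:
            gi = int(rng.integers(len(SUBGOALS)))
        else:
            gi = int(np.argmax(Q_hi[s[0], s[1]]))
        g, s0 = SUBGOALS[gi], s
        # Low-level policy runs until the subgoal or a step limit.
        for _ in range(20):
            if s == g:
                break
            qs = [float(w_lo @ features(s, g, a)) for a in ACTIONS]
            ai = int(rng.integers(4)) if rng.random() < eps else int(np.argmax(qs))
            s2 = step(s, ACTIONS[ai])
            r = 1.0 if s2 == g else -0.01      # intrinsic reward for the option
            boot = 0.0 if s2 == g else gamma * max(
                float(w_lo @ features(s2, g, a)) for a in ACTIONS)
            w_lo += alpha_lo * (r + boot - qs[ai]) * features(s, g, ACTIONS[ai])
            s = s2
        # High-level update with the extrinsic reward for the whole option.
        R = 1.0 if s == GOAL else -0.1
        boot = 0.0 if s == GOAL else gamma * float(np.max(Q_hi[s[0], s[1]]))
        Q_hi[s0[0], s0[1], gi] += alpha_hi * (R + boot - Q_hi[s0[0], s0[1], gi])
        if s == GOAL:
            break
```

The hybrid aspect shows up in the two update rules: the high level stays tabular because the subgoal space is small, while the low level uses a function approximator so that training samples generalize across states, echoing (in a much simplified form) the motivation stated in the abstract.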

Files

1_s2.0_S0952197622002676_main.... (pdf | 1.31 MB)
- Embargo expired in 01-07-2023
License info not available