Model-Reference Reinforcement Learning Control of Autonomous Surface Vehicles
Qingrui Zhang (TU Delft - Transport Engineering and Logistics)
Wei Pan (TU Delft - Robot Dynamics)
Vasso Reppa (TU Delft - Transport Engineering and Logistics)
Abstract
This paper presents a novel model-reference reinforcement learning control method for uncertain autonomous surface vehicles. The proposed method combines a conventional model-based control law with deep reinforcement learning. The conventional model-based component ensures that the learning-based control law provides closed-loop stability for the trajectory tracking control of the overall system, and it increases the sample efficiency of the deep reinforcement learning. The reinforcement learning component directly learns a control law that compensates for modeling uncertainties. In the proposed design, a nominal system is employed both for designing a baseline control law with a conventional control approach and for defining the desired performance that the uncertain autonomous vehicle should follow. In comparison with traditional deep reinforcement learning methods, the proposed learning-based control provides stability guarantees and better sample efficiency. We demonstrate the performance of the new algorithm via extensive simulation results.
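The architecture described above can be sketched in a minimal form: a baseline tracking law designed on the nominal model, augmented by a learned term that compensates for modeling uncertainties. The PD-style gains, the state representation, and the stand-in `policy` below are illustrative assumptions, not the paper's actual design.

```python
import numpy as np

def baseline_control(state, ref, Kp=2.0, Kd=1.5):
    """PD-style tracking law designed on the nominal (uncertainty-free) model.

    Assumed gains Kp, Kd are placeholders; the actual baseline design
    in the paper may differ.
    """
    pos_err = ref["pos"] - state["pos"]
    vel_err = ref["vel"] - state["vel"]
    return Kp * pos_err + Kd * vel_err

def learned_compensation(state, policy):
    """Output of a (hypothetical) RL policy compensating for unmodeled dynamics."""
    features = np.concatenate([state["pos"], state["vel"]])
    return policy(features)

def combined_control(state, ref, policy):
    """Total control input: baseline term (stability) + learned term (uncertainty)."""
    return baseline_control(state, ref) + learned_compensation(state, policy)

# Example with a zero policy, i.e. pure baseline behavior before any learning:
state = {"pos": np.array([0.0, 0.0]), "vel": np.array([0.0, 0.0])}
ref = {"pos": np.array([1.0, 0.0]), "vel": np.array([0.0, 0.0])}
zero_policy = lambda features: np.zeros(2)
u = combined_control(state, ref, zero_policy)
```

With an untrained (zero) policy the vehicle is driven by the baseline law alone; as training proceeds, the learned term absorbs the mismatch between the nominal model and the true vehicle dynamics.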