Fault Tolerant Control for Autonomous Surface Vehicles via Model Reference Reinforcement Learning


Abstract

A novel fault-tolerant control algorithm based on model reference reinforcement learning is proposed in this paper for autonomous surface vehicles subject to sensor faults and model uncertainties. The proposed control scheme combines a model-based control approach with a data-driven method, allowing it to leverage the advantages of both. The design comprises a baseline controller that ensures stable tracking performance under healthy conditions, a fault observer that estimates sensor faults, and a reinforcement learning module that learns to accommodate sensor faults using the fault estimates and to compensate for model uncertainties. This composite design effectively mitigates the impact of sensor faults and model uncertainties, and stable tracking performance is guaranteed during both the offline training and online implementation stages of the learning-based fault-tolerant control. A numerical simulation with gyro sensor faults is presented to demonstrate the effectiveness of the proposed algorithm.
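To make the composite structure described above concrete, the sketch below outlines one possible arrangement of the three modules named in the abstract: a model-based baseline feedback law, a sensor-fault observer, and a learned corrective term. This is only a minimal illustration under assumed interfaces; the class name, gains, observer update, and the `rl_policy` callable are all hypothetical and are not taken from the paper.

```python
import numpy as np


class CompositeFaultTolerantController:
    """Illustrative sketch (not the paper's implementation) of a composite
    fault-tolerant controller: baseline feedback + fault-observer
    compensation + learned (RL) correction. All names are assumptions."""

    def __init__(self, baseline_gain, rl_policy, observer_gain, dt=0.01):
        self.K = np.asarray(baseline_gain)   # baseline feedback gain (model-based part)
        self.rl_policy = rl_policy           # learned policy: tracking error -> corrective action
        self.L = np.asarray(observer_gain)   # fault-observer gain (assumed first-order observer)
        self.fault_hat = None                # running estimate of the sensor fault
        self.dt = dt

    def update_fault_estimate(self, y_measured, y_predicted):
        # Drive the fault estimate toward the residual between the measured
        # output and the model-predicted output plus the current estimate.
        if self.fault_hat is None:
            self.fault_hat = np.zeros_like(y_measured)
        residual = y_measured - (y_predicted + self.fault_hat)
        self.fault_hat = self.fault_hat + self.dt * (self.L @ residual)
        return self.fault_hat

    def control(self, y_measured, y_predicted, reference):
        # Compensate the faulty measurement with the fault estimate, then
        # combine the baseline tracking law with the learned correction.
        fault_hat = self.update_fault_estimate(y_measured, y_predicted)
        y_compensated = y_measured - fault_hat
        tracking_error = reference - y_compensated
        u_baseline = self.K @ tracking_error        # stabilizing under healthy conditions
        u_learned = self.rl_policy(tracking_error)  # accommodates faults / model uncertainties
        return u_baseline + u_learned
```

In this reading, the baseline term alone would provide stable tracking in the fault-free case, while the observer-compensated measurement and the learned term account for sensor faults and unmodeled dynamics; the paper's actual formulation, guarantees, and training procedure are given in the full text.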