Adaptive Dynamic Programming for Flight Control

Abstract

Adaptive dynamic programming (ADP) is a sub-field of approximate dynamic programming that deals with the adaptive control of continuous nonlinear dynamic systems. Its origins lie in dynamic programming for optimal control, but it extends that framework with function approximation to mitigate the curse of dimensionality and to reduce the need for model knowledge. ADP is also considered one of the main reinforcement learning (RL) approaches, since it uses information obtained from interaction with the environment to improve its policy. RL in general, and ADP in particular, are well suited to autonomous aerospace systems, because they allow adaptive control in the presence of uncertainties or faults, even for fault types not anticipated during the control design. This chapter first gives a brief historical overview of ADP applications to flight control tasks. After that, four recent advances of ADP for flight control are presented.
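To make the policy-improvement idea concrete, the sketch below runs classical policy iteration on a scalar discrete-time LQR problem. ADP methods approximate these same two steps (policy evaluation and policy improvement) from measured data rather than from the model; here, purely for illustration, the known model is used, and all numerical values are hypothetical.

```python
# Policy iteration for a scalar discrete-time LQR problem (illustrative).
# Plant: x[k+1] = a*x[k] + b*u[k], stage cost q*x^2 + r*u^2, policy u = -K*x.
a, b = 1.2, 1.0   # unstable open-loop plant (hypothetical values)
q, r = 1.0, 1.0   # quadratic stage-cost weights

K = 1.0           # initial stabilizing gain: |a - b*K| < 1
for _ in range(20):
    # Policy evaluation: cost-to-go parameter P of the fixed policy
    # solves the scalar Lyapunov equation P = q + r*K^2 + (a - b*K)^2 * P.
    P = (q + r * K**2) / (1.0 - (a - b * K) ** 2)
    # Policy improvement: greedy gain for the current value function.
    K = a * b * P / (r + b**2 * P)

print(f"converged gain K = {K:.4f}, value P = {P:.4f}")
# K ≈ 0.7935, P ≈ 1.9522 for these values
```

In a data-driven ADP variant, the evaluation step would instead fit P from observed state-cost trajectories (e.g., with temporal-difference learning), which is what removes the need for an explicit model.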