Stochastic Control with Complete Observations on a Finite Horizon

Book Chapter (2021)
Author(s)

Jan H. van Schuppen (TU Delft - Mathematical Physics)

Research Group
Mathematical Physics
DOI
https://doi.org/10.1007/978-3-030-66952-2_12
Publication Year
2021
Language
English
Pages (from-to)
435-491
Publisher
Springer
ISBN (electronic)
978-3-030-66952-2

Abstract

Optimal stochastic control problems are formulated for a stochastic control system with complete observations on a finite horizon. Dynamic programming yields necessary and sufficient conditions for optimality, rather than the local optimality conditions provided by methods based on the calculus of variations or on the maximum principle. Sufficient conditions are formulated for a subset of value functions to be invariant with respect to the dynamic programming operator. A reduction in the complexity of a stochastic control system with a controlled output signal is proven using dynamic programming. Examples include the linear–quadratic–Gaussian optimal control problem, a gambling problem with an exponential value function, and a finite stochastic control system.
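As an illustrative sketch only (not taken from the chapter), the finite-horizon dynamic programming approach mentioned in the abstract can be made concrete for the linear–quadratic case. Assuming a discrete-time system x_{t+1} = A x_t + B u_t + w_t with Gaussian noise, complete observations, and quadratic stage cost x'Qx + u'Ru, the backward recursion of the dynamic programming operator reduces to a Riccati difference equation; by certainty equivalence the additive noise does not change the optimal feedback gains. All names and the scalar example below are the author's own assumptions for illustration.

```python
import numpy as np

def lq_backward_recursion(A, B, Q, R, QT, N):
    """Finite-horizon LQ dynamic programming (backward Riccati recursion).

    The value function at each time is quadratic, V_t(x) = x' P_t x + const;
    with additive Gaussian noise the constant absorbs the noise covariance,
    so the gains K_t are the same as in the deterministic LQ problem.
    Returns the time-ordered list of gains [K_0, ..., K_{N-1}] and P_0.
    """
    P = QT              # terminal value-function matrix
    gains = []
    for _ in range(N):  # iterate the dynamic programming operator backward
        K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)
        P = Q + A.T @ P @ A - A.T @ P @ B @ K
        gains.append(K)
    gains.reverse()     # recursion ran from t = N-1 down to t = 0
    return gains, P

# Scalar example (assumed for illustration): x_{t+1} = x_t + u_t + w_t,
# cost sum of x^2 + u^2 with terminal cost x_N^2.
A = np.array([[1.0]]); B = np.array([[1.0]])
Q = np.array([[1.0]]); R = np.array([[1.0]]); QT = np.array([[1.0]])
gains, P0 = lq_backward_recursion(A, B, Q, R, QT, N=20)
```

For this scalar case the recursion converges quickly to the fixed point of the Riccati equation, P = (1 + sqrt(5))/2, with stationary gain K = 1/P, which gives a simple check of the computation.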

Metadata-only record: no files are available for this record.