Risk Aversion and Guided Exploration in Safety-Constrained Reinforcement Learning

Doctoral Thesis (2023)
Author(s)

Qisong Yang (TU Delft - Algorithmics)

Contributor(s)

MTJ Spaan – Promotor (TU Delft - Algorithmics)

Simon Tindemans – Copromotor (TU Delft - Intelligent Electrical Power Grids)

Research Group
Algorithmics
Copyright
© 2023 Q. Yang
Publication Year
2023
Language
English
ISBN (electronic)
978-94-6384-458-1
Reuse Rights

Other than for strictly personal use, it is not permitted to download, forward or distribute the text or part of it, without the consent of the author(s) and/or copyright holder(s), unless the work is under an open content license such as Creative Commons.

Abstract

In traditional reinforcement learning (RL), agents explore their environment and learn optimal policies through trial and error, and some of those trials are unsafe. Such unsafe interactions are unacceptable in many safety-critical problems, for instance robot navigation tasks. Although RL agents can be trained in simulators, many real-world problems lack simulators of sufficient fidelity. Constructing safe exploration algorithms for dangerous environments is challenging because policies must be optimized subject to safety constraints. Safety thus remains an open problem that hinders the wider application of RL.
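
The abstract leaves the underlying formalism implicit. For context, safe exploration of this kind is commonly cast as a constrained Markov decision process (CMDP); the LaTeX sketch below states that standard objective using assumed notation (policy \pi, reward r, cost c, discount factor \gamma, safety budget d). It is offered as a common formalization in this literature, not as a statement of the thesis's specific method.

% Standard CMDP objective (notation assumed, not taken from the thesis):
% maximize expected discounted reward, subject to a budget d
% on the expected discounted cost incurred by the policy.
\max_{\pi} \; \mathbb{E}_{\pi}\Big[\sum_{t=0}^{\infty} \gamma^{t}\, r(s_t, a_t)\Big]
\quad \text{s.t.} \quad
\mathbb{E}_{\pi}\Big[\sum_{t=0}^{\infty} \gamma^{t}\, c(s_t, a_t)\Big] \le d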

Files

PhD_propositions_qisong.pdf (PDF, 0.109 MB)