Title: Potential Field Methods for Safe Reinforcement Learning: Exploring Q-Learning and Potential Fields
Author: Bhowal, Abhranil (TU Delft Aerospace Engineering; TU Delft Control & Operations)
Contributors: van Kampen, E. (mentor); Mannucci, T. (mentor)
Degree granting institution: Delft University of Technology
Programme: Aerospace Engineering | Control & Simulation
Date: 2017-08-11

Abstract: A Reinforcement Learning (RL) agent learns about its environment through exploration. For most physical applications, such as search-and-rescue UAVs, this exploration must take place with safety in mind: unregulated exploration, especially at the beginning of a run, can lead to fatal situations such as crashes. One approach to mitigating these risks is the use of Artificial Potential Fields (APFs). Various approaches to effectively using the potential information gathered by the agent are proposed, tested and discussed. The agent is placed in an environment-model-free setting in which it is still provided with knowledge of its own dynamics. A gridworld simulation is developed in MATLAB to test the interoperability of APFs with Q-learning. It is shown that safety of exploration benefits from adding this layer of information to the agent's decision-making process. In effect, the Q-table is updated more efficiently because the agent explicitly knows of high-potential 'dangerous' states.

Subjects: Reinforcement Learning; RL; Potential Fields; Artificial Potential Fields; drone; UAV; AI; Path Planning; Q-Learning
To reference this document use: http://resolver.tudelft.nl/uuid:767537cb-c54f-4496-b812-55d11d150f98
Part of collection: Student theses
Document type: master thesis
Rights: © 2017 Abhranil Bhowal
Files: Thesis_AB_July2017_v4.pdf (PDF, 5.79 MB)
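The abstract describes combining Q-learning with an Artificial Potential Field so that high-potential 'dangerous' states are penalised during exploration. The following is a minimal illustrative sketch of that idea, not the thesis code (which was written in MATLAB): a small gridworld where a repulsive APF term is added to the reward near an obstacle. All names, grid dimensions and parameters here are assumptions chosen for the example.

```python
import numpy as np

np.random.seed(0)

SIZE = 5                      # illustrative 5x5 gridworld
GOAL = (4, 4)
OBSTACLE = (2, 2)             # 'dangerous' high-potential state
ACTIONS = [(-1, 0), (1, 0), (0, -1), (0, 1)]  # up, down, left, right

def potential(state):
    """Repulsive APF value: grows as the agent nears the obstacle.
    A simple 1/d-style field is assumed here for illustration."""
    d = abs(state[0] - OBSTACLE[0]) + abs(state[1] - OBSTACLE[1])
    return 2.0 / (d + 1)

def step(state, a):
    """Apply an action, clipping at the grid borders."""
    nxt = (min(max(state[0] + a[0], 0), SIZE - 1),
           min(max(state[1] + a[1], 0), SIZE - 1))
    if nxt == GOAL:
        return nxt, 10.0, True
    # Reward shaped by the APF: high-potential states are penalised, so the
    # Q-table learns about danger before the agent has to crash to find out.
    return nxt, -0.1 - potential(nxt), nxt == OBSTACLE

Q = np.zeros((SIZE, SIZE, len(ACTIONS)))
alpha, gamma, eps = 0.5, 0.95, 0.2   # assumed learning parameters

for episode in range(500):
    s, done = (0, 0), False
    while not done:
        # epsilon-greedy exploration
        a = np.random.randint(4) if np.random.rand() < eps else int(np.argmax(Q[s]))
        nxt, r, done = step(s, ACTIONS[a])
        # standard Q-learning update; terminal states bootstrap to zero
        Q[s][a] += alpha * (r + gamma * np.max(Q[nxt]) * (not done) - Q[s][a])
        s = nxt

# Greedy rollout: the learned policy should reach the goal without
# entering the high-potential obstacle state.
s, path = (0, 0), [(0, 0)]
for _ in range(2 * SIZE * SIZE):
    s, _, done = step(s, ACTIONS[int(np.argmax(Q[s]))])
    path.append(s)
    if done:
        break
print("reached goal:", s == GOAL, "| visited obstacle:", OBSTACLE in path)
```

Because hitting the obstacle both ends the episode and carries the APF penalty, the greedy policy learns to route around it, which mirrors the abstract's claim that the extra potential information makes Q-table updates about dangerous states more efficient.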