Reinforcement Learning of Potential Fields to achieve Limit-Cycle Walking

Title: Reinforcement Learning of Potential Fields to achieve Limit-Cycle Walking
Author: Feirstein, D.S.
Contributor: Vallery, H. (mentor); Kober, J. (mentor)
Faculty: Mechanical, Maritime and Materials Engineering
Department: BioMechanical Engineering
Programme: Biorobotics
Date: 2016-04-04

Abstract:
Reinforcement learning is a powerful tool for deriving controllers for systems for which no models are available. Policy search algorithms in particular are suitable for complex systems, as they keep learning time manageable and accommodate continuous state and action spaces. However, these algorithms demand more insight into the system in order to choose a suitable controller parameterization. This thesis investigates a policy parameterization for impedance control that allows energy input to be implicitly bounded: potential fields. It presents a methodology for generating a potential field-constrained impedance controller via approximation of example trajectories, and for subsequently improving the control policy using reinforcement learning. The potential field-constrained approximation is used as a policy parameterization for policy search reinforcement learning and is compared to its unconstrained counterpart. Simulations on a simple biped walking model show that the learned controllers are able to overcome the potential field of gravity, generating a stable limit-cycle gait on flat ground for both parameterizations. The potential field-constrained controller provides safety through a known energy bound while performing as well as the unconstrained policy.

Subject: reinforcement learning; limit-cycle walking
To reference this document use: uuid:1f33f282-fc8b-4393-9fa3-3abe7f471bb5
Embargo date: 2021-03-24
Part of collection: Student theses
Document type: master thesis
Rights: (c) 2016 Feirstein, D.S.
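The abstract's central claim, that a potential-field parameterization implicitly bounds the energy a controller can inject, can be illustrated with a minimal sketch. This is not the thesis code: the one-dimensional setting, the radial-basis-function potential, and its centers and weights are all illustrative assumptions. The point is only that when the control torque is the negative gradient of a bounded potential U, the work done along any path equals the drop in U and can never exceed max(U) - min(U).

```python
import numpy as np

# Illustrative potential U(q): a weighted sum of Gaussian RBFs.
# The centers, weights, and width below are arbitrary stand-ins for
# learned policy parameters, chosen only for demonstration.
centers = np.linspace(-1.0, 1.0, 5)
weights = np.array([0.5, -0.2, 0.8, 0.1, -0.4])
sigma = 0.3

def potential(q):
    """U(q) as a bounded sum of Gaussian basis functions."""
    return np.sum(weights * np.exp(-(q - centers) ** 2 / (2 * sigma ** 2)))

def torque(q, eps=1e-6):
    """tau(q) = -dU/dq, computed here by a central finite difference."""
    return -(potential(q + eps) - potential(q - eps)) / (2 * eps)

# Work injected by the controller along a sampled path, via the
# trapezoidal rule: W = integral of tau dq.
path = np.linspace(-1.0, 1.0, 2001)
taus = np.array([torque(q) for q in path])
work = np.sum((taus[1:] + taus[:-1]) / 2 * np.diff(path))

# Because tau is a gradient field, the work equals the potential drop,
# so it is bounded by the (known) range of U regardless of the path.
drop = potential(path[0]) - potential(path[-1])
# abs(work - drop) is tiny (quadrature + finite-difference error only)
```

Over a closed trajectory, such as one period of a limit cycle, the injected energy is exactly zero, which is why a potential field alone cannot destabilize the gait; the thesis exploits this property as a safety guarantee during policy search.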