On the road from Model-Based Dynamic Programming to Model-Free Reinforcement Learning

A sample-efficient approach


Abstract

This thesis introduces a new method, called Mixed Iteration, for controlling Markov Decision Processes when partial information about their dynamics is known. The algorithm uses sampling to estimate the expectation over the partially known dynamics in stochastic environments, with the goal of reducing the number of iterations and computational steps required for convergence compared to traditional model-free algorithms. By lowering the number of samples needed to reach convergence, Markov Decision Processes can be controlled and trained more efficiently. The thesis also discusses how this algorithm can improve the sample efficiency and convergence rate of Reinforcement Learning algorithms such as Q-Learning. The effectiveness of the proposed method is evaluated on standard Reinforcement Learning problems and compared with the performance of Q-learning. The results show that, under conditions discussed in the thesis, the proposed algorithm outperforms classical algorithms in terms of sample efficiency. The study also provides insight into prior work on exploiting partial prior information in Reinforcement Learning, as well as the challenges that researchers in this field continue to face.
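To make the general idea concrete, the sketch below shows one plausible way to mix model-based and model-free updates in a tabular setting: where the transition dynamics are known, the Bellman backup is computed as an exact expectation; where they are not, a standard sample-based Q-learning update is used. This is only an illustrative sketch of the idea described in the abstract, not the thesis's actual Mixed Iteration algorithm; the known_model interface, the hyperparameters, and the update structure are assumptions made purely for this example.

import numpy as np

def mixed_update(Q, s, a, r, s_next, known_model, gamma=0.99, alpha=0.1):
    """Update Q[s, a], using known dynamics when available and sampling otherwise.

    Q           : np.ndarray of shape (num_states, num_actions)
    known_model : dict mapping (s, a) -> list of (prob, next_state, reward)
                  for transitions whose dynamics are known in advance
                  (a hypothetical interface, assumed for illustration).
    """
    if (s, a) in known_model:
        # Model-based backup: exact expectation over the known transitions.
        Q[s, a] = sum(p * (rew + gamma * Q[ns].max())
                      for p, ns, rew in known_model[(s, a)])
    else:
        # Model-free backup: standard one-sample Q-learning target.
        target = r + gamma * Q[s_next].max()
        Q[s, a] += alpha * (target - Q[s, a])
    return Q

Under this sketch, the fraction of state-action pairs covered by known_model determines how often a sampled transition is needed at all, which is one intuitive route to the sample-efficiency gains the abstract claims.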