Community energy storage operation via reinforcement learning with eligibility traces

Journal Article (2022)
Author(s)

Edgar Mauricio Salazar Duque (Eindhoven University of Technology)

Juan S. Giraldo (University of Twente)

Pedro Vergara Barrios (TU Delft - Intelligent Electrical Power Grids)

Phuong Nguyen (Eindhoven University of Technology)

Anne van der Molen (Eindhoven University of Technology)

Han Slootweg (Eindhoven University of Technology)

Research Group
Intelligent Electrical Power Grids
Copyright
© 2022 Edgar Mauricio Salazar Duque, Juan S. Giraldo, P.P. Vergara Barrios, Phuong Nguyen, Anne van der Molen, Han Slootweg
DOI
https://doi.org/10.1016/j.epsr.2022.108515
Publication Year
2022
Language
English
Volume number
212
Reuse Rights

Other than for strictly personal use, it is not permitted to download, forward or distribute the text or part of it, without the consent of the author(s) and/or copyright holder(s), unless the work is under an open content license such as Creative Commons.

Abstract

The operation of a community energy storage system (CESS) is challenging due to the volatility of photovoltaic distributed generation, electricity consumption, and energy prices. Selecting the optimal CESS setpoints throughout the day is a sequential decision problem under uncertainty, which can be solved using dynamic learning methods. This paper proposes a reinforcement learning (RL) technique based on temporal-difference learning with eligibility traces (ET). It aims to minimize the day-ahead energy costs while respecting the technical limits at the grid coupling point. The performance of the RL agent is compared against an oracle based on a deterministic mixed-integer second-order cone program (MISOCP). The use of ET boosts the RL agent's learning rate for the CESS operation problem: the traces effectively assign credit to the action sequences that bring the CESS to a high state of charge before peak prices, reducing the training time. The case study shows that the proposed method learns to operate the CESS effectively and ten times faster than RL algorithms commonly applied to energy systems, such as Tabular Q-learning and Fitted-Q. Moreover, the RL agent operates the CESS at 94% of the optimal performance, reducing the energy costs for the end user by up to 12%.
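
To make the eligibility-trace mechanism concrete, the sketch below shows a tabular, Watkins-style Q(λ) training loop of the general kind the abstract alludes to. It is a minimal illustration, not the authors' implementation: the environment interface (reset()/step()), the discretized state and action spaces (e.g., CESS setpoints indexed by integers), and all hyperparameter values are assumptions made for the example.

```python
# Minimal sketch of tabular Q(lambda) with accumulating eligibility traces.
# Hypothetical assumptions: `env` exposes reset() -> int state and
# step(action) -> (next_state, reward, done); Q is a (n_states, n_actions)
# float array whose actions index discrete CESS setpoints.
import numpy as np

def q_lambda_episode(env, Q, alpha=0.1, gamma=0.99, lam=0.9, eps=0.1):
    """Run one training episode of Watkins-style Q(lambda), updating Q in place."""
    E = np.zeros_like(Q)                       # eligibility trace per (s, a) pair
    s = env.reset()
    done = False
    while not done:
        # epsilon-greedy selection over the discrete setpoints
        explore = np.random.rand() < eps
        a = np.random.randint(Q.shape[1]) if explore else int(np.argmax(Q[s]))
        s_next, r, done = env.step(a)
        # one-step TD error against the greedy successor value
        delta = r + (0.0 if done else gamma * Q[s_next].max()) - Q[s, a]
        E[s, a] += 1.0                         # mark the visited pair as eligible
        Q += alpha * delta * E                 # propagate the TD error along the trace
        # decay all traces after greedy steps; cut them after exploratory
        # picks (a simplification of Watkins' trace-cutting rule)
        E *= 0.0 if explore else gamma * lam
        s = s_next
    return Q
```

The key line is `Q += alpha * delta * E`: because every previously visited state-action pair retains a decaying trace in E, a single TD error observed at peak-price hours also updates the earlier charging decisions that led there. This is the credit-assignment effect the abstract credits with the reduced training time relative to one-step methods such as Tabular Q-learning.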