This study explores how different features affect a Reinforcement Learning agent's performance in forex trading. Using a Deep Q-Network (DQN) agent and EUR/USD data from 2022 to 2024, we found that performance is highly sensitive to the information included in the agent's state. Within feature categories such as momentum and volatility, a single indicator outperformed a combination of several, which tended to introduce noise. Including information about the agent's own status, such as its current trade duration, was beneficial. Counterintuitively, providing more historical data consistently worsened performance, causing overfitting in which the agent memorized the training data rather than learning general strategies. The main conclusion is that designing an effective state representation is a trade-off: the complexity of the input must match the learning algorithm's capacity to exploit it without overfitting.
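The study's exact feature pipeline is not given here, so the following is only a minimal sketch of the kind of compact state the findings favor: one momentum indicator, one volatility indicator, and the agent's own status. The specific choices of RSI and ATR, the scaling constants, and the inclusion of position and unrealized PnL alongside trade duration are illustrative assumptions, not the study's actual implementation.

```python
import numpy as np

def rsi(close: np.ndarray, period: int = 14) -> float:
    """Relative Strength Index over the most recent `period` bars."""
    deltas = np.diff(close[-(period + 1):])
    gains = deltas[deltas > 0].sum()
    losses = -deltas[deltas < 0].sum()
    if losses == 0:
        return 100.0
    return 100.0 - 100.0 / (1.0 + gains / losses)

def atr(high: np.ndarray, low: np.ndarray, close: np.ndarray, period: int = 14) -> float:
    """Average True Range over the most recent `period` bars."""
    prev_close = close[-(period + 1):-1]
    tr = np.maximum(high[-period:] - low[-period:],
                    np.maximum(np.abs(high[-period:] - prev_close),
                               np.abs(low[-period:] - prev_close)))
    return float(tr.mean())

def build_state(high, low, close, position, bars_in_trade, unrealized_pnl):
    """Compact state: one momentum feature, one volatility feature,
    plus the agent's own status (position, trade duration, open PnL)."""
    return np.array([
        rsi(close) / 100.0,                  # momentum, scaled to [0, 1]
        atr(high, low, close) / close[-1],   # volatility, relative to price
        float(position),                     # -1 short, 0 flat, +1 long
        bars_in_trade / 100.0,               # trade duration, roughly scaled
        unrealized_pnl,                      # open PnL in price units
    ], dtype=np.float32)

# Example with synthetic EUR/USD-like prices (illustration only)
rng = np.random.default_rng(0)
close = 1.08 + rng.normal(0, 0.001, 200).cumsum()
high, low = close + 0.0005, close - 0.0005
print(build_state(high, low, close, position=1, bars_in_trade=12, unrealized_pnl=0.0021))
```

A state of this size reflects the trade-off stated above: each added indicator or lookback bar enlarges the input the DQN must fit, so keeping one indicator per feature type limits the opportunity to memorize the training data.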