Language Assistance in Reinforcement Learning in Dynamic Environments


Abstract

Language is an intuitive and effective way for humans to communicate. Large Language Models (LLMs) interpret and respond to language well, but their use in deep reinforcement learning is limited because they are sample inefficient. State-of-the-art deep reinforcement learning algorithms are more sample efficient but cannot understand language well. This research studies whether reinforcement learning (RL) agents can improve their learning by utilizing language assistance and how LLMs can provide it. A sentence describing the agent's environment is fed into an LLM to produce a semantic embedding, which is consumed by a recurrent Soft Actor-Critic (SAC) agent, yielding an agent that can listen to natural language. The results show that the best way for the agent to consume the embedding is to concatenate it to each observation, and that LLM-based embeddings lead to faster and more stable learning than non-LLM-based embeddings. The agent is sensitive to noise in the embedding but not to the embedding's dimensionality. It generalizes well to sentences that share the meaning of sentences seen during training but are formulated differently; it generalizes less well to sentences with unknown subjects, which must be grounded during training. Lastly, this research shows that the proposed architecture supports scaling language assistance to more complex environments.
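The pipeline described above can be illustrated with a minimal sketch. The thesis does not specify the exact libraries used; here a sentence-transformers model stands in for the LLM embedder, and a hypothetical Gymnasium observation wrapper performs the concatenation of the embedding to each observation.

```python
# Minimal sketch of the language-assisted observation pipeline.
# Assumptions (not from the thesis): sentence-transformers supplies the
# embedding model, and Gymnasium's ObservationWrapper handles concatenation.
import numpy as np
import gymnasium as gym
from sentence_transformers import SentenceTransformer


class LanguageAssistWrapper(gym.ObservationWrapper):
    """Concatenate a fixed sentence embedding to every observation."""

    def __init__(self, env, sentence, model_name="all-MiniLM-L6-v2"):
        super().__init__(env)
        embedder = SentenceTransformer(model_name)
        # One semantic embedding per environment description, computed once.
        self.embedding = embedder.encode(sentence).astype(np.float32)
        low = np.concatenate([env.observation_space.low,
                              np.full_like(self.embedding, -np.inf)])
        high = np.concatenate([env.observation_space.high,
                               np.full_like(self.embedding, np.inf)])
        self.observation_space = gym.spaces.Box(low=low, high=high,
                                                dtype=np.float32)

    def observation(self, obs):
        # The best-performing scheme from the abstract: the embedding is
        # joined to the raw state at every step.
        return np.concatenate([obs.astype(np.float32), self.embedding])


env = LanguageAssistWrapper(gym.make("Pendulum-v1"),
                            "The goal is to keep the pole upright.")
obs, _ = env.reset()
# obs now holds the raw state followed by the sentence embedding;
# a recurrent SAC agent would be trained on these augmented observations.
```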
