Learning a Latent Representation of the Opponent in Automated Negotiation


Abstract

This paper introduces a strategy for learning opponent parameters in automated negotiation and reusing them in future negotiation sessions. The goal is to maximize the agent's utility while remaining consistent in its performance across various negotiation scenarios. While a number of reinforcement learning approaches in the field have used Q-learning, this paper instead uses the newer Proximal Policy Optimization algorithm. Machine learning has been applied to opponent modeling, opponent classification, and strategy learning, but there have been few attempts to store and reuse this information. Experiments show that this approach outperforms a baseline in terms of individual utility.
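To make the idea concrete, the following is a minimal sketch, not the paper's implementation, of how a PPO agent might be trained on a toy negotiation task in which an estimated latent opponent parameter (here, a concession rate) is appended to the observation so it can be stored and reused across sessions. The environment, reward, and parameter names are illustrative assumptions.

```python
# Illustrative sketch only: a toy single-issue negotiation against a
# time-based conceder opponent, trained with PPO from stable-baselines3.
import numpy as np
import gymnasium as gym
from gymnasium import spaces
from stable_baselines3 import PPO

class ToyNegotiationEnv(gym.Env):
    """Alternating-offers toy negotiation with a hidden opponent parameter."""
    def __init__(self, rounds=20):
        super().__init__()
        self.rounds = rounds
        # Observation: [time fraction, opponent's last offer, latent concession estimate]
        self.observation_space = spaces.Box(0.0, 1.0, shape=(3,), dtype=np.float32)
        # Action: the share of the surplus the agent demands, in [0, 1]
        self.action_space = spaces.Box(0.0, 1.0, shape=(1,), dtype=np.float32)

    def reset(self, seed=None, options=None):
        super().reset(seed=seed)
        self.t = 0
        # Hidden opponent concession rate; in the paper's spirit this latent
        # parameter would be learned and stored for reuse. Here the agent
        # simply observes a noisy estimate of it.
        self.concession = self.np_random.uniform(0.2, 0.9)
        self.estimate = float(np.clip(
            self.concession + self.np_random.normal(0.0, 0.1), 0.0, 1.0))
        self.opp_offer = 0.0  # the opponent initially offers nothing
        return self._obs(), {}

    def _obs(self):
        return np.array([self.t / self.rounds, self.opp_offer, self.estimate],
                        dtype=np.float32)

    def step(self, action):
        self.t += 1
        our_demand = float(action[0])
        # The opponent concedes toward the agent over time at its hidden rate.
        self.opp_offer = self.concession * (self.t / self.rounds)
        if self.opp_offer >= our_demand:      # agreement: reward = our utility
            return self._obs(), our_demand, True, False, {}
        if self.t >= self.rounds:             # deadline reached: no deal
            return self._obs(), 0.0, True, False, {}
        return self._obs(), 0.0, False, False, {}

model = PPO("MlpPolicy", ToyNegotiationEnv(), verbose=0)
model.learn(total_timesteps=50_000)
```

In this sketch the reward structure pushes the policy to demand just below the opponent's maximum concession; reusing the stored estimate across sessions is what lets the agent avoid re-learning each opponent from scratch.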