Fictitious Co-Play for Human-Agent Collaboration

Evaluating a state-of-the-art reinforcement learning technique for adaptability to human collaborators


Abstract

A longstanding problem in reinforcement learning is human-agent collaboration. Past research indicates that RL agents undergo a distributional shift when they begin collaborating with humans, so the goal is to create agents that can adapt. We build on prior work in the two-player Overcooked environment, reproducing a simplified version of the Fictitious Co-Play (FCP) algorithm to confirm previously reported improvements at a smaller training scale, using Self-Play and Population-Based Training agents as baselines. We find that the FCP agent on average slightly outperforms both baselines when evaluated with a human proxy. We also find high cross-seed variance in performance, indicating potential for further hyperparameter tuning.
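For context, here is a minimal Python sketch of the two-stage FCP procedure the abstract refers to. All names (train_self_play, build_partner_pool, train_fcp_agent) are hypothetical placeholders, not the paper's actual code: stage one trains a population of self-play agents with different seeds and keeps checkpoints from several points in training so the pool covers a range of skill levels; stage two trains a single agent as a best response against partners sampled from that frozen pool.

```python
import random


def train_self_play(seed, n_checkpoints=3):
    """Hypothetical stand-in: trains one self-play agent and returns
    snapshots taken at several points in training (early, mid, final).
    In practice each checkpoint would be a saved set of network weights."""
    random.seed(seed)
    return [{"seed": seed, "stage": k, "skill": random.random() * (k + 1)}
            for k in range(n_checkpoints)]


def build_partner_pool(seeds):
    """Stage 1 of FCP: train a population of self-play agents with
    different seeds and keep multiple checkpoints per agent."""
    pool = []
    for seed in seeds:
        pool.extend(train_self_play(seed))
    return pool


def train_fcp_agent(pool, n_updates=10):
    """Stage 2 of FCP: train a single best-response agent against
    partners sampled uniformly from the frozen pool."""
    for step in range(n_updates):
        partner = random.choice(pool)
        # Placeholder for one RL update (e.g. a PPO step) with this
        # partner; the partner's parameters stay frozen throughout.
        print(f"update {step}: partner seed={partner['seed']} "
              f"stage={partner['stage']}")


if __name__ == "__main__":
    pool = build_partner_pool(seeds=range(4))
    train_fcp_agent(pool)
```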

Files

Research_paper_nathan.pdf
(pdf | 0.48 MB)
License info not available