The use of simulated environments in the military domain has increased significantly over recent years, and with it the demand for more realistically behaving autonomous systems. These autonomous systems are becoming ever more complex and need to support advanced human-machine interactions when applied to training and instruction (virtual role players) or concept development and experimentation (e.g. adjustable autonomy of unmanned vehicles). In order to remain flexible, cost-effective and maintainable, a re-evaluation is needed of the way in which these autonomous systems are integrated into the simulated environment. In this paper we describe our approach of separating behavior components (‘Artificial Intelligence’ (AI) or ‘Brains’) from the simulation engine. To make this decoupling as efficient as possible, both the simulation engine and the AI need to provide a suitable interface. To enable maintainability and reusability, the interface should support legacy simulation components and enable iterative development. We therefore developed the idea of a double decoupling: in this approach the interface has a part that can be reused across systems and a part that is specific to each system. The selected data exchange mechanism between the simulator and the AI is the High Level Architecture (HLA, IEEE 1516), the widely used standard for coupling distributed simulators. The modular Federation Object Model (FOM) feature introduced by the latest HLA version, IEEE 1516-2010, supports the idea of decoupling reusable and specific modules very well, and allows gradual development of extension modules and (scenario-)specific modules. The feasibility of our approach was demonstrated by two experiments, in which several commercial off-the-shelf (COTS) and proprietary tools and simulation components were integrated, in different configurations, through HLA.
The paper presents the design and development of the ‘Pluggable Brains’ approach and discusses the initial results of the two experiments.
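To illustrate the double-decoupling idea in code terms, the following is a minimal sketch (not the authors' implementation, which exchanges data via HLA): a reusable, engine-agnostic interface that any ‘Brain’ can target, plus a system-specific adapter behind it. All names (`BehaviorInterface`, `DemoSimAdapter`, `brain_step`) are hypothetical and chosen purely for illustration.

```python
from abc import ABC, abstractmethod

# Reusable part: a generic interface shared by every simulator/AI coupling.
# Only this part would need to be standardized (e.g. as a reusable FOM module).
class BehaviorInterface(ABC):
    @abstractmethod
    def perceive(self) -> dict:
        """Return the current observable state for the AI."""

    @abstractmethod
    def act(self, commands: dict) -> None:
        """Apply AI-issued commands to the simulation."""

# Specific part: an adapter for one particular (here, toy) simulation engine.
class DemoSimAdapter(BehaviorInterface):
    def __init__(self):
        self.state = {"position": (0.0, 0.0)}

    def perceive(self) -> dict:
        return dict(self.state)

    def act(self, commands: dict) -> None:
        dx, dy = commands.get("move", (0.0, 0.0))
        x, y = self.state["position"]
        self.state["position"] = (x + dx, y + dy)

# An external 'Brain' talks only to the reusable interface, so it can be
# plugged into any engine that provides a conforming adapter.
def brain_step(sim: BehaviorInterface) -> dict:
    observation = sim.perceive()
    sim.act({"move": (1.0, 0.0)})  # trivial stand-in for AI decision making
    return observation

sim = DemoSimAdapter()
brain_step(sim)
print(sim.perceive()["position"])  # (1.0, 0.0)
```

In the paper's actual architecture, the reusable and system-specific parts are realized as HLA FOM modules rather than in-process classes, but the split of responsibilities is the same.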