The density, diversity, connectedness and scale of urban environments make military operations challenging. This paper shows that different artificial intelligence techniques can be combined to provide the commander with various forms of intelligence augmentation and to support the decision-making process. A warfare model has been developed in which an AI system, representing a red unit, learns how to select the positions of a target and of several improvised explosive devices (IEDs) in order to prevent the blue unit from locating the target. The blue unit is trained to reach the target using deep reinforcement learning, while an evolutionary algorithm is used to train the red unit. These techniques do not rely on large amounts of historical data. Different approaches have been used and discussed to optimise the co-learning of the two agents, showing that optimal behaviour can be learned in an urban environment. Information about the most likely positions of the target and the IEDs can be extracted from the policy learned by the system and used by the commander as intelligence augmentation while planning an operation and evaluating different possible courses of action. The reliability of this information depends on the realism of the AI system simulating the red unit, which in turn depends strictly on the model used for the blue unit during training.
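The adversarial co-learning described above can be illustrated with a minimal sketch. This is not the paper's implementation: the grid size, population size, number of IEDs, and all hyperparameters are assumptions, and a breadth-first shortest-path planner stands in for the trained deep-reinforcement-learning blue policy so that the red unit's evolutionary loop remains self-contained. The red unit evolves placements of the target and IEDs that maximise the blue unit's cost of reaching the target.

```python
import random
from collections import deque
from itertools import product

# Hypothetical parameters (not from the paper).
GRID = 8         # side length of the urban grid
N_IEDS = 3       # number of IEDs the red unit may place
POP, GENS = 20, 30
START = (0, 0)   # blue unit's fixed entry point

def blue_path_cost(target, ieds):
    """Stand-in for the trained blue policy: BFS path length from
    START to the target, treating IED cells as impassable."""
    blocked = set(ieds)
    seen, q = {START}, deque([(START, 0)])
    while q:
        (x, y), d = q.popleft()
        if (x, y) == target:
            return d
        for nx, ny in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
            if (0 <= nx < GRID and 0 <= ny < GRID
                    and (nx, ny) not in blocked and (nx, ny) not in seen):
                seen.add((nx, ny))
                q.append(((nx, ny), d + 1))
    return GRID * GRID  # target unreachable: maximal cost for red

FREE = [c for c in product(range(GRID), repeat=2) if c != START]

def random_layout():
    cells = random.sample(FREE, N_IEDS + 1)
    return cells[0], tuple(cells[1:])  # (target, ieds)

def mutate(layout):
    """Move the target or one IED to a fresh random cell."""
    target, ieds = layout
    cells = [target, *ieds]
    i = random.randrange(len(cells))
    cells[i] = random.choice([c for c in FREE if c not in cells])
    return cells[0], tuple(cells[1:])

def evolve_red():
    """Red unit: evolutionary search for placements that maximise
    the blue unit's cost of reaching the target."""
    pop = [random_layout() for _ in range(POP)]
    for _ in range(GENS):
        pop.sort(key=lambda l: blue_path_cost(*l), reverse=True)
        elite = pop[:POP // 2]
        pop = elite + [mutate(random.choice(elite))
                       for _ in range(POP - len(elite))]
    return pop[0]

if __name__ == "__main__":
    random.seed(0)
    target, ieds = evolve_red()
    print("target:", target, "IEDs:", ieds,
          "blue cost:", blue_path_cost(target, ieds))
```

In the full system the BFS planner would be replaced by the deep-RL blue agent, and alternating the training of the two sides is what the paper refers to as optimising their co-learning; the evolved population itself is the source of the "most likely positions" offered to the commander.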