Bounding the probability of resource constraint violations in multi-agent MDPs

Conference Paper (2017)
Author(s)

F. de Nijs (TU Delft - Algorithmics)

Erwin Walraven (TU Delft - Algorithmics)

M.M. de Weerdt (TU Delft - Algorithmics)

Matthijs T.J. Spaan (TU Delft - Algorithmics)

Research Group
Algorithmics
Copyright
© 2017 F. de Nijs, E.M.P. Walraven, M.M. de Weerdt, M.T.J. Spaan
Publication Year
2017
Language
English
Pages (from-to)
3562-3568
ISBN (print)
978-1577357803
Reuse Rights

Other than for strictly personal use, it is not permitted to download, forward or distribute the text or part of it, without the consent of the author(s) and/or copyright holder(s), unless the work is under an open content license such as Creative Commons.

Abstract

Multi-agent planning problems with constraints on global resource consumption occur in several domains. Existing algorithms for solving Multi-agent Markov Decision Processes can compute policies that meet a resource constraint in expectation, but these policies provide no guarantees on the probability that a resource constraint violation will occur. We derive a method to bound constraint violation probabilities using Hoeffding's inequality. This method is applied to two existing approaches for computing policies satisfying constraints: the Constrained MDP framework and a Column Generation approach. We also introduce an algorithm to adaptively relax the bound up to a given maximum violation tolerance. Experiments on a hard toy problem show that the resulting policies outperform static optimal resource allocations to an arbitrary level. By testing the algorithms on more realistic planning domains from the literature, we demonstrate that the adaptive bound is able to efficiently trade off violation probability with expected value, outperforming state-of-the-art planners.
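The abstract's core idea, bounding the probability that total resource consumption exceeds a limit via Hoeffding's inequality, can be illustrated with a small sketch. This is not the paper's implementation; it only shows the standard one-sided Hoeffding bound under the assumption that each agent's consumption is an independent random variable bounded in [0, c_max]. The function name and parameters are illustrative.

```python
import math

def hoeffding_violation_bound(expected_total, limit, n_agents, c_max):
    """Upper-bound P(total consumption > limit) with Hoeffding's inequality.

    Assumes the consumption of each of the n_agents agents is independent
    and lies in [0, c_max], so the sum of squared ranges is n_agents * c_max**2.
    The bound is exp(-2 * t**2 / (n * c_max**2)) for slack t = limit - E[total].
    """
    slack = limit - expected_total
    if slack <= 0:
        # Hoeffding gives no useful bound when the limit is at or below the mean.
        return 1.0
    return math.exp(-2.0 * slack ** 2 / (n_agents * c_max ** 2))
```

For example, with 100 agents each consuming at most 1 unit, an expected total of 50, and a limit of 80, the bound on the violation probability is exp(-18), which is vanishingly small; tightening the limit toward the expectation loosens the bound, which is the trade-off the paper's adaptive relaxation exploits.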
