Decision-theoretic planning under uncertainty with information rewards for active cooperative perception

Journal Article (2015)
Author(s)

Matthijs Spaan (TU Delft - Algorithmics)

Tiago S. Veiga (Lisbon Technical University)

Pedro U. Lima (Lisbon Technical University)

Research Group
Algorithmics
DOI
https://doi.org/10.1007/s10458-014-9279-8
Publication Year
2015
Language
English
Issue number
6
Volume number
29
Pages (from-to)
1157-1185

Abstract

Partially observable Markov decision processes (POMDPs) provide a principled framework for modeling an agent’s decision-making problem when the agent needs to consider noisy state estimates. POMDP policies take into account an action’s influence on the environment as well as its potential information gain. This is a crucial feature for robotic agents, which generally have to consider the effect of actions on sensing. However, building POMDP models that directly reward information gain is not straightforward, yet it is important in domains such as robot-assisted surveillance, in which the value of information is hard to quantify. Common techniques for uncertainty reduction, such as expected entropy minimization, lead to non-standard POMDPs that are hard to solve. We present the POMDP with Information Rewards (POMDP-IR) modeling framework, which rewards an agent for reaching a certain level of belief regarding a state feature. By remaining in the standard POMDP setting, we can exploit many known results as well as successful approximate algorithms. We demonstrate our ideas on a toy problem as well as on a real robot-assisted surveillance task, showcasing their use in active cooperative perception scenarios. Finally, our experiments show that the POMDP-IR framework compares favorably with a related approach on benchmark domains.
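
The short Python sketch below illustrates the belief-threshold idea mentioned in the abstract; it is not taken from the paper, and the function name and reward values r_plus and r_minus are hypothetical. It shows how a reward that is linear in the belief over a state feature only becomes positive once that belief exceeds a chosen level, which is how an agent can be rewarded for certainty while the model stays within the standard POMDP setting.

def expected_commit_reward(belief_in_value: float,
                           r_plus: float = 1.0,
                           r_minus: float = -4.0) -> float:
    # Expected reward of committing to a feature value under the current belief:
    # r_plus is earned if the committed value is true, r_minus otherwise, so the
    # expectation is linear in the belief.
    return belief_in_value * r_plus + (1.0 - belief_in_value) * r_minus

if __name__ == "__main__":
    # With r_plus = 1 and r_minus = -4, the expected reward is 5*b - 4, which is
    # positive only when the belief b exceeds 0.8, i.e. the agent benefits from
    # reaching that level of certainty about the state feature.
    for b in (0.5, 0.8, 0.95):
        print(f"belief={b:.2f}  expected reward={expected_commit_reward(b):+.2f}")
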

Metadata only record. There are no files for this record.