Approximating multivariate posterior distribution functions from Monte Carlo samples for sequential Bayesian inference

Journal Article (2020)
Author(s)

Bram Thijssen (Oncode Institute, Nederlands Kanker Instituut - Antoni van Leeuwenhoek ziekenhuis)

Lodewyk Wessels (TU Delft - Pattern Recognition and Bioinformatics, Oncode Institute, Nederlands Kanker Instituut - Antoni van Leeuwenhoek ziekenhuis)

Research Group
Pattern Recognition and Bioinformatics
Copyright
© 2020 B. Thijssen, L.F.A. Wessels
DOI
https://doi.org/10.1371/journal.pone.0230101
Publication Year
2020
Language
English
Issue number
3
Volume number
15
Pages (from-to)
1-25
Reuse Rights

Other than for strictly personal use, it is not permitted to download, forward or distribute the text or part of it, without the consent of the author(s) and/or copyright holder(s), unless the work is under an open content license such as Creative Commons.

Abstract

An important feature of Bayesian statistics is the opportunity to do sequential inference: the posterior distribution obtained after seeing one dataset can be used as the prior for a second inference. However, when Monte Carlo sampling methods are used for inference, we only have a set of samples from the posterior distribution. To do sequential inference, we then either have to evaluate the second posterior only at these sample locations and reweight the samples accordingly, or we can estimate a functional description of the posterior probability distribution from the samples and use that as the prior for the second inference. Here, we investigated to what extent we can obtain an accurate joint posterior from two datasets if the inference is done sequentially rather than jointly, under the condition that each inference step is done using Monte Carlo sampling. To test this, we evaluated the accuracy of kernel density estimates, Gaussian mixtures, mixtures of factor analyzers, vine copulas and Gaussian processes in approximating posterior distributions, and then tested whether these approximations can be used in sequential inference. In low dimensionality, Gaussian processes are the most accurate, whereas in higher dimensionality Gaussian mixtures, mixtures of factor analyzers and vine copulas perform better. In our test cases of sequential inference, using posterior approximations gives more accurate results than direct sample reweighting, but joint inference remains preferable to sequential inference whenever possible. Since the performance is case-specific, we provide an R package, mvdens, with a unified interface to the density approximation methods.
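The workflow the abstract describes — sample the first posterior, fit a functional approximation to those samples, and reuse it as the prior for a second inference — can be illustrated with a minimal sketch. This is not the mvdens implementation (that package is in R and offers several approximation families); it is a hedged one-dimensional Python analogue using a kernel density estimate as the approximation, with a toy Gaussian-mean model and a simple Metropolis sampler, both invented here for illustration.

```python
import numpy as np
from scipy.stats import gaussian_kde, norm

rng = np.random.default_rng(0)
mu_true = 1.5                           # hypothetical true mean
data1 = rng.normal(mu_true, 1.0, 50)    # first dataset
data2 = rng.normal(mu_true, 1.0, 50)    # second dataset

def metropolis(logpost, x0, n=6000, step=0.5, burn=1000):
    """Random-walk Metropolis sampler; returns post-burn-in samples."""
    x, lp = x0, logpost(x0)
    out = np.empty(n)
    for i in range(n):
        prop = x + step * rng.standard_normal()
        lp_prop = logpost(prop)
        if np.log(rng.random()) < lp_prop - lp:
            x, lp = prop, lp_prop
        out[i] = x
    return out[burn:]

# Step 1: sample the posterior of the mean given data1 (flat prior).
loglik1 = lambda m: norm.logpdf(data1, m, 1.0).sum()
samples1 = metropolis(loglik1, x0=data1.mean())

# Step 2: turn those samples into a functional prior via a KDE,
# then run the second inference on data2 with that prior.
kde = gaussian_kde(samples1)
loglik2 = lambda m: norm.logpdf(data2, m, 1.0).sum()
log_post2 = lambda m: np.log(kde(m)[0] + 1e-300) + loglik2(m)
samples_seq = metropolis(log_post2, x0=samples1.mean())

# Reference: joint inference on both datasets at once.
samples_joint = metropolis(lambda m: loglik1(m) + loglik2(m),
                           x0=np.r_[data1, data2].mean())

print("sequential posterior mean:", samples_seq.mean())
print("joint posterior mean:     ", samples_joint.mean())
```

In this conjugate toy case the sequential and joint posterior means should agree closely; the paper's point is that in higher dimensions the quality of this agreement depends strongly on which approximation family (KDE, Gaussian mixture, mixture of factor analyzers, vine copula, or Gaussian process) replaces the `gaussian_kde` step.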