Bayesian Linear Inverse Problems in Regularity Scales with Discrete Observations
Abstract

We obtain rates of contraction of posterior distributions in inverse problems with discrete observations. In a general setting of smoothness scales we derive abstract contraction results for general priors, with rates determined by discrete Galerkin approximation. The rate depends on the amount of prior concentration near the true function and on the prior mass of functions with inferior Galerkin approximation. We apply the general result to non-conjugate series priors, to Gaussian priors, and to mixtures of Gaussian priors; both the series priors and the Gaussian mixtures are shown to give near-optimal and adaptive recovery in some generality.