This thesis explores Markov chain Monte Carlo (MCMC) methods for a discrete linear Bayesian inverse problem with non-Gaussian priors. The non-Gaussian priors are total variation and Besov space priors, which are called edge-preserving due to their ability to model sparse features and discontinuities. Three sampling algorithms are compared: random walk Metropolis–Hastings (RW), preconditioned Crank–Nicolson (pCN), and randomise-then-optimise (RTO). Prior transformations are developed to adapt RW, pCN and RTO for edge-preserving priors. Results show a trade-off between computational efficiency and accuracy. RTO with prior transformations yields more accurate reconstructions and credible intervals, but at significant computational cost and with sensitivity to prior choice. pCN is faster, more robust to discretisation, and offers more control over the sampling process, but produces highly correlated samples and less accurate estimates.
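To make the pCN sampler mentioned above concrete, the following is a minimal sketch of a single pCN step, assuming a standard Gaussian reference measure; the function and parameter names (`pcn_step`, `log_likelihood`, `beta`) are illustrative, not taken from the thesis.

```python
import numpy as np

def pcn_step(u, log_likelihood, beta, rng):
    """One preconditioned Crank-Nicolson (pCN) step.

    Illustrative sketch assuming a standard Gaussian prior N(0, I);
    names here are hypothetical, not from the thesis itself.
    """
    # pCN proposal: reversible with respect to the N(0, I) prior,
    # so the acceptance ratio involves only the likelihood.
    proposal = np.sqrt(1.0 - beta**2) * u + beta * rng.standard_normal(u.shape)
    log_alpha = log_likelihood(proposal) - log_likelihood(u)
    if np.log(rng.uniform()) < log_alpha:
        return proposal, True   # accepted
    return u, False             # rejected; chain stays put
```

Because the proposal preserves the Gaussian prior, the acceptance probability is dimension-independent, which is the usual motivation for pCN's robustness to discretisation refinement noted in the abstract.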