Bayesian imaging using Plug & Play priors: when Langevin meets Tweedie
Published in SIAM Journal on Imaging Sciences
Since the seminal work of Venkatakrishnan et al. [82] in 2013, Plug & Play (PnP) methods have become ubiquitous in Bayesian imaging. These methods derive estimators for inverse problems in imaging by combining an explicit likelihood function with a prior that is implicitly defined by an image denoising algorithm. In the case of optimisation schemes, some recent works guarantee convergence to a fixed point, albeit not necessarily a maximum-a-posteriori Bayesian estimate. In the case of Monte Carlo sampling schemes for general Bayesian computation, to the best of our knowledge there is no known proof of convergence. Algorithm convergence issues aside, there are important open questions regarding whether the underlying Bayesian models and estimators are well defined, well posed, and have the basic regularity properties required to support efficient Bayesian computation schemes. This paper develops theory for Bayesian analysis and computation with PnP priors. We introduce PnP-ULA (Plug & Play Unadjusted Langevin Algorithm) for Monte Carlo sampling and minimum mean squared error estimation. Using recent results on the quantitative convergence of Markov chains, we establish detailed convergence guarantees for this algorithm under realistic assumptions on the denoising operators used, with special attention to denoisers based on deep neural networks. We also show that these algorithms approximately target a decision-theoretically optimal Bayesian model that is well posed and meaningful from a frequentist viewpoint. PnP-ULA is demonstrated on several canonical problems, such as image deblurring and inpainting, where it is used for point estimation as well as for uncertainty visualisation and quantification.
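The abstract does not fix an implementation, but the sampler it describes pairs a Langevin diffusion with Tweedie's identity, which relates an optimal Gaussian denoiser to the score of a smoothed prior. The NumPy sketch below is an illustrative reading under those assumptions: the function names, step size, and the running-mean MMSE output are ours, not the paper's reference code, and the stabilising projection onto a compact convex set used in the paper's convergence analysis is omitted for brevity.

```python
import numpy as np

def pnp_ula(y, grad_log_lik, denoiser, x0, *, step=1e-5, eps=5e-3,
            alpha=1.0, n_iter=10_000, burn_in=1_000, rng=None):
    """Hypothetical PnP-ULA sketch (not the authors' implementation).

    grad_log_lik(x, y) -- gradient of the explicit log-likelihood.
    denoiser(x)        -- Gaussian denoiser D_eps at noise level eps.
    Returns the running posterior mean (an MMSE estimate) after burn-in.
    """
    rng = np.random.default_rng() if rng is None else rng
    x = np.array(x0, dtype=float)
    mmse = np.zeros_like(x)
    n_kept = 0
    for k in range(n_iter):
        # Explicit likelihood score from the known forward model.
        score = grad_log_lik(x, y)
        # Tweedie's formula: (D_eps(x) - x) / eps approximates the score
        # of the eps-smoothed prior implicitly defined by the denoiser.
        score += (alpha / eps) * (denoiser(x) - x)
        # Unadjusted Langevin step: gradient drift plus Gaussian noise.
        x = x + step * score + np.sqrt(2.0 * step) * rng.standard_normal(x.shape)
        if k >= burn_in:
            n_kept += 1
            mmse += (x - mmse) / n_kept  # incremental mean of kept samples
    return mmse
```

As a usage illustration, for deblurring under Gaussian noise of variance `sigma2` with blur matrix `A`, one could pass `grad_log_lik = lambda x, y: A.T @ (y - A @ x) / sigma2`; per-sample statistics (rather than only the running mean) would support the uncertainty visualisation and quantification mentioned in the abstract.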