Score-based diffusion models have significantly advanced the generation of high-dimensional data across diverse domains by learning a score model from data. From a Bayesian perspective, these models provide a natural representation of data priors and should also facilitate sampling from related distributions, such as posterior distributions in inverse problems or tilted distributions shaped by additional criteria.
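To make this concrete, note that Bayes' rule makes the posterior score additive: ∇_x log p(x | y) = ∇_x log p(x) + ∇_x log p(y | x), so a learned prior score can be dropped directly into a Langevin sampler for the posterior. The following is a minimal sketch of that idea, not the talk's method: the two-dimensional Gaussian prior (whose exact score −x stands in for a pretrained network), the linear forward operator A, the noise level sigma_y, and all function names are illustrative assumptions.

```python
import numpy as np

# Toy setup (all names illustrative): the prior is N(0, I) in 2D, so the
# "pretrained score" is known exactly: grad log p(x) = -x. We observe
# y = A x + Gaussian noise and sample the posterior p(x | y).
rng = np.random.default_rng(0)
A = np.array([[1.0, 0.5]])            # 1x2 linear forward operator
sigma_y = 0.3                         # observation noise std
x_true = np.array([1.0, -0.5])
y = A @ x_true + sigma_y * rng.normal(size=1)

def score_prior(x):
    return -x                         # stands in for a learned score model

def grad_log_lik(x):
    return A.T @ (y - A @ x) / sigma_y**2   # Gaussian log-likelihood gradient

# Unadjusted Langevin dynamics on the posterior: by Bayes' rule the prior
# score and the likelihood gradient combine additively in the drift.
step, n_steps = 1e-3, 20_000
x = np.zeros(2)
samples = []
for k in range(n_steps):
    drift = score_prior(x) + grad_log_lik(x)
    x = x + step * drift + np.sqrt(2 * step) * rng.normal(size=2)
    if k >= n_steps // 2:             # keep the second half as samples
        samples.append(x)
print("posterior mean estimate:", np.mean(samples, axis=0))
```

With a neural score model, score_prior would be replaced by the trained network (typically within an annealed or time-dependent scheme), but the additive structure of the drift carries over unchanged.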
While many heuristic methods exist for such adaptations, they often lack the quantitative guarantees needed in scientific applications. This talk introduces recently developed techniques, grounded in the analysis of the corresponding SDEs and PDEs, that allow principled modifications of the initial distribution or of the drift to achieve these adaptations. By leveraging the rich information encoded in pretrained score models, the resulting algorithms can substantially enhance classical sampling methods such as Langevin Monte Carlo or Sequential Monte Carlo.
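As one example of what a drift modification looks like (again a toy sketch under stated assumptions, not the talk's specific construction): for a standard Gaussian data distribution under the variance-preserving SDE, the marginals remain N(0, I) at every time, so both the time-dependent prior score and the exact guidance term ∇_{x_t} log p(y | x_t) for a linear-Gaussian observation are available in closed form. The reverse-time Euler-Maruyama loop below augments the reverse-SDE drift with this guidance term; the beta schedule, the operator A, and all names are assumptions chosen so the example stays exactly solvable.

```python
import numpy as np

rng = np.random.default_rng(1)
A = np.array([[1.0, 0.5]])            # 1x2 linear observation operator (toy)
sigma_y = 0.3
x_star = np.array([1.0, -0.5])
y = A @ x_star + sigma_y * rng.normal(size=1)

# VP-SDE: dx = -0.5*beta(t)*x dt + sqrt(beta(t)) dW on t in [0, 1],
# with a linear beta schedule; alpha_bar(t) = exp(-int_0^t beta).
beta = lambda t: 0.1 + 19.9 * t
alpha_bar = lambda t: np.exp(-(0.1 * t + 9.95 * t**2))

def posterior_score(xs, t):
    """Prior score plus the exact guidance term grad log p(y | x_t).

    For x_0 ~ N(0, I) the VP marginals stay N(0, I), so the 'pretrained'
    score is exactly -x; moreover x_0 | x_t is N(sqrt(ab)*x_t, (1-ab)*I),
    which makes p(y | x_t) Gaussian with a closed-form gradient.
    """
    ab = alpha_bar(t)
    C = (1.0 - ab) * (A @ A.T) + sigma_y**2 * np.eye(1)   # 1x1 here
    resid = y - np.sqrt(ab) * xs @ A.T                    # (n, 1)
    guidance = np.sqrt(ab) * (resid / C[0, 0]) @ A        # (n, 2)
    return -xs + guidance

# Reverse-time Euler-Maruyama with the guidance-augmented drift.
n_steps, n_samples = 1000, 2000
dt = 1.0 / n_steps
xs = rng.normal(size=(n_samples, 2))   # start from the (exact) t=1 marginal
for k in range(n_steps, 0, -1):
    t = k * dt
    b = beta(t)
    drift = -0.5 * b * xs - b * posterior_score(xs, t)    # reverse drift
    xs = xs - drift * dt + np.sqrt(b * dt) * rng.normal(size=xs.shape)
print("mean of guided samples:", xs.mean(axis=0))
```

Because every ingredient is exact in this Gaussian setting, the guided reverse SDE targets the true posterior; with a learned score the guidance term must be approximated, which is precisely where quantitative guarantees become delicate.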
