Open Access
Monte Carlo sampling of solutions to inverse problems
Author(s) - Klaus Mosegaard, Albert Tarantola
Publication year - 1995
Publication title - Journal of Geophysical Research: Solid Earth
Language(s) - English
Resource type - Journals
SCImago Journal Rank - 1.67
H-Index - 298
eISSN - 2156-2202
pISSN - 0148-0227
DOI - 10.1029/94JB03097
Subject(s) - a priori and a posteriori , posterior probability , monte carlo method , probability distribution , maximum a posteriori estimation , computer science , inverse problem , mathematics , algorithm , sampling (signal processing) , prior probability , mathematical optimization , importance sampling , marginal distribution , probabilistic logic , bayesian probability , statistics , random variable , mathematical analysis , maximum likelihood , philosophy , epistemology , filter (signal processing) , computer vision
Probabilistic formulation of inverse problems leads to the definition of a probability distribution in the model space. This probability distribution combines a priori information with new information obtained by measuring some observable parameters (data). As, in the general case, the theory linking data with model parameters is nonlinear, the a posteriori probability in the model space may not be easy to describe (it may be multimodal, some moments may not be defined, etc.). When analysing an inverse problem, obtaining a maximum likelihood model is usually not sufficient, as we normally also wish to have information on the resolution power of the data. In the general case we may have a large number of model parameters, and an inspection of the marginal probability densities of interest may be impractical, or even useless. But it is possible to pseudorandomly generate a large collection of models according to the posterior probability distribution and to analyse and display the models in such a way that information on the relative likelihoods of model properties is conveyed to the spectator. This can be accomplished by means of an efficient Monte Carlo method, even in cases where no explicit formula for the a priori distribution is available. The most well known importance sampling method, the Metropolis algorithm, can be generalized, and this gives a method that allows analysis of (possibly highly nonlinear) inverse problems with complex a priori information and data with an arbitrary noise distribution.
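The sampling strategy the abstract describes can be illustrated with a minimal Metropolis random walk. The forward model, prior, and all numerical values below are illustrative assumptions (a toy one-parameter problem, not the authors' geophysical application): a Gaussian a priori distribution on the model parameter m is combined with a Gaussian likelihood for a single nonlinear datum d = m², and the chain accepts proposals with probability min(1, posterior ratio), so the collected samples approximate the a posteriori distribution without ever normalizing it.

```python
import math
import random

random.seed(0)

# Toy setup (assumed for illustration, not from the paper):
# forward model d = m^2, one observed datum with Gaussian noise.
d_obs = 4.0                     # observed datum
sigma = 0.5                     # data noise standard deviation
prior_mu, prior_sd = 1.0, 2.0   # Gaussian a priori information on m

def log_posterior(m):
    """Unnormalized log posterior = log prior + log likelihood."""
    log_prior = -0.5 * ((m - prior_mu) / prior_sd) ** 2
    log_like = -0.5 * ((m * m - d_obs) / sigma) ** 2
    return log_prior + log_like

# Metropolis random walk: propose a perturbed model, accept with
# probability min(1, posterior ratio); otherwise keep the current model.
m = 1.0
samples = []
for _ in range(20000):
    m_prop = m + random.gauss(0.0, 0.3)
    if math.log(random.random()) < log_posterior(m_prop) - log_posterior(m):
        m = m_prop
    samples.append(m)

# The collection of samples approximates the a posteriori distribution;
# summary statistics (or a histogram) then convey the resolution of the data.
post_mean = sum(samples) / len(samples)
```

Only posterior *ratios* are ever evaluated, which is what lets the method work when no explicit normalized formula for the distribution is available; the paper's generalization replaces the explicit prior density with a random walk that samples the a priori distribution.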
