Open Access
Obtaining smooth solutions to large, linear, inverse problems
Author(s) - J. C. VanDecar, Roel Snieder
Publication year - 1994
Publication title - Geophysics
Language(s) - English
Resource type - Journals
SCImago Journal Rank - 1.178
H-Index - 172
eISSN - 1942-2156
pISSN - 0016-8033
DOI - 10.1190/1.1443640
Subject(s) - regularization , conjugate gradient method , linearization , inverse problem , underdetermined system , overdetermined system , mathematical optimization , algorithm , nonlinear system , mathematics , geophysics
It is not uncommon now for geophysical inverse problems to be parameterized by 10^4 to 10^5 unknowns associated with upwards of 10^6 to 10^7 data constraints. The matrix problem defining the linearization of such a system (e.g., Am = b) is usually solved with a least-squares criterion (m = (A^T A)^{-1} A^T b). The size of the matrix, however, discourages the direct solution of the system, and researchers often turn to iterative techniques such as the method of conjugate gradients to obtain an estimate of the least-squares solution. These iterative methods take advantage of the sparseness of A, which often has as few as 2-3 percent of its elements nonzero, and do not require the calculation (or storage) of the matrix A^T A. Although there are usually many more data constraints than unknowns, these problems are, in general, underdetermined and therefore require some sort of regularization to obtain a solution. When the regularization is simple damping, the conjugate gradients method tends to converge in relatively few iterations. However, when derivative-type regularization is applied (first-derivative constraints to obtain the flattest model that fits the data; second-derivative constraints to obtain the smoothest), the convergence of parts of the solution may be drastically inhibited. In a series of 1-D examples and a synthetic 2-D crosshole tomography example, we demonstrate this problem and also suggest a method of accelerating the convergence through the preconditioning of the conjugate gradient search directions. We derive a 1-D preconditioning operator for the case of first-derivative regularization using a WKBJ approximation. We have found that preconditioning can reduce the number of iterations necessary to obtain satisfactory convergence by up to an order of magnitude. The conclusions we present are also relevant to Bayesian inversion, where a smoothness constraint is imposed through an a priori covariance of the model.
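The setup the abstract describes, a sparse forward operator A, a first-derivative regularization operator D, and conjugate gradients applied to the normal equations without ever forming or storing A^T A, can be sketched as follows. This is an illustrative toy, not the paper's method: the matrix A, data b, model size, and weight `lam` are synthetic stand-ins, and no WKBJ preconditioner is included.

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import LinearOperator, cg

rng = np.random.default_rng(0)
n, ndata = 50, 200  # toy sizes; the paper's problems have 1e4-1e5 unknowns

# Sparse forward operator A (a few percent of elements nonzero, as in tomography).
A = sp.random(ndata, n, density=0.05, random_state=rng, format="csr")

# First-derivative ("flattening") regularization operator D, shape (n-1, n).
D = sp.diags([-np.ones(n - 1), np.ones(n - 1)], [0, 1], shape=(n - 1, n))

# Synthetic smooth model and noisy data.
m_true = np.sin(np.linspace(0.0, np.pi, n))
b = A @ m_true + 0.01 * rng.standard_normal(ndata)

lam = 0.1  # regularization weight (illustrative choice)

# Apply the normal-equations operator (A^T A + lam^2 D^T D) matrix-free,
# so A^T A is never computed or stored.
def matvec(x):
    return A.T @ (A @ x) + lam**2 * (D.T @ (D @ x))

op = LinearOperator((n, n), matvec=matvec)
m_est, info = cg(op, A.T @ b, maxiter=500)
assert info == 0  # info == 0 means CG reached the requested tolerance
```

The `LinearOperator` wrapper is what makes the sparseness pay off: each CG iteration costs only two sparse products with A and two with D, which is the storage-free strategy the abstract attributes to iterative methods. The slow convergence the paper analyzes would show up here as `cg` needing many more iterations once `lam**2 * D.T @ D` dominates the spectrum.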
