New bounds on the condition number of the Hessian of the preconditioned variational data assimilation problem
Author(s) -
Tabeart Jemima M.,
Dance Sarah L.,
Lawless Amos S.,
Nichols Nancy K.,
Waller Joanne A.
Publication year - 2022
Publication title -
Numerical Linear Algebra with Applications
Language(s) - English
Resource type - Journals
SCImago Journal Rank - 1.02
H-Index - 53
eISSN - 1099-1506
pISSN - 1070-5325
DOI - 10.1002/nla.2405
Subject(s) - hessian matrix, data assimilation, condition number, mathematics, eigenvalues and eigenvectors, weighting, covariance matrix, conjugate gradient method, numerical weather prediction, mathematical optimization, convergence, algorithm, statistics, meteorology
Data assimilation algorithms combine prior and observational information, weighted by their respective uncertainties, to obtain the most likely posterior estimate of the state of a dynamical system. In variational data assimilation, the posterior is computed by solving a nonlinear least-squares problem. Many numerical weather prediction (NWP) centers use full observation error covariance (OEC) weighting matrices, which can slow the convergence of the data assimilation procedure. Previous work revealed the importance of the minimum eigenvalue of the OEC matrix for the conditioning and convergence of the unpreconditioned data assimilation problem. In this article, we examine the use of correlated OEC matrices in the preconditioned data assimilation problem for the first time. We consider the case where there are more state variables than observations, which is typical of applications with sparse measurements, for example, NWP and remote sensing. We find that, as in the unpreconditioned problem, the minimum eigenvalue of the OEC matrix appears in new bounds on the condition number of the Hessian of the preconditioned objective function. Numerical experiments reveal that the condition number of the Hessian is minimized when the background and observation error lengthscales are equal. This contrasts with the unpreconditioned case, where decreasing the observation error lengthscale always improves conditioning. Conjugate gradient experiments show that, in this framework, the condition number of the Hessian is a good proxy for convergence; eigenvalue clustering explains cases where convergence is faster than expected.
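As a minimal illustrative sketch (not the paper's exact experimental setup), the Python snippet below assembles the preconditioned Hessian S = I_n + B^{1/2} H^T R^{-1} H B^{1/2} for a toy 1-D problem with more state variables than observations, and reports the minimum eigenvalue of R, the condition number of S, and the conjugate gradient iteration count. The unit-spaced grid, the first-order auto-regressive (Markov) correlation functions used for B and R, the chosen lengthscales, and the every-fifth-point observation operator H are all assumptions made purely for illustration.

```python
import numpy as np
from scipy.linalg import sqrtm
from scipy.sparse.linalg import cg

def markov_corr(n, lengthscale):
    # First-order auto-regressive (Markov) correlation matrix on a
    # unit-spaced 1-D grid; an illustrative choice, not the paper's.
    idx = np.arange(n)
    return np.exp(-np.abs(idx[:, None] - idx[None, :]) / lengthscale)

n, p = 100, 20                       # more state variables than observations
B = markov_corr(n, lengthscale=4.0)  # background error covariance (assumed)
R = markov_corr(p, lengthscale=4.0)  # correlated OEC matrix (assumed)

# Sparse observation operator: observe every (n // p)-th state variable.
H = np.zeros((p, n))
H[np.arange(p), np.arange(0, n, n // p)] = 1.0

# Preconditioned Hessian S = I_n + B^{1/2} H^T R^{-1} H B^{1/2}.
B_half = sqrtm(B).real
S = np.eye(n) + B_half @ H.T @ np.linalg.solve(R, H @ B_half)

print("min eigenvalue of R:  ", np.linalg.eigvalsh(R).min())
print("condition number of S:", np.linalg.cond(S))

# CG iteration count as a rough check that conditioning predicts convergence.
b = np.random.default_rng(0).standard_normal(n)
iters = []
_, info = cg(S, b, callback=lambda xk: iters.append(None))
print("CG iterations:", len(iters))
```

Varying the two lengthscales independently in this sketch is one way to probe the behaviour described above: per the abstract, the conditioning of S should be best when the background and observation error lengthscales are equal.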
