On the preconditioned conjugate gradient method for solving (A − λB)X = 0
Author(s) - Papadrakakis Manolis, Yakoumidakis Michalis
Publication year - 1987
Publication title - International Journal for Numerical Methods in Engineering
Language(s) - English
Resource type - Journals
SCImago Journal Rank - 1.421
H-Index - 168
eISSN - 1097-0207
pISSN - 0029-5981
DOI - 10.1002/nme.1620240711
Subject(s) - preconditioner , conjugate gradient method , mathematics , Rayleigh quotient , eigenvalues and eigenvectors , rate of convergence , Rayleigh quotient iteration , conjugate residual method , derivation of the conjugate gradient method , convergence , iterative method , positive definite matrix , mathematical optimization , gradient descent , computer science
Classical iterative methods, when applied to the partial solution of the generalized eigenvalue problem Ax = λBx, may yield very poor convergence rates, particularly when ill-conditioned problems are considered. In this paper the preconditioned conjugate gradient (CG) method, via minimization of the Rayleigh quotient and the reverse power method, is employed for the partial eigenproblem. The triangular splitting preconditioners employed are obtained from an incomplete Choleski factorization and a partial Evans preconditioner. This approach can dramatically improve the convergence rate of the basic CG method and is applicable to any symmetric eigenproblem in which one of the matrices A, B is positive definite. Because of the renewed interest in CG techniques for finite element (FE) work on microprocessors and parallel computers, it is believed that this improved approach to the generalized eigenvalue problem is likely to be very promising.
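As a rough illustration of the idea summarized in the abstract, the sketch below (not taken from the paper) minimizes the Rayleigh quotient x'Ax / x'Bx with a preconditioned nonlinear CG iteration in Python/NumPy. The function names, the Fletcher-Reeves update, the two-dimensional Rayleigh-Ritz line search and the diagonal (Jacobi) preconditioner are illustrative assumptions only; the paper itself builds triangular splitting preconditioners from an incomplete Choleski factorization and a partial Evans preconditioner.

```python
import numpy as np

def rayleigh_quotient(A, B, x):
    return (x @ A @ x) / (x @ B @ x)

def preconditioned_cg_eig(A, B, M_solve, x0, tol=1e-8, max_iter=500):
    """Approximate the smallest eigenpair of A x = lam B x by minimizing the
    Rayleigh quotient with a preconditioned nonlinear CG iteration
    (Fletcher-Reeves update, exact line search via a 2x2 Rayleigh-Ritz step).
    This is a generic sketch, not the algorithm of the paper."""
    x = x0 / np.sqrt(x0 @ B @ x0)            # B-normalize the start vector
    lam = rayleigh_quotient(A, B, x)
    g = 2.0 * (A @ x - lam * (B @ x))        # gradient of the quotient (x'Bx = 1)
    z = M_solve(g)                           # apply the preconditioner
    p = -z
    gz_old = g @ z
    for _ in range(max_iter):
        # Exact line search: the best combination of x and p solves a
        # 2x2 projected generalized eigenproblem in span{x, p}.
        V = np.column_stack([x, p])
        Ar, Br = V.T @ A @ V, V.T @ B @ V
        w, Y = np.linalg.eig(np.linalg.solve(Br, Ar))
        y = Y[:, np.argmin(w.real)].real
        x = V @ y
        x /= np.sqrt(x @ B @ x)
        lam = rayleigh_quotient(A, B, x)
        g = 2.0 * (A @ x - lam * (B @ x))
        if np.linalg.norm(g) < tol:
            break
        z = M_solve(g)
        gz_new = g @ z
        beta = gz_new / gz_old               # Fletcher-Reeves coefficient
        gz_old = gz_new
        p = -z + beta * p
    return lam, x

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    n = 50
    Q = rng.standard_normal((n, n))
    A = Q @ Q.T + n * np.eye(n)              # symmetric positive definite A
    B = np.eye(n) + 0.1 * np.diag(rng.random(n))   # SPD "mass" matrix
    # Jacobi (diagonal) preconditioner: a simple stand-in for the incomplete
    # Choleski / partial Evans preconditioners used in the paper.
    d = np.diag(A)
    M_solve = lambda r: r / d
    lam, x = preconditioned_cg_eig(A, B, M_solve, rng.standard_normal(n))
    print("smallest eigenvalue estimate:", lam)
    print("dense reference:", np.min(np.linalg.eigvals(np.linalg.solve(B, A)).real))
```

The diagonal preconditioner is chosen only to keep the sketch self-contained; replacing M_solve with forward/backward substitution through an incomplete triangular factor would follow the spirit of the preconditioners discussed in the abstract.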