Parallel Newton–Chebyshev polynomial preconditioners for the conjugate gradient method
Author(s) -
Bergamaschi Luca,
Martinez Calomardo Angeles
Publication year - 2021
Publication title -
Computational and Mathematical Methods
Language(s) - English
Resource type - Journals
ISSN - 2577-7408
DOI - 10.1002/cmm4.1153
Subject(s) - conjugate gradient method, Chebyshev polynomials, polynomial preconditioning, eigenvalues and eigenvectors, convergence, parallel computing, mathematical optimization, mathematics, computer science
In this note, we exploit polynomial preconditioners for the conjugate gradient method to solve large symmetric positive definite linear systems in a parallel environment. We establish a connection between a specialized Newton method for solving the matrix equation X^{-1} = A and Chebyshev polynomial preconditioning. We propose a simple modification of one parameter which avoids clustering of extremal eigenvalues and thus speeds up convergence. We provide results on very large matrices (up to 8.6 billion unknowns) in a parallel environment, demonstrating the efficiency of the proposed class of preconditioners.
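
The core idea of the abstract, using a low-degree polynomial p(A) ≈ A^{-1} as a preconditioner inside the conjugate gradient method, can be illustrated with a small serial sketch. Below, a Chebyshev polynomial preconditioner (the classical Chebyshev iteration with zero initial guess, which returns z = p(A) r) is wrapped as a SciPy LinearOperator and passed to CG. The 1D Laplacian test matrix, the number of steps, and the eigenvalue bounds lmin/lmax are illustrative assumptions, not the paper's parallel setup, and the paper's Newton-based construction and its modified parameter are not reproduced here.

import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import LinearOperator, cg

def chebyshev_apply(A, r, lmin, lmax, steps):
    # Apply z = p(A) r, where p(A) approximates A^{-1}, via the classical
    # Chebyshev iteration on the interval [lmin, lmax] (cf. Saad,
    # "Iterative Methods for Sparse Linear Systems", Alg. 12.1).
    theta = 0.5 * (lmax + lmin)   # center of the eigenvalue interval
    delta = 0.5 * (lmax - lmin)   # half-width of the interval
    sigma = theta / delta
    rho = 1.0 / sigma
    d = r / theta
    z = np.zeros_like(r)
    res = r.copy()
    for _ in range(steps):
        z = z + d
        res = res - A @ d
        rho_next = 1.0 / (2.0 * sigma - rho)
        d = rho_next * rho * d + (2.0 * rho_next / delta) * res
        rho = rho_next
    return z

# Illustrative SPD test problem: a 1D Laplacian, not a matrix from the paper.
n = 1000
A = sp.diags([-1.0, 2.0, -1.0], [-1, 0, 1], shape=(n, n), format="csr")
b = np.ones(n)

# Spectral bounds: exact for this model problem; in general they would be
# estimated, e.g., by a few Lanczos steps or Gershgorin disks.
lmin = 4.0 * np.sin(np.pi / (2.0 * (n + 1))) ** 2
lmax = 4.0

M = LinearOperator((n, n),
                   matvec=lambda r: chebyshev_apply(A, r, lmin, lmax, steps=8))
x, info = cg(A, b, M=M)
print("CG info:", info, "residual norm:", np.linalg.norm(b - A @ x))

Because the same fixed polynomial is applied at every CG iteration, p(A) is a linear, symmetric positive definite operator whenever p is positive on an interval containing the spectrum, which is what makes polynomial preconditioning attractive in parallel: it needs only matrix-vector products and vector updates, with no triangular solves.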
