Krylov Subspace Methods on Supercomputers
Author(s) - Yousef Saad
Publication year - 1989
Publication title - SIAM Journal on Scientific and Statistical Computing
Language(s) - English
Resource type - Journals
eISSN - 2168-3417
pISSN - 0196-5204
DOI - 10.1137/0910073
Subject(s) - Krylov subspace, vectorization (mathematics), computer science, conjugate gradient method, factorization, scalar (mathematics), implementation, iterative method, parallel computing, conjugate residual method, algorithm, mathematics, gradient descent, geometry, machine learning, artificial neural network, programming language
This paper presents a short survey of recent research on Krylov subspace methods, with emphasis on implementation on vector and parallel computers. Conjugate gradient methods have proven very useful on traditional scalar computers, and their popularity is likely to increase as three-dimensional models gain importance. A conservative approach to deriving effective iterative techniques for supercomputers has been to find efficient parallel/vector implementations of the standard algorithms. The main source of difficulty in incomplete factorization preconditionings is the solution of the triangular systems at each step. A few approaches to implementing efficient forward and backward triangular solves are described in detail. Polynomial preconditioning is then discussed as an alternative to standard incomplete factorization techniques. Another efficient approach is to reorder the equations so as to improve the structure of the matrix and achieve better parallelism or vectorization. This article gives an overview of these ideas and others, and attempts to comment on their effectiveness or potential for different types of architectures.
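To illustrate the triangular-solve bottleneck mentioned in the abstract, the sketch below shows level scheduling, one standard way to expose parallelism in sparse forward substitution: rows are grouped into "levels" such that rows within a level do not depend on each other and could be processed simultaneously. This is a minimal dense-matrix illustration of the idea, not the paper's implementation; the function names and the dense representation are assumptions for clarity.

```python
import numpy as np

def level_schedule(L):
    """Group rows of a lower triangular matrix L into dependency levels.

    Row i depends on row j when L[i, j] != 0 (j < i). Rows in the same
    level have no mutual dependencies, so a vector or parallel machine
    could solve them all at once. (Illustrative sketch, dense storage.)
    """
    n = L.shape[0]
    level = np.zeros(n, dtype=int)
    for i in range(n):
        deps = [level[j] for j in range(i) if L[i, j] != 0]
        level[i] = 1 + max(deps) if deps else 0
    levels = [[] for _ in range(level.max() + 1)]
    for i in range(n):
        levels[level[i]].append(i)
    return levels

def forward_solve_by_levels(L, b, levels):
    """Forward substitution L x = b, processed level by level."""
    x = np.zeros_like(b, dtype=float)
    for lev in levels:
        for i in lev:  # iterations within a level are mutually independent
            x[i] = (b[i] - L[i, :i] @ x[:i]) / L[i, i]
    return x
```

The number of levels, rather than the matrix dimension, then bounds the sequential depth of the solve; matrices arising from grid orderings often have few levels relative to their size.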
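The polynomial preconditioning the abstract mentions replaces the triangular solves of an incomplete factorization with matrix-vector products, which vectorize and parallelize well. A common instance is a truncated Neumann series, M⁻¹ ≈ Σₖ (I − A)ᵏ, valid when the spectrum of A lies in (0, 2). The sketch below, a hedged illustration rather than the paper's algorithm, applies such a preconditioner inside the conjugate gradient iteration; all names here are assumptions.

```python
import numpy as np

def neumann_apply(A, r, degree=3):
    """Apply z = sum_{k=0}^{degree} (I - A)^k r via the recurrence
    z <- r + (I - A) z, using only matrix-vector products.
    Assumes the eigenvalues of A lie in (0, 2)."""
    z = r.copy()
    for _ in range(degree):
        z = r + z - A @ z
    return z

def pcg_neumann(A, b, degree=3, tol=1e-10, maxiter=200):
    """Conjugate gradient with a truncated-Neumann polynomial preconditioner."""
    x = np.zeros_like(b, dtype=float)
    r = b - A @ x
    z = neumann_apply(A, r, degree)
    p = z.copy()
    rz = r @ z
    for _ in range(maxiter):
        Ap = A @ p
        alpha = rz / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        if np.linalg.norm(r) < tol * np.linalg.norm(b):
            break
        z = neumann_apply(A, r, degree)
        rz_new = r @ z
        p = z + (rz_new / rz) * p
        rz = rz_new
    return x
```

Each preconditioner application costs `degree` products with A, so the trade-off is between per-iteration work and the reduced iteration count, with no sequential triangular solve in the inner loop.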