A new relaxation method for obtaining the lowest eigenvalue and eigenvector of a matrix equation
Author(s) - Muda Yoshiaki
Publication year - 1973
Publication title - International Journal for Numerical Methods in Engineering
Language(s) - English
Resource type - Journals
SCImago Journal Rank - 1.421
H-Index - 168
eISSN - 1097-0207
pISSN - 0029-5981
DOI - 10.1002/nme.1620060407
Subject(s) - eigenvalues and eigenvectors , mathematics , rate of convergence , convergence , mathematical analysis , generalized eigenvector , relaxation , inverse iteration , symmetric matrix , numerical analysis
The matrix eigenvalue problem Hu_i = λ_i u_i is considered. It is shown that when a new approximate vector v^(n+1) to u_1 (the eigenvector of the lowest eigenvalue) is computed from the present one v^(n) by the relation v^(n+1) = (1 − αH + βH^2) v^(n) or v^(n+1) = (1 − αH + βH^2 − γH^3) v^(n), the convergence rate is at least double that of the gradient method, which corresponds to setting β = γ = 0. Moreover, by choosing the parameters α, β, and γ properly, one can obtain a convergence rate about three to five times faster than that of the gradient method. For H having a very small gap λ_2 − λ_1 and a very large λ_N (the largest eigenvalue), further modifications are suggested. The relation to the Richardson method is also discussed.
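The second-order iteration described in the abstract can be sketched as follows. This is a minimal illustration, not the paper's method in full: the test matrix, the assumed spectral bounds, and the Chebyshev-style choice of α and β (chosen here so that the polynomial p(t) = 1 − αt + βt^2 is small over the unwanted part of the spectrum) are all assumptions for demonstration.

```python
import numpy as np

# Hypothetical test matrix with known spectrum {1, 2, 3, 10};
# lambda_1 = 1 is the target lowest eigenvalue.
H = np.diag([1.0, 2.0, 3.0, 10.0])

# Illustrative parameter choice (not the paper's optimal values):
# make p(t) = 1 - a*t + b*t^2 proportional to a degree-2 Chebyshev
# polynomial on the assumed interval [2, 10] of unwanted eigenvalues,
# so that |p(lambda_i)| is uniformly small there while p(lambda_1) stays large.
# That gives p(t) = (t^2 - 12 t + 28) / 28, i.e.:
a = 12.0 / 28.0
b = 1.0 / 28.0

rng = np.random.default_rng(0)
v = rng.standard_normal(4)

for _ in range(100):
    # One relaxation step: v <- (I - a H + b H^2) v, then renormalize.
    v = v - a * (H @ v) + b * (H @ (H @ v))
    v /= np.linalg.norm(v)

# The Rayleigh quotient should approach the lowest eigenvalue lambda_1 = 1.
rayleigh = v @ H @ v
```

With this choice the eigencomponent ratios shrink by roughly 0.47 per step, versus about 0.8 for the optimal first-order (gradient, β = 0) step on the same spectrum, which is in line with the several-fold speedup the abstract reports.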