
Efficient out‐of‐GPU memory strategies for solving matrix equation generated by method of moments
Author(s) -
Topa T.
Publication year - 2015
Publication title -
Electronics Letters
Language(s) - English
Resource type - Journals
SCImago Journal Rank - 0.375
H-Index - 146
eISSN - 1350-911X
pISSN - 0013-5194
DOI - 10.1049/el.2015.2175
Subject(s) - LU decomposition, computer science, parallel computing, solver, matrix decomposition, computational science, method of moments (probability theory), linear algebra, incomplete LU factorization, mathematics, estimator, eigenvalues and eigenvectors, physics, statistics, geometry, quantum mechanics, programming language
The numerical solution of the dense, complex-valued linear system of equations generated by the method of moments (MoM) generally proceeds by computing an LU decomposition of the impedance matrix. Depending on the available hardware resources, the LU algorithm can be executed on either sequential or parallel computers. A straightforward parallel implementation of LU factorisation does not yield a well-distributed workload, so it remains the computationally most expensive step of the MoM process, especially when adapted to GPU technology. Some performance improvement of the LU decomposition can be achieved by applying a hybrid approach to the parallel processing model. In the reported work, the problem of accelerating an out-of-core-like LU solver on a heterogeneous, low-cost single-GPU/CPU computing platform is addressed. For this, a variable panel-width tuning scheme combined with a hybrid panel-based LU decomposition method is employed, which is something of a novelty in the development of dense linear algebra software. To demonstrate the efficiency of the proposed approach, some numerical results are provided.
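To make the panel-based idea concrete, the following is a minimal, hedged sketch of a right-looking blocked LU factorisation with variable panel widths, written in NumPy for illustration only. It is not the authors' implementation: in their hybrid scheme the narrow panel factorisation would run on the CPU while the large trailing-matrix update (a GEMM) is offloaded to the GPU, with panels streamed in and out of GPU memory; here everything runs on the host, pivoting is omitted, and the `panel_widths` list stands in for the letter's tuning scheme. All names are hypothetical.

```python
import numpy as np

def blocked_lu(A, panel_widths):
    """Right-looking blocked LU factorisation (no pivoting), in place.

    Illustrative sketch: each panel of `panel_widths` columns is
    factorised with an unblocked loop (the CPU's job in a hybrid
    solver), then the trailing submatrix is updated with one large
    matrix-matrix product (the GPU-friendly part).
    """
    n = A.shape[0]
    k = 0
    for w in panel_widths:
        b = min(w, n - k)
        if b <= 0:
            break
        # Unblocked LU of the current panel A[k:, k:k+b].
        for j in range(k, k + b):
            A[j + 1:, j] /= A[j, j]                      # compute L column
            A[j + 1:, j + 1:k + b] -= np.outer(          # update rest of panel
                A[j + 1:, j], A[j, j + 1:k + b])
        # Triangular solve for the U block right of the panel: U12 = L11^{-1} A12.
        L11 = np.tril(A[k:k + b, k:k + b], -1) + np.eye(b)
        A[k:k + b, k + b:] = np.linalg.solve(L11, A[k:k + b, k + b:])
        # Rank-b trailing-matrix update (the dominant GEMM cost): A22 -= L21 @ U12.
        A[k + b:, k + b:] -= A[k + b:, k:k + b] @ A[k:k + b, k + b:]
        k += b
    return A
```

A diagonally dominant test matrix keeps the no-pivoting simplification numerically safe; after the call, the strictly lower part of `A` holds L (unit diagonal implied) and the upper part holds U, so `L @ U` reproduces the input. Making `panel_widths` non-uniform mimics the variable panel-width tuning described above.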