Communication in task‐parallel ILU‐preconditioned CG solvers using MPI + OmpSs
Author(s) -
Aliaga José I.,
Barreda María,
Flegar Goran,
Bollhöfer Matthias,
Quintana-Ortí Enrique S.
Publication year - 2017
Publication title -
Concurrency and Computation: Practice and Experience
Language(s) - English
Resource type - Journals
SCImago Journal Rank - 0.309
H-Index - 67
eISSN - 1532-0634
pISSN - 1532-0626
DOI - 10.1002/cpe.4280
Subject(s) - computer science, parallel computing, Krylov subspace, scalability, multi-core processor, supercomputer, message passing, distributed memory, sparse matrix, shared memory, iterative method, algorithm, programming language
Summary We target the parallel solution of sparse linear systems via iterative Krylov subspace–based methods enhanced with incomplete LU (ILU)‐type preconditioners on clusters of multicore processors. To tackle large‐scale problems, we develop task‐parallel implementations of the classical iteration of the CG method, accelerated via ILUPACK and ILU(0) preconditioners, using MPI + OmpSs. In addition, we integrate several communication‐avoiding strategies into the codes, including the butterfly communication scheme and Eijkhout's formulation of the CG method. For all these implementations, we analyze the communication patterns and compare their performance and scalability on a cluster consisting of 16 nodes with 16 cores each.
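
To make the solver structure concrete, below is a minimal C + MPI sketch of the classical preconditioned CG iteration discussed in the summary. It is an illustration only, not the authors' code: the spmv_fn/prec_fn callbacks and the pcg routine are hypothetical placeholders (in particular, they are not ILUPACK's interface), and the OmpSs task decomposition of the node-local kernels is omitted. The comments mark the per-iteration global reductions whose synchronization cost motivates both the butterfly exchange and Eijkhout's reformulation.

#include <stdlib.h>
#include <math.h>
#include <mpi.h>

/* Hypothetical node-local kernels (placeholders, not ILUPACK's API):
   A: y = A*x on the locally owned rows (the halo exchange for off-rank
      entries of x is assumed to happen inside);
   M: z = M^{-1}*r, i.e. the ILU(0) forward/backward triangular solves. */
typedef void (*spmv_fn)(const double *x, double *y, int n);
typedef void (*prec_fn)(const double *r, double *z, int n);

/* Global dot product: a local partial sum plus one MPI_Allreduce, which
   MPI libraries commonly realize with a butterfly (recursive-doubling)
   exchange among the ranks. */
static double dot(const double *u, const double *v, int n)
{
    double local = 0.0, global = 0.0;
    for (int i = 0; i < n; ++i) local += u[i] * v[i];
    MPI_Allreduce(&local, &global, 1, MPI_DOUBLE, MPI_SUM, MPI_COMM_WORLD);
    return global;
}

/* Classical preconditioned CG: per iteration, one sparse mat-vec, one
   preconditioner application, and three synchronizing reductions (p.q,
   the residual norm, and r.z). Reformulations such as Eijkhout's
   rearrange the recurrences so that reductions can be merged. */
int pcg(spmv_fn A, prec_fn M, const double *b, double *x,
        int n, int maxit, double tol)
{
    double *r = malloc(n * sizeof *r), *z = malloc(n * sizeof *z);
    double *p = malloc(n * sizeof *p), *q = malloc(n * sizeof *q);
    int it;

    A(x, q, n);                                   /* q = A*x0          */
    for (int i = 0; i < n; ++i) r[i] = b[i] - q[i];
    M(r, z, n);                                   /* z = M^{-1}*r      */
    for (int i = 0; i < n; ++i) p[i] = z[i];
    double rz = dot(r, z, n);

    for (it = 0; it < maxit; ++it) {
        A(p, q, n);                               /* q = A*p           */
        double alpha = rz / dot(p, q, n);         /* reduction no. 1   */
        for (int i = 0; i < n; ++i) x[i] += alpha * p[i];
        for (int i = 0; i < n; ++i) r[i] -= alpha * q[i];
        if (sqrt(dot(r, r, n)) < tol) break;      /* reduction no. 2   */
        M(r, z, n);                               /* apply ILU(0)      */
        double rz_new = dot(r, z, n);             /* reduction no. 3   */
        double beta = rz_new / rz;
        rz = rz_new;
        for (int i = 0; i < n; ++i) p[i] = z[i] + beta * p[i];
    }
    free(r); free(z); free(p); free(q);
    return it;
}

Each MPI_Allreduce above is a global synchronization point across all ranks; the communication-avoiding strategies studied in the paper aim to reduce the number of such points per iteration or to overlap them with node-local task-parallel work.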
