Group‐SPMD programming with orthogonal processor groups
Author(s) -
Rauber Thomas,
Reilein Robert,
Rünger Gudula
Publication year - 2004
Publication title -
Concurrency and Computation: Practice and Experience
Language(s) - English
Resource type - Journals
SCImago Journal Rank - 0.309
H-Index - 67
eISSN - 1532-0634
pISSN - 1532-0626
DOI - 10.1002/cpe.770
Subject(s) - computer science, SPMD, parallel computing, task, scalability, communication overhead, programming style, processor grid, processor group, programming paradigm, parallel programming model, programming language, operating system
Many programs for message-passing machines can benefit from an implementation in a group-SPMD programming model, owing to the potential to reduce communication overhead and to increase scalability. In this paper, we consider group-SPMD programs that exploit different orthogonal processor partitions within one program. Each program uses a fixed set of predefined processor partitions, given by the parallel hyperplanes of a two- or higher-dimensional virtual processor organization. We introduce a library built on top of MPI to support programming with these orthogonal processor groups. The parallel programming model is appropriate for applications with a multi-dimensional task grid whose task dependencies are mainly aligned with the dimensions of the grid. The library can be used to specify the appropriate processor partitions, which are then created by the library, and to define the mapping of tasks to the processor hyperplanes. Examples from numerical analysis illustrate the programming style and show that the runtime on distributed-memory machines can be considerably reduced by using the library. Copyright © 2004 John Wiley & Sons, Ltd.