
OpenMP-oriented applications for distributed shared memory architectures
Ami Marowka, Zhenying Liu, Barbara Chapman
Publication year: 2004
Publication title: Concurrency and Computation: Practice and Experience
Resource type: Journals
Publisher: John Wiley & Sons
Abstract: The rapid rise of OpenMP as the preferred parallel programming paradigm for small-to-medium scale parallelism could slow unless OpenMP can show capabilities for becoming the model of choice for large-scale high-performance parallel computing in the coming decade. The main stumbling block for the adaptation of OpenMP to distributed shared memory (DSM) machines, which are based on architectures like cc-NUMA, stems from the lack of capabilities for data placement among processors and threads for achieving data locality. The absence of such a mechanism causes remote memory accesses and inefficient cache memory use, both of which lead to poor performance. This paper presents a simple software programming approach called copy-inside–copy-back (CC) that exploits the data privatization mechanism of OpenMP for data placement and replacement. This technique enables one to distribute data manually without taking away control and flexibility from the programmer and is thus an alternative to the automatic and implicit approaches. Moreover, the CC approach improves on the OpenMP-SPMD style of programming, making the development process of an OpenMP application more structured and simpler. The CC technique was tested and analyzed using the NAS Parallel Benchmarks on SGI Origin 2000 multiprocessor machines. This study shows that OpenMP improves performance of coarse-grained parallelism, although a fast copy mechanism is essential. Copyright © 2004 John Wiley & Sons, Ltd.
Subject(s): cache, cache algorithms, cache coherence, computer science, CPU cache, distributed computing, distributed memory, distributed shared memory, locality, memory hierarchy, memory management, operating system, parallel computing, programmer, programming language, programming paradigm, programming style, shared memory, SPMD, task parallelism, uniform memory access
SCImago Journal Rank: 0.309
