A domain‐specific high‐level programming model
Author(s) -
Mansouri Farouk,
Huet Sylvain,
Houzet Dominique
Publication year - 2015
Publication title -
Concurrency and Computation: Practice and Experience
Language(s) - English
Resource type - Journals
SCImago Journal Rank - 0.309
H-Index - 67
eISSN - 1532-0634
pISSN - 1532-0626
DOI - 10.1002/cpe.3622
Subject(s) - computer science, Xeon Phi, parallel computing, graph, data parallelism, abstraction, task parallelism, dataflow, computer architecture, task (project management), domain (mathematical analysis), domain-specific language, programming paradigm, parallelism (grammar), software, model of computation, instruction-level parallelism, multi-core processor, data flow diagram, computation, programming language, theoretical computer science, database, mathematical analysis, philosophy, mathematics, management, epistemology, economics
Summary - Computing hardware continues to move toward more parallelism and more heterogeneity in pursuit of greater computing power. From personal computers to supercomputers, several levels of parallelism are exposed by interconnecting multi‐core and many‐core accelerators. Computing software must adapt to this trend, and programmers can use parallel programming models (PPMs) to fulfil this difficult task. The available PPMs are based on tasks, directives, or low‐level languages or libraries, and each offers a higher or lower level of abstraction from the architecture through its own syntax. One way to offer an efficient PPM with a higher abstraction level while preserving performance is to restrict it to a specific domain and adapt it to a family of applications. In the present study, we propose a high‐level PPM specific to digital signal‐processing applications. It is based on a data‐flow graph model of computation and a dynamic run‐time model of execution (StarPU). We show how the user can easily express a digital signal‐processing application and exploit task, data, and graph parallelism in the implementation, improving performance on targeted heterogeneous clusters composed of CPUs and different accelerators (e.g., GPUs and Xeon Phi). Copyright © 2015 John Wiley & Sons, Ltd.
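The abstract's central idea, expressing an application as a data-flow graph whose ready tasks a runtime schedules concurrently, can be illustrated with a minimal sketch. The code below is a hypothetical stand-in, not the authors' framework (which builds on StarPU and targets heterogeneous CPU/GPU/Xeon Phi clusters): it uses Python's standard `graphlib` and a thread pool to run every node of a small DAG as soon as its predecessors have produced their data, so independent branches execute in parallel.

```python
from concurrent.futures import ThreadPoolExecutor
from graphlib import TopologicalSorter

def run_dataflow(graph, funcs):
    """Execute a data-flow DAG.

    graph maps each node to an ordered tuple of its predecessors;
    funcs maps each node to a callable taking the predecessors' results.
    Nodes whose inputs are all available run concurrently on a thread pool.
    """
    results = {}
    ts = TopologicalSorter(graph)
    ts.prepare()  # raises CycleError if the graph is not a DAG
    with ThreadPoolExecutor() as pool:
        while ts.is_active():
            # All nodes whose predecessors are done: independent, run in parallel.
            ready = list(ts.get_ready())
            futures = {n: pool.submit(funcs[n], *(results[p] for p in graph[n]))
                       for n in ready}
            for n, fut in futures.items():
                results[n] = fut.result()
                ts.done(n)
    return results

# Toy signal chain: a source feeds two parallel filter branches,
# whose outputs are summed sample by sample.
graph = {"src": (), "f1": ("src",), "f2": ("src",), "out": ("f1", "f2")}
funcs = {
    "src": lambda: [1, 2, 3],
    "f1": lambda xs: [2 * x for x in xs],      # gain stage
    "f2": lambda xs: [x + 1 for x in xs],      # offset stage
    "out": lambda a, b: [x + y for x, y in zip(a, b)],
}
print(run_dataflow(graph, funcs)["out"])  # -> [4, 7, 10]
```

Here `f1` and `f2` become ready at the same time and overlap on the pool, which is the graph parallelism the abstract refers to; a runtime such as StarPU additionally decides, per task, which device (CPU core or accelerator) executes it.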
