Comparison of GPU architectures for asynchronous communication with finite‐differencing applications
Author(s) - Playne D. P., Hawick K. A.
Publication year - 2012
Publication title - Concurrency and Computation: Practice and Experience
Language(s) - English
Resource type - Journals
SCImago Journal Rank - 0.309
H-Index - 67
eISSN - 1532-0634
pISSN - 1532-0626
DOI - 10.1002/cpe.1726
Subject(s) - computer science, parallel computing, cuda, asynchronous communication, speedup, computational science, central processing unit, multi core processor, grid, computer hardware, mathematics, computer network, geometry
SUMMARY Graphical processing units (GPUs) are good data‐parallel performance accelerators for solving regular mesh partial differential equations (PDEs), for which low‐latency communications and high compute‐to‐communications ratios can yield very high levels of computational efficiency. Finite‐difference time‐domain methods still play an important role for many PDE applications. Iterative multi‐grid and multilevel algorithms can converge faster than ordinary finite‐difference methods but can be much more difficult to parallelize within GPU memory constraints. We report on practical algorithmic and data layout approaches and present performance data for a range of GPUs running CUDA. We focus on the use of multiple GPU devices with a single CPU host and on the asynchronous CPU/GPU communications issues involved. We obtain more than two orders of magnitude of speedup over a comparable CPU core. Copyright © 2011 John Wiley & Sons, Ltd.
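
To illustrate the kind of scheme the abstract describes, the following is a minimal CUDA sketch (not the paper's code) of an explicit finite‐difference update of the 1D heat equation, domain‐decomposed across two GPUs on one host, with halo cells exchanged by asynchronous peer copies on per‐device streams. All names and parameters here (diffuse, N, HALF, ALPHA, STEPS, the two‐device split) are illustrative assumptions.

// Minimal sketch (not the paper's code): explicit finite-difference update
// of the 1D heat equation u_t = ALPHA * u_xx, split across two GPUs with
// asynchronous halo exchange. All names/sizes are illustrative assumptions.
#include <cstdio>
#include <cuda_runtime.h>

#define N     (1 << 20)   // total interior grid points
#define HALF  (N / 2)     // interior points per GPU
#define ALPHA 0.1f        // diffusion coefficient * dt / dx^2 (stable < 0.5)
#define STEPS 100

// One explicit Euler step over interior cells [1, n]; cells 0 and n+1 are
// halo/boundary cells and are read but never written by the kernel.
__global__ void diffuse(const float* in, float* out, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x + 1;
    if (i <= n)
        out[i] = in[i] + ALPHA * (in[i - 1] - 2.0f * in[i] + in[i + 1]);
}

int main() {
    float* u[2];  float* v[2];   // double buffers, one pair per GPU
    cudaStream_t s[2];           // one stream per GPU

    for (int d = 0; d < 2; ++d) {
        cudaSetDevice(d);
        cudaMalloc(&u[d], (HALF + 2) * sizeof(float));   // +2 halo cells
        cudaMalloc(&v[d], (HALF + 2) * sizeof(float));
        cudaMemset(u[d], 0, (HALF + 2) * sizeof(float)); // zero Dirichlet
        cudaMemset(v[d], 0, (HALF + 2) * sizeof(float)); // boundaries
        cudaStreamCreate(&s[d]);
    }

    // Initial condition: a unit hot spot in the middle of GPU 0's subdomain.
    float one = 1.0f;
    cudaSetDevice(0);
    cudaMemcpy(u[0] + HALF / 2, &one, sizeof(float), cudaMemcpyHostToDevice);

    for (int t = 0; t < STEPS; ++t) {
        // Both interior updates run concurrently, one per device/stream.
        for (int d = 0; d < 2; ++d) {
            cudaSetDevice(d);
            diffuse<<<(HALF + 255) / 256, 256, 0, s[d]>>>(u[d], v[d], HALF);
        }
        // Asynchronous halo exchange, stream-ordered after each kernel:
        // GPU 0's last interior cell -> GPU 1's left halo, and vice versa.
        // cudaMemcpyPeerAsync uses P2P if available, else stages via host.
        cudaSetDevice(0);
        cudaMemcpyPeerAsync(v[1], 1, v[0] + HALF, 0, sizeof(float), s[0]);
        cudaSetDevice(1);
        cudaMemcpyPeerAsync(v[0] + HALF + 1, 0, v[1] + 1, 1, sizeof(float), s[1]);

        // Wait for kernels and halo copies, then swap the double buffers.
        for (int d = 0; d < 2; ++d) { cudaSetDevice(d); cudaStreamSynchronize(s[d]); }
        for (int d = 0; d < 2; ++d) { float* tmp = u[d]; u[d] = v[d]; v[d] = tmp; }
    }

    float sample;
    cudaSetDevice(0);
    cudaMemcpy(&sample, u[0] + HALF / 2, sizeof(float), cudaMemcpyDeviceToHost);
    printf("value at hot spot after %d steps: %f\n", STEPS, sample);

    for (int d = 0; d < 2; ++d) {
        cudaSetDevice(d);
        cudaFree(u[d]); cudaFree(v[d]); cudaStreamDestroy(s[d]);
    }
    return 0;
}

The point of the sketch is the overlap structure the abstract alludes to: the host enqueues each device's interior update and halo copy on that device's own stream, so the two GPUs compute and communicate concurrently, and the only host-side synchronization is the per-step cudaStreamSynchronize before the buffer swap.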