Open Access
Design Patterns for Sparse-Matrix Computations on Hybrid CPU/GPU Platforms
Author(s) -
Valeria Cardellini,
Salvatore Filippone,
Damian Rouson
Publication year - 2014
Publication title -
Scientific Programming
Language(s) - English
Resource type - Journals
SCImago Journal Rank - 0.269
H-Index - 36
eISSN - 1875-919X
pISSN - 1058-9244
DOI - 10.1155/2014/469753
Subject(s) - computer science , parallel computing , computational science , sparse matrix , matrix multiplication , speedup , CUDA , GPU , double precision floating point format , throughput , software , algorithm
We apply object-oriented software design patterns to develop code for scientific software involving sparse matrices. Design patterns arise when multiple independent developments produce similar designs that converge onto a generic solution. We demonstrate how to use design patterns to implement an interface for sparse matrix computations on NVIDIA GPUs, starting from PSBLAS, an existing sparse matrix library, and from existing sets of GPU kernels for sparse matrices. We also compare the throughput of the PSBLAS sparse matrix–vector multiplication on two GPU-equipped platforms with that of a CPU-only PSBLAS implementation. Our experiments show encouraging double-precision speedups of the GPU over the CPU: up to 35.35 on an NVIDIA GTX 285 with respect to an AMD Athlon 7750, and up to 10.15 on an NVIDIA Tesla C2050 with respect to an Intel Xeon X5650.
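The design-pattern approach the abstract describes can be illustrated with a minimal sketch, not taken from PSBLAS: the State pattern lets a sparse-matrix facade delegate the matrix–vector product to an interchangeable storage-format object, so a GPU-resident format could later be swapped in without changing caller code. All class and method names below (`SparseMatrix`, `CSRFormat`, `spmv`, `convert_to`) are hypothetical, and Python stands in for the Fortran 2003 used by PSBLAS.

```python
# Hypothetical sketch of the State pattern for sparse storage formats
# (illustrative only; not the PSBLAS API).

class CSRFormat:
    """Compressed Sparse Row storage; spmv runs on the CPU."""
    def __init__(self, n, triples):
        # triples: list of (row, col, value); build CSR arrays.
        rows = [[] for _ in range(n)]
        for i, j, v in triples:
            rows[i].append((j, v))
        self.row_ptr, self.col_idx, self.val = [0], [], []
        for r in rows:
            for j, v in sorted(r):
                self.col_idx.append(j)
                self.val.append(v)
            self.row_ptr.append(len(self.col_idx))

    def spmv(self, x):
        # y = A*x, one row at a time over the stored nonzeros.
        y = []
        for i in range(len(self.row_ptr) - 1):
            s = 0.0
            for k in range(self.row_ptr[i], self.row_ptr[i + 1]):
                s += self.val[k] * x[self.col_idx[k]]
            y.append(s)
        return y


class SparseMatrix:
    """Facade: callers see one type; the storage state is swappable."""
    def __init__(self, n, triples, fmt=CSRFormat):
        self.n, self.triples = n, triples
        self.state = fmt(n, triples)

    def convert_to(self, fmt):
        # A GPU-backed format class could be passed here; callers of
        # spmv() are unaffected by the change of representation.
        self.state = fmt(self.n, self.triples)

    def spmv(self, x):
        return self.state.spmv(x)


# 2x2 example: A = [[2, 0], [1, 3]], x = [1, 1]  ->  y = [2, 4]
A = SparseMatrix(2, [(0, 0, 2.0), (1, 0, 1.0), (1, 1, 3.0)])
print(A.spmv([1.0, 1.0]))  # [2.0, 4.0]
```

The key design choice mirrored here is that format conversion is an operation on the matrix object itself, so adding a new (e.g. GPU) storage class requires no change to existing numerical code.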
