A GPU‐Adapted Structure for Unstructured Grids
Author(s) -
Rhaleb Zayer,
Markus Steinberger,
Hans-Peter Seidel
Publication year - 2017
Publication title -
Computer Graphics Forum
Language(s) - English
Resource type - Journals
SCImago Journal Rank - 0.578
H-Index - 120
eISSN - 1467-8659
pISSN - 0167-7055
DOI - 10.1111/cgf.13144
Subject(s) - computer science , vectorization (mathematics) , transpose , parallel computing , matrix representation , grid , matrix multiplication , sparse matrix , data structure , computation , linear algebra , theoretical computer science , computational science , unstructured data , computer graphics (images) , algorithm , geometry , mathematics
A key advantage of working with structured grids (e.g., images) is the ability to directly tap into the powerful machinery of linear algebra. The same cannot be said for unstructured grids, where intermediate bookkeeping data structures stand in the way. On modern high performance computing hardware, the conventional wisdom behind these intermediate structures is further challenged by costly memory access and, more importantly, by prohibitive memory constraints on environments such as graphics hardware. In this paper, we bypass this problem by introducing a sparse matrix representation for unstructured grids which not only reduces memory storage requirements but also cuts down on the bulk of data movement from global storage to the compute units. To take full advantage of the proposed representation, we augment ordinary matrix multiplication with action maps: local maps which encode the desired interaction between grid vertices. In this way, geometric computations and topological modifications translate into concise linear algebra operations. In our algorithmic formulation, we capitalize on the nature of sparse matrix-vector multiplication, which allows us to avoid explicit transpose computation and storage. Furthermore, we develop an efficient vectorization of the demanding assembly process for standard graph and finite element matrices.
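To make the abstract's central idea concrete, the following is a minimal sketch (not the paper's actual data structure or action-map machinery, and using scipy.sparse rather than a GPU kernel): it encodes the connectivity of an unstructured triangle mesh as a sparse adjacency matrix, so that a geometric computation, here one step of uniform Laplacian smoothing, reduces to a sparse matrix-vector multiplication. The mesh, variable names, and the smoothing step are illustrative assumptions.

```python
# Illustrative only: connectivity of an unstructured grid as a sparse matrix,
# so a geometric operation becomes a sparse matrix-vector product (SpMV).
import numpy as np
from scipy.sparse import coo_matrix

# A tiny mesh: 4 vertices in the plane, 2 triangles sharing edge (0, 2).
faces = np.array([[0, 1, 2], [0, 2, 3]])
pos = np.array([[0.0, 0.0], [1.0, 0.0], [1.0, 1.0], [0.0, 1.0]])
n = pos.shape[0]

# Build the symmetric vertex adjacency matrix A from the face list:
# each triangle (a, b, c) contributes directed edges a->b, b->c, c->a.
rows = faces[:, [0, 1, 2]].ravel()
cols = faces[:, [1, 2, 0]].ravel()
A = coo_matrix((np.ones(len(rows)), (rows, cols)), shape=(n, n))
A = ((A + A.T) > 0).astype(float).tocsr()  # symmetrize, drop multiplicities

# One uniform Laplacian smoothing step: each vertex moves toward the
# average of its one-ring neighbors. With D the diagonal degree matrix,
# p' = (1 - t) * p + t * D^{-1} A p  -- one SpMV per coordinate column.
deg = np.asarray(A.sum(axis=1)).ravel()
t = 0.5
smoothed = (1 - t) * pos + t * (A @ pos) / deg[:, None]
print(smoothed)
```

Note that once connectivity lives in a sparse matrix, the smoothing step involves no per-vertex neighbor traversal at all; the traversal is implicit in the SpMV, which is exactly the kind of operation GPUs execute efficiently.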