Autotuning CUDA compiler parameters for heterogeneous applications using the OpenTuner framework
Author(s) - Pedro Bruel, Marcos Amarís, Alfredo Goldman
Publication year - 2017
Publication title - Concurrency and Computation: Practice and Experience
Language(s) - English
Resource type - Journals
SCImago Journal Rank - 0.309
H-Index - 67
eISSN - 1532-0634
pISSN - 1532-0626
DOI - 10.1002/cpe.3973
Subject(s) - speedup, computer science, parallel computing, CUDA, compiler, benchmark, graphics processing unit, coprocessor, algorithm, programming language
Summary A Graphics Processing Unit (GPU) is a parallel computing coprocessor specialized in accelerating vector operations. The enormous heterogeneity of parallel computing platforms justifies and motivates the development of automated optimization tools and techniques. The Algorithm Selection Problem consists of finding a combination of algorithms, or a configuration of an algorithm, that optimizes the solution of a set of problem instances. An autotuner solves the Algorithm Selection Problem using search and optimization techniques. In this paper, we implement an autotuner for the parameters of the Compute Unified Device Architecture (CUDA) compiler using the OpenTuner framework. The autotuner searches for a set of compilation parameters that minimizes the time to solve a problem. We analyze the speedups achieved, in comparison with the compiler's high-level optimizations, on three different GPU devices for 17 heterogeneous GPU applications, 12 of which are from the Rodinia Benchmark Suite. The autotuner often beats the compiler's high-level optimizations but underperforms for some problems. We achieved over 2x speedup for Gaussian Elimination and almost 2x speedup for Heart Wall, both from the Rodinia Benchmark Suite, and over 4x speedup for a matrix multiplication algorithm. Copyright © 2017 John Wiley & Sons, Ltd.
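To illustrate the approach the abstract describes, the sketch below shows how an nvcc-flag tuner can be built on OpenTuner's MeasurementInterface: the manipulator defines the search space of compilation parameters, and run() compiles and times the program for each candidate configuration. The specific source file (matmul.cu), the three flags in the search space, and the output file name are illustrative assumptions, not the authors' actual setup.

```python
#!/usr/bin/env python
# Minimal sketch of an OpenTuner-based autotuner for nvcc compilation flags.
# The source file, flag set, and output file are assumptions for illustration.
import opentuner
from opentuner import ConfigurationManipulator, EnumParameter, MeasurementInterface, Result


class NvccFlagsTuner(MeasurementInterface):
    def manipulator(self):
        """Define the search space: a few nvcc compilation parameters."""
        m = ConfigurationManipulator()
        m.add_parameter(EnumParameter('opt_level', ['-O0', '-O1', '-O2', '-O3']))
        m.add_parameter(EnumParameter('fast_math', ['', '--use_fast_math']))
        m.add_parameter(EnumParameter('ftz', ['--ftz=false', '--ftz=true']))
        return m

    def run(self, desired_result, input, limit):
        """Compile with the candidate flags, then time the resulting binary."""
        cfg = desired_result.configuration.data
        compile_cmd = 'nvcc matmul.cu -o ./matmul {opt_level} {fast_math} {ftz}'.format(**cfg)
        compile_result = self.call_program(compile_cmd)
        if compile_result['returncode'] != 0:
            return Result(state='ERROR', time=float('inf'))
        run_result = self.call_program('./matmul')
        assert run_result['returncode'] == 0
        return Result(time=run_result['time'])

    def save_final_config(self, configuration):
        """Write the best flag combination found to disk."""
        self.manipulator().save_to_file(configuration.data, 'best_nvcc_flags.json')


if __name__ == '__main__':
    argparser = opentuner.default_argparser()
    NvccFlagsTuner.main(argparser.parse_args())
```

Run as a regular script (e.g. with --stop-after to bound the search time); OpenTuner then explores the flag space with its ensemble of search techniques and reports the configuration with the lowest measured runtime.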