Fast multidimensional reduction and broadcast operations on GPU for machine learning
Author(s) - Doğa Dikbayır, Enis Berk Çoban, İlker Kesen, Deniz Yuret, Didem Unat
Publication year - 2018
Publication title - Concurrency and Computation: Practice and Experience
Language(s) - English
Resource type - Journals
SCImago Journal Rank - 0.309
H-Index - 67
eISSN - 1532-0634
pISSN - 1532-0626
DOI - 10.1002/cpe.4691
Subject(s) - computer science, implementation, exploit, reduction (mathematics), CUDA, parallel computing, deep learning, tensor (intrinsic definition), computer engineering, artificial intelligence, programming language, mathematics, geometry, computer security, pure mathematics
Summary - Reduction and broadcast operations are commonly used in machine learning algorithms for different purposes. They appear widely in the calculation of the gradient values of a loss function, one of the core computations in neural networks. Both operations are implemented naively in many libraries, usually only for scalar reduction or broadcast; however, to our knowledge, no optimized multidimensional implementations are available. This limits the performance of machine learning models that require these operations to be performed on tensors. In this work, we address the problem and propose two new strategies that extend the existing implementations to operate on tensors. We introduce formal definitions of both operations using tensor notation, investigate their mathematical properties, and exploit these properties to provide an efficient solution for each. We implement our parallel strategies and test them on a CUDA-enabled Tesla K40m GPU accelerator. Our implementations achieve up to 75% of the peak device memory bandwidth on different tensor sizes and dimensions. Significant speedups over the implementations available in the Knet deep learning framework are also achieved for both operations.
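To make the two operations concrete, the sketch below sum-reduces a row-major 3D tensor along its middle dimension and broadcasts a matching 2D tensor back up to 3D. It is a minimal, naive CUDA example written for this summary: the kernel names, tensor shapes, and launch parameters are assumptions chosen for illustration, and it does not reproduce the paper's optimized strategies or their reported memory-bandwidth results.

// Minimal illustrative sketch, not the authors' optimized kernels: dimension-wise
// reduction and broadcast for a row-major 3D tensor of shape (D0, D1, D2).
// reduce_dim1 sums along the middle dimension, producing shape (D0, D2);
// broadcast_dim1 expands a (D0, D2) tensor back to (D0, D1, D2) by copying.
#include <cstdio>
#include <cuda_runtime.h>

__global__ void reduce_dim1(const float* in, float* out, int d0, int d1, int d2)
{
    int idx = blockIdx.x * blockDim.x + threadIdx.x;    // one thread per (i, k)
    if (idx >= d0 * d2) return;
    int i = idx / d2, k = idx % d2;
    float acc = 0.0f;
    for (int j = 0; j < d1; ++j)                        // strided walk over dim 1
        acc += in[(i * d1 + j) * d2 + k];
    out[i * d2 + k] = acc;
}

__global__ void broadcast_dim1(const float* in, float* out, int d0, int d1, int d2)
{
    int idx = blockIdx.x * blockDim.x + threadIdx.x;    // one thread per (i, j, k)
    if (idx >= d0 * d1 * d2) return;
    int k = idx % d2, i = idx / (d1 * d2);              // the j coordinate is dropped
    out[idx] = in[i * d2 + k];                          // copy along dimension 1
}

int main()
{
    const int d0 = 4, d1 = 8, d2 = 16;
    const int n3 = d0 * d1 * d2, n2 = d0 * d2;
    float *d_full, *d_small;
    cudaMalloc(&d_full, n3 * sizeof(float));
    cudaMalloc(&d_small, n2 * sizeof(float));

    // Fill the small tensor with ones, broadcast it up, then reduce it back down:
    // every reduced element should equal d1.
    float *h = new float[n3];
    for (int i = 0; i < n2; ++i) h[i] = 1.0f;
    cudaMemcpy(d_small, h, n2 * sizeof(float), cudaMemcpyHostToDevice);

    int threads = 128;
    broadcast_dim1<<<(n3 + threads - 1) / threads, threads>>>(d_small, d_full, d0, d1, d2);
    reduce_dim1<<<(n2 + threads - 1) / threads, threads>>>(d_full, d_small, d0, d1, d2);

    cudaMemcpy(h, d_small, n2 * sizeof(float), cudaMemcpyDeviceToHost);
    printf("reduced[0] = %.1f (expected %d)\n", h[0], d1);

    cudaFree(d_full); cudaFree(d_small);
    delete[] h;
    return 0;
}

The naive pattern above assigns one thread per output element and loops serially over the reduced dimension; the work described in the summary goes beyond such scalar-style kernels to handle arbitrary tensor dimensions and sizes efficiently.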
