Improving performance of transactional memory through machine learning
Author(s) - Yang Xiao, Thireshan Jeyakumaran, Ehsan Atoofian, Ali Jannesari
Publication year - 2017
Publication title - Concurrency and Computation: Practice and Experience
Language(s) - English
Resource type - Journals
SCImago Journal Rank - 0.309
H-Index - 67
eISSN - 1532-0634
pISSN - 1532-0626
DOI - 10.1002/cpe.4397
Subject(s) - computer science, transactional memory, software transactional memory, benchmark, exploit, overhead, parallel computing, database transaction, software, embedded system, distributed computing, operating system, database, computer security
Summary - Transactional memory (TM) is a programming paradigm that facilitates parallel programming for multi-core processors. In the last few years, some chip manufacturers have added hardware support for TM to reduce the runtime overhead of Software Transactional Memory (STM). In this work, we offer two optimization techniques for TMs. The first focuses on Restricted Transactional Memory (RTM) in Intel's Haswell processor and shows that while RTM improves performance over STM in some applications, it falls behind STM in others. We exploit this variability and propose an adaptive technique that statically switches between RTM and STM. The second technique targets the overhead of TM and enhances the speed of the adaptive system. In particular, we focus on the size of transactions and improve performance by changing the transaction size. Optimizing the transaction size manually is time-consuming and requires significant software engineering effort. We therefore use a combination of Linear Regression (LR) and a decision tree to choose the transaction size automatically. We evaluate our optimization techniques on benchmarks from the NAS, DiscoPoP, and STAMP benchmark suites. Our experimental results reveal that our techniques improve the performance of TM programs by 9% and energy-delay by 15%, on average.
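To make the RTM/STM trade-off concrete, the sketch below shows the standard pattern of running a critical section as a hardware (RTM) transaction via Intel's _xbegin/_xend/_xabort intrinsics and retreating to a software fallback path after repeated aborts. This is a minimal illustration of the mechanism an adaptive scheme chooses between, not the paper's implementation; the shared counter, retry limit, and spinlock fallback are assumptions made only for the example.

```c
/* Compile with: gcc -mrtm -std=c11 rtm_sketch.c */
#include <immintrin.h>
#include <stdatomic.h>

static long shared_counter = 0;        /* hypothetical shared data       */
static atomic_int fallback_lock = 0;   /* 0 = free, 1 = held             */

#define MAX_RTM_RETRIES 3              /* assumed retry budget           */

static void lock_fallback(void)
{
    int expected = 0;
    while (!atomic_compare_exchange_weak(&fallback_lock, &expected, 1))
        expected = 0;                  /* spin until the lock is free    */
}

static void unlock_fallback(void)
{
    atomic_store(&fallback_lock, 0);
}

/* Attempt the update as a hardware transaction; after repeated aborts,
 * give up and take the software fallback lock. */
void increment_counter(void)
{
    for (int attempt = 0; attempt < MAX_RTM_RETRIES; ++attempt) {
        unsigned status = _xbegin();
        if (status == _XBEGIN_STARTED) {
            /* Abort if the fallback lock is held, so the transactional
             * path never races with a thread on the non-transactional path. */
            if (atomic_load(&fallback_lock) != 0)
                _xabort(0xff);
            shared_counter++;          /* transactional update            */
            _xend();
            return;
        }
        /* Transaction aborted: retry, or fall through to the lock. */
    }
    lock_fallback();
    shared_counter++;                  /* non-transactional fallback path */
    unlock_fallback();
}
```

Reading the fallback lock inside the transaction places it in the transaction's read set, so a thread that acquires the lock aborts concurrent hardware transactions and keeps the two paths consistent.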