SymS: a symmetrical scheduler to improve multi‐threaded program performance on NUMA systems
Author(s) -
Zhu Liang,
Jin Hai,
Liao Xiaofei
Publication year - 2015
Publication title -
Concurrency and Computation: Practice and Experience
Language(s) - English
Resource type - Journals
SCImago Journal Rank - 0.309
H-Index - 67
eISSN - 1532-0634
pISSN - 1532-0626
DOI - 10.1002/cpe.3638
Subject(s) - computer science , thread (computing) , operating system , scheduling (production processes) , parallel computing , distributed computing , locality , linguistics , operations management , philosophy , economics
Summary The nonuniform memory access (NUMA) architecture is used extensively in data centers. Most previous work studied the performance of NUMA systems using single‐threaded multiprogrammed workloads, focusing mainly on two classes of problems: resource contention and data locality. However, when multi‐threaded programs run on NUMA systems, their critical threads significantly influence system performance and raise challenges that differ from the single‐threaded case. In particular, an additional scheduling scheme is needed to avoid the performance degradation caused by the critical thread of multi‐threaded programs running on NUMA systems. This work presents a scheduler, Symmetrical Scheduler, which solves the lagging problem by balancing the number of costly remote shared‐data accesses across threads on NUMA systems. To the best of our knowledge, little work has examined the performance impact of the critical thread of multi‐threaded programs on NUMA systems. Running the PARSEC benchmark on such systems, our methodology improves program performance by 6% on average, and by up to 25.3%, compared with the Linux kernel's scheduling mechanism. Copyright © 2015 John Wiley & Sons, Ltd.
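The abstract gives no implementation details of the Symmetrical Scheduler. As a purely illustrative sketch (not the paper's algorithm), the core idea of balancing costly remote shared‐data accesses so that no single critical thread lags can be modeled as a greedy min‐max placement problem: place the threads that would suffer the most remote accesses first, on the cheapest NUMA node that still has capacity. The function name, cost model, and capacity rule below are all assumptions.

```python
def balance_placements(remote_accesses):
    """Hypothetical sketch of symmetric NUMA placement.

    remote_accesses[t][n] = remote shared-data accesses thread t would
    incur if pinned to NUMA node n (an assumed cost model; the paper's
    actual metric is not given in the abstract).

    Greedily places the most access-heavy threads first, so the
    worst-off (critical) thread's remote-access count stays as low as
    possible while node loads remain even.
    """
    n_threads = len(remote_accesses)
    n_nodes = len(remote_accesses[0])
    capacity = -(-n_threads // n_nodes)  # ceil: threads allowed per node
    load = [0] * n_nodes
    placement = {}
    # Threads with the highest potential remote cost lag the most,
    # so give them first pick of nodes.
    order = sorted(range(n_threads),
                   key=lambda t: max(remote_accesses[t]), reverse=True)
    for t in order:
        # Cheapest node for this thread that still has spare capacity.
        best = min((n for n in range(n_nodes) if load[n] < capacity),
                   key=lambda n: remote_accesses[t][n])
        placement[t] = best
        load[best] += 1
    return placement


# Example: 4 threads, 2 nodes; each thread is placed on the node
# where its remote-access count is lowest, two threads per node.
print(balance_placements([[10, 2], [3, 9], [8, 1], [2, 7]]))
# → {0: 1, 1: 0, 2: 1, 3: 0}
```

In a real scheduler this policy would be driven by hardware performance counters and enforced with thread affinity (e.g. `sched_setaffinity` on Linux); the simulation above only captures the balancing decision itself.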
