The Speedup‐Test: a statistical methodology for programme speedup analysis and computation
Author(s) -
Touati Sid-Ahmed-Ali,
Worms Julien,
Briais Sébastien
Publication year - 2013
Publication title -
Concurrency and Computation: Practice and Experience
Language(s) - English
Resource type - Journals
SCImago Journal Rank - 0.309
H-Index - 67
eISSN - 1532-0634
pISSN - 1532-0626
DOI - 10.1002/cpe.2939
Subject(s) - speedup , computer science , benchmark , parallel computing , computation , set (abstract data type) , algorithm , programming language
SUMMARY In the area of high‐performance computing and embedded systems, numerous code optimisation methods exist to accelerate the speed of computation (or to optimise another performance criterion). They are usually evaluated by making multiple observations of the initial and the optimised execution times of a programme in order to declare a speedup. Even with a fixed input and a fixed execution environment, programme execution times vary in general. Hence, different kinds of speedups may be reported: the speedup of the average execution time, the speedup of the minimal execution time, the speedup of the median and others. Many speedups published in the literature are observations from a single set of experiments. To improve the reproducibility of experimental results, this article presents a rigorous statistical methodology for programme performance analysis. We rely on well‐known statistical tests (the Shapiro–Wilk test, Fisher's F‐test, Student's t‐test, the Kolmogorov–Smirnov test and the Wilcoxon–Mann–Whitney test) to study whether the observed speedups are statistically significant or not. By fixing a desired risk level α, with 0 < α < 1, we are able to analyse the statistical significance of the speedup of the average execution time as well as that of the median. We can also check whether P(X > Y) > 1/2, that is, whether an individual execution of the optimised code is more likely than not to be faster than an individual execution of the initial code. In addition, we can compute a confidence interval for the probability of obtaining a speedup on a randomly selected benchmark that does not belong to the initial set of tested benchmarks. Our methodology is a consistent improvement over the usual performance analysis practice in high‐performance computing. We explain in each situation the hypotheses that must be checked to declare a correct risk level for the statistics.
The Speedup‐Test protocol certifying the observed speedups with rigorous statistics is implemented and distributed as an open source tool based on R software. Copyright © 2012 John Wiley & Sons, Ltd.
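The decision flow sketched in the abstract — test normality first, then compare means with a parametric test or fall back to a rank-based test — can be illustrated with standard library routines. The sketch below is not the authors' R tool: it is a simplified, hedged reconstruction in Python/SciPy, with synthetic execution times and an assumed risk level of 0.05.

```python
# Simplified sketch of the test-selection logic described in the abstract,
# using SciPy instead of the authors' R-based Speedup-Test tool.
# All data and parameter choices here are illustrative assumptions.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
# Synthetic execution times (seconds): X = initial code, Y = optimised code.
X = rng.normal(loc=10.0, scale=0.5, size=30)
Y = rng.normal(loc=9.0, scale=0.5, size=30)

alpha = 0.05  # desired risk level, 0 < alpha < 1

# Step 1: Shapiro-Wilk normality check on both samples.
normal = (stats.shapiro(X).pvalue > alpha) and (stats.shapiro(Y).pvalue > alpha)

if normal:
    # Step 2a: Fisher's F-test for equal variances (built here from the
    # variance ratio), then Student's t-test on the means, one-sided:
    # is the mean of X (initial) greater than the mean of Y (optimised)?
    F = np.var(X, ddof=1) / np.var(Y, ddof=1)
    df1, df2 = len(X) - 1, len(Y) - 1
    p_var = 2 * min(stats.f.cdf(F, df1, df2), stats.f.sf(F, df1, df2))
    p = stats.ttest_ind(X, Y, equal_var=(p_var > alpha),
                        alternative='greater').pvalue
else:
    # Step 2b: Wilcoxon-Mann-Whitney test, checking in effect whether
    # P(X > Y) > 1/2, i.e. an individual optimised run tends to be faster.
    p = stats.mannwhitneyu(X, Y, alternative='greater').pvalue

speedup_significant = p < alpha
print("observed speedup of the mean:", X.mean() / Y.mean())
print("significant at risk level", alpha, ":", speedup_significant)
```

With these synthetic samples the observed mean speedup is about 1.1 and the test declares it significant; with noisier or overlapping samples the same protocol would (correctly) refuse to certify the speedup.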