OpenBenchmark: Repeatable and Reproducible Internet of Things Experimentation on Testbeds
Author(s) -
Malisa Vucinic,
Bozidar Skrbic,
Enis Kocan,
Milica Pejanovic-Djurisic,
Thomas Watteyne
Publication year - 2019
Publication title -
IEEE INFOCOM 2019 - IEEE Conference on Computer Communications Workshops (INFOCOM WKSHPS)
Language(s) - English
Resource type - Conference proceedings
ISBN - 978-1-7281-1878-9
DOI - 10.1109/INFCOMW.2019.8845160
Subject(s) - Communication, Networking and Broadcast Technologies
Experimentation on testbeds with Internet of Things (IoT) devices is hard. Tedious firmware development, the lack of user interfaces, the stochastic nature of the radio channel, and the testbed learning curve are some of the factors that make the evaluation process error-prone. The impact of such errors on published results can be quite unfortunate, leading to wrong conclusions and false common wisdom. Moreover, the experiment conditions or performance metrics selected to evaluate one's own proposal may not allow a perfectly fair comparison with the state of the art. Our research community is well aware of these problems and is actively working on solutions. We present OpenBenchmark, a cloud-based IoT benchmarking service that makes experiments reproducible, repeatable and comparable. OpenBenchmark facilitates and improves the IoT experimentation workflow: it runs experiments on supported testbeds, instruments the supported firmware according to industry-relevant test scenarios, and collects and processes the experiment data to produce Key Performance Indicators (KPIs). This paper introduces the OpenBenchmark platform and discusses its applicability, design and implementation.
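To make the last step of the workflow concrete, the sketch below shows how KPIs such as Packet Delivery Ratio (PDR) and mean end-to-end latency could be derived from raw experiment data. The event format (per-packet send/receive timestamps) and the function name are illustrative assumptions, not OpenBenchmark's actual data schema or API.

```python
# Hypothetical KPI computation from per-packet experiment records.
# Each record is (packet_id, sent_at, received_at); received_at is
# None for packets that were never delivered.

def compute_kpis(events):
    """Return PDR and mean end-to-end latency (seconds) for a run."""
    sent = len(events)
    # Latencies of delivered packets only.
    latencies = [rx - tx for _, tx, rx in events if rx is not None]
    pdr = len(latencies) / sent if sent else 0.0
    mean_latency = sum(latencies) / len(latencies) if latencies else None
    return {"pdr": pdr, "mean_latency_s": mean_latency}

# Example run: three packets sent, one lost.
events = [(1, 0.00, 0.12), (2, 1.00, 1.15), (3, 2.00, None)]
print(compute_kpis(events))
```

In a benchmarking service the same aggregation would run server-side over logs collected from the testbed, so that every user's KPIs are computed identically and results remain comparable across runs.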