Comparison of adaptive random testing and random testing under various testing and debugging scenarios
Author(s) -
Huai Liu,
Fei-Ching Kuo,
Tsong Yueh Chen
Publication year - 2012
Publication title - Software: Practice and Experience
Language(s) - English
Resource type - Journals
SCImago Journal Rank - 0.437
H-Index - 70
eISSN - 1097-024X
pISSN - 0038-0644
DOI - 10.1002/spe.1113
Subject(s) - random testing, orthogonal array testing, computer science, non-regression testing, debugging, computerized adaptive testing, software performance testing, white box testing, black box testing, reliability engineering, integration testing, test strategy, software testing, risk-based testing, software, acceptance testing, manual testing, test case, machine learning, statistics, mathematics, engineering, programming language, software system, software engineering, software construction, regression analysis, psychometrics
SUMMARY Adaptive random testing is an enhancement of random testing. Previous studies of adaptive random testing assumed that testing terminates and debugging begins as soon as the first failure is detected, and have shown that adaptive random testing normally requires fewer test cases than random testing to detect the first software failure. In many practical situations, however, testing does not stop after one failure is detected, so it is important to investigate effectiveness with respect to the detection of multiple failures. In this paper, we compare adaptive random testing and random testing under various testing and debugging scenarios, and examine whether adaptive random testing is still able to use fewer test cases than random testing to detect multiple software failures. Our study delivers some interesting results and highlights a number of promising research directions. Copyright © 2011 John Wiley & Sons, Ltd.
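The abstract does not spell out the algorithm itself, but the variant most commonly studied in this line of work is fixed-size-candidate-set ART (FSCS-ART): each new test case is chosen from a small pool of random candidates so that it lies as far as possible from all previously executed test cases, spreading test cases evenly across the input domain. A minimal sketch (the function name, candidate-set size `k`, and the unit-square input domain are illustrative assumptions, not details from the paper):

```python
import random

def fscs_art_next(executed, k=10, dim=2, rng=random):
    """Pick the next test case by fixed-size-candidate-set ART:
    generate k random candidates and return the one whose distance
    to its nearest previously executed test case is largest.
    (Illustrative sketch; parameter choices are assumptions.)"""
    candidates = [tuple(rng.random() for _ in range(dim)) for _ in range(k)]
    if not executed:
        # No history yet: any random candidate will do.
        return candidates[0]

    def min_dist(c):
        # Euclidean distance from candidate c to the closest executed case.
        return min(sum((a - b) ** 2 for a, b in zip(c, e)) ** 0.5
                   for e in executed)

    return max(candidates, key=min_dist)

# Generate a small ART test sequence over the unit square [0, 1]^2.
executed = []
for _ in range(5):
    executed.append(fscs_art_next(executed))
print(len(executed))  # → 5
```

Plain random testing would simply draw each test case independently; ART adds the distance-based selection step, which is what lets it reach the first failure with fewer test cases on average when failure-causing inputs cluster in contiguous regions.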
