The complexity of software testing
Author(s) -
J. Paul Myers
Publication year - 1992
Publication title -
Software Engineering Journal
Language(s) - English
Resource type - Journals
eISSN - 2053-910X
pISSN - 0268-6961
DOI - 10.1049/sej.1992.0002
Subject(s) - computer science , metric (unit) , non regression testing , software reliability testing , software performance testing , software metric , measure (data warehouse) , software engineering , absurdity , software , reliability engineering , software construction , software system , data mining , programming language , engineering , linguistics , operations management , philosophy
The futility of using a general-purpose metric to characterise ‘the’ complexity of a program has recently been argued in support of designing specific metrics for the different stages of the software life-cycle. An analysis of the module-testing activity is performed, providing evidence of the absurdity of all-purpose metrics, as well as a methodical means of measuring testing complexity. Several standard metrics are seen to serve as component measures for the intricacies of testing. The methodology is applied to compare traditional and adaptive means of testing, formally confirming previous informal arguments for the superiority of adaptive methodologies.
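As an illustration of the kind of "standard metric" the abstract refers to (this example is not drawn from the paper itself), the following sketch computes McCabe's cyclomatic complexity, V(G) = E − N + 2P, from a control-flow graph. Metrics such as this are often reused as component measures when characterising testing effort, since V(G) bounds the number of linearly independent paths a tester must cover:

```python
# Hypothetical sketch: McCabe's cyclomatic complexity for a routine's
# control-flow graph, V(G) = E - N + 2P, where E is the edge count,
# N the node count, and P the number of connected components
# (P = 1 for a single routine).

def cyclomatic_complexity(edges, num_nodes, num_components=1):
    """Return V(G) = E - N + 2P for a control-flow graph."""
    return len(edges) - num_nodes + 2 * num_components

# Example: a routine with a single if/else forms a "diamond" graph
# with 4 nodes and 4 edges, giving V(G) = 4 - 4 + 2 = 2: two
# independent paths, hence at least two test cases for path coverage.
diamond = [(0, 1), (0, 2), (1, 3), (2, 3)]
print(cyclomatic_complexity(diamond, num_nodes=4))  # -> 2
```

A single straight-line routine (one node, no branches) yields V(G) = 1, the metric's minimum, matching the intuition that it needs only one test path.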