The effect of fault size on testing
Author(s) - Richard Bache
Publication year - 1997
Publication title - Software Testing, Verification and Reliability
Language(s) - English
Resource type - Journals
SCImago Journal Rank - 0.216
H-Index - 49
eISSN - 1099-1689
pISSN - 0960-0833
DOI - 10.1002/(sici)1099-1689(199709)7:3<139::aid-stvr136>3.0.co;2-r
Subject(s) - reliability engineering , computer science , reliability (semiconductor) , software reliability testing , non regression testing , identification (biology) , fault (geology) , process (computing) , software performance testing , test strategy , measure (data warehouse) , software , software quality , data mining , software development , engineering , software construction , biology , power (physics) , physics , botany , quantum mechanics , seismology , programming language , geology , operating system
Software fault size (meaning the frequency with which a fault is activated) is important when determining the merits of different testing methods. One of the purposes of testing, it is argued, is the identification of faults which, when removed, contribute increases in reliability. The concept of size‐effectiveness is defined here to distinguish those testing methods which are better at finding large faults. Methods of different size‐effectiveness can be compared by measuring the distribution of fault sizes through a measure called the ‘operational testing efficiency ratio’ (OTER). This has significant implications for the way that one should view testing in particular and the software development process in general. © 1997 John Wiley & Sons, Ltd.
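The abstract's notion of fault size as activation frequency can be illustrated with a small simulation. The sketch below is not the paper's OTER calculation (its exact definition is not reproduced here); it is a hypothetical model in which each fault's "size" is its per-test activation probability, and operational testing is simulated by firing tests that each detect a fault with probability equal to that fault's size. All names, fault counts, and probability ranges are illustrative assumptions.

```python
import random

# Hypothetical fault population: each fault's "size" is its per-test
# activation probability (larger faults are triggered more often in
# operational use). The range 0.001-0.1 is an arbitrary assumption.
random.seed(0)
faults = [random.uniform(0.001, 0.1) for _ in range(200)]

def operational_test(faults, n_tests, rng):
    """Simulate operational testing: each test activates a fault with
    probability equal to that fault's size. Returns sizes of faults found."""
    found = set()
    for _ in range(n_tests):
        for i, size in enumerate(faults):
            if i not in found and rng.random() < size:
                found.add(i)
    return [faults[i] for i in found]

rng = random.Random(1)
found = operational_test(faults, 50, rng)

# Under this model, a fault of size s survives n tests with probability
# (1 - s)^n, so detection is biased toward large faults: the mean size
# of found faults should exceed the population mean.
mean_found = sum(found) / len(found)
mean_all = sum(faults) / len(faults)
print(mean_found > mean_all)
```

This bias toward large faults is exactly why removing operationally-found faults yields comparatively large reliability gains, which is the intuition behind comparing testing methods by the size distributions of the faults they expose.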
