Testing for Imperfect Debugging in Software Reliability
Author(s) - Slud, Eric
Publication year - 1997
Publication title - Scandinavian Journal of Statistics
Language(s) - English
Resource type - Journals
SCImago Journal Rank - 1.359
H-Index - 65
eISSN - 1467-9469
pISSN - 0303-6898
DOI - 10.1111/1467-9469.00081
Subject(s) - mathematics, software quality, software, statistics, debugging, order statistic, algorithm, computer science, software development
This paper continues the study of the software reliability model of Fakhre‐Zakeri & Slud (1995), an “exponential order statistic model” in the sense of Miller (1986) with general mixing distribution, imperfect debugging, and large‐sample asymptotics reflecting growth of the initial number of bugs with software size. The parameters of the model are θ (proportional to the initial number of bugs in the software), G(·, μ) (the mixing df, with finite‐dimensional unknown parameter μ, for the rates λ_i with which the bugs in the software cause observable system failures), and p (the probability with which a detected bug is instantaneously replaced with another bug instead of being removed). Maximum likelihood estimation theory for (θ, p, μ) is applied to construct a likelihood‐based score test, for large‐sample data, of the hypothesis of “perfect debugging” (p = 0) vs “imperfect” (p > 0) within the models studied. There are important models (including the Jelinski–Moranda) under which the score statistics with 1/√n normalization are asymptotically degenerate. These statistics, illustrated on a software reliability data set of Musa (1980), can nevertheless serve as important diagnostics for inadequacy of simple models.
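To make the model described above concrete, the following is a minimal simulation sketch of an exponential order statistic model with imperfect debugging: each active bug fails at an exponential time with its own rate λ_i drawn from a mixing distribution, and on detection the bug is removed except with probability p, when it is instantly replaced by a fresh bug. This is purely illustrative; the function name, parameter values, and the choice of a degenerate (Jelinski–Moranda style) mixing distribution in the usage example are assumptions, not the paper's estimator or test.

```python
import numpy as np

def simulate_imperfect_debugging(n_bugs, rate_sampler, p, t_max, seed=None):
    """Simulate observable failure times under an exponential order statistic
    model with imperfect debugging (illustrative sketch, not the paper's method).

    n_bugs       -- initial number of bugs (plays the role of theta * software size)
    rate_sampler -- draws failure rates lambda_i from the mixing df G(., mu)
    p            -- probability a detected bug is replaced instead of removed
    t_max        -- length of the observation window
    """
    rng = np.random.default_rng(seed)
    # Active bug failure rates lambda_i drawn from the mixing distribution.
    rates = list(rate_sampler(n_bugs, rng))
    t, failures = 0.0, []
    while rates and t < t_max:
        total = sum(rates)
        t += rng.exponential(1.0 / total)          # time to next observable failure
        if t >= t_max:
            break
        failures.append(t)
        i = rng.choice(len(rates), p=np.array(rates) / total)  # which bug failed
        if rng.random() < p:
            rates[i] = rate_sampler(1, rng)[0]     # imperfect fix: bug replaced by a new one
        else:
            rates.pop(i)                           # perfect fix: bug removed
    return failures

# Usage example: a degenerate mixing df (all rates equal) corresponds to the
# Jelinski-Moranda case mentioned in the abstract; the common rate 0.1 is assumed.
times = simulate_imperfect_debugging(
    n_bugs=50,
    rate_sampler=lambda k, rng: np.full(k, 0.1),
    p=0.2, t_max=200.0, seed=1,
)
print(len(times), "failures observed")
```

Setting p = 0 in such a simulation recovers the perfect-debugging null hypothesis against which the paper's score test is directed.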
