Software testing using model programs
Author(s) - Manolache L. I., Kourie D. G.
Publication year - 2001
Publication title - Software: Practice and Experience
Language(s) - English
Resource type - Journals
SCImago Journal Rank - 0.437
H-Index - 70
eISSN - 1097-024X
pISSN - 0038-0644
DOI - 10.1002/spe.409
Subject(s) - computer science , software testing , test oracle , model based testing , regression testing , functional testing , test case , test strategy , software reliability testing , software quality , software engineering , reliability engineering
A strategy described as ‘testing using M model programs’ (abbreviated to ‘M-mp testing’) is investigated as a practical alternative to software testing based on manual outcome prediction. A model program implements suitably selected parts of the functional specification of the software to be tested. The M-mp testing strategy requires that M (M ≥ 1) model programs, as well as the program under test, P, should be independently developed. P and the M model programs are then subjected to the same test data. Difference analysis is conducted on the outputs and appropriate corrective action is taken. P and the M model programs jointly constitute an approximate test oracle. Both M-mp testing and manual outcome prediction are subject to the possibility of correlated failure. In general, the suitability of M-mp testing in a given context will depend on whether building and maintaining model programs is likely to be more cost effective than manually pre-calculating P's expected outcomes for given test data. In many contexts, M-mp testing could also facilitate the attainment of higher test adequacy levels than would be possible with manual outcome prediction. A rigorous experiment in an industrial context is described in which M-mp testing (with M = 1) was used to test algorithmically complex scheduling software. In this case, M-mp testing turned out to be significantly more cost effective than testing based on manual outcome prediction. Copyright © 2001 John Wiley & Sons, Ltd.
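To make the abstract's protocol concrete, the following is a minimal Python sketch of the M-mp difference-analysis loop it describes: run the program under test P and the M independently developed model programs on the same test data and flag every disagreement for investigation. The callables p and models, and the test-data source, are illustrative assumptions, not taken from the paper.

# Minimal sketch of M-mp testing, assuming P and each model program are
# exposed as Python callables mapping a test input to an output. These
# names and the usage below are hypothetical.

from typing import Any, Callable, Iterable, List, Tuple

def m_mp_test(
    p: Callable[[Any], Any],
    models: List[Callable[[Any], Any]],
    test_inputs: Iterable[Any],
) -> List[Tuple[Any, Any, List[Any]]]:
    """Subject P and the M model programs to the same test data and
    collect every input on which their outputs disagree."""
    discrepancies = []
    for x in test_inputs:
        out_p = p(x)
        out_models = [m(x) for m in models]
        # P and the models jointly form an approximate oracle: any
        # disagreement signals a potential fault in P, in a model
        # program, or in an interpretation of the specification.
        if any(out_p != out_m for out_m in out_models):
            discrepancies.append((x, out_p, out_models))
    return discrepancies

# Hypothetical usage with M = 1, mirroring the paper's experiment:
#   faults = m_mp_test(schedule, [model_schedule], generated_inputs)
# Each discrepancy is then analysed and corrective action taken.

Note that, as the abstract cautions, identical outputs do not certify correctness: P and the model programs may fail in a correlated way, so the oracle is approximate.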
