
TEST SCENARIO SPECIFICATION LANGUAGE FOR MODEL-BASED TESTING
Author(s) -
Evelin Halling,
Jüri Vain,
Artem Boyarchuk,
Oleg Illiashenko
Publication year - 2019
Publication title -
Computing
Language(s) - English
Resource type - Journals
SCImago Journal Rank - 0.184
H-Index - 11
eISSN - 2312-5381
pISSN - 1727-6209
DOI - 10.47839/ijc.18.4.1611
Subject(s) - computer science , specification language , model based testing , scenario testing , test case , model transformation , semantics (computer science) , programming language , executable , system under test , test management approach , random testing , scalability , software , software system , software construction , consistency (knowledge bases)
In mission-critical systems a single failure can have catastrophic consequences, which places high demands on the timely detection of design faults and runtime failures. With traditional software testing methods, detecting deeply nested faults that occur only sporadically is almost impossible. The discovery of such bugs can be facilitated by generating well-targeted test cases in which the test scenario is explicitly specified. On the other hand, the excess of implementation detail in manually crafted test scripts makes the test results hard to understand and interpret. This paper defines TDLTP, a high-level test scenario specification language for specifying complex test scenarios relevant to model-based testing of mission-critical systems. The syntax and semantics of the TDLTP operators are defined, and the transformation rules that map its declarative expressions to executable Uppaal Timed Automata test models are specified. The scalability of the method is demonstrated on the TUT100 satellite software integration testing case study.
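To make the core idea concrete, the following is a minimal sketch, not the paper's actual TDLTP semantics, of how a declarative scenario expression might be compiled into an automaton-like structure of states and labelled transitions. The operator names (`Purpose`, `Seq`) and the purpose labels are hypothetical placeholders standing in for TDLTP's real operators and for Uppaal Timed Automata templates:

```python
from dataclasses import dataclass, field

# Hypothetical mini-language: a scenario is either a single named
# test purpose or a sequential composition of sub-scenarios. The
# "compiler" maps it to a flat automaton (states plus labelled
# transitions), loosely mimicking a declarative-to-executable
# model transformation.

@dataclass
class Purpose:
    label: str

@dataclass
class Seq:
    parts: list

@dataclass
class Automaton:
    states: list = field(default_factory=list)
    transitions: list = field(default_factory=list)  # (src, label, dst)

def compile_scenario(expr, auto=None, entry=None):
    """Recursively map a scenario expression to automaton fragments."""
    if auto is None:
        auto = Automaton(states=["s0"])
        entry = "s0"
    if isinstance(expr, Purpose):
        # A purpose becomes one fresh state reached by one transition.
        nxt = f"s{len(auto.states)}"
        auto.states.append(nxt)
        auto.transitions.append((entry, expr.label, nxt))
        return auto, nxt
    if isinstance(expr, Seq):
        # Sequential composition chains the fragments end to end.
        cur = entry
        for part in expr.parts:
            auto, cur = compile_scenario(part, auto, cur)
        return auto, cur
    raise TypeError(f"unknown operator: {expr!r}")

scenario = Seq([Purpose("init_link"), Purpose("upload_image"), Purpose("verify_crc")])
auto, final = compile_scenario(scenario)
print(auto.transitions)
# [('s0', 'init_link', 's1'), ('s1', 'upload_image', 's2'), ('s2', 'verify_crc', 's3')]
```

A real transformation along the lines described in the abstract would additionally have to handle timing constraints (clocks, invariants, guards) and the full set of TDLTP operators; the sketch only shows the shape of a recursive syntax-directed mapping.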