Evaluating Pred(p) and standardized accuracy criteria in software development effort estimation
Author(s) - Ali Idri, Ibtissam Abnane, Alain Abran
Publication year - 2018
Publication title -
Journal of Software: Evolution and Process
Language(s) - English
Resource type - Journals
SCImago Journal Rank - 0.371
H-Index - 29
eISSN - 2047-7481
pISSN - 2047-7473
DOI - 10.1002/smr.1925
Subject(s) - measure (data warehouse), consistency (knowledge bases), software, computer science, estimation, statistics, data mining, mathematics, artificial intelligence, management, economics, programming language
Software development effort estimation (SDEE) plays a primary role in software project management, yet choosing the appropriate SDEE technique remains elusive for many project managers and researchers. Moreover, the choice of a reliable estimation accuracy measure is crucial, because SDEE techniques behave differently under different accuracy measures. The most widely used accuracy measures in SDEE are those based on the magnitude of relative error (MRE), such as the mean/median MRE (MMRE/MedMRE) and prediction at level p (Pred(p)), which counts the proportion of observations for which an SDEE technique gave an MRE lower than p. However, MRE has proven to be an unreliable accuracy measure, favoring SDEE techniques that underestimate. Consequently, an unbiased measure called standardized accuracy (SA) has been proposed. This paper deals with the Pred(p) and SA measures. We investigate (1) the consistency of Pred(p) and SA as accuracy measures and as SDEE technique selectors, and (2) the relationship between Pred(p) and SA. The results suggest that Pred(p) is less biased towards underestimates and generally selects the same best technique as SA. Moreover, SA and Pred(p) measure different aspects of technique performance, and SA may be used as a predictor of Pred(p) by means of three association rules.
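To make the accuracy measures discussed in the abstract concrete, the following sketch computes MMRE, Pred(p), and SA for a set of actual and predicted effort values. It follows the standard definitions from the SDEE literature (MRE as relative absolute error; SA as one minus the ratio of the technique's mean absolute error to that of a random-guessing baseline, here using the exact leave-one-out expectation of random guessing). The example data and the choice of p = 0.25 are illustrative assumptions, not values from the paper.

```python
import statistics


def mre(actual, predicted):
    """Magnitude of relative error for one observation."""
    return abs(actual - predicted) / actual


def mmre(actuals, predicteds):
    """Mean MRE over all observations."""
    return statistics.mean(mre(a, p) for a, p in zip(actuals, predicteds))


def pred(actuals, predicteds, p=0.25):
    """Pred(p): proportion of observations with MRE not exceeding p.

    Conventions vary between strict (<) and non-strict (<=) comparison;
    the non-strict form is used here.
    """
    hits = sum(1 for a, pr in zip(actuals, predicteds) if mre(a, pr) <= p)
    return hits / len(actuals)


def sa(actuals, predicteds):
    """Standardized accuracy: 1 - MAR / MAR_P0.

    MAR is the technique's mean absolute residual. MAR_P0 is the expected
    mean absolute residual of random guessing, computed exactly as the
    average |y_i - y_j| over all ordered pairs i != j.
    """
    n = len(actuals)
    mar = statistics.mean(abs(a, ) if False else abs(a - p)
                          for a, p in zip(actuals, predicteds))
    mar_p0 = sum(abs(yi - yj)
                 for i, yi in enumerate(actuals)
                 for j, yj in enumerate(actuals) if i != j) / (n * (n - 1))
    return 1 - mar / mar_p0


# Illustrative data (hypothetical effort values, e.g. in person-days).
actual = [10, 20, 30]
estimated = [12, 18, 33]

print(f"MMRE      = {mmre(actual, estimated):.4f}")
print(f"Pred(.25) = {pred(actual, estimated):.4f}")
print(f"SA        = {sa(actual, estimated):.4f}")
```

A higher Pred(p) is better (more estimates fall within the tolerance p), while SA near 0 means the technique is no better than random guessing; the two can disagree, which is the relationship the paper investigates.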