Standardized or simple effect size: What should be reported?
Author(s) - Thom Baguley
Publication year - 2009
Publication title - British Journal of Psychology
Language(s) - English
Resource type - Journals
SCImago Journal Rank - 1.536
H-Index - 92
eISSN - 2044-8295
pISSN - 0007-1269
DOI - 10.1348/000712608x377117
Subject(s) - psychology , statistics , effect size , sample size determination , standardized mean difference , confidence interval
It is regarded as best practice for psychologists to report effect size when disseminating quantitative research findings. Reporting of effect size in the psychological literature is patchy – though this may be changing – and when reported it is far from clear that appropriate effect size statistics are employed. This paper considers the practice of reporting point estimates of standardized effect size and explores factors such as reliability, range restriction and differences in design that distort standardized effect size unless suitable corrections are employed. For most purposes simple (unstandardized) effect size is more robust and versatile than standardized effect size. Guidelines for deciding what effect size metric to use and how to report it are outlined. Foremost among these are: (i) a preference for simple effect size over standardized effect size, and (ii) the use of confidence intervals to indicate a plausible range of values the effect might take. Deciding on the appropriate effect size statistic to report always requires careful thought and should be influenced by the goals of the researcher, the context of the research and the potential needs of readers.
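The abstract's central contrast, between a simple (unstandardized) effect size reported with a confidence interval and a standardized effect size such as Cohen's d, can be illustrated with a minimal sketch. The function name, the illustrative data, and the use of a large-sample normal critical value (rather than the t critical value one would use in practice) are all assumptions for demonstration, not the paper's own procedure:

```python
import math
import statistics

def simple_and_standardized(group_a, group_b, alpha=0.05):
    """Hypothetical helper: return the simple (unstandardized) mean
    difference with an approximate CI, plus Cohen's d for comparison."""
    na, nb = len(group_a), len(group_b)
    mean_a, mean_b = statistics.fmean(group_a), statistics.fmean(group_b)
    var_a, var_b = statistics.variance(group_a), statistics.variance(group_b)

    # Simple effect size: the raw mean difference, in the original units.
    diff = mean_a - mean_b

    # Standardized effect size: Cohen's d, scaled by the pooled SD
    # (and hence distorted by reliability and range restriction).
    pooled_sd = math.sqrt(((na - 1) * var_a + (nb - 1) * var_b)
                          / (na + nb - 2))
    d = diff / pooled_sd

    # Confidence interval for the simple effect size, using a
    # large-sample normal critical value as an approximation;
    # small samples would call for a t critical value on na+nb-2 df.
    se = pooled_sd * math.sqrt(1 / na + 1 / nb)
    z = statistics.NormalDist().inv_cdf(1 - alpha / 2)
    ci = (diff - z * se, diff + z * se)
    return diff, ci, d

# Illustrative (fabricated) scores for two groups:
diff, ci, d = simple_and_standardized([5, 6, 7, 8, 9], [3, 4, 5, 6, 7])
print(f"simple effect: {diff:.2f}, 95% CI ({ci[0]:.2f}, {ci[1]:.2f}), d = {d:.2f}")
```

Reporting `diff` with its interval keeps the effect in the measurement's original units, which is the paper's recommended default; `d` is shown only to make the contrast with the standardized metric concrete.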