Non-significant results in ecology: a burden or a blessing in disguise?
Author(s) - Julia Koricheva
Publication year - 2003
Publication title - Oikos
Language(s) - English
Resource type - Journals
SCImago Journal Rank - 1.672
H-Index - 179
eISSN - 1600-0706
pISSN - 0030-1299
DOI - 10.1034/j.1600-0579.2003.12353.x
Null hypothesis significance testing remains a common practice in ecology, despite criticism by statisticians (Yoccoz 1991, Cohen 1994) and numerous suggested alternatives (Jones and Matloff 1986, Fernandez-Duque 1996, Parkhurst 2001). The preoccupation of scientists with the statistical significance of tests as a criterion of study value may lead to the under-reporting of non-significant (P > 0.05) results in the published literature (the ‘file drawer problem’, Rosenthal 1979). This situation may arise either because non-significant results are not submitted for publication or because they are rejected in the review process. Less severe forms of bias against non-significant results (sometimes referred to as ‘dissemination bias’, Song et al. 2000) include time-lag bias (delayed publication) and place-of-publication bias (publication in low-circulation journals or in the form of technical reports, conference abstracts or dissertations). As a result, even when published, studies reporting non-significant results may be less accessible to researchers, undercited, and less likely to be indexed in major reference databases, and therefore more likely to go unnoticed. The under-reporting of non-significant results has long been suspected to occur in ecology (Csada et al. 1996), and the potential problems that publication and dissemination bias against non-significant results pose for research synthesis, the design of ecological experiments and the presentation of results have been discussed repeatedly in Oikos (Csada et al. 1996, Bauchau 1997, Lortie and Dyer 1999, Kotiaho and Tomkins 2002). However, a recent review by Møller and Jennions (2001) demonstrated that the existing evidence of bias against non-significant results in ecology is based largely on indirect methods (e.g. analysis of funnel plots or calculation of fail-safe numbers) which are open to alternative interpretations.
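For readers unfamiliar with the fail-safe number mentioned above, Rosenthal's (1979) calculation can be sketched as follows. It estimates how many unpublished null-result studies would be needed to reduce a set of significant published results to non-significance; the `fail_safe_n` helper and the Z-scores below are illustrative assumptions, not values from any cited study:

```python
from statistics import NormalDist

def fail_safe_n(z_scores, alpha=0.05):
    """Rosenthal's fail-safe number: the number of unseen studies
    averaging Z = 0 that would bring the combined one-tailed
    significance of k published studies down to alpha."""
    k = len(z_scores)
    z_crit = NormalDist().inv_cdf(1 - alpha)  # ~1.645 for alpha = 0.05
    return (sum(z_scores) ** 2) / z_crit ** 2 - k

# Hypothetical example: five published studies, each individually
# significant at roughly P < 0.05 (Z around 2)
zs = [2.1, 1.9, 2.4, 2.0, 1.8]
print(fail_safe_n(zs))  # roughly 33 hidden null studies would be needed
```

A small fail-safe number relative to the number of published studies suggests that the file drawer problem could plausibly account for an apparently significant literature.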
Direct evidence of publication bias comes from comparisons of results between unpublished and published studies and from follow-ups of the publication fate of groups of preregistered studies. Several surveys of this kind conducted in medicine have demonstrated that published studies often report a higher effectiveness of treatments than unpublished trials, and that the publication rate of trials with significant results is considerably higher than that of trials which found no significant differences among treatments (reviewed in Song et al. 2000). Similar analyses of ecological studies have not been undertaken so far because of the logistical problems of obtaining an unbiased sample of unpublished studies. Although some ecological meta-analyses have included a considerable number of unpublished studies and were able to compare the magnitude of effect sizes for published and unpublished studies (Thornhill et al. 1999, Jennions et al. 2001), most of the unpublished studies included in these reviews were very recent, and might therefore be unpublished simply because they had only recently been conducted (Møller and Jennions 2001). Direct evidence of publication bias in ecology is therefore still lacking. To find out whether the statistical significance of the results affects the publication of ecological studies, I have followed up the fate of manuscripts from Finnish and Swedish doctoral dissertations on ecological topics. A typical PhD thesis in Finland and Sweden consists of several articles (usually 4–6) and a summary. Some of the papers included in the thesis may already have been published or accepted for publication, while others are in manuscript; the latter are often submitted and published after the PhD student obtains the doctoral degree.
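The comparison of publication rates described above can be tested with a standard two-proportion z-test (normal approximation with a pooled standard error). The counts below are hypothetical, chosen only to illustrate the kind of difference the medical surveys report; they are not data from any cited study:

```python
from math import sqrt
from statistics import NormalDist

def two_proportion_z(pub1, n1, pub2, n2):
    """z-test for a difference between two publication rates,
    using the pooled proportion for the standard error."""
    p1, p2 = pub1 / n1, pub2 / n2
    pooled = (pub1 + pub2) / (n1 + n2)
    se = sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    z = (p1 - p2) / se
    p_two_sided = 2 * (1 - NormalDist().cdf(abs(z)))
    return z, p_two_sided

# Hypothetical cohort: 80 of 100 studies with significant results
# published, versus 45 of 100 studies with non-significant results
z, p = two_proportion_z(80, 100, 45, 100)
print(z, p)  # a difference this large is highly significant
```

With a preregistered cohort of dissertations, such a test would give the direct evidence of publication bias that funnel plots and fail-safe numbers can only hint at.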
Given that PhD students usually have limited time to complete their studies and little experience of conducting and writing up research, I assumed that all, or at least most, of the research conducted during the PhD studies is included in the thesis, and that Finnish and Swedish doctoral dissertations thus provide an unbiased source of completed but as