Graphic illustration of a potential problem: a commentary on Morrissey (2016)
Author(s) - Jennions M. D.
Publication year - 2016
Publication title - Journal of Evolutionary Biology
Language(s) - English
Resource type - Journal article
eISSN - 1420-9101
pISSN - 1010-061X
DOI - 10.1111/jeb.12946
Morrissey (2016) is an enjoyable but challenging read that highlights the misapplication of meta-analysis to questions in evolutionary biology. The problems highlighted in the three case studies all arise when estimating the mean magnitude rather than the mean value of a relationship (i.e. using absolute rather than signed effect sizes). A statistical maven speaks, but the language remains technical, and the message might be lost or, worse, misunderstood. I therefore focused my efforts on summarizing some key messages in a form that I could use to teach students. My commentary is directed to such readers. The result is a cartoon (Fig. 1). I hope it provides accessible insights into the problems Morrissey raised. We can note the following:

1. Biased estimates of the mean magnitude of an effect arise whenever the estimated effect in a study is not in the same direction as the true effect (shown by the grey part of the sampling variance bar), because such an estimate still contributes a positive value to the absolute effect size. The dark part of the bar and the bar above each line (which is the same length as the grey bar) show the extent to which this creates an asymmetry in estimates of the absolute effect size (the folded normal formula sketched after this commentary makes the inflation explicit).

2. Weighting studies by the inverse of their sampling variance, which is often closely linked to sample size (e.g. for Fisher's z transformation of r it is 1/(N - 3)), is useful: it reduces bias in estimates of the mean magnitude of the effect. Compare effect A with B, or C with D. An effect estimated with a smaller sampling variance is less likely to cross the zero boundary and so is less prone to the upward bias in the distribution of estimated absolute values. Consequently, if studies are weighted by the inverse of their sampling variance, the bias in the estimated mean is reduced (the simulation sketched after this commentary illustrates this numerically). I do not think this insight is obvious from Morrissey's review.

3. With greater variance in true effect sizes, there is a lower likelihood that the sampling variance will produce estimates on either side of the zero boundary that inflate the estimated mean magnitude of an effect. That is, for distribution I, far fewer of the true effects are greater than or equal to C or D than is the case for distribution II.

4. The underlying statistics for commonly implemented meta-analyses assume that (i) the true distribution of effect sizes is symmetric and (ii) the sampling variance is symmetric. Assumption (i) is false for absolute effect sizes when the distribution of true effects includes zero (compare, say, I and III). Although not illustrated, in I the distribution of absolute effect sizes is an asymmetric folded normal distribution; for III it is not (ignoring the very few true effects below zero). Obviously, as the situation moves from III towards I, the problem increases. Assumption (ii) fails when the sampling distribution of an estimate includes values opposite in direction to the true effect (most likely for case A and least likely for case C).

None of the above qualifiers negates Morrissey's insight that transforming and then analysing observed effect sizes inflates the estimated mean magnitude of an effect. The technical validity of Morrissey's analyse-then-transform mixed-model approach to resolving the problem is beyond me, but it makes sense because it uses the appropriate variances. Ultimately, Fig. 1 simply illustrates that variances are being misspecified in meta-analyses of absolute values. In hindsight, the problem is fairly obvious, but in what other situations do problems arise? Morrissey suggests that 'many
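To make point 1 concrete: although the commentary does not give the formula, the inflation it describes is simply the mean of the folded normal distribution mentioned under point 4. If an observed effect $\hat{z}$ is normally distributed around a true effect $\mu$ with sampling standard deviation $\sigma$, then

\[ \operatorname{E}\lvert\hat{z}\rvert \;=\; \sigma\sqrt{2/\pi}\; e^{-\mu^{2}/(2\sigma^{2})} \;+\; \mu\bigl(1 - 2\Phi(-\mu/\sigma)\bigr), \]

where $\Phi$ is the standard normal cumulative distribution function. At $\mu = 0$ this reduces to $\sigma\sqrt{2/\pi} > 0$: even a true effect of exactly zero yields a strictly positive expected absolute estimate, and the inflation grows with $\sigma$, which is why the large-sampling-variance cases in Fig. 1 are hit hardest.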
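Point 2 and the analyse-then-transform idea can also be checked numerically. What follows is a minimal Python simulation sketch, not Morrissey's analysis: it assumes Fisher's z effect sizes with sampling variance 1/(N - 3), true effects centred on zero with an assumed SD of tau = 0.1, and illustrative study sizes; the closing moment estimator is a naive stand-in for his mixed model, used only to show the logic of analysing on the signed scale before transforming.

# Minimal simulation sketch (illustrative assumptions, not Morrissey's analysis).
# True effects are drawn on the Fisher's z scale with mean 0 and SD tau, so the
# true mean magnitude is tau * sqrt(2/pi) (folded normal mean at mu = 0).
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(1)

n_studies = 100_000
tau = 0.1                                      # assumed SD of true effects
true_z = rng.normal(0.0, tau, n_studies)       # true effect per study
N = rng.integers(10, 200, n_studies)           # assumed per-study sample sizes
v = 1.0 / (N - 3)                              # sampling variance of Fisher's z
est_z = true_z + rng.normal(0.0, np.sqrt(v))   # observed effect per study
w = 1.0 / v                                    # inverse-variance weights

# Transform-then-analyse: averaging |estimate| overstates the mean magnitude;
# weighting reduces the bias (point 2) but cannot remove it, because every
# estimate retains some sampling noise.
print("true mean magnitude        :", tau * np.sqrt(2 / np.pi))
print("unweighted mean |estimate| :", np.abs(est_z).mean())
print("weighted mean |estimate|   :", np.average(np.abs(est_z), weights=w))

# Analyse-then-transform (crude moment estimates, NOT Morrissey's mixed model):
# estimate mu and tau on the signed scale, then convert to a magnitude via the
# folded normal mean.
mu_hat = np.average(est_z, weights=w)
tau2_hat = np.average(est_z**2, weights=w) - mu_hat**2 - np.average(v, weights=w)
sd_hat = np.sqrt(max(tau2_hat, 1e-12))
magnitude = (sd_hat * np.sqrt(2 / np.pi) * np.exp(-mu_hat**2 / (2 * sd_hat**2))
             + mu_hat * (1 - 2 * norm.cdf(-mu_hat / sd_hat)))
print("analyse-then-transform     :", magnitude)

Under these assumptions, the weighted mean of |estimate| sits between the unweighted mean and the true magnitude, while the analyse-then-transform route recovers the true mean magnitude because the sampling variance is accounted for before the absolute value is taken. Increasing tau in the sketch also illustrates point 3: as the true effects spread further from zero relative to the sampling noise, the relative inflation shrinks.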