Clarification of the basis for the selection of requirements for publication in the British Journal of Pharmacology
Author(s) -
Curtis Michael J,
Ashton John C,
Moon Lawrence D F,
Ahluwalia Amrita
Publication year - 2018
Publication title -
British Journal of Pharmacology
Language(s) - English
Resource type - Journals
SCImago Journal Rank - 2.432
H-Index - 211
eISSN - 1476-5381
pISSN - 0007-1188
DOI - 10.1111/bph.14443
Subject(s) - selection (genetic algorithm), pharmacology, information retrieval, psychology, computer science, medicine, artificial intelligence
In 2015 and 2018, the British Journal of Pharmacology (BJP) published guidelines on experimental design and analysis (Curtis et al., 2015, 2018). The intention was to improve the credibility of papers published in BJP by the simplest means possible. It is all very well for a journal to elaborate a framework of best practice, with lengthy explanations for each issue considered, but if authors, reviewers and editors fail to adopt the framework because it is too complex or nuanced, then we fail as a journal. Consequently, unlike most other journals (Williams et al., 2018), BJP has opted for firm rules about a small number of issues, rather than generalized and lengthy ‘best practice advice’. We focused on inconsistent reporting of P values (e.g. P < 0.05, P = exact value, P < different values), persistent and unjustified use of n = 3 (or fewer), grossly unequal group sizes and an absence of randomization and blinding (problems that typically occur together in the same paper). These are particular problems in our sector and contribute to the failed replication that is undermining the credibility of preclinical research. We received two letters that criticize some of our guidance and have written an itemized reply below.

First, we make a general point. Most of the BJP guidelines are ‘conventions’, that is, pragmatic solutions to practical challenges. This is particularly relevant to BJP’s requirements for group size selection. Setting n = 5 as the minimum allowable for comparing groups by statistical analysis (the ‘n = 5 rule’) is clearly a convention. We are not claiming that n = 5 is necessary and sufficient for all studies. In some studies, group sizes much larger than n = 5 are necessary to reduce the risk of false findings, whereas in other studies, where the control outcome has been established repeatedly in previous published work, group sizes of fewer than n = 5 may be sufficient.

In the main, BJP publishes papers on new drugs, or using new transgenic animals, or evaluating variables that have not been evaluated previously, often a combination of all three. Novelty is the key. When work is novel, it is extraordinarily rare for an author to include in their Methods section a clear statement that the data are known to be drawn from a normally distributed population (the necessary prerequisite for the type of parametric analysis typically undertaken) or that they have undertaken sample size calculations a priori indicating that n = X would be adequate for their design. Consequently, it seems that group size is typically decided either by after-the-fact power analysis, using the data generated by a study to justify the group size used in that study (as opposed to a priori power analysis), or by ‘informed judgement’ (guesswork). Moreover, ‘group sizes as small as possible’ is normally the guiding principle. The resultant problem is that studies are often favourably treated by peer review if sufficiently novel, with no questioning of group size selection. This is not a problem that can be ignored. Most statistical software programs allow tests that run on small n (even n = 2), but the reliability of the resultant P values diminishes as group sizes become smaller (Halsey et al., 2015), and low power is widespread and leads to higher rates of false findings (Button et al., 2013).
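To make the distinction concrete, the sketch below shows what an a priori power calculation looks like in practice, here using Python and the statsmodels library; the effect size, alpha and power values are illustrative assumptions, not values taken from the guidelines.

```python
# A priori power calculation: how many subjects per group are needed
# to detect an assumed effect, decided BEFORE any data are collected?
# (Illustrative values only: the effect size must be justified from
# prior literature or pilot work, not from the study's own data.)
from statsmodels.stats.power import TTestIndPower

analysis = TTestIndPower()
n_per_group = analysis.solve_power(
    effect_size=1.2,          # assumed standardized difference (Cohen's d)
    alpha=0.05,               # significance threshold
    power=0.8,                # desired probability of detecting the effect
    alternative='two-sided',
)
print(f"Required group size: {n_per_group:.1f}")  # round up in practice
```

An after-the-fact power analysis differs only in that the effect size is taken from the study's own data, which makes the calculation circular: it can do little more than restate whether the observed P value crossed the threshold.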
Because, for novel work typical of that published in BJP, a priori power calculations are normally impossible, our n = 5 rule is a convention that precludes default selection of smaller group sizes without adequate validation and is designed to facilitate confidence in study outcomes. However, preclinical research has recently emerged in which safeguards were put in place before the experiments were undertaken, with pre-registration of the study design limiting unreported post hoc manipulation of analytical methods. Emmrich et al. (2018) is a good example of a pre-registered study that was modified transparently after post-publication peer review of the design and proposed method of analysis. As a consequence, the Editors of BJP will consider findings with n < 5 where the design and analyses for a study have been approved a priori and published in a …
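As an illustration of the fickleness of P values at small group sizes (Halsey et al., 2015), here is a minimal simulation sketch; the true effect size and the group sizes compared are assumptions chosen for illustration only.

```python
# Simulation: how widely do P values scatter when the same experiment
# is repeated many times, at n = 3 versus n = 15 per group?
# (Illustrative only: assumes a true standardized effect of 1.0
# between two normally distributed groups.)
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
true_effect = 1.0  # assumed difference in means, in units of SD

for n in (3, 15):
    p_values = []
    for _ in range(10_000):
        control = rng.normal(0.0, 1.0, n)
        treated = rng.normal(true_effect, 1.0, n)
        p_values.append(stats.ttest_ind(control, treated).pvalue)
    p_values = np.array(p_values)
    # Fraction of repeats reaching P < 0.05 approximates statistical power
    print(f"n = {n:2d}: power ~ {(p_values < 0.05).mean():.2f}, "
          f"P value interquartile range = "
          f"{np.percentile(p_values, 25):.3f}-{np.percentile(p_values, 75):.3f}")
```

Under these assumptions, repeats of an identical protocol at n = 3 yield P values scattered across much of the 0–1 range, whereas at n = 15 they cluster far more tightly toward small values; this is the behaviour that motivates both the n = 5 rule and the preference for a priori design safeguards.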