Systematic reviews: let's keep them trustworthy
Author(s) -
Garcia-Doval I.,
van Zuuren E.J.,
Bath-Hextall F.,
Ingram J.R.
Publication year - 2017
Publication title -
British Journal of Dermatology
Language(s) - English
Resource type - Journals
SCImago Journal Rank - 2.304
H-Index - 179
eISSN - 1365-2133
pISSN - 0007-0963
DOI - 10.1111/bjd.15826
Subject(s) - trustworthiness, medicine, computer science, internet privacy
Systematic reviews are a relatively recent development and an important tool for summarizing the best evidence on the effects of healthcare interventions. Without systematic reviews, the effectiveness of therapies (or their lack of benefit) can remain unrecognized for many years. The Cochrane logo (www.cochranelibrary.com) provides a good example: it shows the forest plot of a systematic review describing how corticosteroids given to women who are about to give birth prematurely can save the life of the newborn child, an important summary of the evidence available at the time.

Systematic reviews as a publication type have increased in number over the last decade. They were initially concerned with therapeutic interventions, but there are now many more types, from ‘umbrella’ reviews (reviews of reviews) to reviews of observational studies and case reports. The methods of the review will depend on the question being asked. Systematic reviews carry an aura of infallibility but are highly dependent on the methods used and the quality of the information that they summarize. A well-performed systematic review of high-quality randomized controlled trials (RCTs) is usually assigned to the top of the evidence pyramid for assessing the effects of interventions. However, a poorly performed systematic review that includes only a subset of all relevant evidence, with little attempt to rate the quality of the included studies, can produce false conclusions. The review’s position in the evidence pyramid is anchored by the quality of its included studies. For example, a systematic review of case reports is likely to be misleading because of publication bias affecting the included studies: reports showing benefit of a new intervention are more likely to be published than those showing no benefit.

Cochrane reviews are generally more methodologically rigorous than non-Cochrane reviews in dermatology, but they are limited to studies of interventions and of diagnostic test accuracy, the types of review with the most established methods. The Joanna Briggs Institute has also developed methodology for qualitative systematic reviews. However, methods for reviews of other types of question are less well developed, and authors have less guidance to ensure a high-quality review. The BJD aims to publish only excellent systematic reviews, whatever their type.

An advantage of systematic reviews over narrative reviews is that their methods should be clearly described and reproducible. This is the objective of reporting guidelines such as PRISMA and MOOSE, which ensure a complete description of the methods so that the reader can judge the quality of the review. In a similar way to registering or publishing the protocol for an RCT, the planned methodology for a systematic review should be registered beforehand on the International Prospective Register of Systematic Reviews (PROSPERO), so that reviewers and readers can judge whether the authors did what they said they would do. Cochrane review protocols are also published before the review starts. Several tools exist for critical appraisal and methodological quality assessment of systematic reviews; although none has become universally accepted, the AMSTAR tool is probably the most commonly used for systematic reviews of RCTs, and a tool for observational studies is being developed. As in any study, the main concerns that can lead to false results in systematic reviews are chance, confounding and bias.
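Before turning to these in detail, it may help to recall the arithmetic behind the pooled estimate shown on a forest plot and the statistic commonly used to describe heterogeneity, both referred to below. The following is a minimal illustrative sketch, not part of the editorial itself; it assumes a fixed-effect inverse-variance model and standard notation (effect estimates \(\hat{\theta}_i\), weights \(w_i\), \(k\) included studies).

\[
\hat{\theta}_{\text{pooled}} \;=\; \frac{\sum_{i=1}^{k} w_i\,\hat{\theta}_i}{\sum_{i=1}^{k} w_i},
\qquad
w_i \;=\; \frac{1}{\operatorname{SE}(\hat{\theta}_i)^2}
\]

\[
Q \;=\; \sum_{i=1}^{k} w_i\,\bigl(\hat{\theta}_i - \hat{\theta}_{\text{pooled}}\bigr)^2,
\qquad
I^2 \;=\; \max\!\left(0,\; \frac{Q-(k-1)}{Q}\right)\times 100\%
\]

Here \(\hat{\theta}_i\) is the effect estimate from study \(i\) (for example a log odds ratio) and \(\operatorname{SE}(\hat{\theta}_i)\) its standard error. Cochran's \(Q\) compares the observed spread of study results with what chance alone would produce, and \(I^2\) expresses the proportion of that spread attributable to genuine heterogeneity; large values signal heterogeneity that a review should describe and explain rather than average away.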
Chance, confounding and bias can arise in the original studies and in the process of producing the systematic review, and they can be amplified in the review if steps are not taken to mitigate them. Taking into account the effect of chance is probably the least problematic: it is handled with statistics and tools such as meta-analysis, provided the included studies are sufficiently similar to combine in this way, which is always a matter of judgement. It is important to ensure that meta-analysis does not hide heterogeneity of results but instead describes and helps to explain it.

Confounding is not a major problem in reviews of RCTs, because randomization, if done properly, minimizes confounding. For reviews of observational studies, however, controlling for confounding is difficult. Important confounders should be predefined and taken into account in the review, which can be hard to achieve if they have not been measured in all studies. Even after diligent adjustment, residual confounding, arising from partial control for known confounders or the existence of unknown confounders, can cause the results of a review to deviate from the truth.

Bias is perhaps the most important issue to assess. It can be present in the original papers or may be introduced in the process of creating the systematic review. Tools such as the Cochrane risk-of-bias tool (designed mostly for use with RCTs) can be used to evaluate the risk of bias in the original papers, and the ROBINS-I tool serves the same purpose for non-randomized studies of interventions. Risk-of-bias figures give a useful general overview of potential bias across all the included studies, although it can be difficult to use them to determine the influence of bias on a particular result. The Grading of Recommendations Assessment, Development and Evaluation (GRADE) approach is a standard method for making recommendations on the basis of evidence and includes tools to assess the quality of evidence per outcome. ConQual and CERQual have the same purpose of assessing the confidence that can be placed in the findings of qualitative reviews.

Assessment of risk of bias