A summary of ‘Epidemiology and Reporting Characteristics of Systematic Reviews of Biomedical Research: A Cross-Sectional Study’, Page MJ et al., 2016
By Tess Moore
Systematic reviews (SRs) were meant to save us from the overload of medical literature. This overload is considerable and is increasing. In 2015 MEDLINE indexed 806,000 citations to biomedical research – up 4% on the year before1. Alongside this increase there has been a surge in the number of SRs from none in 19872 up to 7 per day (2,500 per year) in 20043; 11 a day in 20104; 22 per day, or more than 8,000 a year, in 20145. Clearly, we have come a long way in the history of evidence synthesis – integrating evidence into understandable and manageable bites6.
But what are they like, these systematic reviews? How reliable are they? Have the methods and reporting improved? And what, as review authors and researchers, do we need to think about for the future and for our own reviews when we publish them?
MESS member Dr Matt Page came to talk to us about ‘the mess that is the systematic review literature’.
What did Matt and his colleagues do?
Matt worked with 11 colleagues from Australia, Brazil, Canada, Spain and the UK to update a study that had looked at the properties of contemporary, published SRs in 2004, this time examining SRs published in 20145.
Why did they do this?
Well, a lot has happened since the first team took a look at SRs in 2004. The new reporting guideline for SRs, PRISMA, was published7, adding to the MOOSE guidelines for reviews of observational studies8, and the Institute of Medicine in the US published new standards for SRs9. Plus many journals are now more familiar with publishing SRs, and journal editors are more aware of their importance and potential use. So it was timely to compare what was found ten years ago in 2004 with what is happening now.
What did they find?
They found that MEDLINE had indexed 682 SRs in one month (February 2014). This equates to 8,000 per year – three times as many as in 2004. Matt’s team set out explicit selection and eligibility criteria for the reviews in their study, described in beautiful detail in their paper.
The paper has six data-rich tables describing the parameters of all the reviews by review type, and I urge you to take a look. Here are the key things they found:
Using a subset cohort of 300 SRs (the same number examined in 2004), they found that 45/300 (15%) were Cochrane reviews of interventions; 119/300 (40%) were non-Cochrane intervention reviews; 74/300 (25%) were epidemiology-type reviews; and 33/300 (11%) were diagnosis or prognosis reviews. Ten percent [29/300] were classified as other (these were reviews of education, of the properties of outcome measure scales, etc.).
How had the reviews done?
Clear reporting and use of appropriate methodology allows us readers to more easily assess the validity of review findings.
Most Cochrane SRs of therapeutic interventions used a protocol, and these were all available for everyone to read (98% [44/45]). This happy picture was not reflected in non-Cochrane therapeutic intervention SRs, where only 22% [26/119] mentioned a protocol and just 4% [5/119] had one available to read. For DTA (diagnostic test accuracy), epidemiology and other SRs the picture was worse, with only 5% [7/136] reporting a protocol. Across all SRs, 70% [206/296] had assessed risk of bias, but only 16% [31/189] of those actually applied the risk of bias assessment in their analysis. Only 7% [21/300] of studies looked for unpublished data, and 47% [141/300] described an assessment of publication bias. Page et al5 go on to report that at least one third (often far more) did not describe some basic SR methods:
years of search,
a full Boolean search strategy for at least one electronic bibliographic database,
methods for data extraction or risk of bias assessment,
a primary outcome,
study limitations in the abstract.
Apart from the protocols published for the Cochrane reviews, all of this paints a pretty disappointing picture.
Given this lack of reporting of some of the most basic aspects of systematic review methods, we have to ask: “Had the review authors used reporting guidelines?”
Matt’s paper reports that fewer than one third (29% [87/300]) referred to reporting guidelines. And worryingly, 52% of these (45/87) misinterpreted the reporting guidelines, treating them as synonymous with SR conduct guidance such as the Cochrane Handbook.
How did 2014 compare to 2004?
One of the most worrying findings was that the proportion of non-Cochrane SRs mentioning that they used a protocol (12-13%) was about the same in 2014 as it was in 2004. This is dismally low. All SRs need a protocol – for the same reason that trials need a protocol – to avoid bias. This is a sad indictment of the teaching of SR methods, or it might be a case of poor reporting. When Matt et al assessed the effect of a study mentioning the PRISMA reporting guidelines, they found that those that used PRISMA appeared more likely to mention a protocol than those that didn’t (risk ratio = 1.83, 95% CI 0.94 to 3.58) – but the lower confidence limit just crosses 1 – so it looks like some work is needed both in teaching SR methods and in how to report SRs.
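For readers less familiar with risk ratios, the statistic quoted above can be unpacked with a little arithmetic. This is a minimal sketch – using made-up counts, not the paper’s data – of how a risk ratio and its large-sample (Wald) 95% confidence interval on the log scale are computed, and why a lower limit below 1 means the difference is not conventionally statistically significant:

```python
import math

# Hypothetical counts for illustration only (NOT the counts from Page et al):
# suppose 8 of 87 PRISMA-citing SRs mentioned a protocol,
# versus 11 of 213 SRs that did not cite PRISMA.
a, n1 = 8, 87     # events / total, PRISMA group (assumed)
b, n2 = 11, 213   # events / total, non-PRISMA group (assumed)

p1, p2 = a / n1, b / n2
rr = p1 / p2  # risk ratio: relative "risk" of mentioning a protocol

# Standard error of log(RR), then a Wald 95% CI back-transformed from logs
se_log = math.sqrt(1 / a - 1 / n1 + 1 / b - 1 / n2)
lo = math.exp(math.log(rr) - 1.96 * se_log)
hi = math.exp(math.log(rr) + 1.96 * se_log)

print(f"RR = {rr:.2f}, 95% CI {lo:.2f} to {hi:.2f}")
# An interval that contains 1 (no difference between groups) is what
# "the lower confidence limit just crosses 1" refers to.
```

With these invented counts the interval straddles 1, so, as with the paper’s result, the apparent benefit of citing PRISMA would not reach conventional significance.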
Matt and his team found that the mix of review types being done has changed. Proportionately, fewer therapeutic clinical questions are being answered and more epidemiological questions, e.g. the prevalence of a condition (13% up to 25%). The proportion of SRs that were Cochrane also decreased (from 20% to 15%), showing that SRs are being accepted and published more widely (i.e. outside Cochrane) than they were in 2004. Matt’s team showed that, compared with 2004, SRs in 2014 were more likely to identify themselves as an SR (or meta-analysis) in the title, which makes retrieving SRs in searches more likely. They were also more likely to: report eligibility criteria about language; report the flow of studies through the review process (PRISMA flow chart); provide a complete list of excluded studies and reasons; perform a meta-analysis; and assess publication bias. Things that hadn’t improved included: assessment of harms; assessment of statistical heterogeneity; specification of a primary outcome; assessment of risk of bias; reporting of a full Boolean search strategy; reporting of both start and end years of the search; and eligibility criteria concerning publication status.
How did the use of PRISMA guidelines by review authors affect the reporting of reviews?
Matt and his team found that SRs that mentioned PRISMA were more likely to have reported a range of key SR methods, including the methods used for screening, data extraction and risk of bias assessment, and were more likely to use thorough searching methods and meta-analysis.
So what does it all mean?
We have come a long way from the situation in 1987, when most medical reviews did not describe any methods for how they had brought their articles together2. However, sadly, against a backdrop of increasing numbers of SRs and, rather worryingly, a massive increase in narrative reviews4, Matt’s work highlights that the conduct (i.e. methods used) and the reporting of SRs are not that great overall. If reviews are not conducted with good methodology, then their results can be misleading. If reviews are not reported in detail and with clarity, then it is not possible to assess those methods and judge the validity of the results. Matt and his colleagues conclude that ‘strategies are needed to increase the value of SRs to patients, health care practitioners and policy makers’5.
What strategies are there? Well, we could think again about reporting guidelines. Matt’s team showed that PRISMA has improved reporting in their sample of SRs. But there are already 319 reporting guidelines, covering all types of medical research, listed on the EQUATOR website11. And the PRISMA stable has been developing a string of extensions since the statement’s first publication in 2009. Since 2015 there have been three extensions: PRISMA-P12 for protocols, PRISMA-IPD13 for individual patient data meta-analyses and PRISMA-NMA14 for network meta-analyses. In development are PRISMA-C, for reporting SRs in children, and PRISMA-DTA, for SRs of diagnostic test accuracy. Reporting guidelines and checklists are often requested by journal editors. But is there something more dynamic to help authors?
To increase the visibility of protocols, we can register SRs on PROSPERO (an international prospective register of SRs), which allows public viewing, and it is also possible to publish SR protocols in the journal Systematic Reviews, published by BioMed Central.
Simpler, more straightforward assistance is also available, as suggested by Matt Page et al and the editors of PLOS15: software to assist SR authors when drafting their paper. They give as an example an online tool, COBWEB16, developed by Barnes et al, which prompts trialists drafting their RCT report to comply with the CONSORT reporting guidelines and which improved clarity of reporting in a randomized trial. There is a new journal, Research Integrity and Peer Review, dedicated to improving the publication of research, which may in time provide some evidence on how to improve reporting. And there is a new wizard to help authors and journal editors both find and use reporting guidelines. It’s called PENELOPE research, and several BMC journals are already signed up to a trial of its use, which you can read about on the EQUATOR Blog.
In short, the take-home message for all of us who prepare SRs is this: first, conduct the review according to established methodological guidance; prepare a protocol; and, most importantly, use a checklist for our FIRST draft manuscript to remind us to be clear and to write down what we did. It is important to use these reporting guidelines – whether or not our favoured journal asks for them. To journal editors we would say: please provide us with a sufficient word count to describe our work, in both the paper (we are fine with web appendices) and especially the abstract, and then enforce the use of a checklist at the submission stage.
We would like to thank Dr Matt Page for this fascinating presentation, and I urge everyone to go and read the paper, as it is so packed full of information and data – AND data that are SUPER useful for research grant writing. Also read the fascinating interview he did with Cochrane Senior Editor Toby Lasserson, of the Cochrane Editorial Unit.
We will be back with more of a MESS (Methods in Evidence Synthesis Salon) in September with a talk on ROBINS-I and risk of bias for non-randomised studies.
1. US National Library of Medicine. Key MEDLINE indicators (accessed 07/07/16)
2. Mulrow CD. The medical review article: state of the science. Annals of Internal Medicine 1987;106:485-8
3. Moher D, Tetzlaff J, Tricco AC, Sampson M, Altman DG. Epidemiology and Reporting Characteristics of Systematic Reviews. PLoS Med 2007;4:e78
4. Bastian H, Glasziou P, Chalmers I. Seventy-Five Trials and Eleven Systematic Reviews a Day: How Will We Ever Keep Up? PLoS Med 2010;7:e1000326
5. Page MJ, Shamseer L, Altman DG, Tetzlaff J, Sampson M, Tricco AC, et al. Epidemiology and Reporting Characteristics of Systematic Reviews of Biomedical Research: A Cross-Sectional Study. PLoS Med 2016;13:e1002028
6. Clarke M. History of evidence synthesis to assess treatment effects: Personal reflections on something that is very much alive. Journal of the Royal Society of Medicine 2016;109:154-63
7. Moher D, Liberati A, Tetzlaff J, Altman DG. Preferred reporting items for systematic reviews and meta-analyses: the PRISMA statement. Journal of Clinical Epidemiology 2009;62:1006-12
8. Stroup DF, Berlin JA, Morton SC, Olkin I, Williamson GD, Rennie D, et al. Meta-analysis of observational studies in epidemiology: a proposal for reporting. Meta-analysis Of Observational Studies in Epidemiology (MOOSE) group. JAMA 2000;283:2008-12
9. Institute of Medicine. Finding What Works in Health Care: Standards for Systematic Reviews. National Academies Press; 2011
10. Chandler J, Churchill R, Higgins J, Lasserson T, Tovey D. Methodological standards for the reporting of new Cochrane intervention reviews, version 1.1. 2012
11. The EQUATOR Network. 2016
12. Moher D, Shamseer L, Clarke M, Ghersi D, Liberati A, Petticrew M, et al. Preferred reporting items for systematic review and meta-analysis protocols (PRISMA-P) 2015 statement. Systematic Reviews 2015;4:1-9
13. Stewart LA, Clarke M, Rovers M, et al. Preferred reporting items for a systematic review and meta-analysis of individual participant data: the PRISMA-IPD statement. JAMA 2015;313:1657-65
14. Hutton B, Salanti G, Caldwell DM, Chaimani A, Schmid CH, Cameron C, et al. The PRISMA Extension Statement for Reporting of Systematic Reviews Incorporating Network Meta-analyses of Health Care Interventions: Checklist and Explanations. Annals of Internal Medicine 2015;162:777-84
15. The PLOS Medicine Editors. From Checklists to Tools: Lowering the Barrier to Better Research Reporting. PLoS Med 2015;12:e1001910
16. Barnes C, Boutron I, Giraudeau B, Porcher R, Altman DG, Ravaud P. Impact of an online writing aid tool for writing a randomized trial report: the COBWEB (Consort-based WEB tool) randomized controlled trial. BMC Medicine 2015;13:1-10
17. Shanahan D, Marshall D. It’s a kind of magic: how to improve adherence to reporting guidelines. EQUATOR Blog; 2016