1. Rapid review: A review of methods and recommendations based on current evidence. J Evid Based Med 2024. [PMID: 38512942] [DOI: 10.1111/jebm.12594]
Abstract
Rapid review (RR) can accelerate the traditional systematic review (SR) process by simplifying or omitting steps through various shortcuts. With the increasing popularity of RRs, numerous shortcuts have emerged, but there is no consensus on how to choose the most appropriate ones. This study conducted a literature search in PubMed from inception to December 21, 2023, using terms such as "rapid review", "rapid assessment", "rapid systematic review", and "rapid evaluation". We also scanned the reference lists and performed citation tracking of included impact studies to identify further eligible studies. We conducted a narrative synthesis of all RR approaches, shortcuts, and studies assessing their effectiveness at each stage of an RR. Based on the current evidence, we provide recommendations on utilizing certain shortcuts in RRs. Ultimately, we identified 185 studies summarizing RR approaches and shortcuts or evaluating their impact. There was relatively sufficient evidence to support the use of the following shortcuts in RRs: limiting included studies to those published in English; conducting abbreviated database searches (e.g., searching only PubMed/MEDLINE, Embase, and CENTRAL); omitting retrieval of grey literature; restricting the search timeframe to the most recent 20 years for medical interventions and the most recent 15 years for reviews of diagnostic test accuracy; and conducting single screening by an experienced screener. To some extent, these shortcuts are also applicable to SRs. This study provides a reference for future RR researchers in selecting shortcuts and presents a potential research topic for methodologists.
2. Healthcare outcomes assessed with observational study designs compared with those assessed in randomized trials: a meta-epidemiological study. Cochrane Database Syst Rev 2024; 1:MR000034. [PMID: 38174786] [PMCID: PMC10765475] [DOI: 10.1002/14651858.mr000034.pub3]
Abstract
BACKGROUND Researchers and decision-makers often use evidence from randomised controlled trials (RCTs) to determine the efficacy or effectiveness of a treatment or intervention. Studies with observational designs are often used to measure the effectiveness of an intervention in 'real world' scenarios. Numerous study designs and their modifications (including both randomised and observational designs) are used for comparative effectiveness research in an attempt to give an unbiased estimate of whether one treatment is more effective or safer than another for a particular population. An up-to-date systematic analysis is needed to identify differences in effect estimates from RCTs and observational studies. This updated review summarises the results of methodological reviews that compared the effect estimates of observational studies with RCTs from evidence syntheses that addressed the same health research question. OBJECTIVES To assess and compare synthesised effect estimates by study type, contrasting RCTs with observational studies. To explore factors that might explain differences in synthesised effect estimates from RCTs versus observational studies (e.g. heterogeneity, type of observational study design, type of intervention, and use of propensity score adjustment). To identify gaps in the existing research comparing effect estimates across different study types. SEARCH METHODS We searched MEDLINE, the Cochrane Database of Systematic Reviews, Web of Science databases, and Epistemonikos to May 2022. We checked references, conducted citation searches, and contacted review authors to identify additional reviews. SELECTION CRITERIA We included systematic methodological reviews that compared quantitative effect estimates measuring the efficacy or effectiveness of interventions tested in RCTs versus in observational studies. 
The included reviews compared RCTs to observational studies (including retrospective and prospective cohort, case-control and cross-sectional designs). Reviews were not eligible if they compared RCTs with studies that had used some form of concurrent allocation. DATA COLLECTION AND ANALYSIS Using results from observational studies as the reference group, we examined the relative summary effect estimates (risk ratios (RRs), odds ratios (ORs), hazard ratios (HRs), mean differences (MDs), and standardised mean differences (SMDs)) to evaluate whether there was a relatively larger or smaller effect in the ratio of odds ratios (ROR) or ratio of risk ratios (RRR), ratio of hazard ratios (RHR), and difference in (standardised) mean differences (D(S)MD). If an included review did not provide an estimate comparing results from RCTs with observational studies, we generated one by pooling the estimates for observational studies and RCTs, respectively. Across all reviews, we synthesised these ratios to produce a pooled ratio of ratios comparing effect estimates from RCTs with those from observational studies. In overviews of reviews, we estimated the ROR or RRR for each overview using observational studies as the reference category. We appraised the risk of bias in the included reviews (using nine criteria in total). To receive an overall low risk of bias rating, an included review needed: explicit criteria for study selection, a complete sample of studies, and to have controlled for study methodological differences and study heterogeneity. We assessed reviews/overviews not meeting these four criteria as having an overall high risk of bias. We assessed the certainty of the evidence, consisting of multiple evidence syntheses, with the GRADE approach. MAIN RESULTS We included 39 systematic reviews and eight overviews of reviews, for a total of 47. Thirty-four of these contributed data to our primary analysis. 
Based on the available data, we found that the reviews/overviews included 2869 RCTs involving 3,882,115 participants, and 3924 observational studies with 19,499,970 participants. We rated 11 reviews/overviews as having an overall low risk of bias, and 36 as having an unclear or high risk of bias. Our main concerns with the included reviews/overviews were that some did not assess the quality of their included studies, and some failed to account appropriately for differences between study designs - for example, they conducted aggregate analyses of all observational studies rather than separate analyses of cohort and case-control studies. When pooling RORs and RRRs, the ratio of ratios indicated no difference or a very small difference between the effect estimates from RCTs versus from observational studies (ratio of ratios 1.08, 95% confidence interval (CI) 1.01 to 1.15). We rated the certainty of the evidence as low. Twenty-three of 34 reviews reported effect estimates of RCTs and observational studies that were on average in agreement. In a number of subgroup analyses, small differences in the effect estimates were detected:
- pharmaceutical interventions only (ratio of ratios 1.12, 95% CI 1.04 to 1.21);
- RCTs and observational studies with substantial or high heterogeneity, that is, I² ≥ 50% (ratio of ratios 1.11, 95% CI 1.04 to 1.18);
- no use (ratio of ratios 1.07, 95% CI 1.03 to 1.11) or unclear use (ratio of ratios 1.13, 95% CI 1.03 to 1.25) of propensity score adjustment in observational studies; and
- observational studies without further specification of the study design (ratio of ratios 1.06, 95% CI 0.96 to 1.18).
We detected no clear difference in other subgroup analyses. AUTHORS' CONCLUSIONS We found no difference or a very small difference between effect estimates from RCTs and observational studies. These findings are largely consistent with findings from recently published research.
Factors other than study design need to be considered when exploring reasons for a lack of agreement between results of RCTs and observational studies, such as differences in the population, intervention, comparator, and outcomes investigated in the respective studies. Our results underscore that it is important for review authors to consider not only study design, but the level of heterogeneity in meta-analyses of RCTs or observational studies. A better understanding is needed of how these factors might yield estimates reflective of true effectiveness.
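The ratio-of-ratios comparison used in this review can be illustrated with a short sketch. The numbers below are hypothetical, not taken from the review; the method (dividing odds ratios on the log scale and combining their standard errors, assuming the two estimates are independent) is the standard way to contrast an RCT estimate with an observational one:

```python
import math

def se_from_ci(lower, upper, z=1.96):
    """Approximate log-scale standard error from a 95% CI of a ratio."""
    return (math.log(upper) - math.log(lower)) / (2 * z)

def ratio_of_odds_ratios(or_rct, ci_rct, or_obs, ci_obs, z=1.96):
    """ROR = OR_RCT / OR_observational, with a 95% CI.

    Log-scale variances add because the two estimates are independent.
    """
    log_ror = math.log(or_rct) - math.log(or_obs)
    se = math.hypot(se_from_ci(*ci_rct), se_from_ci(*ci_obs))
    return (math.exp(log_ror),
            math.exp(log_ror - z * se),
            math.exp(log_ror + z * se))

# Hypothetical example: RCTs suggest a slightly larger odds ratio
# than observational studies addressing the same question.
ror, low, high = ratio_of_odds_ratios(1.20, (1.00, 1.44), 1.00, (0.80, 1.25))
```

A ROR of 1 means the two designs agree on average; on this logic, the pooled ratio of ratios of 1.08 reported above was interpreted as no difference or a very small difference.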
3. Preventing Childhood Obesity in Primary Schools: A Realist Review from UK Perspective. Int J Environ Res Public Health 2021; 18:13395. [PMID: 34949004] [PMCID: PMC8702173] [DOI: 10.3390/ijerph182413395]
Abstract
Childhood obesity is a global public health concern. While evidence from a recent comprehensive Cochrane review indicates school-based interventions can prevent obesity, we still do not know how or for whom these work best. We aimed to identify the contextual and mechanistic factors associated with obesity prevention interventions implementable in primary schools. A realist synthesis following the Realist And Meta-narrative Evidence Syntheses-Evolving Standards (RAMESES) guidance was conducted with eligible studies from the 2019 Cochrane review of interventions in primary schools. The initial programme theory was developed through expert consensus and stakeholder input and refined with data from included studies to produce a final programme theory including all of the context-mechanism-outcome configurations. We included 24 studies (71 documents) in our synthesis. We found that baseline standardised body mass index (BMIz) affects intervention mechanisms variably as a contextual factor. Girls, older children and those with higher parental education consistently benefitted more from school-based interventions. The key mechanisms associated with beneficial effect were sufficient intervention dose, environmental modification and the intervention components working together as a whole. Education alone was not associated with favourable outcomes. Future interventions should go beyond education and incorporate a sufficient dose to trigger change in BMIz. Contextual factors deserve consideration when commissioning interventions to avoid widening health inequalities.
4. Alternative causal inference methods in population health research: Evaluating tradeoffs and triangulating evidence. SSM Popul Health 2020; 10:100526. [PMID: 31890846] [PMCID: PMC6926350] [DOI: 10.1016/j.ssmph.2019.100526]
Abstract
Population health researchers from different fields often address similar substantive questions but rely on different study designs, reflecting their home disciplines. This is especially true in studies involving causal inference, for which semantic and substantive differences inhibit interdisciplinary dialogue and collaboration. In this paper, we group nonrandomized study designs into two categories: those that use confounder-control (such as regression adjustment or propensity score matching) and those that rely on an instrument (such as instrumental variables, regression discontinuity, or difference-in-differences approaches). Using the Shadish, Cook, and Campbell framework for evaluating threats to validity, we contrast the assumptions, strengths, and limitations of these two approaches and illustrate differences with examples from the literature on education and health. Across disciplines, all methods to test a hypothesized causal relationship involve unverifiable assumptions, and rarely is there clear justification for exclusive reliance on one method. Each method entails trade-offs between statistical power, internal validity, measurement quality, and generalizability. The choice between confounder-control and instrument-based methods should be guided by these trade-offs and consideration of the most important limitations of previous work in the area. Our goals are to foster common understanding of the methods available for causal inference in population health research and the trade-offs between them; to encourage researchers to objectively evaluate what can be learned from methods outside one's home discipline; and to facilitate the selection of methods that best answer the investigator's scientific questions.
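The confounder-control idea summarized above can be sketched with simulated data (all numbers hypothetical): a binary confounder raises both the chance of treatment and the outcome, so the naive difference in means is biased, while contrasting treated and untreated units within levels of the confounder and averaging the strata recovers something close to the true effect:

```python
import random

random.seed(7)

TRUE_EFFECT = 2.0   # effect of treatment T on outcome Y
CONFOUNDING = 3.0   # effect of confounder Z on outcome Y

# Simulate: Z raises both the probability of treatment and the outcome.
data = []
for _ in range(20000):
    z = random.random() < 0.5
    t = random.random() < (0.8 if z else 0.2)
    y = TRUE_EFFECT * t + CONFOUNDING * z + random.gauss(0, 1)
    data.append((z, t, y))

def mean(xs):
    return sum(xs) / len(xs)

def diff_in_means(rows):
    treated = [y for _, t, y in rows if t]
    control = [y for _, t, y in rows if not t]
    return mean(treated) - mean(control)

# Naive contrast ignores Z and absorbs its effect into the estimate.
naive = diff_in_means(data)

# Confounder control by stratification: contrast within levels of Z,
# then average the strata weighted by their size.
strata = [[r for r in data if r[0] == z] for z in (False, True)]
adjusted = sum(diff_in_means(s) * len(s) for s in strata) / len(data)
```

Here the naive estimate lands well above 2.0 (treated units are disproportionately drawn from the high-Z stratum), while the stratified estimate sits near the true effect. Regression adjustment and propensity score matching generalize this logic to many covariates; instrument-based methods instead exploit a source of variation in treatment that is unrelated to the confounders.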
5. Thoracoscopy vs. thoracotomy for the repair of esophageal atresia and tracheoesophageal fistula: a systematic review and meta-analysis. Pediatr Surg Int 2019; 35:1167-1184. [PMID: 31359222] [DOI: 10.1007/s00383-019-04527-9]
Abstract
Esophageal atresia (EA) and tracheoesophageal fistula (TEF) require emergency surgery in the neonatal period to prevent aspiration and respiratory compromise. Surgery was once exclusively performed via thoracotomy; however, there has been a push to correct this anomaly thoracoscopically. In this study, we compare intra- and post-operative outcomes of both techniques. A systematic review and meta-analysis was performed. A search strategy was developed in consultation with a librarian and executed in CENTRAL, MEDLINE, and EMBASE from inception until January 2017. Two independent researchers screened eligible articles at the title and abstract level. Full texts of potentially relevant articles were then screened again. Relevant data were extracted and analyzed. 48 articles were included. A meta-analysis found no statistically significant difference between thoracoscopy and thoracotomy in our primary outcome of total complication rate (OR 0.98, [0.29, 3.24], p = 0.97). Likewise, there were no statistically significant differences in anastomotic leak rates (OR 1.55, [0.72, 3.34], p = 0.26), formation of esophageal strictures following anastomoses that required one or more dilations (OR 1.92, [0.93, 3.98], p = 0.08), or need for fundoplication following EA repair (OR 1.22, [0.39, 3.75], p = 0.73); the exception was operative time (MD 30.68, [4.35, 57.01], p = 0.02). Considering results from thoracoscopy alone, overall mortality was low at 3.2% and in most cases was due to an associated anomaly rather than the EA repair. Thoracoscopic repair of EA/TEF is safe, with no statistically significant differences in morbidity when compared with an open approach. Level of evidence: 3a (systematic review of case-control studies).
6. Reporting randomised trials of social and psychological interventions: the CONSORT-SPI 2018 Extension. Trials 2018; 19:407. [PMID: 30060754] [PMCID: PMC6066921] [DOI: 10.1186/s13063-018-2733-1]
Abstract
BACKGROUND Randomised controlled trials (RCTs) are used to evaluate social and psychological interventions and inform policy decisions about them. Accurate, complete, and transparent reports of social and psychological intervention RCTs are essential for understanding their design, conduct, results, and the implications of the findings. However, the reporting of RCTs of social and psychological interventions remains suboptimal. The CONSORT Statement has improved the reporting of RCTs in biomedicine. A similar high-quality guideline is needed for the behavioural and social sciences. Our objective was to develop an official extension of the Consolidated Standards of Reporting Trials 2010 Statement (CONSORT 2010) for reporting RCTs of social and psychological interventions: CONSORT-SPI 2018. METHODS We followed best practices in developing the reporting guideline extension. First, we conducted a systematic review of existing reporting guidelines. We then conducted an online Delphi process including 384 international participants. In March 2014, we held a 3-day consensus meeting of 31 experts to determine the content of a checklist specifically targeting social and psychological intervention RCTs. Experts discussed previous research and methodological issues of particular relevance to social and psychological intervention RCTs. They then voted on proposed modifications or extensions of items from CONSORT 2010. RESULTS The CONSORT-SPI 2018 checklist extends 9 of the 25 items from CONSORT 2010: background and objectives, trial design, participants, interventions, statistical methods, participant flow, baseline data, outcomes and estimation, and funding. In addition, participants added a new item related to stakeholder involvement, and they modified aspects of the flow diagram related to participant recruitment and retention. CONCLUSIONS Authors should use CONSORT-SPI 2018 to improve reporting of their social and psychological intervention RCTs. 
Journals should revise editorial policies and procedures to require use of reporting guidelines by authors and peer reviewers to produce manuscripts that allow readers to appraise study quality, evaluate the applicability of findings to their contexts, and replicate effective interventions.
7. A scoping review and survey provides the rationale, perceptions, and preferences for the integration of randomized and nonrandomized studies in evidence syntheses and GRADE assessments. J Clin Epidemiol 2018; 98:33-40. [DOI: 10.1016/j.jclinepi.2018.01.010]
8. Quasi-experimental study designs series–paper 12: strengthening global capacity for evidence synthesis of quasi-experimental health systems research. J Clin Epidemiol 2017; 89:98-105. [DOI: 10.1016/j.jclinepi.2016.03.034]
9. The Effectiveness of Policy Interventions for School Bullying: A Systematic Review. J Soc Soc Work Res 2017; 8:45-69. [PMID: 28344750] [PMCID: PMC5363950] [DOI: 10.1086/690565]
Abstract
OBJECTIVE Bullying threatens the mental and educational well-being of students. Although anti-bullying policies are prevalent, little is known about their effectiveness. This systematic review evaluates the methodological characteristics and summarizes substantive findings of studies examining the effectiveness of school bullying policies. METHOD Searches of 11 bibliographic databases yielded 489 studies completed since January 1, 1995. Following duplicate removal and double-independent screening based on a priori inclusion criteria, 21 studies were included for review. RESULTS Substantially more educators perceive anti-bullying policies to be effective than ineffective. Whereas several studies show that the presence or quality of policies is associated with lower rates of bullying among students, other studies found no such associations between policy presence or quality and reductions in bullying. Consistently across studies, this review found that anti-bullying policies that enumerated protections based on sexual orientation and gender identity were associated with better protection of lesbian, gay, bisexual, transgender, and queer (LGBTQ) students. Specifically, LGBTQ students in schools with such policies reported less harassment and more frequent and effective intervention by school personnel. Findings are mixed regarding the relationship between having an anti-bullying policy and educators' responsiveness to general bullying. CONCLUSIONS Anti-bullying policies might be effective at reducing bullying if their content is based on evidence and sound theory and if they are implemented with a high level of fidelity. More research is needed to improve on limitations among extant studies.
10. Adding non-randomised studies to a Cochrane review brings complementary information for healthcare stakeholders: an augmented systematic review and meta-analysis. BMC Health Serv Res 2016; 16:598. [PMID: 27769236] [PMCID: PMC5073845] [DOI: 10.1186/s12913-016-1816-5]
Abstract
Background To reduce the burden of asthma, chronic disease management (CDM) programmes have been widely implemented and evaluated. Reviews including randomised controlled trials (RCTs) suggest that CDM programmes for asthma are effective. Other study designs are however often used for pragmatic reasons, but excluded from these reviews because of their design. We aimed to examine what complementary information could be retrieved from the addition of non-randomised studies to the studies included in a published Cochrane review on asthma CDM programmes, for healthcare stakeholders involved in the development, implementation, conduct or long-term sustainability of such programmes. Methods Extending a previously published Cochrane review, we performed a systematic review (augmented review) including any type of study designs instead of only those initially accepted by Cochrane and the Effective Practice and Organization of Care Review group. After double data selection and extraction, we compared study and intervention characteristics, assessed methodological quality and ran meta-analyses, by study design. Results We added 37 studies to the 20 studies included in the Cochrane review. The applicability of results was increased because of the larger variety of settings and asthma population considered. Also, adding non-randomised studies provided new evidence of improvements associated with CDM intervention (i.e. healthcare utilisation, days off work, use of action plan). Finally, evidence of CDM effectiveness in the added studies was consistent with the Cochrane review in terms of direction of effects. Conclusions The evidence of this augmented review is applicable to a broader set of patients and settings than those in the original Cochrane review. It also strengthens the message that CDM programmes have a beneficial effect on quality of life and disease severity, meaningful outcomes for the everyday life of patients with asthma. 
Despite the moderate to low methodological quality of all included studies, which calls for caution in interpreting results and for improvements in CDM evaluation methods and reporting, the inclusion of a broader set of study designs in systematic reviews of complex interventions, such as chronic disease management, is likely to be of high value and interest to patients, policymakers and other healthcare stakeholders.
11. Diverse criteria and methods are used to compare treatment effect estimates: a scoping review. J Clin Epidemiol 2016; 75:29-39. [PMID: 26891950] [DOI: 10.1016/j.jclinepi.2016.02.001]
Abstract
OBJECTIVES To determine what criteria researchers use to assess whether the estimates of effect of an intervention on a dichotomous outcome are different when obtained using different study designs. STUDY DESIGN AND SETTING Scoping review of the literature. We included studies of dichotomous outcomes in which authors compared the estimates of effects from different study designs. We performed searches in electronic databases and in the list of references of relevant studies. Two reviewers independently selected studies and abstracted data. We created a list of the criteria used to compare estimates of effects between study designs, described their main features, and classified them using a clinical perspective. RESULTS We included 26 studies, from which we identified 24 criteria. Most of the studies focused on comparing estimates from observational studies and randomized controlled trials (n = 19). The most common criteria aimed to determine whether there was a difference or not (n = 18), provided guidance for such a judgment (n = 16), and were based on the point estimates (n = 11). We judged 14 criteria to be appropriate and classified them as either statistically related or clinically related. CONCLUSION We found that diverse criteria are used to compare effect estimates between study designs. Familiarity with these would aid in the interpretation of results from different studies regarding the same question.
12. What is the effect of reduced street lighting on crime and road traffic injuries at night? A mixed-methods study. Public Health Res 2015. [DOI: 10.3310/phr03110]
Abstract
Background: Some local authorities have reduced street lighting at night to save energy, but little is known about impacts on public health or about public concerns about impacts on well-being.
Aim: To evaluate the effect of reduced street lighting on crime and road traffic injuries.
Design: A mixed-methods study comprising a rapid appraisal, a controlled interrupted time series analysis and a cost-benefit analysis (CBA).
Setting: England and Wales.
Target population: Residents and workers in eight case study areas; road traffic casualties and victims of crime.
Interventions evaluated: Switch-off (i.e. lights permanently turned off), part-night lighting (e.g. lights switched off between 12 a.m. and 6 a.m.), dimming lights, and white lights/light-emitting diodes (LEDs).
Outcomes: Public views about implications for well-being; road traffic injury data (STATS19: http://data.gov.uk/dataset/road-accidents-safety-data) obtained for the period 2000-13; crime data (Police.uk: data.police.uk) obtained for the period December 2010-December 2013. Detailed crime data were obtained from one police force for a methodological study of the spatial level at which Police.uk data are valid for analysis.
Statistical methods: Road traffic collisions were analysed at street segment level. Regression models were used to estimate changes in daytime and night-time collision rates associated with lighting interventions. The ratio of night-time and daytime changes was considered the best estimate of change in night-time collisions following each lighting intervention. Police.uk crime data were found to be reliable when analysed at middle super output area (MSOA) level. For crime, the analysis used the proportion of total km of road in each MSOA with each lighting intervention. Regression models controlled for yearly and monthly trends and were fitted in each geographical region and police force. Effect estimates were pooled in random-effects meta-analyses.
Results: Public concerns centred on personal security, road safety, crime, fear of crime, sleep quality and being able to see the night sky. Street lighting reductions went largely unnoticed or had only marginal impacts on well-being, but for a minority of people switch-off and part-night lighting elicited concerns about fear of the dark, modernity and local governance. Street lighting data were obtained from 62 local authorities. There was no evidence that reduced street lighting was associated with road traffic collisions at night. There was significant heterogeneity in the estimated effects on crime at police force level. Overall, there was no evidence that reduced street lighting was associated with crime. There was weak evidence for a reduction in crime associated with dimming [rate ratio (RR) 0.84, 95% confidence interval (CI) 0.70 to 1.02] and white light (RR 0.89, 95% CI 0.77 to 1.03). The CBA suggests that part-night lighting may represent a net benefit to local authorities.
Limitations: The study did not account for the impacts of other safety/crime prevention initiatives (e.g. improved road markings; closed-circuit television), and so associations may be partly attributable to these initiatives. The CBA was unable to include potentially important impacts such as fear of crime and reduced mobility.
Conclusion: This study found little evidence of harmful effects of switch-off, part-night lighting, dimming or changes to white light/LEDs on levels of road traffic collisions or crime in England and Wales. However, the public were also concerned about other health outcomes. Research is needed to understand how lighting affects opportunities for crime prevention and how these vary by context. Research is also needed on other public health impacts of light at night.
Funding: The National Institute for Health Research Public Health Research programme.
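The random-effects pooling of rate ratios described in the abstract above can be sketched as follows. The rate ratios and standard errors below are hypothetical, not the study's per-force estimates; the code implements the standard DerSimonian-Laird estimator on the log scale:

```python
import math

def pool_random_effects(rate_ratios, ses):
    """DerSimonian-Laird random-effects pooling of log rate ratios.

    rate_ratios: per-study rate ratios; ses: their log-scale standard errors.
    Returns (pooled rate ratio, 95% CI lower, 95% CI upper, tau^2).
    """
    y = [math.log(rr) for rr in rate_ratios]
    w = [1.0 / se**2 for se in ses]                       # inverse-variance weights
    fixed = sum(wi * yi for wi, yi in zip(w, y)) / sum(w)  # fixed-effect mean
    q = sum(wi * (yi - fixed) ** 2 for wi, yi in zip(w, y))  # heterogeneity statistic
    c = sum(w) - sum(wi**2 for wi in w) / sum(w)
    tau2 = max(0.0, (q - (len(y) - 1)) / c)               # between-study variance
    w_star = [1.0 / (se**2 + tau2) for se in ses]         # random-effects weights
    pooled = sum(wi * yi for wi, yi in zip(w_star, y)) / sum(w_star)
    se_pooled = math.sqrt(1.0 / sum(w_star))
    return (math.exp(pooled),
            math.exp(pooled - 1.96 * se_pooled),
            math.exp(pooled + 1.96 * se_pooled),
            tau2)

# Hypothetical per-region rate ratios for one lighting intervention.
rr, low, high, tau2 = pool_random_effects([0.70, 1.05, 0.88], [0.10, 0.08, 0.12])
```

When the estimates disagree more than their standard errors explain, tau² is positive and the pooled confidence interval widens accordingly, which is why heterogeneous per-force crime estimates like those above can yield an interval crossing 1.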
13. What is the ideal surgical approach for intra-abdominal testes? A systematic review. Pediatr Surg Int 2015; 31:327-38. [PMID: 25663531] [DOI: 10.1007/s00383-015-3676-1]
Abstract
There is controversy regarding the ideal surgical management of intra-abdominal testes (IAT) to preserve fertility; we conducted a systematic review to address this problem. We performed a comprehensive electronic search of CENTRAL, MEDLINE, EMBASE, and CINAHL from 2008 to September 2014 (the date range was limited due to an abundance of literature), as well as reference lists of included studies. Two researchers screened all studies for inclusion and assessed the quality of each relevant study using AMSTAR for systematic reviews (SRs), the Cochrane 'Risk of bias' tool for randomized controlled trials (RCTs), and MINORS for non-randomized studies. We identified two relevant SRs and 29 non-randomized studies. Due to the heterogeneity of the data, meta-analysis was not possible. Ultrasound and magnetic resonance imaging are insufficient for identification or localization of IAT; laparoscopic or surgical exploration is necessary. Primary orchiopexy is effective for low IAT, and Fowler-Stephens orchiopexy (FSO) is effective for high IAT. There is no clear benefit of one- vs. two-stage FSO, or of open vs. laparoscopic technique. Several alternative or modified techniques also show promise. RCTs are needed to confirm the validity of these findings, and to assess long-term outcomes.
14
A reanalysis of cluster randomized trials showed interrupted time-series studies were valuable in health system evaluation. J Clin Epidemiol 2015; 68:324-33. [DOI: 10.1016/j.jclinepi.2014.10.003] [Citation(s) in RCA: 58] [Impact Index Per Article: 6.4] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/26/2014] [Revised: 10/10/2014] [Accepted: 10/17/2014] [Indexed: 10/24/2022]
15
Abstract
Proponents of evidence-based medicine (EBM) have argued convincingly for applying this scientific method to medicine. However, the current methodological framework of the EBM movement has recently been called into question, especially in epidemiology and the philosophy of science. The debate has focused on whether the methodology of randomized controlled trials provides the best evidence available. This paper attempts to shift the focus of the debate by arguing that clinical reasoning involves a patchwork of evidential approaches and that the emphasis on evidence hierarchies of methodology fails to lend credence to the common practice of corroboration in medicine. I argue that the strength of evidence lies in the evidence itself, and not the methodology used to obtain that evidence. Ultimately, when it comes to evaluating the effectiveness of medical interventions, it is the evidence itself, rather than the methodology by which it was obtained, that should establish the strength of the evidence.
16
Reporting the characteristics of the policy context for population-level alcohol interventions: a proposed 'Transparent Reporting of Alcohol Intervention ContExts' (TRAICE) checklist. Drug Alcohol Rev 2014; 33:596-603. [PMID: 25271563 PMCID: PMC4278551 DOI: 10.1111/dar.12201] [Citation(s) in RCA: 4] [Impact Index Per Article: 0.4] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/01/2014] [Accepted: 08/10/2014] [Indexed: 11/26/2022]
Abstract
ISSUES Effectiveness of alcohol policy interventions varies across times and places. The circumstances under which effective polices can be successfully transferred between contexts are typically unexplored with little attention given to developing reporting requirements that would facilitate systematic investigation. APPROACH Using purposive sampling and expert elicitation methods, we identified context-related factors impacting on the effectiveness of population-level alcohol policies. We then drew on previous characterisations of alcohol policy contexts and methodological-reporting checklists to design a new checklist for reporting contextual information in evaluation studies. KEY FINDINGS Six context factor domains were identified: (i) baseline alcohol consumption, norms and harm rates; (ii) baseline affordability and availability; (iii) social, microeconomic and demographic contexts; (iv) macroeconomic context; (v) market context; and (vi) wider policy, political and media context. The checklist specifies information, typically available in national or international reports, to be reported in each domain. IMPLICATIONS The checklist can facilitate evidence synthesis by providing: (i) a mechanism for systematic and more consistent reporting of contextual data for meta-regression and realist evaluations; (ii) information for policy-makers on differences between their context and contexts of evaluations; and (iii) an evidence base for adjusting prospective policy simulation models to account for policy context. CONCLUSIONS Our proposed checklist provides a tool for gaining better understanding of the influence of policy context on intervention effectiveness. Further work is required to rationalise and aggregate checklists across interventions types to make such checklists practical for use by journals and to improve reporting of important qualitative contextual data.
17
Abstract
The concept of meta-epidemiology was introduced in response to the methodological limitations of systematic reviews of intervention trials. The paradigm of meta-epidemiology has since shifted from a statistical method to a broader methodology for closing gaps between evidence and practice. Its main interest is to control potential biases in previous quantitative systematic reviews and to draw appropriate evidence for establishing evidence-based guidelines. More recently, network meta-epidemiology has been suggested to overcome some limitations of meta-epidemiology. To promote meta-epidemiologic studies, risk-of-bias tools and reporting guidelines such as the Consolidated Standards of Reporting Trials (CONSORT) should be implemented.
18
Healthcare outcomes assessed with observational study designs compared with those assessed in randomized trials. Cochrane Database Syst Rev 2014; 2014:MR000034. [PMID: 24782322 PMCID: PMC8191367 DOI: 10.1002/14651858.mr000034.pub2] [Citation(s) in RCA: 229] [Impact Index Per Article: 22.9] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Indexed: 12/21/2022]
Abstract
BACKGROUND Researchers and organizations often use evidence from randomized controlled trials (RCTs) to determine the efficacy of a treatment or intervention under ideal conditions. Studies of observational designs are often used to measure the effectiveness of an intervention in 'real world' scenarios. Numerous study designs and modifications of existing designs, including both randomized and observational, are used for comparative effectiveness research in an attempt to give an unbiased estimate of whether one treatment is more effective or safer than another for a particular population. A systematic analysis of study design features, risk of bias, parameter interpretation, and effect size for all types of randomized and non-experimental observational studies is needed to identify specific differences in design types and potential biases. This review summarizes the results of methodological reviews that compare the outcomes of observational studies with randomized trials addressing the same question, as well as methodological reviews that compare the outcomes of different types of observational studies. OBJECTIVES To assess the impact of study design (including RCTs versus observational study designs) on the effect measures estimated; to explore methodological variables that might explain any differences identified; and to identify gaps in the existing research comparing study designs. SEARCH METHODS We searched seven electronic databases, from January 1990 to December 2013. Along with MeSH terms and relevant keywords, we used the sensitivity-specificity balanced version of a validated strategy to identify reviews in PubMed, augmented with one term ("review" in article titles) so that it better targeted narrative reviews. No language restrictions were applied.
SELECTION CRITERIA We examined systematic reviews that were designed as methodological reviews to compare quantitative effect size estimates measuring the efficacy or effectiveness of interventions tested in trials with those tested in observational studies. Comparisons included RCTs versus observational studies (including retrospective cohorts, prospective cohorts, case-control designs, and cross-sectional designs). Reviews were not eligible if they compared randomized trials with other studies that had used some form of concurrent allocation. DATA COLLECTION AND ANALYSIS In general, outcome measures included relative risks or rate ratios (RR), odds ratios (OR), or hazard ratios (HR). Using results from observational studies as the reference group, we examined the published estimates to see whether there was a relatively larger or smaller effect in the ratio of odds ratios (ROR). Within each identified review, if an estimate comparing results from observational studies with RCTs was not provided, we pooled the estimates for observational studies and RCTs. Then, we estimated the ratio of ratios (risk ratio or odds ratio) for each identified review, using observational studies as the reference category. Across all reviews, we synthesized these ratios to obtain a pooled ROR comparing results from RCTs with results from observational studies. MAIN RESULTS Our initial search yielded 4406 unique references. Fifteen reviews met our inclusion criteria, 14 of which were included in the quantitative analysis. The included reviews analyzed data from 1583 meta-analyses that covered 228 different medical conditions.
The mean number of included studies per paper was 178 (range 19 to 530). Eleven (73%) reviews had low risk of bias for explicit criteria for study selection, nine (60%) had low risk of bias for investigators' agreement on study selection, five (33%) included a complete sample of studies, seven (47%) assessed the risk of bias of their included studies, seven (47%) controlled for methodological differences between studies, eight (53%) controlled for heterogeneity among studies, nine (60%) analyzed similar outcome measures, and four (27%) were judged to be at low risk of reporting bias. Our primary quantitative analysis, including 14 reviews, showed that the pooled ROR comparing effects from RCTs with effects from observational studies was 1.08 (95% confidence interval (CI) 0.96 to 1.22). Of the 14 reviews included in this analysis, 11 (79%) found no significant difference between observational studies and RCTs. One review suggested observational studies had larger effects of interest, and two reviews suggested observational studies had smaller effects of interest. Similar to the effect across all included reviews, effects from reviews comparing RCTs with cohort studies had a pooled ROR of 1.04 (95% CI 0.89 to 1.21), with substantial heterogeneity (I² = 68%). Three reviews compared effects of RCTs and case-control designs (pooled ROR 1.11, 95% CI 0.91 to 1.35). No significant differences in point estimates were noted across heterogeneity, pharmacological intervention, or propensity score adjustment subgroups. No reviews had compared RCTs with observational studies that used two of the most common causal inference methods, instrumental variables and marginal structural models. AUTHORS' CONCLUSIONS Our results across all reviews (pooled ROR 1.08) are very similar to results reported by similarly conducted reviews.
As such, we have reached similar conclusions; on average, there is little evidence for significant effect estimate differences between observational studies and RCTs, regardless of specific observational study design, heterogeneity, or inclusion of studies of pharmacological interventions. Factors other than study design per se need to be considered when exploring reasons for a lack of agreement between results of RCTs and observational studies. Our results underscore that it is important for review authors to consider not only study design, but the level of heterogeneity in meta-analyses of RCTs or observational studies. A better understanding of how these factors influence study effects might yield estimates reflective of true effectiveness.
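The ratio-of-odds-ratios pooling described in the review above can be illustrated with a small sketch. This is a generic DerSimonian-Laird random-effects pool of log-RORs, not the review's actual code, and the per-review estimates and standard errors below are hypothetical values chosen for illustration:

```python
import math

def pooled_ror(log_rors, ses):
    """Pool per-review log(ROR) estimates with DerSimonian-Laird
    random effects; returns the pooled ROR and a 95% CI."""
    w = [1 / se ** 2 for se in ses]                      # inverse-variance weights
    fixed = sum(wi * y for wi, y in zip(w, log_rors)) / sum(w)
    q = sum(wi * (y - fixed) ** 2 for wi, y in zip(w, log_rors))
    df = len(log_rors) - 1
    c = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
    tau2 = max(0.0, (q - df) / c)                        # between-review variance
    w_re = [1 / (se ** 2 + tau2) for se in ses]          # random-effects weights
    mu = sum(wi * y for wi, y in zip(w_re, log_rors)) / sum(w_re)
    se_mu = math.sqrt(1 / sum(w_re))
    return math.exp(mu), math.exp(mu - 1.96 * se_mu), math.exp(mu + 1.96 * se_mu)

# Hypothetical log-RORs (RCTs vs. observational studies) and standard errors
est, lo, hi = pooled_ror([0.05, 0.12, -0.02, 0.10], [0.06, 0.09, 0.07, 0.11])
```

A pooled ROR near 1 with a CI spanning 1, as in the review's result of 1.08 (0.96 to 1.22), indicates no systematic difference between the two design types.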
19
Community engagement to reduce inequalities in health: a systematic review, meta-analysis and economic analysis. PUBLIC HEALTH RESEARCH 2013. [DOI: 10.3310/phr01040] [Citation(s) in RCA: 156] [Impact Index Per Article: 14.2] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/22/2022] Open
Abstract
Background: Community engagement has been advanced as a promising way of improving health and reducing health inequalities; however, the approach is not yet supported by a strong evidence base. Objectives: To undertake a multimethod systematic review which builds on the evidence that underpins the current UK guidance on community engagement; to identify theoretical models underpinning community engagement; to explore mechanisms and contexts through which communities are engaged; to identify community engagement approaches that are effective in reducing health inequalities, under what circumstances and for whom; and to determine the processes and costs associated with their implementation. Data sources: Databases including the Cochrane Database of Systematic Reviews (CDSR), The Campbell Library, the Database of Abstracts of Reviews of Effects (DARE), the Health Technology Assessment (HTA) database, the NHS Economic Evaluation Database (NHS EED) and EPPI-Centre's Trials Register of Promoting Health Interventions (TRoPHI) and Database of Promoting Health Effectiveness Reviews (DoPHER) were searched from 1990 to August 2011 for systematic reviews and primary studies. Trials evaluating community engagement interventions reporting health outcomes were included. Review methods: Study eligibility criteria: published after 1990; outcome, economic, or process evaluation; intervention relevant to community engagement; written in English; measured and reported health or community outcomes, or presented cost, resource, or implementation data; characterised study populations or reported differential impacts in terms of social determinants of health; conducted in an Organisation for Economic Co-operation and Development (OECD) country. Study appraisal: risk of bias for outcome evaluations; assessment of validity and relevance for process evaluations; comparison against an economic evaluation checklist for economic evaluations.
Synthesis methods: four synthesis approaches were adopted for the different evidence types: theoretical, quantitative, process, and economic evidence. Results: The theoretical synthesis identified key models of community engagement that are underpinned by different theories of change. Results from 131 studies included in a meta-analysis indicate that there is solid evidence that community engagement interventions have a positive impact on health behaviours, health consequences, self-efficacy and perceived social support outcomes, across various conditions. There is insufficient evidence, particularly for long-term outcomes and indirect beneficiaries, to determine whether one particular model of community engagement is likely to be more effective than any other. There are also insufficient data to test the effects on health inequalities, although there is some evidence to suggest that interventions that improve social inequalities (as measured by social support) also improve health behaviours. There is weak evidence from the effectiveness and process evaluations that certain implementation factors may affect intervention success. From the economic analysis, there is weak but inconsistent evidence that community engagement interventions are cost-effective. By combining findings across the syntheses, we produced a new conceptual framework. Limitations: Differences in the populations, intervention approaches and health outcomes made it difficult to pinpoint specific strategies for intervention effectiveness. The syntheses of process and economic evidence were limited by the small (generally not rigorous) evidence base. Conclusions: Community engagement interventions are effective across a wide range of contexts and using a variety of mechanisms. Public health initiatives should incorporate community engagement into intervention design.
Evaluations should place greater emphasis on long-term outcomes, outcomes for indirect beneficiaries, process evaluation, and reporting of cost and resource data. The theories of change identified and the newly developed conceptual framework are useful tools for researchers and practitioners. We identified trends in the evidence that could provide useful directions for future intervention design and evaluation. Funding: The National Institute for Health Research Public Health Research programme.
20
Comparison of pooled risk estimates for adverse effects from different observational study designs: methodological overview. PLoS One 2013; 8:e71813. [PMID: 23977151 PMCID: PMC3748094 DOI: 10.1371/journal.pone.0071813] [Citation(s) in RCA: 15] [Impact Index Per Article: 1.4] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/20/2013] [Accepted: 07/03/2013] [Indexed: 11/26/2022] Open
Abstract
BACKGROUND A diverse range of study designs (e.g. case-control or cohort) are used in the evaluation of adverse effects. We aimed to ascertain whether the risk estimates from meta-analyses of case-control studies differ from those of other study designs. METHODS Searches were carried out in 10 databases in addition to reference checking, contacting experts, and handsearching key journals and conference proceedings. Studies were included where a pooled relative measure of an adverse effect (odds ratio or risk ratio) from case-control studies could be directly compared with the pooled estimate for the same adverse effect arising from other types of observational studies. RESULTS We included 82 meta-analyses. Pooled estimates of harm from the different study designs had 95% confidence intervals that overlapped in 78/82 instances (95%). Of the 23 cases of discrepant findings (significant harm identified in the meta-analysis of one type of study design, but not with the other study design), 16 (70%) stemmed from significantly elevated pooled estimates from case-control studies. There was associated evidence of funnel plot asymmetry consistent with higher risk estimates from case-control studies. On average, cohort or cross-sectional studies yielded pooled odds ratios that were 0.94 (95% CI 0.88-1.00) times those from case-control studies. INTERPRETATION Empirical evidence from this overview indicates that meta-analyses of case-control studies tend to give slightly higher estimates of harm than meta-analyses of other observational studies. However, it is impossible to rule out potential confounding from differences in drug dose, duration and populations when comparing between study designs.
21
Nonrandomized studies are not always found even when selection criteria for health systems intervention reviews include them: a methodological study. J Clin Epidemiol 2013; 66:367-70. [DOI: 10.1016/j.jclinepi.2012.11.009] [Citation(s) in RCA: 7] [Impact Index Per Article: 0.6] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/01/2011] [Revised: 11/01/2012] [Accepted: 11/15/2012] [Indexed: 10/27/2022]
22
Are more observational studies being included in Cochrane Reviews? BMC Res Notes 2012; 5:570. [PMID: 23069208 PMCID: PMC3503546 DOI: 10.1186/1756-0500-5-570] [Citation(s) in RCA: 6] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/16/2012] [Accepted: 09/22/2012] [Indexed: 11/23/2022] Open
Abstract
Background Increasing the scope of an evidence-based approach to areas outside healthcare has renewed the importance of a long-standing discussion on randomised versus observational study designs in evaluating the effectiveness of interventions. We investigate statistically whether an increasing recognition of the role of certain nonrandomised studies to support or generalize the results of randomised controlled trials has had an impact on the actual inclusion criteria applied in Cochrane reviews. Methods We conduct an on-line search of the Cochrane Database of Systematic Reviews (CDSR) and divide all Cochrane reviews according to their design inclusion criterion: (A) RCTs only or (B) RCTs and (some subset of) observational studies. We test statistically whether a shift in the proportion of category B reviews has occurred by comparing reviews published before 2008 with reviews published during 2008/09. Results We find that the proportion of Cochrane reviews choosing a broader inclusion criterion has increased, although by less than two percentage points. The shift is not statistically significant (P = 0.08). Conclusions There are currently insufficient data to support a hypothesis of a significant shift in favour of including observational studies, either at the aggregate level or at the level of individual Review Groups within the Cochrane Collaboration.
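The before/after comparison of proportions described above can be carried out with a standard two-proportion z-test. The sketch below is a generic implementation, not the study's actual analysis, and the counts in the example call are hypothetical:

```python
import math

def two_proportion_z(x1, n1, x2, n2):
    """Two-sided two-proportion z-test with a pooled standard error.
    Returns the z statistic and the normal-approximation p-value."""
    p1, p2 = x1 / n1, x2 / n2
    p = (x1 + x2) / (n1 + n2)                            # pooled proportion
    se = math.sqrt(p * (1 - p) * (1 / n1 + 1 / n2))
    z = (p2 - p1) / se
    pval = math.erfc(abs(z) / math.sqrt(2))              # two-sided tail probability
    return z, pval

# Hypothetical counts: category B reviews before 2008 vs. during 2008/09
z, pval = two_proportion_z(12, 200, 19, 200)
```

With a shift of under two percentage points, as the study reports, the resulting p-value sits above conventional significance thresholds.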
23
Abstract
This paper argues that the current proliferation of types of systematic reviews creates challenges for the terminology for describing such reviews. Terminology is necessary for planning, describing, appraising, and using reviews, building infrastructure to enable the conduct and use of reviews, and for further developing review methodology. There is insufficient consensus on terminology for a typology of reviews to be produced, and any such attempt is likely to be limited by the overlapping nature of the dimensions along which reviews vary. It is therefore proposed that the most useful strategy for the field is to develop terminology for the main dimensions of variation. Three such main dimensions are proposed: (1) aims and approaches (including what the review is aiming to achieve, the theoretical and ideological assumptions, and the use of theory and logics of aggregation and configuration in synthesis); (2) structure and components (including the number and type of mapping and synthesis components and how they relate); and (3) breadth and depth and the extent of 'work done' in addressing a research issue (including the breadth of review questions, the detail with which they are addressed, and the extent to which the review progresses a research agenda). This then provides an overarching strategy to encompass more detailed descriptions of methodology and may lead in time to a more overarching system of terminology for systematic reviews.
24
Study-design selection criteria in systematic reviews of effectiveness of health systems interventions and reforms: A meta-review. Health Policy 2012; 104:206-14. [DOI: 10.1016/j.healthpol.2011.12.007] [Citation(s) in RCA: 19] [Impact Index Per Article: 1.6] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/18/2011] [Revised: 11/28/2011] [Accepted: 12/15/2011] [Indexed: 11/16/2022]
25
Meta-analyses of adverse effects data derived from randomised controlled trials as compared to observational studies: methodological overview. PLoS Med 2011; 8:e1001026. [PMID: 21559325 PMCID: PMC3086872 DOI: 10.1371/journal.pmed.1001026] [Citation(s) in RCA: 194] [Impact Index Per Article: 14.9] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 10/06/2010] [Accepted: 03/15/2011] [Indexed: 12/02/2022] Open
Abstract
BACKGROUND There is considerable debate as to the relative merits of using randomised controlled trial (RCT) data as opposed to observational data in systematic reviews of adverse effects. This meta-analysis of meta-analyses aimed to assess the level of agreement or disagreement in the estimates of harm derived from meta-analysis of RCTs as compared to meta-analysis of observational studies. METHODS AND FINDINGS Searches were carried out in ten databases in addition to reference checking, contacting experts, citation searches, and hand-searching key journals, conference proceedings, and Web sites. Studies were included where a pooled relative measure of an adverse effect (odds ratio or risk ratio) from RCTs could be directly compared, using the ratio of odds ratios, with the pooled estimate for the same adverse effect arising from observational studies. Nineteen studies, yielding 58 meta-analyses, were identified for inclusion. The pooled ratio of odds ratios of RCTs compared to observational studies was estimated to be 1.03 (95% confidence interval 0.93-1.15). There was less discrepancy with larger studies. The symmetric funnel plot suggests that there is no consistent difference between risk estimates from meta-analysis of RCT data and those from meta-analysis of observational studies. In almost all instances, the estimates of harm from meta-analyses of the different study designs had 95% confidence intervals that overlapped (54/58, 93%). In terms of statistical significance, in nearly two-thirds (37/58, 64%), the results agreed (both studies showing a significant increase or significant decrease or both showing no significant difference). In only one meta-analysis about one adverse effect was there opposing statistical significance. 
CONCLUSIONS Empirical evidence from this overview indicates that there is no difference on average in the risk estimate of adverse effects of an intervention derived from meta-analyses of RCTs and meta-analyses of observational studies. This suggests that systematic reviews of adverse effects should not be restricted to specific study types.