1. van Aert RCM, Wicherts JM. Correcting for outcome reporting bias in a meta-analysis: A meta-regression approach. Behav Res Methods 2024; 56:1994-2012. PMID: 37540470; PMCID: PMC10991008; DOI: 10.3758/s13428-023-02132-2.
Abstract
Outcome reporting bias (ORB) refers to the biasing effect caused by researchers selectively reporting outcomes within a study based on their statistical significance. ORB leads to inflated effect size estimates in a meta-analysis if, for example, only the outcome with the largest effect size is reported. We propose a new method (CORB) to correct for ORB that includes an estimate of the variability of the outcomes' effect sizes as a moderator in a meta-regression model. An estimate of this variability can be computed by assuming a correlation among the outcomes. Results of a Monte Carlo simulation study showed that the effect size in meta-analyses may be severely overestimated without correcting for ORB. Estimates of CORB are close to the true effect size when the overestimation caused by ORB is largest. Applying the method to a meta-analysis on the effect of playing violent video games on aggression showed that the effect size estimate decreased when correcting for ORB. We recommend routinely applying methods to correct for ORB in any meta-analysis, and we provide annotated R code and functions to help researchers apply the CORB method.
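The moderator idea described in this abstract lends itself to a small numerical illustration. The sketch below simulates studies that report only the largest of several correlated outcomes, then fits an inverse-variance weighted meta-regression with an estimate of the outcomes' effect-size variability as moderator; the intercept (the predicted effect at zero variability) serves as the corrected estimate. This is a rough sketch under assumed data (true effect, correlation, sample sizes, selection rule), not the authors' R implementation.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical meta-analytic data: each of k studies measures m correlated
# outcomes but reports only the largest effect (outcome reporting bias).
k, m, rho, true_effect = 500, 5, 0.5, 0.2
y, se, mod = np.empty(k), np.empty(k), np.empty(k)
for i in range(k):
    n = rng.integers(40, 200)            # per-group sample size (assumed)
    se[i] = np.sqrt(2.0 / n)             # rough SE of a standardized mean difference
    # Equicorrelated covariance among the m outcome effect sizes.
    cov = se[i] ** 2 * (rho * np.ones((m, m)) + (1 - rho) * np.eye(m))
    effects = rng.multivariate_normal(np.full(m, true_effect), cov)
    y[i] = effects.max()                 # only the largest outcome is "reported"
    # Moderator: estimated SD of the outcomes' effect sizes, computable
    # from the reported SE under the assumed correlation rho.
    mod[i] = np.sqrt(1 - rho) * se[i]

# Naive inverse-variance pooled estimate (ignores ORB, so it is inflated).
naive = np.average(y, weights=1 / se**2)

# Weighted least-squares meta-regression: y = b0 + b1 * moderator.
# b0 is the bias-corrected estimate.
X = np.column_stack([np.ones(k), mod])
W = np.diag(1 / se**2)
b = np.linalg.solve(X.T @ W @ X, X.T @ W @ y)
corrected = b[0]
print(f"naive {naive:.3f}  corrected {corrected:.3f}  true {true_effect}")
```

The correction works here because the expected maximum of equicorrelated normal outcomes is linear in their standard deviation, so extrapolating the regression line to zero variability recovers the unselected mean.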
Affiliation(s)
- Robbie C M van Aert
- Department of Methodology and Statistics, Tilburg University, P.O. Box 90153, 5000 LE, Tilburg, The Netherlands.
- Jelte M Wicherts
- Department of Methodology and Statistics, Tilburg University, P.O. Box 90153, 5000 LE, Tilburg, The Netherlands
2. Wiedemann PM, Lohmann M, Böl GF, Freudenstein F. Eliminating the effects of reporting bias on risk perception. Sci Total Environ 2023; 874:162304. PMID: 36805069; DOI: 10.1016/j.scitotenv.2023.162304.
Abstract
Taking the public discourse on health risks of aluminum in antiperspirants as an example, we conducted a randomized controlled study with repeated measurements to examine how selective reporting of risk information affects risk perception and trust in risk information. First, the study varied the scope of the information that subjects received (selective vs. complete). The selective information highlighted that a health risk exists; the complete information, which considered the full range of studies, indicated the opposite. A second variation concerned the facticity of the hazardous agent mentioned in the risk information (a reference to either an actual or a fictitious agent). Moreover, the selectively informed subjects received the complete information after the effects of the selective information were measured. Four risk perception constructs were chosen as dependent variables, differing on two dimensions (affective vs. cognitive, and personal risk vs. risk for others). In addition, subjects' trust in the given risk information was measured. The study reveals that presenting selective information amplifies risk perceptions. The effect was observed irrespective of whether the hazardous agent mentioned in the risk information was actual or fictitious. When subjects who first received the selective information obtained the complete information, indicating no elevated risk, risk perceptions decreased. However, the analysis also indicates that corrective information (indicating no risk) is trusted less than selective information that points to health risks. Furthermore, proper toxicological understanding, i.e., taking the dose-response relationship into account, supports the effect of corrective information on risk perceptions.
Affiliation(s)
- P M Wiedemann
- Department of Epidemiology and Preventive Medicine, School of Public Health and Preventive Medicine, Faculty of Medicine Nursing and Health Sciences, Monash University, Melbourne, VIC, Australia
- M Lohmann
- Department of Risk Communication, German Federal Institute for Risk Assessment, Berlin, Germany
- G-F Böl
- Department of Risk Communication, German Federal Institute for Risk Assessment, Berlin, Germany
- F Freudenstein
- Department of Epidemiology and Preventive Medicine, School of Public Health and Preventive Medicine, Faculty of Medicine Nursing and Health Sciences, Monash University, Melbourne, VIC, Australia; Faculty of Social Work, Health and Nursing, Ravensburg-Weingarten University of Applied Sciences, Weingarten, Germany
3. Littell JH, Gorman DM. The Campbell Collaboration's systematic review of school-based anti-bullying interventions does not meet mandatory methodological standards. Syst Rev 2022; 11:145. PMID: 35851418; PMCID: PMC9290269; DOI: 10.1186/s13643-022-01998-1.
Abstract
BACKGROUND Many published reviews do not meet the widely accepted PRISMA standards for systematic reviews and meta-analyses. Campbell Collaboration and Cochrane reviews are expected to meet even more rigorous standards, but their adherence to these standards is uneven. For example, a newly updated Campbell systematic review of school-based anti-bullying interventions does not appear to meet many of the Campbell Collaboration's mandatory methodological standards. ISSUES In this commentary, we document methodological problems in the Campbell Collaboration's new school-based anti-bullying interventions review, including (1) unexplained deviations from the protocol; (2) inadequate documentation of search strategies; (3) inconsistent reports on the number of included studies; (4) undocumented risk of bias ratings; (5) assessments of selective outcome reporting bias that are not transparent, not replicable, and appear to systematically underestimate risk of bias; (6) unreliable assessments of risk of publication bias; (7) use of a composite scale that conflates distinct risks of bias; and (8) failure to consider issues related to the strength of the evidence and risks of bias in interpreting results and drawing conclusions. Readers who are unaware of these problems may place more confidence in this review than is warranted. Campbell Collaboration editors declined to publish our comments and declined to issue a public statement of concern about this review. CONCLUSIONS Systematic reviews are expected to use transparent methods and follow relevant methodological standards. Readers should be concerned when these expectations are not met, because transparency and rigor enhance the trustworthiness of results and conclusions. In the tradition of Donald T. Campbell, there is a need for more public debate about the methods and conclusions of systematic reviews, and greater clarity regarding applications of (and adherence to) published standards for systematic reviews.
Affiliation(s)
- Julia H Littell
- Graduate School of Social Work and Social Research, Bryn Mawr College, Bryn Mawr, PA, USA.
- Dennis M Gorman
- Department of Epidemiology & Biostatistics, School of Public Health, Texas A&M University, College Station, TX, USA
4. Vrljičak Davidović N, Komić L, Mešin I, Kotarac M, Okmažić D, Franić T. Registry versus publication: discrepancy of primary outcomes and possible outcome reporting bias in child and adolescent mental health. Eur Child Adolesc Psychiatry 2022; 31:757-769. PMID: 33459886; DOI: 10.1007/s00787-020-01710-5.
Abstract
Outcome reporting bias is one of the fundamental forms of publication bias: it implies publishing only outcomes that have positive results. The aim of this observational study was to explore discrepancies in primary outcomes between a registry of clinical trials and the corresponding publications, since such discrepancies can indicate outcome reporting bias in child mental health. Data were extracted from completed interventional clinical trials in the ClinicalTrials.gov registry and its Archive site. Trials were registered under the "Behaviours and Mental Disorders" category and conducted on underage participants (0-17 years). Their primary outcomes were compared to those published in publications that stated a corresponding NCT number in the text. Sixteen percent of trials did not have the minimum information on the primary outcome stated in the registry (neither the measure used nor the measurement time points); 38.9% of trials had the minimum information stated to describe the primary outcome, while only 3.3% of trials had all the necessary elements stated in the registry. Most of the publications in our sample had positive results (66.4%). Half of the trials registered before completion had non-matching primary outcomes in the registry and publication; 85.4% of trials with non-matching outcomes indicated possible outcome reporting bias for some of the primary outcomes. Middle-sized trials and industry-funded trials were associated with higher quality of primary outcome registration. Industry funding was associated with positive findings in the publication. Non-industry funding proved to be the only significant predictor of discrepancy between registered and published primary outcomes, and of possible outcome reporting bias. Journal impact factor was not related to any of the outcome measures. The main limitation of the study is that it primarily offers an insight into the discrepancy between registered and published outcomes. The methodology does not provide access to the results of unpublished outcomes; therefore, it was not possible to determine the presence of the bias with sufficient certainty in a large number of trials. Further research should be done with improved methodology and additional data.
Affiliation(s)
- Luka Komić
- School of Medicine, University of Split, Šoltanska 2, 21000, Split, Croatia
- Ivana Mešin
- School of Medicine, University of Split, Šoltanska 2, 21000, Split, Croatia
- Mihaela Kotarac
- School of Medicine, University of Split, Šoltanska 2, 21000, Split, Croatia
- Donald Okmažić
- School of Medicine, University of Split, Šoltanska 2, 21000, Split, Croatia
- Tomislav Franić
- School of Medicine, University of Split, Šoltanska 2, 21000, Split, Croatia; Department of Psychiatry, Clinical Hospital Centre Split, Spinčićeva 1, 21000, Split, Croatia
5. Jackson JL, Balk EM, Hyun N, Kuriyama A. Approaches to Assessing and Adjusting for Selective Outcome Reporting in Meta-analysis. J Gen Intern Med 2022; 37:1247-53. PMID: 34669145; DOI: 10.1007/s11606-021-07135-3.
Abstract
BACKGROUND Selective reporting or non-reporting of study outcomes results in outcome reporting bias. OBJECTIVE We sought to develop and assess tools for detecting and adjusting for outcome reporting bias. DESIGN Using data from a previously published systematic review, we abstracted whether outcomes were reported as collected, whether outcomes were statistically significant, and whether statistically significant outcomes were more likely to be reported. We proposed and tested a model to adjust for unreported outcomes and compared our model to three other methods (Copas, Frosi, trim-and-fill). Our approach assumes that unreported outcomes had a null intervention effect, with variance imputed based on the published outcomes. We further compared our approach to these models using simulation, varying the level of missing data and study sizes. RESULTS There were 286 outcomes reported as collected in the 47 included trials: 142 (48%) had the data provided and 144 (52%) did not. Reported outcomes were more likely to be statistically significant than outcomes that were collected but for which either no data or only non-significance was reported (RR, 2.4; 95% CI, 1.9 to 3.0). Our model and the Copas model provided similar decreases in the pooled effect sizes in both the meta-analytic data and the simulation studies. The Frosi and trim-and-fill methods performed poorly. LIMITATIONS Single intervention for a single disease with only randomized controlled trials; the approach may overestimate the impact of outcome reporting bias. CONCLUSION There was evidence of selective outcome reporting: statistically significant outcomes were more likely to be published than non-significant ones. Our simple approach provided a quick estimate of the impact of unreported outcomes on the estimated effect and could be used as a quick assessment of their potential influence.
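The adjustment described in this abstract, treating unreported outcomes as null effects with a variance imputed from the published ones, can be sketched as follows. The effect sizes, the count of unreported outcomes, and the imputation rule (here the median published variance) are illustrative assumptions, not the authors' data or exact model.

```python
import numpy as np

# Hypothetical published effects (e.g., log odds ratios) with their variances.
published = np.array([0.50, 0.42, 0.61, 0.35, 0.55])
pub_var = np.array([0.04, 0.06, 0.05, 0.03, 0.07])

n_unreported = 5  # outcomes collected but never reported (assumed count)

# Unadjusted inverse-variance fixed-effect pooled estimate.
w = 1 / pub_var
unadjusted = np.sum(w * published) / np.sum(w)

# Adjustment in the spirit of the approach above: assume each unreported
# outcome had a null effect (0.0) and impute its variance from the
# published outcomes (here, the median published variance).
imp_var = np.median(pub_var)
effects = np.concatenate([published, np.zeros(n_unreported)])
variances = np.concatenate([pub_var, np.full(n_unreported, imp_var)])
w_all = 1 / variances
adjusted = np.sum(w_all * effects) / np.sum(w_all)

print(f"unadjusted {unadjusted:.3f}  adjusted {adjusted:.3f}")
```

Because the imputed null outcomes carry weight but contribute nothing to the weighted sum, the adjusted pooled estimate is pulled toward zero, which is the intended direction of the correction when significant outcomes are over-represented.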
6. Freudenstein F, Croft RJ, Loughran SP, Zeleke BM, Wiedemann PM. Effects of selective outcome reporting on risk perception. Environ Res 2021; 196:110821. PMID: 33548295; DOI: 10.1016/j.envres.2021.110821.
Abstract
The current study aimed to investigate how selective reporting of study results indicating increased health effects influences receivers' risk perception. Using the example of the 2010 Interphone Study on mobile phone usage and cancer, an online experiment was conducted that separated respondents into two groups. One group of subjects was informed selectively about a relationship between heavy mobile phone use and an elevated risk of glioma (brain cancer) only. The other group was informed about the full results of the analyses of glioma risk by cumulative call time, which suggest that, other than for heavy users, there were no statistically significant elevated risks related to mobile phone use. The results showed that selective reporting of risk information increased risk perception compared to receiving the full information. Additionally, the selectively informed subjects revealed a stronger tendency to overgeneralize the 'elevated brain cancer risk' to all mobile phone users, although this did not extend to an overgeneralization to other electromagnetic field sources or to differences in the perception of a usage-time dependency of possible health risks. These results indicate that reporting full results is an important factor in effective risk communication.
Affiliation(s)
- F Freudenstein
- Australian Centre for Electromagnetic Bioeffects Research, Illawarra Health and Medical Research Institute, University of Wollongong, Wollongong, NSW, Australia; Department of Epidemiology and Preventive Medicine, School of Public Health and Preventive Medicine, Faculty of Medicine, Nursing and Health Sciences, Monash University, Melbourne, VIC, Australia; Centre for Population Health Research on Electromagnetic Energy, Monash University, VIC, Australia; Department of Risk Communication, German Federal Institute for Risk Assessment, Berlin, Germany.
- R J Croft
- Australian Centre for Electromagnetic Bioeffects Research, Illawarra Health and Medical Research Institute, University of Wollongong, Wollongong, NSW, Australia; Centre for Population Health Research on Electromagnetic Energy, Monash University, VIC, Australia; School of Psychology, Faculty of the Arts, Social Sciences & Humanities, University of Wollongong, Wollongong, NSW, Australia
- S P Loughran
- Australian Centre for Electromagnetic Bioeffects Research, Illawarra Health and Medical Research Institute, University of Wollongong, Wollongong, NSW, Australia; Centre for Population Health Research on Electromagnetic Energy, Monash University, VIC, Australia; School of Psychology, Faculty of the Arts, Social Sciences & Humanities, University of Wollongong, Wollongong, NSW, Australia
- B M Zeleke
- Department of Epidemiology and Preventive Medicine, School of Public Health and Preventive Medicine, Faculty of Medicine, Nursing and Health Sciences, Monash University, Melbourne, VIC, Australia; Centre for Population Health Research on Electromagnetic Energy, Monash University, VIC, Australia
- P M Wiedemann
- Australian Centre for Electromagnetic Bioeffects Research, Illawarra Health and Medical Research Institute, University of Wollongong, Wollongong, NSW, Australia; Centre for Population Health Research on Electromagnetic Energy, Monash University, VIC, Australia; School of Psychology, Faculty of the Arts, Social Sciences & Humanities, University of Wollongong, Wollongong, NSW, Australia
7. Ayorinde AA, Williams I, Mannion R, Song F, Skrybant M, Lilford RJ, Chen YF. Publication and related biases in health services research: a systematic review of empirical evidence. BMC Med Res Methodol 2020; 20:137. PMID: 32487022; PMCID: PMC7268600; DOI: 10.1186/s12874-020-01010-1.
Abstract
Background Publication and related biases (including publication bias, time-lag bias, outcome reporting bias and p-hacking) have been well documented in clinical research, but relatively little is known about their presence and extent in health services research (HSR). This paper aims to systematically review the evidence concerning publication and related bias in quantitative HSR. Methods Databases including MEDLINE, EMBASE, HMIC, CINAHL, Web of Science, Health Systems Evidence, the Cochrane EPOC Review Group and several websites were searched to July 2018. Information was obtained from: (1) methodological studies that set out to investigate publication and related biases in HSR; (2) systematic reviews of HSR topics that examined such biases as part of the review process. Relevant information was extracted from included studies by one reviewer and checked by another. Studies were appraised according to commonly accepted scientific principles due to the lack of suitable checklists. Data were synthesised narratively. Results After screening 6155 citations, four methodological studies investigating publication bias in HSR and 184 systematic reviews of HSR topics (including three comparing published with unpublished evidence) were examined. Evidence suggestive of publication bias was reported in some of the methodological studies, but the evidence presented was weak and limited in both quality and scope. Reliable data on outcome reporting bias and p-hacking were scant. HSR systematic reviews in which published literature was compared with unpublished evidence found significant differences in the estimated intervention effects or associations in some but not all cases. Conclusions Methodological research on publication and related biases in HSR is sparse. Evidence from the available literature suggests that such biases may exist in HSR, but their scale and impact are difficult to estimate for the reasons discussed in this paper.
Systematic review registration PROSPERO 2016 CRD42016052333.
Affiliation(s)
- Abimbola A Ayorinde
- Warwick Centre for Applied Health Research & Delivery, Division of Health Sciences, Warwick Medical School, University of Warwick, Coventry, UK
- Iestyn Williams
- Health Services Management Centre, School of Social Policy, University of Birmingham, Birmingham, UK
- Russell Mannion
- Health Services Management Centre, School of Social Policy, University of Birmingham, Birmingham, UK
- Fujian Song
- Norwich Medical School, University of East Anglia, Norwich, UK
- Magdalena Skrybant
- Institute of Applied Health Research, University of Birmingham, Birmingham, UK
- Richard J Lilford
- Institute of Applied Health Research, University of Birmingham, Birmingham, UK
- Yen-Fu Chen
- Warwick Centre for Applied Health Research & Delivery, Division of Health Sciences, Warwick Medical School, University of Warwick, Coventry, UK
8. Dos Santos MBF, Agostini BA, Bassani R, Pereira GKR, Sarkis-Onofre R. Protocol registration improves reporting quality of systematic reviews in dentistry. BMC Med Res Methodol 2020; 20:57. PMID: 32160871; PMCID: PMC7065343; DOI: 10.1186/s12874-020-00939-7.
Abstract
BACKGROUND The aims of this study were to assess whether prior registration of a systematic review (SR) is associated with improved reporting quality and whether SR registration reduces outcome reporting bias. METHODS We searched PubMed for SRs in dentistry indexed in 2017. Data related to SR registration and reporting characteristics were extracted. We analyzed whether the reporting of 21 characteristics of the included SRs was associated with prospective registration of a protocol or reporting of a previously established protocol. The association between prospective protocol registration, reporting of funding and the number of included studies versus outcome reporting bias was tested via multivariable logistic regression. RESULTS We included 495 SRs. One hundred sixty-two (32.7%) SRs reported registering the SR protocol or working from a previously established protocol. Thirteen reporting characteristics were described significantly more often in registered SRs than in unregistered SRs. Publication bias assessment and reporting the number of participants showed the largest effects favoring registration (RR 1.59, 95% CI 1.19-2.12; RR 1.58, 95% CI 1.31-1.92, respectively). Moreover, registration was not significantly linked with the articles reporting statistical significance (OR 0.96, 95% CI 0.49-1.90). CONCLUSION Previously registering a protocol has a positive influence on the final reporting quality of SRs in dentistry. However, we did not observe an association between protocol registration and a reduction in outcome reporting bias.
Affiliation(s)
- Bernardo Antônio Agostini
- Graduate Program in Dentistry, Meridional Faculty/IMED, 304 Senador Pinheiro Machado Street, Passo Fundo, 99070-220, Brazil
- Rafaela Bassani
- Graduate Program in Dentistry, Meridional Faculty/IMED, 304 Senador Pinheiro Machado Street, Passo Fundo, 99070-220, Brazil
- Gabriel Kalil Rocha Pereira
- Graduate Program in Dentistry, Meridional Faculty/IMED, 304 Senador Pinheiro Machado Street, Passo Fundo, 99070-220, Brazil
- Rafael Sarkis-Onofre
- Graduate Program in Dentistry, Meridional Faculty/IMED, 304 Senador Pinheiro Machado Street, Passo Fundo, 99070-220, Brazil; The Bias, Reporting, Implementation, Guidance, ETHics, IntEgrity of and Reproducibility in Research (BRIGHTER) Meta Research Group, Porto Alegre, Brazil
9. Parsons R, Golder S, Watt I. More than one-third of systematic reviews did not fully report the adverse events outcome. J Clin Epidemiol 2018; 108:95-101. PMID: 30553831; DOI: 10.1016/j.jclinepi.2018.12.007.
Abstract
OBJECTIVES The aim of the study was to assess the risk of adverse events reporting bias in systematic reviews of health care interventions registered in PROSPERO. STUDY DESIGN AND SETTING This was a retrospective cohort study. Systematic review protocols in PROSPERO were screened and included if they focused on a health care intervention and listed an adverse event as either a primary or a secondary outcome. The included systematic reviews were assessed to determine the completeness of reporting for the adverse event outcomes, and any discrepancies in reporting between protocol and review were recorded. RESULTS Of 1,376 protocols for systematic reviews screened, only 524 (38%) listed adverse events outcomes. One hundred eighty-six protocols were published in 2017 and 2018, of which 146 were included in our analysis. Among the included systematic reviews, 65% (95/146) fully reported the adverse event outcomes as intended by the protocol, 8% (12/146) entirely excluded the adverse event outcome, and the remaining 27% (39/146) either partially reported or changed the adverse event outcomes. CONCLUSION Sixty-two percent of reviews did not mention adverse events in their protocol, and 35% of PROSPERO-registered systematic reviews had discrepant outcome reporting between the protocol and the publication. The findings suggest a need to encourage the use of harms reporting guidelines and further research into adverse events reporting bias.
Affiliation(s)
- Rachael Parsons
- Department of Health Sciences, University of York, Heslington, York, YO10 5DD, UK
- Su Golder
- Department of Health Sciences, University of York, Heslington, York, YO10 5DD, UK
- Ian Watt
- Department of Health Sciences, University of York, Heslington, York, YO10 5DD, UK; The Hull York Medical School, University of York, Heslington, York, YO10 5DD, UK
10. Kahan BC, Jairath V. Outcome pre-specification requires sufficient detail to guard against outcome switching in clinical trials: a case study. Trials 2018; 19:265. PMID: 29720248; PMCID: PMC5932799; DOI: 10.1186/s13063-018-2654-z.
Abstract
Background Pre-specification of outcomes is an important tool to guard against outcome switching in clinical trials. However, if an outcome is not defined sufficiently clearly, different definitions could be applied and analysed, with only the most favourable result reported. Methods To assess the impact that differing outcome definitions could have on treatment effect estimates, we re-analysed data from TRIGGER, a cluster randomised trial comparing two red blood cell transfusion strategies for patients with acute upper gastrointestinal bleeding. We varied several aspects of the definition of further bleeding: (1) the criteria for what constitutes a further bleeding episode; (2) how further bleeding is assessed; and (3) the time point at which further bleeding is measured. Results There were marked discrepancies in the estimated odds ratios (OR) (range 0.23–0.94) and corresponding P values (range < 0.001–0.89) between different outcome definitions. At the extremes, differing outcome definitions led to markedly different conclusions; one definition led to very little evidence of a treatment effect (OR = 0.94, 95% confidence interval [CI] = 0.37–2.40, P = 0.89), while another led to very strong evidence of a treatment effect (OR = 0.23, 95% CI = 0.11–0.50, P < 0.001). Conclusions Outcomes should be pre-specified in sufficient detail to avoid differing definitions being analysed and only the most favourable result being reported. Trial registration ClinicalTrials.gov, NCT02105532. Registered on 7 April 2014.
Affiliation(s)
- Brennan C Kahan
- Pragmatic Clinical Trials Unit, Queen Mary University of London, 58 Turner St, London, E1 2AB, UK.
- Vipul Jairath
- Department of Medicine, Division of Gastroenterology, University Hospital, London, ON, Canada; Department of Epidemiology and Biostatistics, Western University, London, ON, Canada
11. Oliveira CB, Elkins MR, Lemes ÍR, de Oliveira Silva D, Briani RV, Monteiro HL, Azevedo FMD, Pinto RZ. A low proportion of systematic reviews in physical therapy are registered: a survey of 150 published systematic reviews. Braz J Phys Ther 2018; 22:177-183. PMID: 29128407; PMCID: PMC5993937; DOI: 10.1016/j.bjpt.2017.09.009.
Abstract
BACKGROUND Systematic reviews provide the best evidence about the effectiveness of healthcare interventions. Although systematic reviews are conducted with explicit and transparent methods, discrepancies may occur between the protocol and the publication. OBJECTIVES To estimate the proportion of systematic reviews of physical therapy interventions that are registered, the methodological quality of registered and unregistered systematic reviews, and the prevalence of outcome reporting bias in registered systematic reviews. METHODS We examined a random sample of 150 systematic reviews published in 2015 and indexed in the PEDro database, including systematic reviews written in English, Italian, Portuguese or Spanish. A checklist tool for assessing the methodological quality of systematic reviews was used. Relative risk was calculated to explore the association between meta-analysis results and changes in the outcomes. RESULTS Twenty-nine (19%) systematic reviews were registered. Funding and publication in a journal with an impact factor higher than 5.0 were associated with registration. Registered systematic reviews demonstrated significantly higher methodological quality (median = 8) than unregistered systematic reviews (median = 5). Nine (31%) registered systematic reviews demonstrated discrepancies between protocol and publication, with no evidence that such discrepancies were applied to favor the statistical significance of the intervention (RR = 1.16; 95% CI: 0.63-2.12). CONCLUSION A low proportion of systematic reviews in the physical therapy field are registered. The registered systematic reviews showed high methodological quality without evidence of outcome reporting bias. Further strategies should be implemented to encourage registration.
Affiliation(s)
- Crystian B Oliveira
- Departamento de Fisioterapia, Faculdade de Ciências e Tecnologia, Universidade Estadual Paulista (UNESP), Presidente Prudente, SP, Brazil
- Mark R Elkins
- Sydney Medical School, University of Sydney, Sydney, NSW, Australia; Centre for Evidence-Based Physiotherapy, Musculoskeletal Health Sydney, School of Public Health, University of Sydney, Sydney, NSW, Australia
- Ítalo Ribeiro Lemes
- Departamento de Fisioterapia, Faculdade de Ciências e Tecnologia, Universidade Estadual Paulista (UNESP), Presidente Prudente, SP, Brazil
- Danilo de Oliveira Silva
- La Trobe Sports and Exercise Medicine Research Centre, School of Allied Health, La Trobe University, Bundoora, Victoria, Australia
- Ronaldo V Briani
- Departamento de Fisioterapia, Faculdade de Ciências e Tecnologia, Universidade Estadual Paulista (UNESP), Presidente Prudente, SP, Brazil
- Henrique Luiz Monteiro
- Departamento de Educação Física, Faculdade de Ciências, Universidade Estadual Paulista (UNESP), Bauru, SP, Brazil
- Fábio Mícolis de Azevedo
- Departamento de Fisioterapia, Faculdade de Ciências e Tecnologia, Universidade Estadual Paulista (UNESP), Presidente Prudente, SP, Brazil
- Rafael Zambelli Pinto
- Departamento de Fisioterapia, Faculdade de Ciências e Tecnologia, Universidade Estadual Paulista (UNESP), Presidente Prudente, SP, Brazil; Departamento de Fisioterapia, Universidade Federal de Minas Gerais (UFMG), Belo Horizonte, MG, Brazil
| |
12
Swaen GMH, Langendam M, Weyler J, Burger H, Siesling S, Atsma WJ, Bouter L. Responsible Epidemiologic Research Practice: a guideline developed by a working group of the Netherlands Epidemiological Society. J Clin Epidemiol 2018; 100:111-119. [PMID: 29432862] [DOI: 10.1016/j.jclinepi.2018.02.010] [Citation(s) in RCA: 16] [Impact Index Per Article: 2.7] [Received: 04/30/2017] [Revised: 12/22/2017] [Accepted: 02/01/2018] [Indexed: 11/19/2022]
Abstract
OBJECTIVES To develop a guideline on Responsible Epidemiologic Research Practice that will increase value and transparency, increase the accountability of epidemiologists, and reduce research waste. SETTING A working group of the Netherlands Epidemiological Society was given the task of developing a guideline that would meet these objectives. Several publications about the need to prevent Detrimental Research Practices triggered this work, among them a series in the Lancet on research waste and a subsequent series on transparency in the Journal of Clinical Epidemiology. Reputation of and trust in epidemiologic research are still high, and the Netherlands Epidemiological Society wishes to keep it that way. The guideline deals with how epidemiologic research should be conducted, archived, and disclosed. It does not deal with the more technical aspects, such as required sample size, choice of study design, and so forth. The guideline describes each step in the process of conducting an epidemiologic study, from the first idea to the ultimate publication and beyond. METHODS The working group reviewed the literature on responsible research conduct, including the various existing codes of conduct. It applied the general principles from these codes to the elements of an epidemiologic study and formulated specific recommendations for each of them. The next step was to draft the guideline. Preceding the 2016 annual national epidemiology conference in Wageningen, a preconference was organized to discuss the draft guideline and to assess support. Support was clearly present, and the recommendations provided were incorporated into the draft guideline. In March 2017, a draft version of the guideline was sent to all 1,100 members of the society with the request to review it and provide comments. All responses received were positive, and some minor additions were made. The Responsible Epidemiologic Research Practice guideline has now been approved by the board of the Netherlands Epidemiological Society. CONCLUSION With the Responsible Epidemiologic Research Practice guideline, we hope to contribute to better research practices in epidemiology, and perhaps also in adjacent disciplines.
Affiliation(s)
- Gerard M H Swaen
- Department of Complex Genetics, Caphri Research Institute, Maastricht University, P.O. Box 616 6200 MD, Maastricht, The Netherlands.
- Miranda Langendam
- Department of Clinical Epidemiology, Biostatistics and Bioinformatics, Academic Medical Center, Amsterdam University, Amsterdam, The Netherlands
- Joost Weyler
- Department of Epidemiology and Social Medicine, University of Antwerp, Antwerp, Belgium
- Huibert Burger
- Department of General Practice, University Medical Center Groningen, Groningen, The Netherlands
- Sabine Siesling
- Department of Health Technology and Services Research, Twente University, Hengelo, The Netherlands
- Lex Bouter
- Department of Epidemiology and Biostatistics, VU University Medical Center, Amsterdam, The Netherlands
13
Copas J, Marson A, Williamson P, Kirkham J. Model-based sensitivity analysis for outcome reporting bias in the meta analysis of benefit and harm outcomes. Stat Methods Med Res 2017; 28:889-903. [PMID: 29134855] [DOI: 10.1177/0962280217738546] [Citation(s) in RCA: 11] [Impact Index Per Article: 1.6] [Indexed: 11/15/2022]
Abstract
Outcome reporting bias occurs when outcomes in research studies are selectively reported, the selection being influenced by the study results. For benefit outcomes, we have shown how risk assessments using the Outcome Reporting Bias in Trials risk classification scale can be used to calculate bias-adjusted treatment effect estimates. This paper presents a new and simpler version of the benefits method, and shows how it can be extended to cover the partial reporting and non-reporting of harm outcomes. Our motivating example is a Cochrane systematic review of 12 studies of Topiramate add-on therapy for drug-resistant partial epilepsy. Bias adjustments for partially reported or unreported outcomes suggest that the review has overestimated the benefits and underestimated the harms of the test treatment.
Affiliation(s)
- John Copas
- Department of Statistics, University of Warwick, Coventry, UK
- Anthony Marson
- Department of Molecular and Clinical Pharmacology, University of Liverpool, Liverpool, UK
- Paula Williamson
- Department of Biostatistics, University of Liverpool, Liverpool, UK
- Jamie Kirkham
- Department of Biostatistics, University of Liverpool, Liverpool, UK
14
Duffy JMN, Hirsch M, Gale C, Pealing L, Kawsar A, Showell M, Williamson PR, Khan KS, Ziebland S, McManus RJ. A systematic review of primary outcomes and outcome measure reporting in randomized trials evaluating treatments for pre-eclampsia. Int J Gynaecol Obstet 2017; 139:262-267. [PMID: 28803445] [DOI: 10.1002/ijgo.12298] [Citation(s) in RCA: 24] [Impact Index Per Article: 3.4] [Received: 03/29/2017] [Revised: 06/13/2017] [Accepted: 08/10/2017] [Indexed: 12/12/2022]
Abstract
BACKGROUND An evaluation of outcome reporting is required to develop a core outcome set. OBJECTIVES To assess primary outcomes and outcome measure reporting in pre-eclampsia trials. SEARCH STRATEGY Five online databases were searched from inception to January 2016 using terms including "preeclampsia" and "randomized controlled trial". SELECTION CRITERIA Randomized controlled trials evaluating treatments for pre-eclampsia published in any language were included. DATA COLLECTION AND ANALYSIS Primary outcomes and data on outcome measure reporting were systematically extracted and categorized. MAIN RESULTS Overall, 79 randomized trials, with data from 31 615 women, were included. Of those, 38 (48%) reported 35 different primary outcomes; 28 were maternal outcomes and seven were fetal/neonatal outcomes. Three randomized trials reported composite outcomes, incorporating between six and nine outcome components. The method of definition or measurement was infrequently or poorly reported. Even when outcomes were consistent across trials, different methods of definition or measurement were frequently described. CONCLUSIONS In randomized trials evaluating interventions for pre-eclampsia, critical information related to the primary outcome, including definition and measurement, is regularly omitted. Developing a core outcome set for pre-eclampsia trials would help to inform primary outcome selection and outcome measure reporting.
Affiliation(s)
- James M N Duffy
- Nuffield Department of Primary Care Health Sciences, University of Oxford, Oxford, UK; Balliol College, University of Oxford, Oxford, UK
- Martin Hirsch
- Women's Health Research Unit, Queen Mary University of London, London, UK; Royal Free London NHS Trust, London, UK
- Chris Gale
- Neonatal Medicine, Faculty of Medicine, Imperial College London, London, UK
- Louise Pealing
- Nuffield Department of Primary Care Health Sciences, University of Oxford, Oxford, UK
- Marian Showell
- Cochrane Gynaecology and Fertility Group, University of Auckland, Auckland, New Zealand
- Paula R Williamson
- MRC North West Hub for Trials Methodology Research, Institute of Translational Medicine, University of Liverpool, Liverpool, UK
- Khalid S Khan
- Women's Health Research Unit, Queen Mary University of London, London, UK
- Sue Ziebland
- Nuffield Department of Primary Care Health Sciences, University of Oxford, Oxford, UK
- Richard J McManus
- Nuffield Department of Primary Care Health Sciences, University of Oxford, Oxford, UK
15
van den Bogert CA, Souverein PC, Brekelmans CTM, Janssen SWJ, Koëter GH, Leufkens HGM, Bouter LM. Primary endpoint discrepancies were found in one in ten clinical drug trials. Results of an inception cohort study. J Clin Epidemiol 2017; 89:199-208. [PMID: 28535887] [DOI: 10.1016/j.jclinepi.2017.05.012] [Citation(s) in RCA: 8] [Impact Index Per Article: 1.1] [Received: 11/09/2016] [Revised: 04/26/2017] [Accepted: 05/15/2017] [Indexed: 01/02/2023]
Abstract
OBJECTIVE To identify the occurrence and determinants of protocol-publication discrepancies in clinical drug trials. STUDY DESIGN AND SETTING All published clinical drug trials reviewed by the Dutch institutional review boards in 2007 were analyzed. Discrepancies between trial protocols and publications were measured among key reporting aspects. We evaluated the association of trial characteristics with discrepancies in primary endpoints by calculating the risk ratio (RR) and 95% confidence interval (CI). RESULTS Of the 334 published trials, 32 (9.6%) had a protocol-publication discrepancy in the primary endpoints. Among the subgroup of randomized controlled trials (RCTs; N = 204), 12 (5.9%) had a discrepancy in the primary endpoint. Investigator-initiated trials with and without industry (co-)funding were associated with discrepancies in the primary endpoints compared with industry-sponsored trials (RR 3.7; 95% CI 1.4-9.9 and RR 4.4; 95% CI 2.0-9.5, respectively). Furthermore, trial phases other than phases 1-4 (vs. phase 1; RR 4.6; 95% CI 1.1-19.3), multicenter trials also conducted outside the European Union (vs. single center; RR 0.2; 95% CI 0.1-0.6), trials that were not prospectively registered (RR 3.3; 95% CI 1.5-7.5), non-RCTs (vs. superiority RCTs; RR 2.4; 95% CI 1.2-4.8) and, among the RCTs, a crossover design compared with a parallel-group design (RR 3.7; 95% CI 1.1-12.3) were significantly associated with discrepancies in the primary endpoints. CONCLUSIONS Improvement in completeness of reporting is still needed, especially among investigator-initiated trials and non-RCTs. To eliminate undisclosed discrepancies, trial protocols should be available in the public domain at the time the trial results are published.
16
Dal-Ré R, Bobes J, Cuijpers P. Why prudence is needed when interpreting articles reporting clinical trial results in mental health. Trials 2017; 18:143. [PMID: 28351418] [DOI: 10.1186/s13063-017-1899-2] [Citation(s) in RCA: 9] [Impact Index Per Article: 1.3] [Received: 10/26/2016] [Accepted: 03/13/2017] [Indexed: 12/04/2022] Open
Abstract
BACKGROUND The reliability of clinical trial results is impacted by reporting bias, primarily manifested as publication bias and outcome reporting bias. MENTAL HEALTH TRIALS' SPECIFIC FEATURES Mental health trials are prone to two methodological deficiencies: (1) the use of small numbers of participants, which facilitates false-positive findings and exaggerated effect sizes, and (2) the obligatory use of psychometric scales that require subjective assessments. These two deficiencies contribute to the publication of unreliable results. Considerable reporting bias has been found in safety and efficacy findings in psychotherapy and pharmacotherapy trials. Reporting bias can be carried forward to meta-analyses, a key source for clinical practice guidelines. The final result is the frequent overestimation of treatment effects, which could affect decisions made by patients and clinicians. MECHANISMS TO PREVENT OUTCOME REPORTING BIAS Prospective registration of trials and publication of results are the two major methods to reduce reporting bias. Prospective trial registration allows checking whether registered trials are published (helping to prevent publication bias) and, if published, whether the outcomes and analyses deemed appropriate before trial commencement are actually reported (helping to detect selective reporting of outcomes). Unfortunately, the rate of registered trials in mental health interventions is low and the registrations are frequently of poor quality. CONCLUSION Clinicians should be prudent when interpreting the results of published trials and of some meta-analyses, such as those conducted by scientists working for the sponsor company or those that only include published trials. Prescribers, however, should be confident when prescribing drugs following the summary of product characteristics, since regulatory agencies have access to all clinical trial results.
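The first deficiency above, small samples combined with a significance filter, is the "winner's curse" mechanism: averaged over all trials the estimate is unbiased, but averaged over only the significant ones it is exaggerated. A quick simulation sketch; the function name and every number here are made up for illustration and are not drawn from the article:

```python
import random
from statistics import mean

def simulate_sig_filter(true_diff=0.2, n=20, trials=2000, z=1.96, seed=1):
    """Simulate small two-arm trials (per-patient sd assumed known = 1) and
    compare the mean effect estimate over all trials vs. over only the
    trials whose estimate is 'statistically significant'."""
    rng = random.Random(seed)
    se = (2.0 / n) ** 0.5                 # SE of the difference in group means
    diffs = []
    for _ in range(trials):
        a = mean(rng.gauss(true_diff, 1.0) for _ in range(n))
        b = mean(rng.gauss(0.0, 1.0) for _ in range(n))
        diffs.append(a - b)
    sig = [d for d in diffs if abs(d) / se > z]   # the significance filter
    return mean(diffs), mean(sig)
```

With 20 patients per arm and a true difference of 0.2, the filtered average comes out several times larger than the truth, which is the inflation the abstract warns can be carried into meta-analyses.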
17
Tsujimoto Y, Tsujimoto H, Kataoka Y, Kimachi M, Shimizu S, Ikenoue T, Fukuma S, Yamamoto Y, Fukuhara S. Majority of systematic reviews published in high-impact journals neglected to register the protocols: a meta-epidemiological study. J Clin Epidemiol 2017; 84:54-60. [PMID: 28242481] [DOI: 10.1016/j.jclinepi.2017.02.008] [Citation(s) in RCA: 29] [Impact Index Per Article: 4.1] [Received: 07/16/2016] [Revised: 12/12/2016] [Accepted: 02/17/2017] [Indexed: 01/26/2023]
Abstract
OBJECTIVES To describe the registration of systematic review (SR) protocols and examine whether or not registration reduced outcome reporting bias in high-impact journals. STUDY DESIGN AND SETTING We searched MEDLINE via PubMed to identify SRs of randomized controlled trials of interventions. We included SRs published between August 2009 and June 2015 in the 10 general and internal medicine journals with the highest impact factors in 2013. We examined the proportion of SR protocol registration and investigated the relationship between registration and outcome reporting bias using multivariable logistic regression. RESULTS Among the 284 included reviews, 60 (21%) protocols were registered. The proportion of registration increased from 5.6% in 2009 to 27% in 2015 (P for trend <0.001). Protocol registration was not associated with outcome reporting bias (adjusted odds ratio [OR] 0.85, 95% confidence interval [CI] 0.39-1.86). The association between Preferred Reporting Items for Systematic reviews and Meta-Analyses (PRISMA) adherence and protocol registration was not statistically significant (OR 1.09, 95% CI 0.59-2.01). CONCLUSIONS Six years after the launch of the PRISMA statement, the proportion of protocol registration in high-impact journals has increased somewhat but remains low. The present study found no evidence suggesting that protocol registration reduced outcome reporting bias.
Affiliation(s)
- Yasushi Tsujimoto
- Department of Healthcare Epidemiology, Graduate School of Medicine and Public Health, Kyoto University, Yoshida Konoe-cho, Sakyo-ku, Kyoto 606-8501, Japan
- Hiraku Tsujimoto
- Hospital Care Research Unit, Hyogo Prefectural Amagasaki General Medical Center, Higashi-Naniwa-Cho 2-17-77, Amagasaki, Hyogo 660-8550, Japan
- Yuki Kataoka
- Department of Healthcare Epidemiology, Graduate School of Medicine and Public Health, Kyoto University, Yoshida Konoe-cho, Sakyo-ku, Kyoto 606-8501, Japan
- Miho Kimachi
- Department of Healthcare Epidemiology, Graduate School of Medicine and Public Health, Kyoto University, Yoshida Konoe-cho, Sakyo-ku, Kyoto 606-8501, Japan
- Sayaka Shimizu
- Department of Healthcare Epidemiology, Graduate School of Medicine and Public Health, Kyoto University, Yoshida Konoe-cho, Sakyo-ku, Kyoto 606-8501, Japan
- Tatsuyoshi Ikenoue
- Department of Healthcare Epidemiology, Graduate School of Medicine and Public Health, Kyoto University, Yoshida Konoe-cho, Sakyo-ku, Kyoto 606-8501, Japan
- Shingo Fukuma
- Department of Healthcare Epidemiology, Graduate School of Medicine and Public Health, Kyoto University, Yoshida Konoe-cho, Sakyo-ku, Kyoto 606-8501, Japan
- Yosuke Yamamoto
- Department of Healthcare Epidemiology, Graduate School of Medicine and Public Health, Kyoto University, Yoshida Konoe-cho, Sakyo-ku, Kyoto 606-8501, Japan.
- Shunichi Fukuhara
- Department of Healthcare Epidemiology, Graduate School of Medicine and Public Health, Kyoto University, Yoshida Konoe-cho, Sakyo-ku, Kyoto 606-8501, Japan
18
Dal-Ré R, Marušić A. Prevention of selective outcome reporting: let us start from the beginning. Eur J Clin Pharmacol 2016; 72:1283-8. [PMID: 27484242] [DOI: 10.1007/s00228-016-2112-3] [Citation(s) in RCA: 15] [Impact Index Per Article: 1.9] [Received: 04/29/2016] [Accepted: 07/27/2016] [Indexed: 01/18/2023]
Abstract
BACKGROUND Healthcare professionals and patients could be negatively influenced in their judgments by articles and meta-analyses presenting selective outcome reporting. Clinical trials should be transparent from inception to the publication of results. To this end, prospective trial registration is an ethical and scientific requirement that has been shown to be effective in preventing selective reporting of outcomes. However, even journals with a clear pre-registration policy publish trial results that were retrospectively registered. SITUATION Analyses of the registration of randomized clinical trials recently published in top specialty journals, and of meta-analyses suspected of including trials with outcome reporting bias, have shown that retrospective registration ranges from 56% to 76%. This translates into publication of primary endpoints that differ from those included in the registry: some 30% of trials showed discrepancies between the primary endpoint in the trial registry and in the article. Furthermore, 8% of all clinical trials published by six high-impact ICMJE-member journals were registered retrospectively, after primary endpoint ascertainment could have taken place, raising concerns that endpoints may not have been pre-specified or were changed. With regard to meta-analyses, 34% of Cochrane systematic reviews included one or more trials with a high suspicion of selective reporting bias for the primary outcome. PROPOSAL Retrospective registration of trials may foster selective outcome reporting unless journal editors implement specific quality-control processes aiming to prevent or minimize this type of bias. Prospective registration of trials (and public disclosure of protocols, if proven effective in future studies) prevents outcome reporting bias, a must to ensure that clinicians and patients have access to reliable clinical trial results. Journal editors should enforce, rather than merely encourage, appropriate measures to ensure publication of trials free of outcome reporting bias.
19
Boccia S, Rothman KJ, Panic N, Flacco ME, Rosso A, Pastorino R, Manzoli L, La Vecchia C, Villari P, Boffetta P, Ricciardi W, Ioannidis JPA. Registration practices for observational studies on ClinicalTrials.gov indicated low adherence. J Clin Epidemiol 2015; 70:176-82. [PMID: 26386325] [DOI: 10.1016/j.jclinepi.2015.09.009] [Citation(s) in RCA: 37] [Impact Index Per Article: 4.1] [Received: 03/17/2015] [Revised: 08/26/2015] [Accepted: 09/09/2015] [Indexed: 10/23/2022]
Abstract
OBJECTIVE To assess the status of registration of observational studies. STUDY DESIGN AND SETTING We identified studies on cancer research with prospective recruitment of participants that were registered from February 2000 to December 2011 in ClinicalTrials.gov. We recorded the dates of registration and start of recruitment, outcomes, and description of statistical methods. We searched for publications corresponding to the registered studies through May 31, 2014. RESULTS One thousand one hundred nine registered studies were eligible. Primary and secondary outcomes were reported in 809 (73.0%) and 464 (41.8%) of them, respectively. The date of registration preceded the month of the study start in 145 (13.8%) studies and coincided with it in 205 (19.5%). A total of 151 publications from 120 (10.8%) registered studies were identified. In 2 (33.3%) of the 6 publications for which ClinicalTrials.gov reported that the study started recruitment after registration, and in 9 (50.0%) of the 18 publications for which ClinicalTrials.gov reported the same date for registration and start of recruitment, the articles showed that the study had actually started recruiting before registration. CONCLUSION During the period reviewed, few observational studies were registered. Registration usually occurred after the study started, and prespecification of outcomes and statistical analysis rarely occurred.
Affiliation(s)
- Stefania Boccia
- Section of Hygiene, Institute of Public Health, Università Cattolica del Sacro Cuore, L.go F. Vito, 1, Rome 00168, Italy.
- Kenneth J Rothman
- RTI Health Solutions, Research Triangle Institute, Research Triangle Park, NC, USA; Department of Epidemiology, Boston University School of Public Health, Boston, MA, USA
- Nikola Panic
- Section of Hygiene, Institute of Public Health, Università Cattolica del Sacro Cuore, L.go F. Vito, 1, Rome 00168, Italy
- Maria Elena Flacco
- Department of Medicine and Aging Sciences, University of Chieti, Via dei Vestini 5, 66013 Chieti, Italy; ASL Pescara, Via Renato Paolini 47, Pescara 65123, Italy
- Annalisa Rosso
- Department of Public Health and Infectious Diseases, Sapienza University of Rome, Piazzale Aldo Moro 5, Rome 00185, Italy
- Roberta Pastorino
- Section of Hygiene, Institute of Public Health, Università Cattolica del Sacro Cuore, L.go F. Vito, 1, Rome 00168, Italy
- Lamberto Manzoli
- Department of Medicine and Aging Sciences, University of Chieti, Via dei Vestini 5, 66013 Chieti, Italy; CeSI Biotech, Via Colle dell'Ara, Chieti 66100, Italy
- Carlo La Vecchia
- Department of Clinical Sciences and Community Health, University of Milan, Via Vanzetti 5, 20133, Milan, Italy
- Paolo Villari
- Department of Public Health and Infectious Diseases, Sapienza University of Rome, Piazzale Aldo Moro 5, Rome 00185, Italy
- Paolo Boffetta
- Population Sciences, Tisch Cancer Center and Institute for Translational Epidemiology, Icahn School of Medicine at Mount Sinai, New York, NY 10029, USA
- Walter Ricciardi
- Section of Hygiene, Institute of Public Health, Università Cattolica del Sacro Cuore, L.go F. Vito, 1, Rome 00168, Italy
- John P A Ioannidis
- Department of Medicine, Stanford University School of Medicine, Stanford, CA, USA; Department of Health Research and Policy, Stanford University School of Medicine, Stanford, CA, USA; Department of Statistics, Stanford University School of Humanities and Sciences, Stanford, CA, USA; Meta-Research Innovation Center at Stanford (METRICS), Stanford University, Stanford, CA, USA
20
Frosi G, Riley RD, Williamson PR, Kirkham JJ. Multivariate meta-analysis helps examine the impact of outcome reporting bias in Cochrane rheumatoid arthritis reviews. J Clin Epidemiol 2014; 68:542-50. [PMID: 25537265] [DOI: 10.1016/j.jclinepi.2014.11.017] [Citation(s) in RCA: 11] [Impact Index Per Article: 1.1] [Received: 05/01/2014] [Revised: 09/24/2014] [Accepted: 11/24/2014] [Indexed: 10/24/2022]
Abstract
OBJECTIVES Outcome reporting bias (ORB) is a threat to the validity of systematic reviews. Multivariate meta-analysis (MVMA) can potentially reduce the impact of ORB when outcomes are correlated. The aim of this study was to assess ORB in Cochrane systematic reviews of rheumatoid arthritis and to demonstrate how MVMA may examine its impact. STUDY DESIGN AND SETTING Reviews were assessed for ORB in relation to eight outcomes for rheumatoid arthritis using a nine-point classification system. Impact of ORB was assessed by comparing estimates from univariate meta-analysis and MVMA models. RESULTS ORB assessment was applied in 21 included reviews, and all contained missing data on at least one of the eight outcomes. ORB was highly suspected in 247 (22%) of the 1,118 evaluable outcomes from 155 assessable trials. MVMA and univariate results sometimes differed substantially: the maximum change in treatment-effect estimate between the MVMA and univariate meta-analysis approaches was 176% for one of the outcomes considered. CONCLUSION ORB has the potential to affect the conclusions in meta-analyses. This could be avoided if trialists reported on all measured outcomes in full. If missing outcome data are unobtainable, MVMA is useful to examine the impact of missing outcomes and ORB on conclusions.
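The borrowing of strength that lets MVMA partly compensate for unreported outcomes can be sketched as fixed-effect generalized least squares with an assumed within-study correlation. This is a toy two-outcome illustration under stated assumptions (a fixed-effect model, a single working value for the rarely published within-study correlation), not the authors' model, which also handles between-study heterogeneity:

```python
import numpy as np

def mvma_fixed(y, v, corr):
    """Fixed-effect multivariate meta-analysis via GLS.
    y: (studies x outcomes) effect estimates, np.nan where unreported;
    v: matching within-study variances; corr: assumed within-study correlation."""
    Xs, ys, Vs = [], [], []
    for yi, vi in zip(np.asarray(y, float), np.asarray(v, float)):
        obs = ~np.isnan(yi)
        if not obs.any():
            continue
        R = np.where(np.eye(len(yi), dtype=bool), 1.0, corr)  # correlation matrix
        S = np.sqrt(np.outer(vi, vi)) * R                     # within-study covariance
        Xs.append(np.eye(len(yi))[obs])                       # observed-outcome design rows
        ys.append(yi[obs])
        Vs.append(S[np.ix_(obs, obs)])
    # assemble the block-diagonal covariance across studies
    m = sum(b.shape[0] for b in Vs)
    V = np.zeros((m, m)); i = 0
    for b in Vs:
        k = b.shape[0]; V[i:i+k, i:i+k] = b; i += k
    X, yv = np.vstack(Xs), np.concatenate(ys)
    W = np.linalg.inv(V)
    return np.linalg.solve(X.T @ W @ X, X.T @ W @ yv)      # pooled effect per outcome
```

With `corr=0` and complete data this reduces exactly to separate inverse-variance pooling per outcome; with `corr>0` and a missing outcome, the reported outcome of that study informs the pooled estimate of the unreported one, which is how MVMA probes the possible impact of ORB.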
Affiliation(s)
- Giacomo Frosi
- Department of Biostatistics, University of Liverpool, 1st Floor Duncan Building, Daulby Street, Liverpool, L69 3GA, United Kingdom.
- Richard D Riley
- School of Health and Population Sciences, University of Birmingham, Edgbaston, Birmingham, B15 2TT, United Kingdom
- Paula R Williamson
- Department of Biostatistics, University of Liverpool, 1st Floor Duncan Building, Daulby Street, Liverpool, L69 3GA, United Kingdom
- Jamie J Kirkham
- Department of Biostatistics, University of Liverpool, 1st Floor Duncan Building, Daulby Street, Liverpool, L69 3GA, United Kingdom
21
Abstract
It is often suspected (or known) that outcomes published in medical trials are selectively reported. A systematic review for a particular outcome of interest can only include studies where that outcome was reported, and so may omit, for example, a study that has considered several outcome measures but only reports those giving significant results. Using the methodology of the Outcome Reporting Bias (ORB) in Trials study (Kirkham and others, 2010, The impact of outcome reporting bias in randomised controlled trials on a cohort of systematic reviews, British Medical Journal 340, c365), we suggest a likelihood-based model for estimating the effect of ORB on confidence intervals and p-values in meta-analysis. Correcting for bias has the effect of moving estimated treatment effects toward the null, and hence toward more cautious assessments of significance. The bias can be very substantial, sometimes sufficient to completely overturn previous claims of significance. We re-analyze two contrasting examples, and derive a simple fixed-effects approximation that can be used to give an initial estimate of the effect of ORB in practice.
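The selection-model idea behind such likelihood-based corrections can be sketched in a toy fixed-effect form: if an outcome is reported only when |y|/s exceeds 1.96, each reported estimate follows a normal distribution truncated to the "significant" region, and maximizing that truncated likelihood pulls the pooled estimate toward the null. This is an illustrative simplification with a hypothetical function name, not the paper's exact model:

```python
from math import erf, log, sqrt, pi

def phi_cdf(x):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def orb_corrected_mean(y, s, z=1.96, lo=-5.0, hi=5.0):
    """MLE of a common effect mu when an outcome with estimate y and SE s
    is reported only if |y|/s > z (two-sided significance filter)."""
    def nll(mu):
        total = 0.0
        for yi, si in zip(y, s):
            # P(outcome is reported | mu): both significance tails
            p = 1.0 - phi_cdf(z - mu / si) + phi_cdf(-z - mu / si)
            # negative log of the truncated-normal density of the reported yi
            total -= -0.5 * ((yi - mu) / si) ** 2 - log(si * sqrt(2 * pi)) - log(p)
        return total
    # golden-section search; nll is smooth and unimodal on a sensible bracket
    g = (sqrt(5.0) - 1.0) / 2.0
    a, b = lo, hi
    c, d = b - g * (b - a), a + g * (b - a)
    for _ in range(200):
        if nll(c) < nll(d):
            b, d = d, c
            c = b - g * (b - a)
        else:
            a, c = c, d
            d = a + g * (b - a)
    return (a + b) / 2.0
```

Feeding it estimates sitting just above the significance threshold (e.g. 2.0, 2.1, 2.2 with unit SEs) returns a corrected mean well below the naive average of 2.1, reproducing the abstract's point that the correction moves effects toward the null and yields more cautious assessments of significance.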
Affiliation(s)
- John Copas
- Department of Statistics, University of Warwick, Coventry CV4 7AL, UK