1
Hu T, Zhou Y, Hattori S. Sensitivity analysis for publication bias in meta-analysis of sparse data based on exact likelihood. Biometrics 2024;80:ujae092. PMID: 39253987. DOI: 10.1093/biomtc/ujae092.
Abstract
Meta-analysis is a powerful tool for synthesizing findings from multiple studies. The normal-normal random-effects model is widely used to account for between-study heterogeneity. However, meta-analyses of sparse data, which may arise when the event rate is low for binary or count outcomes, challenge the accuracy and stability of inference under the normal-normal random-effects model, because the normal approximation in the within-study model may be poor. To reduce the bias arising from data sparsity, the generalized linear mixed model can be used instead, replacing the approximate normal within-study model with an exact one. Publication bias is one of the most serious threats to meta-analysis. Several quantitative sensitivity analysis methods for evaluating the potential impact of selective publication are available for the normal-normal random-effects model. We propose a sensitivity analysis method that extends the likelihood-based sensitivity analysis with the $t$-statistic selection function of Copas to several generalized linear mixed-effects models. In applications to several real-world meta-analyses and in simulation studies, the proposed method outperformed the likelihood-based sensitivity analysis based on the normal-normal model. The proposed method should give useful guidance for addressing publication bias in meta-analyses of sparse data.
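To fix ideas, the two ingredients the abstract contrasts can be sketched in a few lines: an exact binomial within-study likelihood marginalized over a normal random effect (replacing the normal approximation), and a Copas-style selection function in the $t$-statistic. The Python below is a minimal sketch under assumed notation (a single-arm event rate, sensitivity parameters `g0` and `g1`), not the authors' implementation.

```python
# Minimal sketch: exact within-study likelihood for sparse binary data and a
# generic Copas-style probit selection function in the t-statistic.
import numpy as np
from scipy.stats import norm
from scipy.special import expit

# Gauss-Hermite nodes/weights for integrating over a N(0, 1) random effect
nodes, weights = np.polynomial.hermite_e.hermegauss(30)

def exact_study_loglik(events, n, mu, tau):
    """Binomial log-likelihood of one study with logit event rate mu + tau*u,
    marginalized over u ~ N(0, 1); the binomial constant is omitted."""
    p = expit(mu + tau * nodes)
    like = p**events * (1.0 - p)**(n - events)
    return np.log(np.sum(weights * like) / np.sqrt(2.0 * np.pi))

def publication_prob(y, s, g0, g1):
    """Assumed probit selection in the t-statistic t = y/s; (g0, g1) are the
    sensitivity parameters that would be varied over a grid."""
    return norm.cdf(g0 + g1 * y / s)
```

A sensitivity analysis in this spirit would maximize the selection-adjusted likelihood over the model parameters while profiling across a grid of (`g0`, `g1`) values.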
Affiliation(s)
- Taojun Hu
- Department of Biomedical Statistics, Graduate School of Medicine, Osaka University, Osaka, 565-0871, Japan
- Department of Biostatistics, School of Public Health, Peking University, Beijing, 100191, China
- Yi Zhou
- Beijing International Center for Mathematical Research, Peking University, Beijing, 100871, China
- Satoshi Hattori
- Department of Biomedical Statistics, Graduate School of Medicine, Osaka University, Osaka, 565-0871, Japan
- Integrated Frontier Research for Medical Science Division, Institute for Open and Transdisciplinary Research Initiatives (OTRI), Osaka University, Osaka, 565-0871, Japan
2
Zhou Y, Huang A, Hattori S. Nonparametric worst-case bounds for publication bias on the summary receiver operating characteristic curve. Biometrics 2024;80:ujae080. PMID: 39225122. DOI: 10.1093/biomtc/ujae080.
Abstract
The summary receiver operating characteristic (SROC) curve has been recommended as an important meta-analytical summary of the accuracy of a diagnostic test in the presence of heterogeneous cutoff values. However, selective publication of diagnostic studies for meta-analysis can induce publication bias (PB) in the estimate of the SROC curve. Several sensitivity analysis methods have been developed to quantify PB on the SROC curve, and all of them rely on parametric selection functions to model the selective publication mechanism. The main contribution of this article is a new sensitivity analysis approach that derives worst-case bounds for the SROC curve by adopting nonparametric selection functions under minimal assumptions. The estimation procedure uses the Monte Carlo method to approximate the bias on the SROC curves, along with the corresponding areas under the curve, and the maximum and minimum PB under a range of marginal selection probabilities are then obtained by nonlinear programming. We apply the proposed method to real-world meta-analyses to show that the worst-case bounds of the SROC curves can provide useful insights for discussing the robustness of meta-analytical findings on diagnostic test accuracy.
Affiliation(s)
- Yi Zhou
- Beijing International Center for Mathematical Research, Peking University, Beijing, 100871, China
- Ao Huang
- Department of Medical Statistics, University Medical Center Göttingen, Göttingen, 37073, Germany
- Satoshi Hattori
- Department of Biomedical Statistics, Graduate School of Medicine, Institute for Open and Transdisciplinary Research Initiatives, Osaka University, Osaka, 565-0871, Japan
3
Mizutani S, Zhou Y, Tian YS, Takagi T, Ohkubo T, Hattori S. DTAmetasa: An R shiny application for meta-analysis of diagnostic test accuracy and sensitivity analysis of publication bias. Res Synth Methods 2023;14:916-925. PMID: 37640914. DOI: 10.1002/jrsm.1666.
Abstract
Meta-analysis of diagnostic test accuracy (DTA) is a powerful statistical method for synthesizing and evaluating the diagnostic capacity of medical tests and has been extensively used by clinical physicians and healthcare decision-makers. However, publication bias (PB) threatens the validity of meta-analysis of DTA. Some statistical methods have been developed to deal with PB in meta-analysis of DTA, but implementing them requires advanced statistical knowledge and programming skills. To assist non-technical users in running most routines in meta-analysis of DTA and handling PB, we developed an interactive application, DTAmetasa. DTAmetasa is a web-based graphical user interface built on the R shiny framework. It allows users to upload data and conduct meta-analysis of DTA by point-and-click operations. Moreover, DTAmetasa provides sensitivity analysis of PB and presents graphical results for evaluating the magnitude of PB under various publication mechanisms. In this study, we introduce the functionalities of DTAmetasa and use a real-world meta-analysis to show its capacity for dealing with PB.
Affiliation(s)
- Shosuke Mizutani
- Graduate School of Pharmaceutical Sciences, Osaka University, Osaka, Japan
- Yi Zhou
- Beijing International Center for Mathematical Research, Peking University, Beijing, China
- Department of Biomedical Statistics, Graduate School of Medicine, Osaka University, Osaka, Japan
- Yu-Shi Tian
- Graduate School of Pharmaceutical Sciences, Osaka University, Osaka, Japan
- Tatsuya Takagi
- Graduate School of Pharmaceutical Sciences, Osaka University, Osaka, Japan
- Tadayasu Ohkubo
- Graduate School of Pharmaceutical Sciences, Osaka University, Osaka, Japan
- Satoshi Hattori
- Department of Biomedical Statistics, Graduate School of Medicine, Osaka University, Osaka, Japan
- Integrated Frontier Research for Open and Transdisciplinary Research Initiatives, Graduate School of Medicine, Osaka University, Osaka, Japan
4
Huang A, Morikawa K, Friede T, Hattori S. Adjusting for publication bias in meta-analysis via inverse probability weighting using clinical trial registries. Biometrics 2023;79:2089-2102. PMID: 36602873. DOI: 10.1111/biom.13822.
Abstract
Publication bias is a major concern in conducting systematic reviews and meta-analyses. Various sensitivity analysis or bias-correction methods have been developed based on selection models, and they have some advantages over the widely used trim-and-fill bias-correction method. However, likelihood methods based on selection models may have difficulty in obtaining precise estimates and reasonable confidence intervals, or require a rather complicated sensitivity analysis process. Herein, we develop a simple publication bias adjustment method by utilizing the information on conducted but still unpublished trials from clinical trial registries. We introduce an estimating equation for parameter estimation in the selection function by regarding the publication bias issue as a missing data problem under the missing not at random assumption. With the estimated selection function, we introduce the inverse probability weighting (IPW) method to estimate the overall mean across studies. Furthermore, the IPW versions of heterogeneity measures such as the between-study variance and the I² measure are proposed. We propose methods to construct confidence intervals based on asymptotic normal approximation as well as on parametric bootstrap. Through numerical experiments, we observed that the estimators successfully eliminated bias, and the confidence intervals had empirical coverage probabilities close to the nominal level. On the other hand, the confidence interval based on asymptotic normal approximation is much wider in some scenarios than the bootstrap confidence interval. Therefore, the latter is recommended for practical use.
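The IPW idea in the abstract can be sketched in a few lines: once a selection probability has been estimated for each published study (in the paper, via an estimating equation informed by registry counts of unpublished trials), published effects are reweighted by the inverse of those probabilities. The combination of precision weights with 1/pi below is one simple choice for illustration, not necessarily the authors' exact estimator.

```python
# Minimal sketch of inverse-probability-weighted pooling; pi_hat would come
# from a selection function fitted with clinical trial registry information.
import numpy as np

def ipw_pooled_mean(y, v, pi_hat):
    """IPW estimate of the overall mean effect from published studies."""
    w = 1.0 / (v * pi_hat)      # precision weight deflated by selection prob
    return np.sum(w * y) / np.sum(w)

y      = np.array([0.30, 0.45, 0.25, 0.60])  # published effect estimates
v      = np.array([0.02, 0.05, 0.03, 0.08])  # their sampling variances
pi_hat = np.array([0.95, 0.80, 0.90, 0.55])  # estimated publication probabilities
print(ipw_pooled_mean(y, v, pi_hat))         # weight shifts toward studies least likely to be published
```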
Affiliation(s)
- Ao Huang
- Department of Biomedical Statistics, Graduate School of Medicine, Osaka University, Osaka, Japan
- Kosuke Morikawa
- Graduate School of Engineering Science, Osaka University, Toyonaka, Osaka, Japan
- Tim Friede
- Department of Medical Statistics, University Medical Center Göttingen, Göttingen, Germany
- Satoshi Hattori
- Department of Biomedical Statistics, Graduate School of Medicine, Integrated Frontier Research for Medical Science Division, Institute for Open and Transdisciplinary Research Initiatives (OTRI), Osaka University, Suita City, Osaka, Japan
5
Zhou Y, Huang A, Hattori S. A likelihood-based sensitivity analysis for publication bias on the summary receiver operating characteristic in meta-analysis of diagnostic test accuracy. Stat Med 2023;42:781-798. PMID: 36584693. DOI: 10.1002/sim.9643.
Abstract
In meta-analysis of diagnostic test accuracy, the summary receiver operating characteristic (SROC) curve is a recommended method to summarize the diagnostic capacity of a medical test in the presence of study-specific cutoff values. The SROC curve can be estimated by bivariate modeling of pairs of sensitivity and specificity across multiple diagnostic studies, and the area under the SROC curve (SAUC) gives the aggregate estimate of diagnostic test accuracy. However, publication bias is a major threat to the validity of the estimates. To make inference of the impact of publication bias on the SROC curve or the SAUC, we propose a sensitivity analysis method by extending the likelihood-based sensitivity analysis of Copas. In the proposed method, the SROC curve or the SAUC are estimated by maximizing the likelihood constrained by different values of the marginal probability of selective publication under different mechanisms of selective publication. A cutoff-dependent selection function is developed to model the selective publication mechanism via the $t$-type statistic or $P$-value of the linear combination of the logit-transformed sensitivity and specificity from the published studies. It allows us to model selective publication suggested by the funnel plots of sensitivity, specificity, or diagnostic odds ratio, which are often observed in practice. A real meta-analysis of diagnostic test accuracy is re-analyzed to illustrate the proposed method, and simulation studies are conducted to evaluate its performance.
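For orientation, the SROC curve discussed here is commonly obtained from a bivariate normal model of (logit sensitivity, logit specificity); one standard construction takes the regression of logit(Se) on logit(1 - Sp). The sketch below uses that construction with placeholder parameter values; it is an illustration of the object being bounded, not the proposed sensitivity analysis itself.

```python
# Sketch: SROC curve from a bivariate normal model of logit(Se) and logit(Sp),
# taking the regression line of logit(Se) on logit(1 - Sp). Parameter values
# are placeholders, not estimates from any real meta-analysis.
import numpy as np
from scipy.special import expit, logit

def sroc(fpr, mu_se, mu_sp, sd_se, sd_sp, rho):
    x = logit(fpr)                                  # logit(1 - Sp) = -logit(Sp)
    return expit(mu_se - rho * (sd_se / sd_sp) * (x + mu_sp))

fpr  = np.linspace(0.01, 0.99, 199)
se   = sroc(fpr, mu_se=1.5, mu_sp=1.2, sd_se=0.5, sd_sp=0.6, rho=-0.4)
sauc = np.trapz(se, fpr)                            # area under the SROC curve
print(round(sauc, 3))
```

In the paper's sensitivity analysis, curves like this would be re-estimated under the constrained likelihood for each assumed marginal publication probability.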
Affiliation(s)
- Yi Zhou
- Department of Biomedical Statistics, Graduate School of Medicine, Osaka University, Osaka, Japan
- Ao Huang
- Department of Biomedical Statistics, Graduate School of Medicine, Osaka University, Osaka, Japan
- Satoshi Hattori
- Department of Biomedical Statistics, Graduate School of Medicine, Osaka University, Osaka, Japan
- Integrated Frontier Research for Medical Science Division, Institute for Open and Transdisciplinary Research Initiatives, Osaka University, Osaka, Japan
6
Infinite diameter confidence sets in Hedges’ publication bias model. J Korean Stat Soc 2022. DOI: 10.1007/s42952-022-00169-1.
Abstract
Meta-analysis, the statistical analysis of results from separate studies, is a fundamental building block of science. But the assumptions of classical meta-analysis models are not satisfied whenever publication bias is present, which causes inconsistent parameter estimates. Hedges’ selection function model takes publication bias into account, but estimating and inferring with this model is tough for some datasets. Using a generalized Gleser–Hwang theorem, we show there is no confidence set of guaranteed finite diameter for the parameters of Hedges’ selection model. This result provides a partial explanation for why inference with Hedges’ selection model is fraught with difficulties.
7
Huang A, Komukai S, Friede T, Hattori S. Using clinical trial registries to inform Copas selection model for publication bias in meta-analysis. Res Synth Methods 2021;12:658-673. PMID: 34169657. DOI: 10.1002/jrsm.1506.
Abstract
Prospective registration of study protocols in clinical trial registries is a useful way to minimize the risk of publication bias in meta-analysis, and several clinical trial registries are available nowadays. However, they are mainly used as a tool for searching studies, and the information submitted to the registries has not been utilized as efficiently as it could be. In addressing publication bias in meta-analyses, sensitivity analysis with the Copas selection model is a more objective alternative to widely used graphical methods such as the funnel plot and the trim-and-fill method. Despite its ability to quantify the potential impact of publication bias, the Copas selection model relies on sensitivity analyses in which some parameters are varied across a certain range, which may make the results difficult to interpret. In this paper, we propose an alternative inference procedure for the Copas selection model that utilizes information from clinical trial registries. Our method provides a simple and accurate way to estimate all unknown parameters of the Copas selection model. A simulation study revealed that our proposed method resulted in smaller biases and more accurate confidence intervals than existing methods. Furthermore, three published meta-analyses were re-analyzed to demonstrate how to implement the proposed method in practice.
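For reference, the Copas selection model that this procedure equips with registry information is conventionally written as below. This is the standard Copas–Shi formulation from the selection-model literature, not the paper's registry-based estimating equation itself.

```latex
% Copas--Shi selection model: study i, with estimate y_i and standard error
% s_i, is published only if the latent variable z_i is positive.
y_i = \mu + u_i + s_i \varepsilon_i, \qquad u_i \sim N(0, \tau^2), \qquad \varepsilon_i \sim N(0, 1),
z_i = \gamma_0 + \gamma_1 / s_i + \delta_i, \qquad \delta_i \sim N(0, 1), \qquad \operatorname{corr}(\varepsilon_i, \delta_i) = \rho .
```

The marginal publication probability of a study with standard error $s_i$ is then $\Phi(\gamma_0 + \gamma_1/s_i)$, which is exactly the kind of quantity that counts of registered but unpublished trials can help pin down.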
Affiliation(s)
- Ao Huang
- Department of Biomedical Statistics, Graduate School of Medicine, Osaka University, Osaka, Japan
- Sho Komukai
- Department of Biomedical Statistics, Graduate School of Medicine, Osaka University, Osaka, Japan
- Tim Friede
- Department of Medical Statistics, University Medical Center Göttingen, Göttingen, Germany
- Satoshi Hattori
- Department of Biomedical Statistics, Graduate School of Medicine, Osaka University, Osaka, Japan
- Institute for Open and Transdisciplinary Research Initiatives, Osaka University, Osaka, Japan
8
Carter EC, Schönbrodt FD, Gervais WM, Hilgard J. Correcting for Bias in Psychology: A Comparison of Meta-Analytic Methods. Advances in Methods and Practices in Psychological Science 2019. DOI: 10.1177/2515245919847196.
Abstract
Publication bias and questionable research practices in primary research can lead to badly overestimated effects in meta-analysis. Methodologists have proposed a variety of statistical approaches to correct for such overestimation. However, it is not clear which methods work best for data typically seen in psychology. Here, we present a comprehensive simulation study in which we examined how some of the most promising meta-analytic methods perform on data that might realistically be produced by research in psychology. We simulated several levels of questionable research practices, publication bias, and heterogeneity, and used study sample sizes empirically derived from the literature. Our results clearly indicated that no single meta-analytic method consistently outperformed all the others. Therefore, we recommend that meta-analysts in psychology focus on sensitivity analyses—that is, report on a variety of methods, consider the conditions under which these methods fail (as indicated by simulation studies such as ours), and then report how conclusions might change depending on which conditions are most plausible. Moreover, given the dependence of meta-analytic methods on untestable assumptions, we strongly recommend that researchers in psychology continue their efforts to improve the primary literature and conduct large-scale, preregistered replications. We provide detailed results and simulation code at https://osf.io/rf3ys and interactive figures at http://www.shinyapps.org/apps/metaExplorer/ .
Affiliation(s)
- Evan C. Carter
- Human Research and Engineering Directorate, U.S. Army Research Laboratory, Aberdeen, Maryland
9
Yin P, Shi JQ. Simulation-based sensitivity analysis for non-ignorably missing data. Stat Methods Med Res 2017;28:289-308. PMID: 28747095. DOI: 10.1177/0962280217722382.
Abstract
Sensitivity analysis is popular in dealing with missing data problems, particularly for non-ignorable missingness, where the full-likelihood method cannot be adopted. It analyses how sensitively the conclusions (output) depend on assumptions or parameters (input) about the missing data, that is, the missing data mechanism; we call models subject to this uncertainty sensitivity models. To make conventional sensitivity analysis more useful in practice, we need simple and interpretable statistical quantities to assess sensitivity models and support evidence-based analysis. In this paper we propose a novel approach that investigates the plausibility of each missing data mechanism assumption by comparing datasets simulated from various MNAR models with the observed data non-parametrically, using K-nearest-neighbour distances. Some asymptotic theory is also provided. A key step of this method is to plug in a plausibility evaluation system for each sensitivity parameter, selecting plausible values and rejecting unlikely ones, instead of considering all proposed values of the sensitivity parameters as in the conventional sensitivity analysis method. The method is generic and has been applied successfully to several specific models in this paper, including a meta-analysis model with publication bias, analysis of incomplete longitudinal data, and mean estimation with non-ignorable missing data.
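The core comparison step has a compact illustration: simulate data under each candidate missingness mechanism (indexed by a sensitivity parameter) and measure how far the simulation sits from the observed sample. The sketch below uses the K = 1 nearest-neighbour distance on univariate data with made-up distributions; the paper's procedure and asymptotics are more general.

```python
# Sketch: rank candidate sensitivity-parameter values by how close data
# simulated under each MNAR model sit to the observed sample (1-NN distance).
import numpy as np

def mean_nn_distance(observed, simulated):
    """Mean distance from each observed point to its nearest simulated point."""
    d = np.abs(observed[:, None] - simulated[None, :])   # pairwise distances
    return d.min(axis=1).mean()

rng = np.random.default_rng(1)
obs = rng.normal(0.3, 1.0, 200)                # stand-in for the observed data
for delta in (0.0, 0.5, 1.0):                  # candidate sensitivity values
    sim = rng.normal(0.3 + delta, 1.0, 200)    # stand-in for one MNAR model's draws
    print(delta, round(mean_nn_distance(obs, sim), 3))
```

Smaller distances mark sensitivity-parameter values that are more plausible given the observed data; implausible values would be rejected rather than carried through the whole analysis.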
Affiliation(s)
- Peng Yin
- Department of Biostatistics, University of Liverpool, UK
- Jian Q Shi
- School of Mathematics & Statistics, Newcastle University, UK
10
Liu Y, DeSantis SM, Chen Y. Bayesian mixed treatment comparisons meta-analysis for correlated outcomes subject to reporting bias. J R Stat Soc Ser C Appl Stat 2017. PMID: 29540936. DOI: 10.1111/rssc.12220.
Abstract
Many randomized controlled trials (RCTs) report more than one primary outcome. As a result, multivariate meta-analytic methods for the assimilation of treatment effects in systematic reviews of RCTs have received increasing attention in the literature. These methods show promise with respect to bias reduction and efficiency gain compared with univariate meta-analysis. However, most methods for multivariate meta-analysis have focused on pairwise treatment comparisons (i.e., when the number of treatments is two). Current methods for mixed treatment comparisons (MTC) meta-analysis (i.e., when the number of treatments is more than two) have focused on univariate or, very recently, bivariate outcomes. To broaden their application, we propose a framework for MTC meta-analysis of multivariate (≥2) outcomes in which the within-study and between-study correlations among the outcomes are accounted for through copulas and the joint modeling of multivariate random effects, respectively. We consider a Bayesian hierarchical model using Markov chain Monte Carlo methods for estimation. An important feature of the proposed framework is that it allows for borrowing of information across correlated outcomes. We show via simulation that our approach reduces the impact of outcome reporting bias (ORB) in a variety of missing outcome scenarios. We apply the method to a systematic review of RCTs of pharmacological treatments for alcohol dependence, which tend to report multiple outcomes potentially subject to ORB.
Affiliation(s)
- Yulun Liu
- Department of Biostatistics, The University of Texas Health Science Center Houston, Houston, Texas 77030, U.S.A
- Stacia M DeSantis
- Department of Biostatistics, The University of Texas Health Science Center Houston, Houston, Texas 77030, U.S.A
- Yong Chen
- Department of Biostatistics and Epidemiology, University of Pennsylvania, Philadelphia, Pennsylvania, 19104, U.S.A
11
Zhu Q, Carriere KC. Detecting and correcting for publication bias in meta-analysis – A truncated normal distribution approach. Stat Methods Med Res 2016;27:2722-2741. DOI: 10.1177/0962280216684671.
Abstract
Publication bias can significantly limit the validity of meta-analysis when trying to draw conclusions about a research question from independent studies. Most research on detecting and correcting for publication bias in meta-analysis focuses on funnel plot-based methodologies or selection models. In this paper, we formulate publication bias as a truncated distribution problem and propose new parametric solutions. We develop methodologies for estimating the underlying overall effect size and the severity of publication bias. We distinguish the two major situations in which publication bias may be induced by (1) small effect sizes or (2) large p-values. We consider both fixed- and random-effects models and derive estimators for the overall mean and the truncation proportion, obtained by maximum likelihood estimation and the method of moments under the fixed- and random-effects models, respectively. We carried out extensive simulation studies to evaluate the performance of our methodology and to compare it with the non-parametric trim-and-fill method based on the funnel plot. We find that our methods based on the truncated normal distribution perform consistently well, both in detecting and in correcting publication bias, under various situations.
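The small-effect-size variant of the truncation idea can be sketched directly: if only estimates above a cutoff are published, fit a normal model to the published values using the truncated-normal likelihood. The cutoff is treated as known here purely to fix ideas; the paper also handles p-value-based truncation and estimates the truncation proportion.

```python
# Sketch: ML fit of (mu, sigma) from effects observed only above a cutoff c,
# i.e. the truncated-normal view of publication bias. Illustrative only.
import numpy as np
from scipy.stats import truncnorm
from scipy.optimize import minimize

def fit_truncated_normal(y, c):
    def nll(theta):
        mu, log_sigma = theta
        sigma = np.exp(log_sigma)
        a = (c - mu) / sigma          # standardized lower truncation point
        return -np.sum(truncnorm.logpdf(y, a, np.inf, loc=mu, scale=sigma))
    return minimize(nll, x0=[y.mean(), np.log(y.std())], method="Nelder-Mead")

rng = np.random.default_rng(0)
effects = rng.normal(0.2, 0.4, 2000)             # all conducted studies
published = effects[effects > 0.1]               # crude selection at c = 0.1
print(published.mean())                          # naive mean: biased upward
print(fit_truncated_normal(published, c=0.1).x)  # mu-hat, log(sigma)-hat
```

The naive mean of the published effects overshoots 0.2, while the truncated-normal MLE recovers roughly the right location; the fraction of studies below the cutoff plays the role of the truncation proportion estimated in the paper.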
Affiliation(s)
- Qiaohao Zhu
- Department of Mathematical and Statistical Sciences, University of Alberta, Edmonton, Alberta, Canada
- KC Carriere
- Department of Mathematical and Statistical Sciences, University of Alberta, Edmonton, Alberta, Canada
12
Kulinskaya E, Huggins R, Dogo SH. Sequential biases in accumulating evidence. Res Synth Methods 2016;7:294-305. PMID: 26626562. PMCID: PMC5031232. DOI: 10.1002/jrsm.1185.
Abstract
Whilst it is common in clinical trials to use the results of tests at one phase to decide whether to continue to the next phase, and subsequently to design the next phase, we show that this can lead to biased results in evidence synthesis. Two new kinds of bias associated with accumulating evidence, termed 'sequential decision bias' and 'sequential design bias', are identified. Both kinds of bias are the result of making decisions on the usefulness of a new study, or its design, based on the previous studies. Sequential decision bias is determined by the correlation between the value of the current estimated effect and the probability of conducting an additional study. Sequential design bias arises from using the estimated value instead of the clinically relevant value of an effect in sample size calculations. We considered both the fixed-effect and the random-effects models of meta-analysis and demonstrated analytically and by simulations that in both settings the problems due to sequential biases are apparent. According to our simulations, the sequential biases increase with increasing heterogeneity. Minimisation of sequential biases arises as a new and important research area necessary for successful evidence-based approaches to the development of science.
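Sequential decision bias in particular has a very short Monte Carlo illustration: when a follow-up study is run only after an encouraging interim result, the pooled estimate over the resulting sequences is biased upward. The threshold and parameter values below are arbitrary stand-ins, not the paper's settings.

```python
# Sketch: 'sequential decision bias' -- continue to a second study only when
# the first estimate looks promising, then pool; the synthesis overestimates.
import numpy as np

rng = np.random.default_rng(42)
theta, se, reps = 0.2, 0.15, 100_000
y1 = rng.normal(theta, se, reps)          # first-phase estimates
go = y1 / se > 1.0                        # decision rule to conduct study 2
y2 = rng.normal(theta, se, go.sum())      # second studies, where conducted
pooled = (y1[go] + y2) / 2                # equal-variance fixed-effect pool
print(pooled.mean() - theta)              # positive: upward bias
```

The bias comes entirely from conditioning on the first result; each individual study is unbiased on its own.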
Affiliation(s)
- Elena Kulinskaya
- School of Computing Sciences, University of East Anglia, Norwich, NR4 7TJ, UK.
- Richard Huggins
- Department of Mathematics and Statistics, University of Melbourne, Melbourne, Australia
- Samson Henry Dogo
- School of Computing Sciences, University of East Anglia, Norwich, NR4 7TJ, UK
13
Bem D, Tressoldi P, Rabeyron T, Duggan M. Feeling the future: A meta-analysis of 90 experiments on the anomalous anticipation of random future events. F1000Res 2015;4:1188. PMID: 26834996. PMCID: PMC4706048. DOI: 10.12688/f1000research.7177.1.
Abstract
In 2011, one of the authors (DJB) published a report of nine experiments in the Journal of Personality and Social Psychology purporting to demonstrate that an individual's cognitive and affective responses can be influenced by randomly selected stimulus events that do not occur until after his or her responses have already been made and recorded, a generalized variant of the phenomenon traditionally denoted by the term precognition. To encourage replications, all materials needed to conduct them were made available on request. We here report a meta-analysis of 90 experiments from 33 laboratories in 14 countries which yielded an overall effect greater than 6 sigma, z = 6.40, p = 1.2 × 10^-10, with an effect size (Hedges' g) of 0.09. A Bayesian analysis yielded a Bayes Factor of 1.4 × 10^9, greatly exceeding the criterion value of 100 for "decisive evidence" in support of the experimental hypothesis. When DJB's original experiments are excluded, the combined effect size for replications by independent investigators is 0.06, z = 4.16, p = 1.1 × 10^-5, and the BF value is 3,853, again exceeding the criterion for "decisive evidence." The number of potentially unretrieved experiments required to reduce the overall effect size of the complete database to a trivial value of 0.01 is 544, and seven of eight additional statistical tests support the conclusion that the database is not significantly compromised by either selection bias or by "p-hacking", the selective suppression of findings or analyses that failed to yield statistical significance. P-curve analysis, a recently introduced statistical technique, estimates the true effect size of our database to be 0.20, virtually identical to the effect size of DJB's original experiments (0.22) and the closely related "presentiment" experiments (0.21). We discuss the controversial status of precognition and other anomalous effects collectively known as psi.
Affiliation(s)
- Daryl Bem
- Cornell University, New York, NY, 10011, USA
- Thomas Rabeyron
- Université de Nantes, Nantes, 44300, France
- University of Edinburgh, Edinburgh, Scotland, EH8 9YL, UK
- Michael Duggan
- Nottingham Trent University, Nottingham, England, NG1 4BU, UK
14
Bem D, Tressoldi P, Rabeyron T, Duggan M. Feeling the future: A meta-analysis of 90 experiments on the anomalous anticipation of random future events. F1000Res 2015;4:1188. PMID: 26834996. PMCID: PMC4706048. DOI: 10.12688/f1000research.7177.2.
Abstract
In 2011, one of the authors (DJB) published a report of nine experiments in the Journal of Personality and Social Psychology purporting to demonstrate that an individual's cognitive and affective responses can be influenced by randomly selected stimulus events that do not occur until after his or her responses have already been made and recorded, a generalized variant of the phenomenon traditionally denoted by the term precognition. To encourage replications, all materials needed to conduct them were made available on request. We here report a meta-analysis of 90 experiments from 33 laboratories in 14 countries which yielded an overall effect greater than 6 sigma, z = 6.40, p = 1.2 × 10^-10, with an effect size (Hedges' g) of 0.09. A Bayesian analysis yielded a Bayes Factor of 5.1 × 10^9, greatly exceeding the criterion value of 100 for "decisive evidence" in support of the experimental hypothesis. When DJB's original experiments are excluded, the combined effect size for replications by independent investigators is 0.06, z = 4.16, p = 1.1 × 10^-5, and the BF value is 3,853, again exceeding the criterion for "decisive evidence." The number of potentially unretrieved experiments required to reduce the overall effect size of the complete database to a trivial value of 0.01 is 544, and seven of eight additional statistical tests support the conclusion that the database is not significantly compromised by either selection bias or by intense "p-hacking", the selective suppression of findings or analyses that failed to yield statistical significance. P-curve analysis, a recently introduced statistical technique, estimates the true effect size of the experiments to be 0.20 for the complete database and 0.24 for the independent replications, virtually identical to the effect size of DJB's original experiments (0.22) and the closely related "presentiment" experiments (0.21). We discuss the controversial status of precognition and other anomalous effects collectively known as psi.
Affiliation(s)
- Daryl Bem
- Cornell University, New York, NY, 10011, USA
- Thomas Rabeyron
- Université de Nantes, Nantes, 44300, France
- University of Edinburgh, Edinburgh, Scotland, EH8 9YL, UK
- Michael Duggan
- Nottingham Trent University, Nottingham, England, NG1 4BU, UK
15
Nuijten MB, van Assen MALM, Veldkamp CLS, Wicherts JM. The Replication Paradox: Combining Studies can Decrease Accuracy of Effect Size Estimates. Review of General Psychology 2015. DOI: 10.1037/gpr0000034.
Abstract
Replication is often viewed as the demarcation between science and nonscience. However, contrary to the commonly held view, we show that in the current (selective) publication system replications may increase bias in effect size estimates. Specifically, we examine the effect of replication on bias in estimated population effect size as a function of publication bias and the studies’ sample size or power. We analytically show that incorporating the results of published replication studies will in general not lead to less bias in the estimated population effect size. We therefore conclude that mere replication will not solve the problem of overestimation of effect sizes. We will discuss the implications of our findings for interpreting results of published and unpublished studies, and for conducting and interpreting results of meta-analyses. We also discuss solutions for the problem of overestimation of effect sizes, such as discarding and not publishing small studies with low power, and implementing practices that completely eliminate publication bias (e.g., study registration).
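The mechanism behind this paradox can be reproduced with a single truncated-normal mean: if only significant positive results are published, the expected published estimate exceeds the true effect, most severely for small, low-powered studies, so published replications drawn through the same filter need not pull the pooled estimate toward the truth. A sketch under that simple significance-filter assumption:

```python
# Sketch: expected value of a published estimate when results appear only if
# z = y/se > 1.96, computed from the truncated-normal mean.
import numpy as np
from scipy.stats import norm

def expected_published(theta, se, zcrit=1.96):
    a = zcrit - theta / se                        # truncation point in z-units
    return theta + se * norm.pdf(a) / norm.sf(a)  # E[y | y/se > zcrit]

for se in (0.30, 0.15, 0.05):                     # small, medium, large study
    print(se, round(expected_published(theta=0.1, se=se), 3))
```

With a true effect of 0.1, the small study's published estimates average far above 0.1 while the large study's barely move, which is why mixing published replications of varying size into a meta-analysis can leave, or even worsen, the overestimation.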
16
Röver C, Andreas S, Friede T. Evidence synthesis for count distributions based on heterogeneous and incomplete aggregated data. Biom J 2015;58:170-85. DOI: 10.1002/bimj.201300288.
Affiliation(s)
- Christian Röver
- Department of Medical Statistics, University Medical Center Göttingen, Humboldtallee 32, 37073 Göttingen, Germany
- Stefan Andreas
- Lungenfachklinik Immenhausen, Robert-Koch-Straße 3, 34376 Immenhausen, Germany
- Clinics for Cardiology and Pulmonology, University Medical Center Göttingen, Robert-Koch-Straße 40, 37099 Göttingen, Germany
- Tim Friede
- Department of Medical Statistics, University Medical Center Göttingen, Humboldtallee 32, 37073 Göttingen, Germany
17
Jin ZC, Zhou XH, He J. Statistical methods for dealing with publication bias in meta-analysis. Stat Med 2014;34:343-60. PMID: 25363575. DOI: 10.1002/sim.6342.
Abstract
Publication bias is an inevitable problem in systematic reviews and meta-analyses, and one of the main threats to the validity of meta-analysis. Although several statistical methods have been developed to detect and adjust for publication bias since the early 1980s, some of them are not well known and are not being used properly in either the statistical or the clinical literature. In this paper, we provide a critical and extensive discussion of methods for dealing with publication bias, including statistical principles, implementation, and software, as well as the advantages and limitations of these methods. We illustrate a practical application of these methods in a meta-analysis of continuous support for women during childbirth.
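As a concrete instance of the funnel-plot-based tests such reviews cover, Egger's regression test regresses the standardized effect on precision and checks whether the intercept departs from zero. A minimal sketch with made-up numbers:

```python
# Sketch: Egger's regression test for funnel-plot asymmetry -- regress the
# standardized effect y/se on precision 1/se and inspect the intercept.
import numpy as np
from scipy import stats

def egger_test(y, se):
    snd, precision = y / se, 1.0 / se
    fit = stats.linregress(precision, snd)   # snd = b0 + b1 * precision
    return fit.intercept, fit.intercept_stderr

y  = np.array([0.42, 0.31, 0.58, 0.12, 0.66, 0.05])
se = np.array([0.10, 0.15, 0.22, 0.08, 0.30, 0.07])
b0, b0_se = egger_test(y, se)
print(b0, b0 / b0_se)   # intercept far from zero in SE units suggests asymmetry
```

In practice the intercept is referred to a t distribution with k - 2 degrees of freedom, and a significant result is evidence of small-study effects rather than of publication bias specifically.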
Affiliation(s)
- Zhi-Chao Jin
- Department of Health Statistics, Second Military Medical University, No. 800 Xiangyin Road, Shanghai, 200433, China
18
Mavridis D, Welton NJ, Sutton A, Salanti G. A selection model for accounting for publication bias in a full network meta-analysis. Stat Med 2014;33:5399-412. PMID: 25316006. DOI: 10.1002/sim.6321.
Abstract
Copas and Shi suggested a selection model to explore the potential impact of publication bias via sensitivity analysis based on assumptions for the probability of publication of trials conditional on the precision of their results. Chootrakool et al. extended this model to three-arm trials but did not fully account for the implications of the consistency assumption, and their model is difficult to generalize for complex network structures with more than three treatments. Fitting these selection models within a frequentist setting requires maximization of a complex likelihood function, and identification problems are common. We have previously presented a Bayesian implementation of the selection model when multiple treatments are compared with a common reference treatment. We now present a general model suitable for complex, full network meta-analysis that accounts for consistency when adjusting results for publication bias. We developed a design-by-treatment selection model to describe the mechanism by which studies with different designs (sets of treatments compared in a trial) and precision may be selected for publication. We fit the model in a Bayesian setting because it avoids the numerical problems encountered in the frequentist setting, it is generalizable with respect to the number of treatments and study arms, and it provides a flexible framework for sensitivity analysis using external knowledge. Our model accounts for the additional uncertainty arising from publication bias more successfully compared to the standard Copas model or its previous extensions. We illustrate the methodology using a published triangular network for the failure of vascular graft or arterial patency.
Affiliation(s)
- Dimitris Mavridis
- Department of Hygiene and Epidemiology, University of Ioannina School of Medicine, Ioannina, Greece
- Department of Primary Education, University of Ioannina, Ioannina, Greece
19
Gjerdevik M, Heuch I. Improving the error rates of the Begg and Mazumdar test for publication bias in fixed effects meta-analysis. BMC Med Res Methodol 2014;14:109. PMID: 25245217. PMCID: PMC4193136. DOI: 10.1186/1471-2288-14-109.
Abstract
Background: The rank correlation test introduced by Begg and Mazumdar is extensively used in meta-analysis to test for publication bias in clinical and epidemiological studies. It is based on correlating the standardized treatment effect with the variance of the treatment effect, using Kendall's tau as the measure of association. To our knowledge, the operational characteristics regarding the significance level of the test have not, however, been fully assessed.
Methods: We propose an alternative rank correlation test to improve the error rates of the original Begg and Mazumdar test. This test is based on the simulated distribution of the estimated measure of association, conditional on sampling variances. Furthermore, Spearman's rho is suggested as an alternative rank correlation coefficient. The attained level and power of the tests are studied by simulations of meta-analyses assuming the fixed effects model.
Results: The significance levels of the original Begg and Mazumdar test often deviate considerably from the nominal level, the null hypothesis being rejected too infrequently. It is proven mathematically that the assumptions for using the rank correlation test are not strictly satisfied. The pairs of variables fail to be independent, and there is a correlation between the standardized effect sizes and sampling variances under the null hypothesis of no publication bias. In the meta-analysis setting, the adverse consequences of a false negative test are more profound than the disadvantages of a false positive test. Our alternative test improves the error rates in fixed effects meta-analysis. Its significance level equals the nominal value, and the Type II error rate is reduced. In small data sets Spearman's rho should be preferred to Kendall's tau as the measure of association.
Conclusions: As the attained significance levels of the test introduced by Begg and Mazumdar often deviate greatly from the nominal level, modified rank correlation tests, improving the error rates, should be preferred when testing for publication bias assuming fixed effects meta-analysis.
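The statistic under study is quick to state: correlate the standardized deviates of the effects with their sampling variances using Kendall's tau. The sketch below follows the usual textbook construction of the Begg-Mazumdar test; the paper's modifications (a simulated conditional null distribution, Spearman's rho) are not shown.

```python
# Sketch: Begg-Mazumdar rank correlation test -- Kendall's tau between
# standardized deviates of the effects and their sampling variances.
import numpy as np
from scipy.stats import kendalltau

def begg_mazumdar(y, v):
    w = 1.0 / v
    pooled = np.sum(w * y) / np.sum(w)   # variance-weighted pooled effect
    v_star = v - 1.0 / np.sum(w)         # variance of y_i minus pooled effect
    t = (y - pooled) / np.sqrt(v_star)   # standardized deviates
    return kendalltau(t, v)              # (tau, p-value)

y = np.array([0.10, 0.35, 0.52, 0.08, 0.61, 0.27])
v = np.array([0.04, 0.09, 0.16, 0.02, 0.25, 0.06])
print(begg_mazumdar(y, v))
```

The paper's observation is that t and v are not strictly independent under the null, which is what referring the statistic to a simulated distribution conditional on the sampling variances repairs.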
Affiliation(s)
- Miriam Gjerdevik
- Department of Mathematics, University of Bergen, P. O. Box 7800, N-5020 Bergen, Norway
- Department of Global Public Health and Primary Care, University of Bergen, P. O. Box 7804, N-5018 Bergen, Norway
- Ivar Heuch
- Department of Mathematics, University of Bergen, P. O. Box 7800, N-5020 Bergen, Norway
20
Genbäck M, Stanghellini E, de Luna X. Uncertainty intervals for regression parameters with non-ignorable missingness in the outcome. Stat Pap (Berl) 2014. DOI: 10.1007/s00362-014-0610-x.
21
Verde PE, Ohmann C. Combining randomized and non-randomized evidence in clinical research: a review of methods and applications. Res Synth Methods 2014;6:45-62. DOI: 10.1002/jrsm.1122.
Affiliation(s)
- Pablo E. Verde
- Coordination Center for Clinical Trials, University of Duesseldorf, Germany
- Christian Ohmann
- Coordination Center for Clinical Trials, University of Duesseldorf, Germany
22
Kim NY, Bangdiwala SI, Thaler K, Gartlehner G. SAMURAI: Sensitivity analysis of a meta-analysis with unpublished but registered analytical investigations (software). Syst Rev 2014;3:27. PMID: 24641974. PMCID: PMC4021727. DOI: 10.1186/2046-4053-3-27.
Abstract
Background: The non-availability of clinical trial results contributes to publication bias, diminishing the validity of systematic reviews and meta-analyses. Although clinical trial registries have been established to reduce non-publication, the results from over half of all trials registered in ClinicalTrials.gov remain unpublished even 30 months after completion. Our goals were (i) to utilize information available in registries (specifically, the number and sample sizes of registered unpublished studies) to gauge the sensitivity of a meta-analysis estimate of the effect size and its confidence interval to the non-publication of studies, and (ii) to develop user-friendly open-source software to perform this quantitative sensitivity analysis.
Methods: The open-source software, the R package SAMURAI, was developed using R functions available in the R package metafor. The utility of SAMURAI is illustrated with two worked examples.
Results: Our open-source software SAMURAI can handle meta-analytic datasets of clinical trials with two independent treatment arms. Both binary and continuous outcomes are supported. For each unpublished study, the dataset requires only the sample sizes of each treatment arm and the user-predicted 'outlook' for the study. The user can specify five outlooks ranging from 'very positive' (i.e., very favorable towards intervention) to 'very negative' (i.e., very favorable towards control). SAMURAI assumes that control arms of unpublished studies have effects similar to the effect across control arms of published studies. For each experimental arm of an unpublished study, utilizing the user-provided outlook, SAMURAI randomly generates an effect estimate using a probability distribution, which may be based on a summary effect across published trials. SAMURAI then calculates the estimated summary treatment effect with a random effects model (DerSimonian & Laird method) and outputs the result as a forest plot.
Conclusions: To our knowledge, SAMURAI is currently the only tool that allows systematic reviewers to incorporate information about sample sizes of treatment groups in registered but unpublished clinical trials into their assessment of the potential impact of publication bias on meta-analyses. SAMURAI produces forest plots for visualizing how inclusion of registered unpublished studies might change the results of a meta-analysis. We hope systematic reviewers will find SAMURAI to be a useful addition to their toolkit.
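The pooling step the abstract names, DerSimonian and Laird random-effects estimation over the published plus imputed unpublished studies, is standard and compact. Below is a plain-Python version of that step for readers who want the formulas rather than the R package; the numbers are illustrative.

```python
# Sketch: DerSimonian-Laird random-effects pooling of study effects y with
# within-study variances v.
import numpy as np

def dersimonian_laird(y, v):
    w = 1.0 / v
    y_fe = np.sum(w * y) / np.sum(w)                      # fixed-effect mean
    q = np.sum(w * (y - y_fe) ** 2)                       # Cochran's Q
    tau2 = max(0.0, (q - (len(y) - 1)) /
               (np.sum(w) - np.sum(w ** 2) / np.sum(w)))  # DL between-study variance
    w_re = 1.0 / (v + tau2)
    return np.sum(w_re * y) / np.sum(w_re), tau2          # pooled effect, tau^2

y = np.array([0.25, 0.10, 0.40, -0.05, 0.30])   # published + imputed effects
v = np.array([0.02, 0.05, 0.04, 0.03, 0.06])
print(dersimonian_laird(y, v))
```

In SAMURAI's workflow the vector y would mix observed effects with effects drawn from the outlook-specific distributions, and the calculation would be repeated across outlook scenarios to produce the comparative forest plots.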
Affiliation(s)
- Shrikant I Bangdiwala
- Department of Biostatistics, University of North Carolina, Chapel Hill, NC, 27599, USA.
23
Copas J, Dwan K, Kirkham J, Williamson P. A model-based correction for outcome reporting bias in meta-analysis. Biostatistics 2013;15:370-83. PMID: 24215031. DOI: 10.1093/biostatistics/kxt046.
Abstract
It is often suspected (or known) that outcomes published in medical trials are selectively reported. A systematic review for a particular outcome of interest can only include studies in which that outcome was reported, and so may omit, for example, a study that considered several outcome measures but reported only those giving significant results. Using the methodology of the Outcome Reporting Bias (ORB) in Trials study of Kirkham and others (2010, British Medical Journal 340:c365), we suggest a likelihood-based model for estimating the effect of ORB on confidence intervals and p-values in meta-analysis. Correcting for bias has the effect of moving estimated treatment effects toward the null and hence gives more cautious assessments of significance. The bias can be very substantial, sometimes sufficient to completely overturn previous claims of significance. We re-analyze two contrasting examples and derive a simple fixed effects approximation that can be used to give an initial estimate of the effect of ORB in practice.
Affiliation(s)
- John Copas
- Department of Statistics, University of Warwick, Coventry CV4 7AL, UK