1. Verde PE, Rosner GL. A Bias-Corrected Bayesian Nonparametric Model for Combining Studies With Varying Quality in Meta-Analysis. Biom J 2025; 67:e70034. PMID: 39917836; PMCID: PMC11803498; DOI: 10.1002/bimj.70034
Abstract
Bayesian nonparametric (BNP) approaches for meta-analysis have been developed to relax distributional assumptions and handle the heterogeneity of random effects distributions. These models account for possible clustering and multimodality of the random effects distribution. However, when we combine studies of varying quality, the resulting posterior reflects not only the results of interest but also factors threatening the integrity of the studies' results. We refer to these factors as the studies' internal validity biases (e.g., reporting bias, data quality, and patient selection bias). In this paper, we introduce a new meta-analysis model called the bias-corrected Bayesian nonparametric (BC-BNP) model, which aims to correct automatically for internal validity bias in meta-analysis using only the reported effects and their standard errors. The BC-BNP model is based on a mixture of a parametric random effects distribution, which represents the model of interest, and a BNP model for the bias component. This model relaxes the parametric assumptions on the bias distribution of the model introduced by Verde. Using simulated data sets, we evaluate the BC-BNP model and illustrate its applications with two real case studies. Our results show several potential advantages of the BC-BNP model: (1) It can detect bias when present while producing results similar to a simple normal-normal random effects model when bias is absent. (2) Relaxing the parametric assumptions of the bias component does not affect the model of interest and yields results consistent with the model of Verde. (3) In some applications, a BNP model of bias offers a better understanding of the studies' biases by clustering studies with similar biases. We implemented the BC-BNP model in the R package jarbes, facilitating its practical application.
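For orientation, a minimal sketch of the mixture structure described above, writing the bias component as a generic BNP mixing distribution; the exact likelihood, bias specification, and priors used by Verde and Rosner may differ:
\[
y_i \mid \theta_i \sim N(\theta_i,\ \mathrm{SE}_i^2), \qquad
\theta_i \sim (1-\pi)\, N(\mu,\ \tau^2) \;+\; \pi\, F_{\mathrm{bias}}, \qquad
F_{\mathrm{bias}} \sim \text{BNP prior (e.g., a Dirichlet process mixture)},
\]
where the first component is the parametric model of interest and the second absorbs the studies' internal validity biases.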
Affiliation(s)
- Pablo Emilio Verde
- Coordination Center for Clinical Trials, University Hospital Dusseldorf, Heinrich Heine University of Dusseldorf, Dusseldorf, Germany
- Gary L. Rosner
- Division of Quantitative Sciences, Johns Hopkins University, Baltimore, Maryland, USA
2. Suero M, Botella J, Duran JI, Blazquez-Rincón D. Reformulating the meta-analytical random effects model of the standardized mean difference as a mixture model. Behav Res Methods 2025; 57:74. PMID: 39856379; PMCID: PMC11761815; DOI: 10.3758/s13428-024-02554-6
Abstract
The classical meta-analytical random effects model (REM) has some weaknesses when applied to the standardized mean difference, g. Essentially, the variance of the studies involved is taken as the conditional variance, given a δ value, instead of the unconditional variance. As a consequence, the estimators of the variances involve a dependency between the g values and their variances that distorts the estimates. The classical REM is expressed as a linear model, and the variance of g is obtained through a components-of-variance framework. Although these weaknesses are negligible in practical terms in a wide range of realistic scenarios, taken together they make the classical REM an approximate, simplified version of the meta-analytical random effects model. We present an alternative formulation, as a mixture model, and provide formulas for the expected value, variance, and skewness of the marginal distribution of g. A Monte Carlo simulation supports the accuracy of the formulas. Unbiased estimators of both the mean and the variance of the true effects are then proposed and assessed through Monte Carlo simulations. The advantages of the mixture model formulation over the "classical" formulation are discussed.
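As a rough illustration of the two formulations contrasted above (using the usual large-sample approximation to the conditional variance of the standardized mean difference; the paper's exact moment formulas are not reproduced here):
\[
g_i \mid \delta_i \sim N\big(\delta_i,\ v_i(\delta_i)\big), \qquad
v_i(\delta) \approx \frac{n_{1i}+n_{2i}}{n_{1i}\,n_{2i}} + \frac{\delta^2}{2\,(n_{1i}+n_{2i})}, \qquad
\delta_i \sim N(\mu_\delta,\ \tau^2).
\]
The classical REM plugs in a fixed estimate \(v_i\) and treats \(g_i \sim N(\mu_\delta,\ \tau^2 + v_i)\); the mixture formulation instead works with the marginal density \(f(g_i) = \int N\big(g_i;\ \delta,\ v_i(\delta)\big)\, \phi\big((\delta-\mu_\delta)/\tau\big)\,\tau^{-1}\, d\delta\), which is generally non-normal and skewed.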
Affiliation(s)
- Manuel Suero
- Facultad de Psicología, Universidad Autónoma de Madrid, Campus de Cantoblanco, C/ Ivan Pavlov, 6, 28049, Madrid, Spain
- Juan Botella
- Facultad de Psicología, Universidad Autónoma de Madrid, Campus de Cantoblanco, C/ Ivan Pavlov, 6, 28049, Madrid, Spain.
- Juan I Duran
- Facultad de Psicología, Universidad Autónoma de Madrid, Campus de Cantoblanco, C/ Ivan Pavlov, 6, 28049, Madrid, Spain
3. Cao W, Chu H, Hanson T, Siegel L. A Bayesian nonparametric meta-analysis model for estimating the reference interval. Stat Med 2024; 43:1905-1919. PMID: 38409859; DOI: 10.1002/sim.10001
Abstract
A reference interval represents the normative range for measurements from a healthy population. It plays an important role in laboratory testing, as well as in differentiating healthy from diseased patients. The reference interval based on a single study might not be applicable to a broader population. Meta-analysis can provide a more generalizable reference interval based on the combined population by synthesizing results from multiple studies. However, the assumptions of normally distributed underlying study-specific means and equal within-study variances, which are commonly used in existing methods, are strong and may not hold in practice. We propose a Bayesian nonparametric model with more flexible assumptions to extend random effects meta-analysis for estimating reference intervals. We illustrate through simulation studies and two real data examples the performance of our proposed approach when the assumptions of normally distributed study means and equal within-study variances do not hold.
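For context, a common parametric starting point (not necessarily the authors' exact specification) assumes
\[
y_{ij} \sim N(\theta_i,\ \sigma^2), \qquad \theta_i \sim N(\mu,\ \tau^2)
\quad\Longrightarrow\quad
\text{reference interval} \approx \mu \pm z_{0.975}\,\sqrt{\tau^2 + \sigma^2},
\]
whereas the proposed BNP model relaxes the normality of the study-specific means (and the common within-study variance), so the interval is read off the quantiles of the resulting, more flexible marginal distribution.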
Affiliation(s)
- Wenhao Cao
- Division of Biostatistics and Health Data Science, University of Minnesota, Minneapolis, Minnesota, USA
- Haitao Chu
- Division of Biostatistics and Health Data Science, University of Minnesota, Minneapolis, Minnesota, USA
- Statistical Research and Data Science Center, Pfizer Inc., New York, New York, USA
- Timothy Hanson
- Enterprise CRMS, Medtronic Plc, Mounds View, Minnesota, USA
- Lianne Siegel
- Division of Biostatistics and Health Data Science, University of Minnesota, Minneapolis, Minnesota, USA
4. Ursino M, Zohar S. Discussion on "Bayesian meta-analysis of penetrance for cancer risk" by Thanthirige Lakshika M. Ruberu, Danielle Braun, Giovanni Parmigiani, and Swati Biswas. Biometrics 2024; 80:ujae043. PMID: 38819315; DOI: 10.1093/biomtc/ujae043
Abstract
We congratulate the authors on the new meta-analysis model that accounts for different outcomes. We discuss the modeling choice and the Bayesian setting; specifically, we point out the connection between the Bayesian hierarchical model and a mixed-effects model formulation, and we then discuss possible future extensions of the method.
Affiliation(s)
- Moreno Ursino
- Inserm, Université Paris Cité, Sorbonne Université, Centre de Recherche des Cordeliers, F-75006, Paris, France
- HeKA, Inria Paris, F-75012 Paris, France
- Sarah Zohar
- Inserm, Université Paris Cité, Sorbonne Université, Centre de Recherche des Cordeliers, F-75006, Paris, France
- HeKA, Inria Paris, F-75012 Paris, France
5. Liu Z, Al Amer FM, Xiao M, Xu C, Furuya-Kanamori L, Hong H, Siegel L, Lin L. The normality assumption on between-study random effects was questionable in a considerable number of Cochrane meta-analyses. BMC Med 2023; 21:112. PMID: 36978059; PMCID: PMC10053115; DOI: 10.1186/s12916-023-02823-9
Abstract
BACKGROUND Studies included in a meta-analysis are often heterogeneous. The traditional random-effects models assume their true effects to follow a normal distribution, but it is unclear whether this critical assumption holds in practice. Violations of this between-study normality assumption could lead to problematic meta-analytical conclusions. We aimed to empirically examine if this assumption is valid in published meta-analyses. METHODS In this cross-sectional study, we collected meta-analyses available in the Cochrane Library with at least 10 studies and with between-study variance estimates > 0. For each extracted meta-analysis, we performed the Shapiro-Wilk (SW) test to quantitatively assess the between-study normality assumption. For binary outcomes, we assessed between-study normality for odds ratios (ORs), relative risks (RRs), and risk differences (RDs). Subgroup analyses based on sample sizes and event rates were used to rule out potential confounders. In addition, we obtained the quantile-quantile (Q-Q) plot of study-specific standardized residuals for visually assessing between-study normality. RESULTS Based on 4234 eligible meta-analyses with binary outcomes and 3433 with non-binary outcomes, the proportion of meta-analyses that had statistically significant non-normality varied from 15.1% to 26.2%. RDs and non-binary outcomes led to more frequent non-normality issues than ORs and RRs. For binary outcomes, between-study non-normality was found more frequently in meta-analyses with larger sample sizes and event rates away from 0% and 100%. Agreement between two independent researchers assessing normality from the Q-Q plots was fair to moderate. CONCLUSIONS The between-study normality assumption is commonly violated in Cochrane meta-analyses. This assumption should be routinely assessed when performing a meta-analysis. When it may not hold, alternative meta-analysis methods that do not make this assumption should be considered.
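A minimal sketch of the normality check described above for a single meta-analysis, assuming the metafor package and its built-in BCG vaccine example data; the authors' own implementation may differ.

```r
library(metafor)

# Example binary-outcome meta-analysis shipped with metafor (dat.bcg)
dat <- escalc(measure = "OR", ai = tpos, bi = tneg, ci = cpos, di = cneg, data = dat.bcg)

fit <- rma(yi, vi, data = dat, method = "REML")   # random-effects model
z   <- rstandard(fit)$z                           # study-specific standardized residuals

shapiro.test(z)          # Shapiro-Wilk test of the between-study normality assumption
qqnorm(z); qqline(z)     # Q-Q plot for visual assessment
```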
Affiliation(s)
- Ziyu Liu
- Department of Statistics, Florida State University, Tallahassee, FL, USA
- Fahad M Al Amer
- Department of Mathematics, College of Science and Arts, Najran University, Najran, Saudi Arabia
- Mengli Xiao
- Department of Biostatistics and Informatics, University of Colorado Anschutz Medical Campus, Aurora, CO, USA
- Chang Xu
- Ministry of Education Key Laboratory for Population Health Across-Life Cycle & Anhui Provincial Key Laboratory of Population Health and Aristogenics, Anhui Medical University, Anhui, China
- School of Public Health, Anhui Medical University, Anhui, China
- Luis Furuya-Kanamori
- UQ Centre for Clinical Research, Faculty of Medicine, University of Queensland, Herston, Australia
- Hwanhee Hong
- Department of Biostatistics and Bioinformatics, School of Medicine, Duke University, Durham, NC, USA
- Lianne Siegel
- Division of Biostatistics, University of Minnesota School of Public Health, Minneapolis, MN, USA
- Lifeng Lin
- Department of Epidemiology and Biostatistics, Mel and Enid Zuckerman College of Public Health, University of Arizona, Tucson, AZ, USA.
6.
Abstract
Traumatic injuries account for 10% of all mortalities in the United States. Globally, it is estimated that by the year 2030, 2.2 billion people will be overweight (BMI ≥ 25) and 1.1 billion people will be obese (BMI ≥ 30). Obesity is a known risk factor for suboptimal outcomes in trauma; however, the extent of this impact after blunt trauma remains to be determined. The incidence, prevalence, and mortality rates from blunt trauma by age, gender, cause, BMI, year, and geography were abstracted using datasets from 1) the Global Burden of Disease group, 2) the United States Nationwide Inpatient Sample databank, and 3) two regional Level II trauma centers. Statistical analyses, correlations, and comparisons were made at the global, national, and state levels using these databases to determine the impact of BMI on blunt trauma. The incidence of blunt trauma secondary to falls increased at the global, national, and state levels during our study period from 1990 to 2015, with a corresponding increase in BMI at all levels (P < 0.05). Mortality due to fall injuries was higher in obese patients at all levels (P < 0.05). Analysis of the Nationwide Inpatient Sample database demonstrated higher mortality rates for obese patients nationally, both after motor vehicle collisions and after mechanical falls (P < 0.05). Regional data demonstrated a higher blunt trauma mortality rate in obese versus nonobese patients (2.4% vs 1.2%, respectively, P < 0.05) and a longer hospital length of stay (4.13 vs 3.26 days, respectively, P = 0.018). The obesity rate and the incidence of blunt trauma secondary to falls are increasing, with a higher mortality rate and longer length of stay in obese blunt trauma patients.
7. DesBiens M, Scalia P, Ravikumar S, Glick A, Newton H, Erinne O, Riblet N. A Closer Look at Penicillin Allergy History: Systematic Review and Meta-Analysis of Tolerance to Drug Challenge. Am J Med 2020; 133:452-462.e4. PMID: 31647915; DOI: 10.1016/j.amjmed.2019.09.017
Abstract
BACKGROUND True allergy to penicillin is rare, despite the high frequency with which it is reported. While most patients reporting penicillin allergy are not prone to anaphylaxis, it is not currently known what percentage of these patients will tolerate dose challenges of penicillin-based antibiotics. This review aims to determine the rate of tolerance in patients reporting penicillin allergy when challenged with penicillin-based antibiotics. METHODS We searched MedLine, Embase, and Cochrane Library for publications with English language translations between the years 2000 and 2017. We included randomized controlled trials, quasi-experimental, and observational studies of participants reporting penicillin allergy who received at least one systemic dose of a penicillin in the form of a drug challenge. At least 2 independent reviewers extracted data from included studies and assessed the quality of each included study. To generate primary outcome data, we calculated a summary estimate rate of penicillin tolerance from a pooled proportion of participants receiving penicillin with no adverse effects. RESULTS Initial literature search yielded 2945 studies, of which 23 studies were ultimately included in our review; 5056 study participants with reported history of penicillin allergy were challenged with a penicillin. After weighting for study sample size, a pooled average of 94.4% (95% confidence interval, 93.7%-95%) of participants tolerated the dose challenge without any adverse reaction. CONCLUSION Misrepresented penicillin allergy drives unnecessary use of alternative antibiotics, which may be less effective, more toxic, and more expensive than using penicillin. In addressing the problem of penicillin allergy over-diagnosis, evaluation should go beyond risk for type 1 hypersensitivity. Our data suggest that 94.4% of 5056 participants with reported penicillin allergy determined to be clinically appropriate for allergy evaluation tolerated repeat administration of penicillin-based antibiotics without any adverse reactions. This review generates meaningful information useful to clinical predictive analytics, in evaluating and managing patients with a reported history of penicillin allergy.
Affiliation(s)
- Martha DesBiens
- The Dartmouth Institute for Health Policy and Clinical Practice, Lebanon, NH.
- Peter Scalia
- The Dartmouth Institute for Health Policy and Clinical Practice, Lebanon, NH
- Saiganesh Ravikumar
- The Dartmouth Institute for Health Policy and Clinical Practice, Lebanon, NH
- Andrew Glick
- The Dartmouth Institute for Health Policy and Clinical Practice, Lebanon, NH
- Helen Newton
- The Dartmouth Institute for Health Policy and Clinical Practice, Lebanon, NH
- Okechukwu Erinne
- The Dartmouth Institute for Health Policy and Clinical Practice, Lebanon, NH
- Natalie Riblet
- The Dartmouth Institute for Health Policy and Clinical Practice, Lebanon, NH
8. Mikolajewicz N, Komarova SV. Meta-Analytic Methodology for Basic Research: A Practical Guide. Front Physiol 2019; 10:203. PMID: 30971933; PMCID: PMC6445886; DOI: 10.3389/fphys.2019.00203
Abstract
Basic life science literature is rich with information; however, methodical, quantitative attempts to organize this information are rare. Unlike clinical research, where consolidation efforts are facilitated by systematic review and meta-analysis, the basic sciences seldom use such rigorous quantitative methods. The goal of this study is to present a brief theoretical foundation, computational resources, and a workflow outline, along with a working example, for performing systematic or rapid reviews of basic research followed by meta-analysis. Conventional meta-analytic techniques are extended to accommodate methods and practices found in basic research. Emphasis is placed on handling the heterogeneity that is inherently prevalent in studies that use diverse experimental designs and models. We introduce MetaLab, a meta-analytic toolbox developed in MATLAB R2016b, which implements the described methods and is provided for researchers and statisticians at a Git repository (https://github.com/NMikolajewicz/MetaLab). Throughout the manuscript, a rapid review of intracellular ATP concentrations in osteoblasts is used as an example to demonstrate the workflow and the intermediate and final outcomes of basic research meta-analyses. In addition, features pertaining to larger datasets are illustrated with a systematic review of mechanically stimulated ATP release kinetics in mammalian cells. We discuss the criteria required to ensure outcome validity, as well as exploratory methods to identify influential experimental and biological factors. Thus, meta-analyses provide informed estimates for biological outcomes and the range of their variability, which are critical for hypothesis generation and the evidence-driven design of translational studies, as well as the development of computational models.
Affiliation(s)
- Nicholas Mikolajewicz
- Faculty of Dentistry, McGill University, Montreal, QC, Canada
- Shriners Hospital for Children-Canada, Montreal, QC, Canada
- Svetlana V. Komarova
- Faculty of Dentistry, McGill University, Montreal, QC, Canada
- Shriners Hospital for Children-Canada, Montreal, QC, Canada
9. Barry R, Modarresi M, Duran R, Denning D, Wilson S, Thompson E, Sanabria J. The Impact of Obesity on Outcomes in Geriatric Blunt Trauma. Am Surg 2019. DOI: 10.1177/000313481908500227
Abstract
Blunt trauma is poorly tolerated in the elderly, and the degree to which obesity, a known risk factor for suboptimal outcomes in trauma, affects this population remains to be determined. The incidence, prevalence, and mortality rates of blunt trauma by demographics, year, and geography were found using datasets from both the Global Burden of Disease database and a regional Level II trauma registry. Global Burden of Disease data were extracted from 284 country-year and 976 subnational-year combinations from 27 countries for the period 1990 to 2015. The regional trauma registry was interrogated for patients ≥70 years admitted with blunt trauma between 2014 and 2016. The incidence of elderly blunt trauma from falls increased at the global, national (United States), and state (WV) levels from 1990 to 2015 by 78.3 per cent, 54.7 per cent, and 42.7 per cent, respectively, with concomitant increases in mortality rates of 5.7 per cent, 102.6 per cent, and 89.3 per cent (P < 0.05). The regional cohort had statistically similar mortality in obese (n = 320) and nonobese (n = 926) patients (4.8% vs 4.4%, respectively, P > 0.05). The hospital length of stay, Glasgow Coma Scale score, and systolic blood pressure on presentation were similar (P > 0.05), as was the Injury Severity Score. Major medical comorbidities were identified in 280 (87.5%) and 783 (84.6%) patients in the obese and nonobese groups, respectively. Blunt trauma secondary to falls has increased in elderly patients at the global, national, and state levels, with a concomitant increase in mortality rates. Although a similar increase in the incidence of blunt trauma in the elderly was noted at a regional center, its mortality has not been increased by obesity, possibly because of similar comorbidity rates.
Affiliation(s)
- Juan Sanabria
- Department of Surgery, and The Marshall Institute for Interdisciplinary Research (MIIR), Marshall University Joan Edwards School of Medicine, Huntington, West Virginia
- The Global Burden of Disease Collaborator Study at the Institute of Human Metrics and Evaluation, University of Washington, Seattle, Washington
10. Li Y, Lord-Bessen J, Shiyko M, Loeb R. Bayesian Latent Class Analysis Tutorial. Multivariate Behav Res 2018; 53:430-451. PMID: 29424559; PMCID: PMC6364555; DOI: 10.1080/00273171.2018.1428892
Abstract
This article is a how-to guide on Bayesian computation using Gibbs sampling, demonstrated in the context of Latent Class Analysis (LCA). It is written for students in quantitative psychology or related fields who have a working knowledge of Bayes' theorem and conditional probability and have experience in writing computer programs in the statistical language R. The overall goals are to provide an accessible and self-contained tutorial, along with a practical computation tool. We begin with how Bayesian computation is typically described in academic articles. Technical difficulties are addressed by a hypothetical, worked-out example. We show how Bayesian computation can be broken down into a series of simpler calculations, which can then be assembled together to complete a computationally more complex model. The details are described much more explicitly than is typical in elementary introductions to Bayesian modeling, so that readers are not overwhelmed by the mathematics. Moreover, the provided computer program shows how Bayesian LCA can be implemented with relative ease. The computer program is then applied to a large, real-world data set and explained line by line. We outline the general steps for extending these considerations to other methodological applications. We conclude with suggestions for further readings.
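In the spirit of the tutorial, here is a compact, self-contained Gibbs sampler for a two-class LCA on simulated binary data; it is an independent sketch, not the article's program, and it ignores label switching and other refinements discussed there.

```r
# Minimal Gibbs sampler for a two-class latent class model (independent sketch).
set.seed(1)
N <- 500; J <- 5; K <- 2
true_pi    <- c(0.6, 0.4)                         # class proportions
true_theta <- rbind(rep(0.8, J), rep(0.2, J))     # item-response probabilities
cls <- sample(1:K, N, replace = TRUE, prob = true_pi)
Y   <- matrix(rbinom(N * J, 1, true_theta[cls, ]), N, J)

n_iter <- 2000
pi_k   <- rep(1 / K, K)
theta  <- matrix(0.5, K, J)
keep   <- matrix(NA, n_iter, K)

for (it in 1:n_iter) {
  # 1. Class memberships c_i | pi, theta (one multinomial draw per observation)
  logp <- sapply(1:K, function(k)
    log(pi_k[k]) + Y %*% log(theta[k, ]) + (1 - Y) %*% log(1 - theta[k, ]))
  p  <- exp(logp - apply(logp, 1, max))
  p  <- p / rowSums(p)
  ci <- apply(p, 1, function(pr) sample(1:K, 1, prob = pr))

  # 2. Class proportions pi | c ~ Dirichlet(1 + counts), via normalized gamma draws
  counts <- tabulate(ci, nbins = K)
  g      <- rgamma(K, shape = 1 + counts)
  pi_k   <- g / sum(g)

  # 3. Item probabilities theta_kj | Y, c ~ Beta(1 + successes, 1 + failures)
  for (k in 1:K) {
    sk <- colSums(Y[ci == k, , drop = FALSE])
    theta[k, ] <- rbeta(J, 1 + sk, 1 + counts[k] - sk)
  }
  keep[it, ] <- pi_k
}

colMeans(keep[-(1:500), ])   # posterior means of the class proportions after burn-in
```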
Affiliation(s)
- Yuelin Li
- Department of Psychiatry & Behavioral Sciences, Memorial Sloan Kettering Cancer Center
- Mariya Shiyko
- Department of Applied Psychology, Northeastern University
- Rebecca Loeb
- Department of Psychiatry & Behavioral Sciences, Memorial Sloan Kettering Cancer Center
11. Talbott E, Karabatsos G, Zurheide JL. Informant similarities, twin studies, and the assessment of externalizing behavior: A meta-analysis. J Sch Psychol 2018; 67:31-55. PMID: 29571534; DOI: 10.1016/j.jsp.2017.09.004
Abstract
The purpose of this study was to examine similarity within informant ratings of the externalizing behavior of monozygotic (MZ) and dizygotic (DZ) twin pairs. To do this, we conducted a meta-analysis of correlations within ratings completed by mothers, fathers, teachers, and youth. We retrieved n=204 correlations for MZ twins and n=267 correlations for DZ twins from n=54 studies containing n=55 samples. Results indicated that all four informants were significant negative predictors of within-informant correlations in their ratings of MZ, but not DZ twins. In the case of longitudinal studies and as the age of MZ twins increased, similarity within ratings by mothers was significantly greater than similarity within ratings by fathers. Among participant characteristics, we found that (a) age was a significant negative predictor of similarity within ratings for MZ twins; (b) race was a significant predictor of similarity within ratings for both MZ and DZ twins, but in the opposite direction; and (c) DZ opposite sex twins were a significant negative predictor of within-rating similarity. Among study characteristics for MZ twins, participant group and longitudinal study were significant negative predictors of within-rating similarity, and for both MZ and DZ twin pairs, non-independence in the data was a significant negative predictor of within-rating similarity. For DZ twins, multiple informants were significant positive predictors of within-rating similarity, and in longitudinal studies with DZ twins, similarity within ratings by mothers was significantly greater than similarity within ratings by fathers, and similarity within ratings by fathers was significantly less than similarity within ratings by teachers and youth. For both MZ and DZ twins, the following study characteristics were significant positive predictors of similarity within ratings: study group, number of time points, and multiple constructs. All four informants appeared equally skilled at predicting within-informant correlations for MZ (but not DZ) twins, with participant characteristics having different predictive effects for MZ compared to DZ twins, and study characteristics having comparable predictive effects for both twin types. Overall, these findings suggest effective discrimination on the part of four informants who rated the externalizing behavior of MZ and DZ twins.
12. Karabatsos G. A Bayesian nonparametric test of significance chasing biases. Res Synth Methods 2017; 9:51-61. PMID: 28985020; DOI: 10.1002/jrsm.1269
Abstract
There is a growing concern that much of the published research literature is distorted by the pursuit of statistically significant results. In a seminal article, Ioannidis and Trikalinos (2007, Clinical Trials) proposed an omnibus (I&T) test for significance chasing (SC) biases. This test compares the observed number of studies that report statistically significant results, against their expected number based on study power, assuming a common effect size across studies. The current article extends this approach by developing a Bayesian nonparametric (BNP) meta-regression model and test of SC bias, which can diagnose bias at the individual study level. This new BNP test is based on a flexible model of the predictive distribution of study power, conditionally on study-level covariates which account for study diversity, including diversity due to heterogeneous effect sizes across studies. A test of SC bias proceeds by comparing each study's significant outcome report indicator against its estimated posterior predictive distribution of study power, conditionally on the study's covariates. The BNP model and SC bias test are illustrated through the analyses of 3 meta-analytic data sets and through a simulation study. Software code for the BNP model and test, and the data sets, are provided as Supporting Information.
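A simplified excess-significance comparison in the spirit of the Ioannidis and Trikalinos check described above, using a normal approximation and hypothetical inputs; it is not the exact I&T statistic and not the BNP test developed in the paper.

```r
# Hypothetical inputs: assumed per-study power at alpha = 0.05 and whether each
# study reported a statistically significant result.
power_i <- c(0.35, 0.42, 0.55, 0.28, 0.60, 0.48)
sig_i   <- c(1, 1, 1, 0, 1, 1)

O <- sum(sig_i)                    # observed number of significant studies
E <- sum(power_i)                  # expected number given the assumed powers
V <- sum(power_i * (1 - power_i))  # variance under independence

z <- (O - E) / sqrt(V)
p_excess <- pnorm(z, lower.tail = FALSE)   # one-sided: too many significant results?
c(observed = O, expected = E, p = p_excess)
```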
Affiliation(s)
- George Karabatsos
- Departments of Educational Psychology, and Mathematics, Statistics, and by courtesy, Computer Science, University of Illinois at Chicago, Chicago, IL 60607, USA
13. Ockleford C, Adriaanse P, Berny P, Brock T, Duquesne S, Grilli S, Hougaard S, Klein M, Kuhl T, Laskowski R, Machera K, Pelkonen O, Pieper S, Smith R, Stemmer M, Sundh I, Teodorovic I, Tiktak A, Topping CJ, Wolterink G, Bottai M, Halldorsson T, Hamey P, Rambourg MO, Tzoulaki I, Court Marques D, Crivellente F, Deluyker H, Hernandez-Jerez AF. Scientific Opinion of the PPR Panel on the follow-up of the findings of the External Scientific Report 'Literature review of epidemiological studies linking exposure to pesticides and health effects'. EFSA J 2017; 15:e05007. PMID: 32625302; PMCID: PMC7009847; DOI: 10.2903/j.efsa.2017.5007
Abstract
In 2013, EFSA published a comprehensive systematic review of epidemiological studies published from 2006 to 2012 investigating the association between pesticide exposure and many health outcomes. Despite the considerable amount of epidemiological information available, the quality of much of this evidence was rather low, and many limitations likely affect the results, so firm conclusions cannot be drawn. Studies that do not meet the 'recognised standards' mentioned in Regulation (EU) No 1107/2009 are thus not suited for risk assessment. In this Scientific Opinion, the EFSA Panel on Plant Protection Products and their Residues (PPR Panel) was requested to assess the methodological limitations of pesticide epidemiology studies and found that poor exposure characterisation was the primary limitation. Frequent use of case-control studies, as opposed to prospective studies, was considered another limitation. Inadequate definition of, or deficiencies in, health outcomes need to be avoided, and the reporting of findings could be improved in some cases. The PPR Panel proposed recommendations on how to improve the quality and reliability of pesticide epidemiology studies to overcome these limitations and to facilitate an appropriate use for risk assessment. The Panel recommended the conduct of systematic reviews and, where appropriate, meta-analyses of pesticide observational studies as a useful methodology for understanding the potential hazards of pesticides, exposure scenarios and methods for assessing exposure, exposure-response characterisation, and risk characterisation. Finally, the PPR Panel proposed a methodological approach to integrate and weight multiple lines of evidence, including epidemiological data, for pesticide risk assessment. Biological plausibility can contribute to establishing causation.
14. A menu-driven software package of Bayesian nonparametric (and parametric) mixed models for regression analysis and density estimation. Behav Res Methods 2017; 49:335-362. PMID: 26956682; DOI: 10.3758/s13428-016-0711-7
Abstract
Most applied statistics involves regression analysis of data. In practice, it is important to specify a regression model that has minimal assumptions that are not violated by the data, to ensure that statistical inferences from the model are informative and not misleading. This paper presents a stand-alone, menu-driven software package, Bayesian Regression: Nonparametric and Parametric Models, constructed with the MATLAB Compiler. Currently, this package gives the user a choice from 83 Bayesian models for data analysis. They include 47 Bayesian nonparametric (BNP) infinite-mixture regression models; 5 BNP infinite-mixture models for density estimation; and 31 normal random effects models (HLMs), including normal linear models. Each of the 78 regression models handles either a continuous, binary, or ordinal dependent variable and can handle multi-level (grouped) data. All 83 Bayesian models can handle the analysis of weighted observations (e.g., for meta-analysis) and the analysis of left-censored, right-censored, and/or interval-censored data. Each BNP infinite-mixture model has a mixture distribution assigned one of various BNP prior distributions, including priors defined by either the Dirichlet process, Pitman-Yor process (including the normalized stable process), beta (two-parameter) process, normalized inverse-Gaussian process, geometric weights prior, dependent Dirichlet process, or the dependent infinite-probits prior. The software user can mouse-click to select a Bayesian model and perform data analysis via Markov chain Monte Carlo (MCMC) sampling. After the sampling completes, the software automatically opens text output that reports MCMC-based estimates of the model's posterior distribution and the model's predictive fit to the data. Additional text and/or graphical output can be generated by mouse-clicking other menu options. This includes output of MCMC convergence analyses and estimates of the model's posterior predictive distribution for selected functionals and values of covariates. The software is illustrated through the BNP regression analysis of real data.
15. Partlett C, Riley RD. Random effects meta-analysis: Coverage performance of 95% confidence and prediction intervals following REML estimation. Stat Med 2016; 36:301-317. PMID: 27714841; PMCID: PMC5157768; DOI: 10.1002/sim.7140
Abstract
A random effects meta-analysis combines the results of several independent studies to summarise the evidence about a particular measure of interest, such as a treatment effect. The approach allows for unexplained between-study heterogeneity in the true treatment effect by incorporating random study effects about the overall mean. The variance of the mean effect estimate is conventionally calculated by assuming that the between-study variance is known; however, it has been demonstrated that this approach may be inappropriate, especially when there are few studies. Alternative methods that aim to account for this uncertainty, such as Hartung-Knapp, Sidik-Jonkman and Kenward-Roger, have been proposed and shown to improve upon the conventional approach in some situations. In this paper, we use a simulation study to examine the performance of several of these methods in terms of the coverage of the 95% confidence and prediction intervals derived from a random effects meta-analysis estimated using restricted maximum likelihood. We show that, in terms of the confidence intervals, the Hartung-Knapp correction performs well across a wide range of scenarios and outperforms other methods when heterogeneity is large and/or study sizes are similar. However, the coverage of the Hartung-Knapp method is slightly too low when the heterogeneity is low (I² < 30%) and the study sizes are quite varied. In terms of prediction intervals, the conventional approach is only valid when heterogeneity is large (I² > 30%) and study sizes are similar. In other situations, especially when heterogeneity is small and the study sizes are quite varied, the coverage is far too low and could not be consistently improved by either increasing the number of studies, altering the degrees of freedom, or using variance inflation methods. Therefore, researchers should be cautious in deriving 95% prediction intervals following a frequentist random-effects meta-analysis until a more reliable solution is identified.
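A sketch of how the two intervals discussed above can be obtained, assuming the metafor package and its built-in BCG vaccine example data; the paper's own, much more extensive simulation code is not reproduced here.

```r
library(metafor)

dat <- escalc(measure = "RR", ai = tpos, bi = tneg, ci = cpos, di = cneg, data = dat.bcg)

# REML random-effects fit with the Hartung-Knapp (Knapp-Hartung) adjustment
fit <- rma(yi, vi, data = dat, method = "REML", test = "knha")

summary(fit)    # 95% confidence interval for the mean effect (HK-adjusted)
predict(fit)    # also reports a 95% prediction interval (pi.lb, pi.ub)
```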
Affiliation(s)
- Christopher Partlett
- National Perinatal Epidemiology Unit, Oxford, U.K.; University of Birmingham, Birmingham, U.K.
- Richard D Riley
- Research Institute for Primary Care and Health Sciences, Keele University, Keele, U.K