1. Russo L, Siena LM, Farina S, Pastorino R, Boccia S, Ioannidis JPA. High-impact trials with genetic and -omics information focus on cancer mutations, are industry-funded, and less transparent. J Clin Epidemiol 2025;180:111676. PMID: 39826627. DOI: 10.1016/j.jclinepi.2025.111676.
Abstract
OBJECTIVES: To assess how genetic and -omics information is used in the most cited recent clinical trials and to evaluate industry involvement and transparency patterns.
STUDY DESIGN AND SETTING: This is a meta-research evaluation using a previously constructed database of the 600 most cited clinical trials published from 2019 to 2022. Trials that utilized genetic or -omics characterization of participants in the trial design, analysis, and results were considered eligible.
RESULTS: 132 (22%) trials used genetic or -omics information, predominantly for detection of cancer mutations (n = 101). Utilization included eligibility criteria (n = 59), subgroup analysis (n = 82), and use as a stratification factor in randomization (n = 14). Authors addressed the relevance in the conclusions in 82 studies (62%). 102 studies (77%) provided data availability statements and six had data already available. Most studies had industry funding (n = 111 [84.0%]). Oncology trials were more likely to be industry-funded (90.1% vs 64.5%, P = .001), to have industry-affiliated analysts (43.6% vs 22.6%, P = .036), and to favor industry-sponsored interventions (83.2% vs 58.1%, P = .004). Compared with other trials, genetic and -omics trials were more likely to be funded by industry (84% vs 63.9%, P < .001) and tended to be less likely to have full protocols (P = .018) and statistical analysis plans (P = .04) available.
CONCLUSION: Our study highlights the current underutilization of genetic and -omics technologies beyond testing for cancer mutations. Industry involvement in these trials appears to be more substantial and transparency is more limited, raising concerns about potential bias.
Affiliation(s)
- Luigi Russo: Section of Hygiene, Department of Life Sciences and Public Health, Università Cattolica del Sacro Cuore, Rome, Italy
- Leonardo M Siena: Department of Public Health and Infectious Diseases, Sapienza University of Rome, Rome, Italy
- Sara Farina: Section of Hygiene, Department of Life Sciences and Public Health, Università Cattolica del Sacro Cuore, Rome, Italy
- Roberta Pastorino: Section of Hygiene, Department of Life Sciences and Public Health, Università Cattolica del Sacro Cuore, Rome, Italy; Department of Woman and Child Health and Public Health, Fondazione Policlinico Universitario A. Gemelli IRCCS, Rome, Italy
- Stefania Boccia: Section of Hygiene, Department of Life Sciences and Public Health, Università Cattolica del Sacro Cuore, Rome, Italy; Department of Public Health and Infectious Diseases, Sapienza University of Rome, Rome, Italy
- John P A Ioannidis: Stanford Prevention Research Center, Department of Medicine, Stanford University School of Medicine, Stanford, CA, USA; Department of Epidemiology and Population Health, Stanford University School of Medicine, Stanford, CA, USA; Department of Biomedical Data Science, Stanford University School of Medicine, Stanford, CA, USA; Meta-Research Innovation Center at Stanford (METRICS), Stanford University, Stanford, CA, USA
2. Hall SS, Juszczak E, Birchenall M, Elbourne D, Beller E, Chan AW, Little P, Montgomery AA, Kahan BC. Mixed-methods study to develop extensions to the SPIRIT and CONSORT statements for factorial randomised trials: the Reporting Factorial Trials (RAFT) study. BMJ Open 2025;15:e082917. PMID: 39961717. PMCID: PMC11836851. DOI: 10.1136/bmjopen-2023-082917.
Abstract
BACKGROUND: Extensions to the Standard Protocol Items: Recommendations for Interventional Trials (SPIRIT) and Consolidated Standards of Reporting Trials (CONSORT) reporting recommendations, specifically for factorial trials, have been developed by the Reporting Factorial Trials (RAFT) study group. This article describes the processes and methods used to develop the extensions.
OBJECTIVE: To develop SPIRIT and CONSORT extensions for factorial trials.
DESIGN AND PARTICIPANTS: A four-phase, consensus-based approach was used: phase 1, scoping review; phase 2, Delphi survey (n=104 respondents in round 1); phase 3, consensus meeting (n=15 members); and phase 4, checklist finalisation.
RESULTS: In phase 1, the scoping review identified 31 reporting recommendations, which formed a long list of 50 concepts (19 applying to the SPIRIT extension and 31 to the CONSORT extension) to include in the guideline development. In phase 2, a three-round Delphi survey added two new concepts and ended with 49 concepts (19 for SPIRIT and 30 for CONSORT) reaching consensus to remain, with only three concepts meeting the exclusion criteria. In phase 3, the concepts were further refined and translated into specific extension item wording through an extensive review process conducted by the core RAFT team and leading trial experts, who attended a 2-day hybrid meeting. The resulting 9 SPIRIT items and 17 CONSORT items were further evaluated and developed through an iterative process in phase 4, to promote user acceptance and uptake.
CONCLUSION: Uptake of the CONSORT and SPIRIT extensions will improve the conduct of factorial trials, as well as the understanding and interpretation of such trials. By reporting how these extensions were developed, we promote transparency of the process and share learning experiences to help develop best practice for reporting guidelines.
Affiliation(s)
- Sophie S Hall: Nottingham Clinical Trials Unit, School of Medicine, University of Nottingham, Nottingham, UK
- Edmund Juszczak: Nottingham Clinical Trials Unit, School of Medicine, University of Nottingham, Nottingham, UK
- Megan Birchenall: Nottingham Clinical Trials Unit, School of Medicine, University of Nottingham, Nottingham, UK
- Diana Elbourne: Faculty of Epidemiology and Population Health, The London School of Hygiene & Tropical Medicine, London, UK
- Elaine Beller: Centre for Research in Evidence Based Practice, Bond University, Gold Coast, Queensland, Australia
- An-Wen Chan: Women's College Research Institute, University of Toronto, Toronto, Ontario, Canada
- Paul Little: Primary Care and Population Sciences, University of Southampton, Southampton, UK
- Alan A Montgomery: Nottingham Clinical Trials Unit, School of Medicine, University of Nottingham, Nottingham, UK
3. Thompson JY, Shaw J, Watson SI, Wang Y, Robinson C, Taljaard M, Hemming K. Review of the quality of reporting of statistical analysis plans for cluster randomized trials. J Clin Epidemiol 2025;181:111726. PMID: 39961476. DOI: 10.1016/j.jclinepi.2025.111726.
Abstract
BACKGROUND AND OBJECTIVES: The guideline for the content of Statistical Analysis Plans (SAPs) outlines recommendations for items to be included in SAPs. As yet, there is no specific tailoring of this guideline for Cluster Randomized Trials (CRTs), and there has been no assessment of the reporting quality of SAPs against it. Our aim was to identify how well a sample of SAPs for CRTs adheres to the key items in the current guideline, as well as to additional analysis aspects considered important in CRTs.
METHODS: We included (i) fully published standalone SAPs identified via Ovid-MEDLINE and (ii) SAPs published as supplementary material or appendices to the final published report, identified by searching an existing database of nearly 800 CRTs.
RESULTS: The search identified 85 unique SAPs: 26 were published in standalone format and 59 as supplementary material to the full trial report. Clarity of reporting of items in the current guideline was mixed (eg, most [61/85, 72%] reported which covariates would be included in any adjustment, but fewer [26/85, 31%] reported which method would be used to estimate the absolute measure of effect). Considering additional aspects important for CRTs, the majority (79/85, 93%) included a plan to allow for clustering in the analysis, but fewer (10/40, 25%) reported how a small number of clusters would be accommodated (considered relevant only for the subset of CRTs with fewer than 40 clusters). Few (5/85, 6%) reported how the intracluster correlation would be estimated. Statistical targets of inference were rarely reported clearly: in only two SAPs (2/85, 2%) was it clear whether the objectives related to the individual-level or cluster-level average, and, in trials where relevant, only three (3/70, 4%) clearly reported whether the objectives related to the marginal or cluster-specific effect.
CONCLUSION: This review has identified specific areas of poor reporting quality that might need additional consideration when developing guidance for the reporting of SAPs for CRTs.
Affiliation(s)
- Julia Shaw: Methodological and Implementation Research Program, Ottawa Hospital Research Institute, Ottawa, Canada; School of Epidemiology and Public Health, University of Ottawa, 1053 Carling Avenue, Ottawa, Canada
- Samuel I Watson: Institute of Applied Health Research, University of Birmingham, Birmingham, UK
- Yixin Wang: Institute of Applied Health Research, University of Birmingham, Birmingham, UK
- Clare Robinson: Pragmatic Clinical Trials Unit, Centre for Evaluation and Methods, Wolfson Institute of Population Health, Queen Mary University of London, London, UK
- Monica Taljaard: Methodological and Implementation Research Program, Ottawa Hospital Research Institute, Ottawa, Canada; School of Epidemiology and Public Health, University of Ottawa, 1053 Carling Avenue, Ottawa, Canada
- Karla Hemming: Institute of Applied Health Research, University of Birmingham, Birmingham, UK
4. Boyne N, Duke A, Rea J, Khan A, Young A, Van Vleet J, Vassar M. Discrepancies in safety reporting for chronic back pain clinical trials: an observational study from ClinicalTrials.gov and publications. BMC Med Res Methodol 2025;25:33. PMID: 39915715. PMCID: PMC11800428. DOI: 10.1186/s12874-025-02486-5.
Abstract
INTRODUCTION: Chronic back pain (CBP) is a leading cause of disability worldwide and is commonly managed with pharmacological, non-pharmacological, and procedural interventions. However, adverse event (AE) reporting for these therapies often lacks transparency, raising concerns about the accuracy of safety data. This study aimed to quantify inconsistencies in AE reporting between ClinicalTrials.gov and corresponding randomized controlled trial (RCT) publications, emphasizing the importance of comprehensive safety reporting to improve clinical decision-making and patient care.
METHODS: We retrospectively analyzed Phase 2-4 CBP RCTs registered on ClinicalTrials.gov from 2009 to 2023. Extracted data included AE reporting, trial sponsorship, and discrepancies in serious adverse events (SAEs), other adverse events (OAEs), mortality, and treatment-related withdrawals between registry entries and publications. Statistical analyses assessed reporting inconsistencies, following STROBE guidelines.
RESULTS: A total of 114 registered trials were identified, of which 40 (35.1%) had corresponding publications; 67.5% of these were industry-sponsored. Only 4 (10%) publications fully reported AEs without discrepancies, while 36 (90%) contained at least one inconsistency compared with ClinicalTrials.gov. Discontinuation due to AEs was explicitly reported in 24 (60%) ClinicalTrials.gov entries and 30 (75%) publications, with discrepancies in 16 trials (40%). SAEs were reported differently in 15 (37.5%) publications, 80% of which reported fewer SAEs than ClinicalTrials.gov. OAEs showed discrepancies in 37 (92.5%) publications, with 43.2% reporting fewer and 54.1% reporting more OAEs.
DISCUSSION: This study highlights pervasive discrepancies in AE reporting for CBP trials, undermining the reliability of published safety data. Inconsistent reporting poses risks to clinical decision-making and patient safety. Adopting standardized reporting guidelines, such as CONSORT Harms, and ensuring transparent updates in publications could enhance the accuracy and trustworthiness of safety data. Journals and regulatory bodies should enforce compliance, and future efforts should develop mechanisms to monitor and correct reporting inconsistencies.
Affiliation(s)
- Nick Boyne: Office of Medical Student Research, Oklahoma State University Center for Health Sciences, Tulsa, OK, USA
- Alison Duke: Office of Medical Student Research, Oklahoma State University Center for Health Sciences, Tulsa, OK, USA
- Jack Rea: Office of Medical Student Research, Oklahoma State University Center for Health Sciences, Tulsa, OK, USA
- Adam Khan: Office of Medical Student Research, Oklahoma State University Center for Health Sciences, Tulsa, OK, USA
- Alec Young: Office of Medical Student Research, Oklahoma State University Center for Health Sciences, Tulsa, OK, USA
- Jared Van Vleet: Office of Medical Student Research, Oklahoma State University Center for Health Sciences, Tulsa, OK, USA
- Matt Vassar: Office of Medical Student Research, Oklahoma State University Center for Health Sciences, Tulsa, OK, USA; Department of Psychiatry and Behavioral Sciences, Oklahoma State University Center for Health Sciences, Tulsa, OK, USA
5. Davis-Stober CP, Sarafoglou A, Aczel B, Chandramouli SH, Errington TM, Field SM, Fishbach A, Freire J, Ioannidis JPA, Oberauer K, Pestilli F, Ressl S, Schad DJ, ter Schure J, Tentori K, van Ravenzwaaij D, Vandekerckhove J, Gundersen OE. How can we make sound replication decisions? Proc Natl Acad Sci U S A 2025;122:e2401236121. PMID: 39869811. PMCID: PMC11804638. DOI: 10.1073/pnas.2401236121.
Abstract
Replication and the reported crises impacting many fields of research have become a focal point for the sciences. This has led to reforms in publishing, methodological design and reporting, and increased numbers of experimental replications coordinated across many laboratories. While replication is rightly considered an indispensable tool of science, financial resources and researchers' time are quite limited. In this perspective, we examine different values and attitudes that scientists can consider when deciding whether to replicate a finding and how. We offer a conceptual framework for assessing the usefulness of various replication tools, such as preregistration.
Affiliation(s)
- Clintin P. Davis-Stober: Department of Psychological Sciences, University of Missouri, Columbia, MO 65211; University of Missouri Institute for Data Science and Informatics, University of Missouri, Columbia, MO 65211
- Alexandra Sarafoglou: Department of Psychology, Psychological Methods Unit, University of Amsterdam, Amsterdam 1001 NK, The Netherlands
- Balazs Aczel: Department of Affective Psychology, Institute of Psychology, Eotvos Lorand University, Budapest 1063, Hungary
- Suyog H. Chandramouli: Department of Information and Computer Engineering, Aalto University, Espoo 02150, Finland; Department of Psychology, Princeton University, Princeton, NJ 08544
- Sarahanne M. Field: Pedagogical and Educational Sciences, University of Groningen, Groningen 9712 TJ, The Netherlands
- Ayelet Fishbach: Booth School of Business, University of Chicago, Chicago, IL 60637
- Juliana Freire: Department of Computer Science, Tandon School of Engineering, New York University, New York, NY 10011; Center for Data Science, New York University, New York, NY 10011
- John P. A. Ioannidis: Department of Medicine, Stanford University, Stanford, CA 94305; Department of Epidemiology and Population Health, Stanford University, Stanford, CA 94305; Department of Biomedical Data Science, Stanford University, Stanford, CA 94305; Meta-Research Innovation Center at Stanford, Stanford University, Stanford, CA 94305
- Klaus Oberauer: Department of Psychology, University of Zurich, Zurich 8050, Switzerland
- Franco Pestilli: Department of Psychology, University of Texas at Austin, Austin, TX 78712; Department of Neuroscience, University of Texas at Austin, Austin, TX 78712; Center for Learning and Memory, The University of Texas at Austin, Austin, TX 78712
- Susanne Ressl: Department of Neuroscience, University of Texas at Austin, Austin, TX 78712
- Daniel J. Schad: Institute of Mind, Brain and Behavior, Psychology Department, Health and Medical University, Potsdam 14471, Germany
- Judith ter Schure: Department of Epidemiology and Data Science, Amsterdam University Medical Center, Amsterdam 1105 AZ, The Netherlands
- Katya Tentori: Center for Mind/Brain Sciences, University of Trento, Rovereto 38068, Italy
- Don van Ravenzwaaij: Department of Psychology, Psychometrics and Statistics, University of Groningen, Groningen 9712 TS, The Netherlands
- Joachim Vandekerckhove: Department of Cognitive Sciences, University of California, Irvine, CA 92697; Department of Statistics, University of California, Irvine, CA 92697; Department of Logic & Philosophy, University of California, Irvine, CA 92697
- Odd Erik Gundersen: Department of Computer Science, Faculty of Information Technology and Electrical Engineering, Norwegian University of Science and Technology, Trondheim 7030, Norway
6. Khan MS, Zarmer LF, Liang J, Saroukhani S, Lucas AR, McCartney CJL, Chaudhry R. Evaluating multiplicity reporting in analgesic clinical trials: an analytical review. Eur J Pain 2025;29:e4756. PMID: 39584590. DOI: 10.1002/ejp.4756.
Abstract
BACKGROUND AND OBJECTIVES: Analgesic trials often demand multiple comparisons to assess various treatment arms, outcomes, or repeated assessments. These multiple comparisons risk inflating the false-positive rate. The extent of multiplicity correction in recent analgesic randomized controlled trials (RCTs) remains unclear despite advances in statistical methods and regulatory guidelines. Our study aimed to identify reporting inadequacies in adjustments for multiple analyses, and the explanations given for them, to understand these deficiencies.
DATABASES AND DATA TREATMENT: This review analysed RCTs from the European Journal of Pain, the Journal of Pain, and PAIN, published between January 2018 and December 2022. We included randomized, double-blind trials focusing on pain outcomes. Data extraction, managed by three researchers using predefined criteria, covered trial characteristics, the presence of multiplicity, and correction methods. Descriptive statistical analyses included Fisher's exact test, with the Holm method for multiple comparisons.
RESULTS: Of 112 articles, 48 pre-specified a primary analysis plan. Multiple analyses were observed in 65 articles, with 60% adjusting for all comparisons, primarily using the Bonferroni method. Compared with previous studies, no significant changes in multiplicity correction practices were noted when stratified by trial type, size, or sponsor.
CONCLUSIONS: The study reveals a persistent reliance on multiple comparisons in analgesic clinical trials without a corresponding increase in multiplicity corrections, emphasizing a need for enhanced reporting and implementation of statistical adjustments. We acknowledge limitations in categorizing studies, the use of a surrogate for the trial stage, and sourcing data from journal webpages rather than a database.
SIGNIFICANCE STATEMENT: This study flags inadequate reporting of multiplicity correction in analgesic trials, stressing the risk of false positives and the urgent need for enhanced reporting to boost reproducibility.
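As a concrete illustration of the two corrections named in this abstract, the sketch below compares the Bonferroni and Holm step-down adjustments on a hypothetical set of p-values (the values and function names are illustrative, not taken from the reviewed trials):

```python
def bonferroni(pvals):
    """Bonferroni: multiply every p-value by the number of tests, cap at 1."""
    m = len(pvals)
    return [min(1.0, p * m) for p in pvals]

def holm(pvals):
    """Holm step-down: multiply the k-th smallest p-value by (m - k),
    then enforce monotonicity so adjusted values never decrease."""
    m = len(pvals)
    order = sorted(range(m), key=lambda i: pvals[i])
    adjusted = [0.0] * m
    running_max = 0.0
    for rank, i in enumerate(order):
        running_max = max(running_max, min(1.0, (m - rank) * pvals[i]))
        adjusted[i] = running_max
    return adjusted

raw = [0.01, 0.02, 0.04]  # hypothetical p-values from three comparisons
print(bonferroni(raw))
print(holm(raw))
```

Holm controls the same family-wise error rate as Bonferroni but is uniformly no less powerful, which may explain why it was chosen for the review's own descriptive analyses.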
Affiliation(s)
- Maaz S Khan: Banner University Medical Center-Tucson, Tucson, Arizona, USA; Department of Anesthesiology and Pain Medicine, College of Medicine, University of Arizona, Tucson, Arizona, USA
- Lori F Zarmer: Department of Anesthesiology and Pain Medicine, College of Medicine, University of Arizona, Tucson, Arizona, USA
- Jie Liang: Division of Clinical and Translational Sciences, Department of Internal Medicine, McGovern Medical School, The University of Texas Health Science Center at Houston, Houston, Texas, USA
- Sepideh Saroukhani: Division of Clinical and Translational Sciences, Department of Internal Medicine, McGovern Medical School, The University of Texas Health Science Center at Houston, Houston, Texas, USA; Biostatistics/Epidemiology/Research Design (BERD) Component, Center for Clinical and Translational Sciences (CCTS), The University of Texas Health Science Center at Houston, Houston, Texas, USA
- Anthony R Lucas: Banner University Medical Center-Tucson, Tucson, Arizona, USA; Department of Anesthesiology and Pain Medicine, College of Medicine, University of Arizona, Tucson, Arizona, USA
- Colin J L McCartney: Department of Anesthesiology and Pain Medicine, College of Medicine, Sunnybrook Health Science Center, University of Toronto, Toronto, Ontario, Canada
- Rabail Chaudhry: Banner University Medical Center-Tucson, Tucson, Arizona, USA; Department of Anesthesiology and Pain Medicine, College of Medicine, University of Arizona, Tucson, Arizona, USA
7. Papageorgiou SN, Giannakopoulou T, Eliades T, Vandevska-Radunovic V. Occlusal outcome of orthodontic treatment: a systematic review with meta-analyses of randomized trials. Eur J Orthod 2024;46:cjae060. PMID: 39607678. PMCID: PMC11602743. DOI: 10.1093/ejo/cjae060.
Abstract
BACKGROUND: Several appliances or treatment protocols are marketed to patients or orthodontists as being associated with improved orthodontic outcomes. However, clinical decision-making should be based on robust scientific evidence, not marketing claims or anecdotal evidence.
OBJECTIVE: To identify appliances/protocols associated with improved outcomes of fixed appliance treatment.
SEARCH METHODS: Unrestricted literature searches in seven databases/registers for human studies until March 2024.
SELECTION CRITERIA: Randomized or quasi-randomized clinical trials on human patients of any age, sex, or ethnicity receiving comprehensive orthodontic treatment with fixed appliances and assessing occlusal outcome with either the Peer Assessment Rating (PAR) or the American Board of Orthodontics-Objective Grading System (ABO-OGS) index.
DATA COLLECTION AND ANALYSIS: Duplicate/independent study selection, data extraction, and risk-of-bias assessment with the Cochrane RoB 2 tool. Random-effects meta-analyses of averages or mean differences with their 95% confidence intervals (CIs), followed by meta-regression/subgroup/sensitivity analyses and assessment of the quality of clinical recommendations with the Grades of Recommendations, Assessment, Development, and Evaluation (GRADE) approach.
RESULTS: Data from 20 small to moderately sized trials covering 1470 patients indicated that orthodontic treatment with fixed appliances is effective, resulting on average in a final PAR score of 6.0 points (95% CI 3.9-8.2), an absolute PAR reduction of 23.0 points (95% CI 15.6-30.4), a percentage PAR reduction of 82.6% (95% CI 70.8%-94.4%), and an absolute ABO-OGS score of 18.9 points (95% CI 11.7-26.2). However, very high between-study heterogeneity (I² > 75%) was seen for both PAR and ABO-OGS. Extraction treatment was associated with significantly better occlusal outcome than non-extraction treatment on ABO-OGS (12.9 versus 16.6 points; P = .02). There was no statistically significant difference in occlusal outcome with (i) 0.018″-slot or 0.022″-slot brackets; (ii) customized or prefabricated brackets; (iii) anchorage reinforcement with temporary anchorage devices; (iv) use of vibrational adjuncts; or (v) aligners versus fixed appliances (P > .05 in all instances), while small benefits were seen with indirectly bonded brackets.
CONCLUSIONS: Considerable between-study heterogeneity exists in the reported occlusal outcome of fixed appliance treatment, and different appliances or adjuncts have little effect on it. Standardization and/or automation of the scoring procedures for PAR and ABO-OGS might help improve the consistency and reliability of outcome measurement in orthodontic trials.
REGISTRATION: PROSPERO (CRD42024525088).
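For readers unfamiliar with the random-effects pooling used here, a minimal sketch of the DerSimonian-Laird estimator (a common choice; the review does not state which estimator it used) with hypothetical study data shows how a pooled average, its 95% CI, and the I² heterogeneity statistic are obtained:

```python
import math

def dersimonian_laird(estimates, variances):
    """Random-effects pooling (DerSimonian-Laird).

    Returns the pooled estimate, its 95% confidence interval,
    and the I-squared heterogeneity percentage."""
    k = len(estimates)
    w = [1.0 / v for v in variances]                      # fixed-effect weights
    sw = sum(w)
    fixed = sum(wi * yi for wi, yi in zip(w, estimates)) / sw
    q = sum(wi * (yi - fixed) ** 2 for wi, yi in zip(w, estimates))  # Cochran's Q
    c = sw - sum(wi ** 2 for wi in w) / sw
    tau2 = max(0.0, (q - (k - 1)) / c)                    # between-study variance
    w_re = [1.0 / (v + tau2) for v in variances]          # random-effects weights
    pooled = sum(wi * yi for wi, yi in zip(w_re, estimates)) / sum(w_re)
    se = math.sqrt(1.0 / sum(w_re))
    i2 = 100.0 * max(0.0, (q - (k - 1)) / q) if q > 0 else 0.0
    return pooled, (pooled - 1.96 * se, pooled + 1.96 * se), i2

# Hypothetical final PAR scores and within-study variances from three studies
pooled, ci, i2 = dersimonian_laird([5.2, 6.8, 7.1], [0.4, 0.9, 0.6])
print(pooled, ci, i2)
```

With heterogeneous studies, tau² widens the confidence interval relative to a fixed-effect analysis, which is why the very high I² reported above translates into wide CIs for the pooled scores.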
Affiliation(s)
- Spyridon N Papageorgiou: Clinic of Orthodontics and Pediatric Dentistry, Center for Dental Medicine, University of Zurich, Plattenstrasse 11, 8032 Zurich, Switzerland
- Theodora Giannakopoulou: Department of Paediatric Oral Health and Orthodontics, University Centre for Dental Medicine UZB, University of Basel, Mattenstrasse 40, 4058 Basel, Switzerland
- Theodore Eliades: Clinic of Orthodontics and Pediatric Dentistry, Center for Dental Medicine, University of Zurich, Plattenstrasse 11, 8032 Zurich, Switzerland
- Vaska Vandevska-Radunovic: Department of Orthodontics, Institute of Clinical Dentistry, Faculty of Dentistry, University of Oslo, P.O. Box 1072 Blindern, N-0316 Oslo, Norway
8. Siena LM, Papamanolis L, Siebert MJ, Bellomo RK, Ioannidis JPA. Industry Involvement and Transparency in the Most Cited Clinical Trials, 2019-2022. JAMA Netw Open 2023;6:e2343425. PMID: 37962883. PMCID: PMC10646728. DOI: 10.1001/jamanetworkopen.2023.43425.
Abstract
Importance: Industry involvement is prominent in influential clinical trials, and commitments to transparency of trials are highly variable.
Objective: To evaluate the modes of industry involvement and the transparency features of the most cited recent clinical trials across medicine.
Design, Setting, and Participants: This cross-sectional study was a meta-research assessment including randomized and nonrandomized clinical trials published in 2019 or later. The 600 trials of any type of disease or setting that attracted the highest numbers of citations in Scopus as of December 2022 were selected for analysis. Data were analyzed from March to September 2023.
Main Outcomes and Measures: Outcomes of interest were industry involvement (sponsor, author, and analyst) and transparency (protocols, statistical analysis plans, and data and code availability).
Results: Among the 600 trials assessed, with a median (IQR) sample size of 415 (124-1046) participants, 409 (68.2%) had industry funding and 303 (50.5%) were exclusively industry-funded. A total of 354 trials (59.0%) had industry authors, with 280 trials (46.6%) involving industry analysts and 125 trials (20.8%) analyzed exclusively by industry analysts. Among industry-funded trials, 364 (89.0%) reached conclusions favoring the sponsor. Most trials (478 [79.7%]) provided a data availability statement, and most indicated an intention to share the data, but only 16 trials (2.7%) had data already readily available to others. More than three-quarters of trials had full protocols (482 [82.0%]) or statistical analysis plans (446 [74.3%]) available, but only 27 trials (4.5%) explicitly mentioned sharing analysis code (8 readily available; 19 on request). Randomized trials were more likely than nonrandomized studies to involve only industry analysts (107 [22.9%] vs 18 [13.6%]; P = .02) and to have full protocols (405 [86.5%] vs 87 [65.9%]; P < .001) and statistical analysis plans (373 [79.7%] vs 73 [55.3%]; P < .001) available. Almost all nonrandomized industry-funded studies (90 of 92 [97.8%]) favored the sponsor. Among industry-funded trials, exclusive industry funding (odds ratio, 2.9; 95% CI, 1.5-5.4) and industry-affiliated authors (odds ratio, 2.9; 95% CI, 1.5-5.6) were associated with conclusions favorable to the sponsor.
Conclusions and Relevance: This cross-sectional study illustrates that industry involvement in the most influential clinical trials was prominent not only in funding but also in authorship and the provision of analysts, and was associated with conclusions favoring the sponsor. While most influential trials reported that they planned to share data and make both protocols and statistical analysis plans available, raw data and code were rarely readily available.
Affiliation(s)
- Leonardo M. Siena: Department of Public Health and Infectious Diseases, Sapienza University of Rome, Rome, Italy; Meta-Research Innovation Center at Stanford, Stanford University, Stanford, California
- Lazaros Papamanolis: Meta-Research Innovation Center at Stanford, Stanford University, Stanford, California
- Maximilian J. Siebert: Meta-Research Innovation Center at Stanford, Stanford University, Stanford, California
- Rosa Katia Bellomo: Department of Public Health and Infectious Diseases, Sapienza University of Rome, Rome, Italy; Meta-Research Innovation Center at Stanford, Stanford University, Stanford, California
- John P. A. Ioannidis: Meta-Research Innovation Center at Stanford, Stanford University, Stanford, California; Department of Medicine, Stanford University, Stanford, California; Department of Epidemiology and Population Health, Stanford University, Stanford, California; Department of Biomedical Data Science, Stanford University, Stanford, California; Department of Statistics, Stanford University, Stanford, California
9. Stevens G, Dolley S, Mogg R, Connor JT. A template for the authoring of statistical analysis plans. Contemp Clin Trials Commun 2023;34:101100. PMID: 37388218. PMCID: PMC10300078. DOI: 10.1016/j.conctc.2023.101100.
Abstract
Many principal investigators have limited access to biostatisticians, little biostatistical training, or no requirement to complete a timely statistical analysis plan (SAP). SAPs completed early can identify design or implementation weak points, improve protocols, remove the temptation for p-hacking, and enable proper peer review by stakeholders considering funding the trial. An SAP completed at the same time as the study protocol may be the only comprehensive way to simultaneously optimize sample size, identify bias, and apply rigor to study design. This ordered corpus of SAP sections, with detailed definitions and a variety of examples, represents a compendium of best-practice methods offered by biostatistical practitioners inside and outside industry. The article presents an SAP template for clinical trial research that statisticians from beginner to advanced levels can use.
Affiliation(s)
- Gary Stevens
  - DynaStat Consulting, Inc., 119 Fairway Court, Bastrop, TX, 78602, USA
- Shawn Dolley
  - Open Global Health, 710 12th St. South, Suite 2523, Arlington, VA, 22202, USA
- Robin Mogg
  - Takeda Pharmaceuticals USA Inc., 95 Hayden Avenue, Lexington, MA, 02421, USA
- Jason T. Connor
  - ConfluenceStat, 3102 NW 82nd Way, Cooper City, FL, 33024, USA
  - University of Central Florida College of Medicine, 6850 Lake Nona Blvd, Orlando, FL, 32827, USA
10
Siebert M, Naudet F, Ioannidis JPA. Peer review before trial conduct could increase research value and reduce waste. J Clin Epidemiol 2023; 160:141-146. [PMID: 37286150 DOI: 10.1016/j.jclinepi.2023.05.024] [Citation(s) in RCA: 2] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/09/2023] [Revised: 05/24/2023] [Accepted: 05/30/2023] [Indexed: 06/09/2023]
Affiliation(s)
- Maximilian Siebert
  - Meta-Research Innovation Center at Stanford (METRICS), Stanford University, Stanford, CA 94305, USA
- Florian Naudet
  - Univ Rennes, CHU Rennes, Inserm, Centre d'investigation clinique de Rennes (CIC1414), service de pharmacologie clinique, Institut de recherche en santé, environnement et travail (Irset), UMR S 1085, EHESP, Rennes 35000, France
  - Institut Universitaire de France, Paris, France
- John P A Ioannidis
  - Meta-Research Innovation Center at Stanford (METRICS), Stanford University, Stanford, CA 94305, USA
  - Departments of Medicine, of Epidemiology, of Biomedical Data Science, and of Statistics, Stanford University, Stanford, CA 94305, USA
11
Kahan BC, Cro S, Li F, Harhay MO. Eliminating Ambiguous Treatment Effects Using Estimands. Am J Epidemiol 2023; 192:987-994. [PMID: 36790803 PMCID: PMC10236519 DOI: 10.1093/aje/kwad036] [Citation(s) in RCA: 14] [Impact Index Per Article: 7.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/25/2022] [Revised: 02/06/2023] [Accepted: 02/13/2023] [Indexed: 02/16/2023] Open
Abstract
Most reported treatment effects in medical research studies are ambiguously defined, which can lead to misinterpretation of study results. This is because most authors do not attempt to describe what the treatment effect represents and instead require readers to deduce this from the reported statistical methods. This approach is challenging, however, because many methods provide counterintuitive results. For example, some methods include data from all patients yet yield a treatment effect that applies only to a subset of patients, whereas other methods exclude certain patients yet yield results that apply to everyone. Additionally, some analyses provide estimates pertaining to hypothetical settings in which patients never die or discontinue treatment. Herein we introduce estimands as a solution to this problem. An estimand is a clear description of what the treatment effect represents, thus sparing readers the necessity of trying to infer it from study methods and potentially getting it wrong. We provide examples of how estimands can remove ambiguity from reported treatment effects and describe their current use in practice. The crux of our argument is that readers should not have to infer what investigators are estimating; they should be told explicitly.
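As an editorial illustration (not taken from the paper), the abstract's idea of an estimand can be made concrete by writing out the five attributes recommended by the ICH E9(R1) addendum as a small data structure. The clinical details below (pneumonia population, drug A, 28-day mortality) are hypothetical examples, chosen only to show the shape of a fully specified estimand:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Estimand:
    """A treatment effect made unambiguous by spelling out the five
    attributes recommended by the ICH E9(R1) addendum."""
    population: str           # who the effect applies to
    treatments: str           # the interventions being compared
    endpoint: str             # the outcome variable
    intercurrent_events: str  # strategy for events like discontinuation or death
    summary_measure: str      # how the effect is summarised across patients

    def describe(self) -> str:
        # Render the estimand as the single explicit sentence the
        # abstract argues readers should be given.
        return (
            f"In {self.population}, the effect of {self.treatments} "
            f"on {self.endpoint}, handling intercurrent events by a "
            f"{self.intercurrent_events} strategy, summarised as a "
            f"{self.summary_measure}."
        )

# Hypothetical primary estimand for an imaginary trial.
primary = Estimand(
    population="adults hospitalised with pneumonia",
    treatments="drug A versus placebo",
    endpoint="28-day all-cause mortality",
    intercurrent_events="treatment policy (ignore discontinuation)",
    summary_measure="risk difference",
)
print(primary.describe())
```

A reader given `primary.describe()` never has to reverse-engineer the question from the statistical methods, which is the paper's central point.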
Affiliation(s)
- Brennan C Kahan
  - Correspondence to Dr. Brennan C. Kahan, MRC Clinical Trials Unit at UCL, University College London, 90 High Holborn, London WC1V 6LJ, United Kingdom (e-mail: )
12
Cro S, Kahan BC, Rehal S, Chis Ster A, Carpenter JR, White IR, Cornelius VR. Evaluating how clear the questions being investigated in randomised trials are: systematic review of estimands. BMJ 2022; 378:e070146. [PMID: 35998928 PMCID: PMC9396446 DOI: 10.1136/bmj-2022-070146] [Citation(s) in RCA: 11] [Impact Index Per Article: 3.7] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Figures] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Accepted: 06/21/2022] [Indexed: 01/21/2023]
Abstract
OBJECTIVES To evaluate how often the precise research question being addressed about an intervention (the estimand) is stated or can be determined from reported methods, and to identify what types of questions are being investigated in phase 2-4 randomised trials. DESIGN Systematic review of the clarity of research questions being investigated in randomised trials published in 2020 in six leading general medical journals. DATA SOURCE PubMed search in February 2021. ELIGIBILITY CRITERIA FOR SELECTING STUDIES Phase 2-4 randomised trials, with no restrictions on medical conditions or interventions. Cluster randomised, crossover, non-inferiority, and equivalence trials were excluded. MAIN OUTCOME MEASURES Number of trials that stated the precise primary question being addressed about an intervention (ie, the primary estimand), or for which the primary estimand could be determined unambiguously from the reported methods using statistical knowledge. Strategies used to handle post-randomisation events that affect the interpretation or existence of patient outcomes, such as intervention discontinuations or uses of additional drug treatments (known as intercurrent events), and the corresponding types of questions being investigated. RESULTS 255 eligible randomised trials were identified. No trial clearly stated all the attributes of its estimand. In 117 (46%) of 255 trials, the primary estimand could be determined from the reported methods. Intercurrent events were reported in 242 (95%) of 255 trials, but their handling could be determined in only 125 (49%) of 255 trials. Most trials that provided this information considered the occurrence of intercurrent events irrelevant to the calculation of the treatment effect and assessed the effect of the intervention regardless (96/125, 77%), that is, they used a treatment policy strategy.
Four (4%) of 99 trials with treatment non-adherence owing to adverse events estimated the treatment effect in a hypothetical setting (ie, the effect as if participants continued treatment despite adverse events), and 19 (79%) of 24 trials where some patients died estimated the treatment effect in a hypothetical setting (ie, the effect as if participants did not die). CONCLUSIONS The precise research question being investigated in most trials is unclear, mainly because of a lack of clarity on the approach to handling intercurrent events. Clear reporting of estimands is necessary in trial reports so that all stakeholders, including clinicians, patients and policy makers, can make fully informed decisions about medical interventions. SYSTEMATIC REVIEW REGISTRATION PROSPERO CRD42021238053.
Affiliation(s)
- Suzie Cro
  - Imperial Clinical Trials Unit, School of Public Health, Imperial College London, London, UK
- Brennan C Kahan
  - Medical Research Council Clinical Trials Unit at University College London, London, UK
- James R Carpenter
  - Medical Research Council Clinical Trials Unit at University College London, London, UK
  - London School of Hygiene and Tropical Medicine, London, UK
- Ian R White
  - Medical Research Council Clinical Trials Unit at University College London, London, UK
- Victoria R Cornelius
  - Imperial Clinical Trials Unit, School of Public Health, Imperial College London, London, UK
13
Campbell D, McDonald C, Cro S, Jairath V, Kahan BC. Access to unpublished protocols and statistical analysis plans of randomised trials. Trials 2022; 23:674. [PMID: 35978391 PMCID: PMC9387046 DOI: 10.1186/s13063-022-06641-x] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/21/2022] [Accepted: 08/06/2022] [Indexed: 11/10/2022] Open
Abstract
BACKGROUND Access to protocols and statistical analysis plans (SAPs) increases the transparency of randomised trials by allowing readers to identify and interpret unplanned changes to study methods; however, they are often not made publicly available. We sought to determine how often study investigators would share unavailable documents upon request. METHODS We used trials from two previously identified cohorts (cohort 1: 101 trials published in high impact factor journals between January and April of 2018; cohort 2: 100 trials published in June 2018 in journals indexed in PubMed) to determine whether study investigators would share unavailable protocols/SAPs upon request. We emailed corresponding authors of trials with no publicly available protocol or SAP up to four times. RESULTS Overall, 96 of 201 trials (48%) across the two cohorts had no publicly available protocol or SAP (11/101 high-impact cohort, 85/100 PubMed cohort). In total, 8/96 authors (8%) shared some trial documentation (protocol only [n = 5]; protocol and SAP [n = 1]; excerpt from protocol [n = 1]; research ethics application form [n = 1]). We received protocols for 6/96 trials (6%) and a SAP for 1/96 trials (1%). Seventy-three authors (76%) did not respond, 7 authors (7%) responded but declined to share a protocol or SAP, and eight email addresses (8%) were invalid. A total of 329 emails were sent (an average of 41 emails for every trial that sent documentation). After emailing authors, the total number of trials with an available protocol increased by only 3%, from 52% to 55%. CONCLUSIONS Most study investigators did not share their unpublished protocols or SAPs upon direct request. Alternative strategies are needed to increase the transparency of randomised trials and ensure access to protocols and SAPs.
Affiliation(s)
- David Campbell
  - Department of Medicine, Division of Gastroenterology, Western University, London, Ontario, Canada
- Cassandra McDonald
  - Department of Medicine, Division of Gastroenterology, Western University, London, Ontario, Canada
- Suzie Cro
  - Imperial Clinical Trials Unit, Imperial College London, London, UK
- Vipul Jairath
  - Department of Medicine, Division of Gastroenterology, Western University, London, Ontario, Canada
  - Department of Epidemiology and Biostatistics, Western University, London, Ontario, Canada
14
Kahan BC, Morris TP, White IR, Tweed CD, Cro S, Dahly D, Pham TM, Esmail H, Babiker A, Carpenter JR. Treatment estimands in clinical trials of patients hospitalised for COVID-19: ensuring trials ask the right questions. BMC Med 2020; 18:286. [PMID: 32900372 PMCID: PMC7478913 DOI: 10.1186/s12916-020-01737-0] [Citation(s) in RCA: 20] [Impact Index Per Article: 4.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 06/09/2020] [Accepted: 08/06/2020] [Indexed: 12/15/2022] Open
Abstract
When designing a clinical trial, explicitly defining the treatment estimands of interest (that which is to be estimated) can help to clarify trial objectives and ensure the questions being addressed by the trial are clinically meaningful. There are several challenges when defining estimands. Here, we discuss a number of these in the context of trials of treatments for patients hospitalised with COVID-19 and make suggestions for how estimands should be defined for key outcomes. We suggest that treatment effects should usually be measured as differences in proportions (or risk or odds ratios) for outcomes such as death and requirement for ventilation, and differences in means for outcomes such as the number of days ventilated. We further recommend that truncation due to death should be handled differently depending on whether a patient- or resource-focused perspective is taken; for the former, a composite approach should be used, while for the latter, a while-alive approach is preferred. Finally, we suggest that discontinuation of randomised treatment should be handled from a treatment policy perspective, where non-adherence is ignored in the analysis (i.e. intention to treat).
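As an editorial sketch (the numbers and patient records below are invented, not from the paper), the abstract's distinction between a patient-focused composite approach and a resource-focused while-alive approach to death can be shown with a toy calculation over four hypothetical patients followed for 28 days:

```python
# Each record: (days alive within the 28-day window, days ventilated).
# The third patient died on day 5 after being ventilated throughout.
patients = [(28, 0), (28, 10), (5, 5), (28, 3)]

# Patient-focused composite: days alive AND free of ventilation by day 28.
# Death contributes zero free days after it occurs, so early death is
# counted as a bad outcome rather than ignored.
composite = [alive - vent for alive, vent in patients]

# Resource-focused while-alive: ventilated days per day alive, so the
# measure describes ventilator use only over the time each patient lived.
while_alive = [vent / alive for alive, vent in patients]

mean_composite = sum(composite) / len(composite)        # 17.75 free days
mean_while_alive = sum(while_alive) / len(while_alive)  # proportion of days ventilated
```

Note how the patient who died (5 days alive, 5 ventilated) scores worst on the composite (0 free days) but, under the while-alive view, simply contributes a ventilation rate of 1.0, which is exactly the divergence in perspective the authors describe.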
Affiliation(s)
- Suzie Cro
  - Imperial Clinical Trials Unit, Imperial College London, London, UK
- Darren Dahly
  - HRB Clinical Research Facility Cork, Cork, Ireland
  - School of Public Health, University College Cork, Cork, Ireland
- Hanif Esmail
  - MRC Clinical Trials Unit at UCL, London, UK
  - Institute for Global Health, University College London, London, UK
15
How to design a pre-specified statistical analysis approach to limit p-hacking in clinical trials: the Pre-SPEC framework. BMC Med 2020; 18:253. [PMID: 32892743 PMCID: PMC7487509 DOI: 10.1186/s12916-020-01706-7] [Citation(s) in RCA: 15] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 02/17/2020] [Accepted: 07/13/2020] [Indexed: 12/03/2022] Open
Abstract
Results from clinical trials can be susceptible to bias if investigators choose their analysis approach after seeing trial data, as this can allow them to perform multiple analyses and then choose the method that provides the most favourable result (commonly referred to as 'p-hacking'). Pre-specification of the planned analysis approach is essential to help reduce such bias, as it ensures analytical methods are chosen in advance of seeing the trial data. For this reason, guidelines such as SPIRIT (Standard Protocol Items: Recommendations for Interventional Trials) and ICH-E9 (International Conference for Harmonisation of Technical Requirements for Pharmaceuticals for Human Use) require the statistical methods for a trial's primary outcome be pre-specified in the trial protocol. However, pre-specification is only effective if done in a way that does not allow p-hacking. For example, investigators may pre-specify a certain statistical method such as multiple imputation, but give little detail on how it will be implemented. Because there are many different ways to perform multiple imputation, this approach to pre-specification is ineffective, as it still allows investigators to analyse the data in different ways before deciding on a final approach. In this article, we describe a five-point framework (the Pre-SPEC framework) for designing a pre-specified analysis approach that does not allow p-hacking. This framework was designed based on the principles in the SPIRIT and ICH-E9 guidelines and is intended to be used in conjunction with these guidelines to help investigators design the statistical analysis strategy for the trial's primary outcome in the trial protocol.
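To make the abstract's multiple-imputation example concrete, here is a minimal editorial sketch (not the Pre-SPEC framework itself, and the function name and settings are hypothetical) of what "pre-specification that does not allow p-hacking" can look like in code: every analytic choice, including the number of imputations, the imputation model, and the random seed, is fixed in advance, so the analysis is a deterministic function of the data and leaves no room to re-run it in different ways and pick a favourable result:

```python
import numpy as np

def prespecified_analysis(y_treat, y_ctrl, m=20, seed=20200101):
    """Difference in arm means under multiple imputation with every
    choice fixed up front: m imputations, normal draws calibrated to
    each arm's observed values, and a fixed random seed. Given the
    same data, this always returns the same estimate."""
    rng = np.random.default_rng(seed)

    def impute(arm):
        # Fill missing outcomes with draws from a normal distribution
        # fitted to the arm's observed values (the pre-specified model).
        obs = arm[~np.isnan(arm)]
        filled = arm.copy()
        filled[np.isnan(arm)] = rng.normal(
            obs.mean(), obs.std(ddof=1), int(np.isnan(arm).sum())
        )
        return filled

    # Point estimate pooled across imputations (Rubin's rule for the
    # point estimate is simply the average of the per-imputation estimates).
    estimates = [impute(y_treat).mean() - impute(y_ctrl).mean()
                 for _ in range(m)]
    return float(np.mean(estimates))
```

A vaguer pre-specification ("missing data will be handled by multiple imputation") leaves `m`, the imputation model, and the seed free to vary, which is precisely the residual flexibility the framework warns against.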
16
Kahan BC, Ahmad T, Forbes G, Cro S. Public availability and adherence to prespecified statistical analysis approaches was low in published randomized trials. J Clin Epidemiol 2020; 128:29-34. [PMID: 32730852 DOI: 10.1016/j.jclinepi.2020.07.015] [Citation(s) in RCA: 9] [Impact Index Per Article: 1.8] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/27/2020] [Revised: 07/02/2020] [Accepted: 07/23/2020] [Indexed: 11/18/2022]
Abstract
BACKGROUND AND OBJECTIVE Prespecification of statistical methods in clinical trial protocols and statistical analysis plans can help to deter bias from p-hacking but is only effective if the prespecified approach is made available. STUDY DESIGN AND SETTING For 100 randomized trials published in 2018 and indexed in PubMed, we evaluated how often a prespecified statistical analysis approach for the trial's primary outcome was publicly available. For each trial with an available prespecified analysis, we compared this with the trial publication to identify whether there were unexplained discrepancies. RESULTS Only 12 of 100 trials (12%) had a publicly available prespecified analysis approach for their primary outcome; this document was dated before recruitment began for only two trials. Of the 12 trials with an available prespecified analysis approach, 11 (92%) had one or more unexplained discrepancies. Only 4 of 100 trials (4%) stated that the statistician was blinded until the SAP was signed off, and only 10 of 100 (10%) stated the statistician was blinded until the database was locked. CONCLUSION For most published trials, there is insufficient information available to determine whether the results may be subject to p-hacking. Where information was available, there were often unexplained discrepancies between the prespecified and final analysis methods.
Affiliation(s)
- Tahania Ahmad
  - Pragmatic Clinical Trials Unit, Queen Mary University of London, London, UK
- Gordon Forbes
  - Department of Biostatistics and Health Informatics, Institute of Psychiatry, Psychology & Neuroscience, King's College London, London, UK
- Suzie Cro
  - Imperial Clinical Trials Unit, Imperial College London, London, UK