1. Lee SY. Using Bayesian statistics in confirmatory clinical trials in the regulatory setting: a tutorial review. BMC Med Res Methodol 2024;24:110. PMID: 38714936; PMCID: PMC11077897; DOI: 10.1186/s12874-024-02235-0.
Abstract
Bayesian statistics plays a pivotal role in advancing medical science by enabling healthcare companies, regulators, and stakeholders to assess the safety and efficacy of new treatments, interventions, and medical procedures. The Bayesian framework offers a unique advantage over the classical framework, especially when incorporating prior information into a new trial with quality external data, such as historical data or another source of co-data. In recent years, there has been a significant increase in regulatory submissions using Bayesian statistics due to its flexibility and ability to provide valuable insights for decision-making, addressing the modern complexity of clinical trials where frequentist trials are inadequate. For regulatory submissions, companies often need to consider the frequentist operating characteristics of the Bayesian analysis strategy, regardless of the design complexity. In particular, the focus is on the frequentist type I error rate and power for all realistic alternatives. This tutorial review aims to provide a comprehensive overview of the use of Bayesian statistics in sample size determination, control of type I error rate, multiplicity adjustments, external data borrowing, etc., in the regulatory environment of clinical trials. Fundamental concepts of Bayesian sample size determination and illustrative examples are provided to serve as a valuable resource for researchers, clinicians, and statisticians seeking to develop more complex and innovative designs.
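The frequentist operating characteristics mentioned above are typically estimated by simulation. As an illustrative sketch (ours, not the paper's), the following Python snippet estimates the type I error rate of a simple single-arm Bayesian design with a uniform Beta(1,1) prior, where the trial is declared successful when the posterior probability that the response rate exceeds a null rate p0 is above 0.975; the sample size, null rate, and threshold are arbitrary choices for illustration.

```python
import math
import random

def post_prob_gt(x, n, p0):
    # Posterior P(p > p0 | x responders out of n) under a uniform Beta(1,1)
    # prior, so the posterior is Beta(x + 1, n - x + 1). For integer shape
    # parameters the Beta CDF equals a binomial tail, which gives
    # P(p > p0 | x) = P(Bin(n + 1, p0) <= x).
    return sum(math.comb(n + 1, k) * p0**k * (1 - p0)**(n + 1 - k)
               for k in range(x + 1))

def type_i_error(n=100, p0=0.3, threshold=0.975, sims=2000, seed=1):
    # Simulate trials under the null (true rate = p0) and count how often the
    # Bayesian success criterion fires: the frequentist type I error rate.
    random.seed(seed)
    hits = 0
    for _ in range(sims):
        x = sum(random.random() < p0 for _ in range(n))
        if post_prob_gt(x, n, p0) > threshold:
            hits += 1
    return hits / sims
```

With these settings the simulated type I error lands near the 2.5% implied by the 0.975 posterior threshold; a regulatory evaluation would repeat this over a grid of null and alternative scenarios to check the error rate and power everywhere.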
Affiliation(s)
- Se Yoon Lee
- Department of Statistics, Texas A&M University, 3143 TAMU, College Station, TX, 77843, USA
2. White IR, Pham TM, Quartagno M, Morris TP. How to check a simulation study. Int J Epidemiol 2024;53:dyad134. PMID: 37833853; PMCID: PMC10859132; DOI: 10.1093/ije/dyad134.
Abstract
Simulation studies are powerful tools in epidemiology and biostatistics, but they can be hard to conduct successfully. Sometimes unexpected results are obtained. We offer advice on how to check a simulation study when this occurs, and how to design and conduct the study to give results that are easier to check. Simulation studies should be designed to include some settings in which the answers are already known. They should be coded in stages, with data-generating mechanisms checked before simulated data are analysed. Results should be explored carefully; scatterplots of standard error estimates against point estimates are a surprisingly powerful tool. Failed estimation and outlying estimates should be identified and dealt with by changing the data-generating mechanisms or coding realistic hybrid analysis procedures. Finally, we give a series of ideas that have been useful to us in the past for checking unexpected results. Following our advice may help to prevent errors and to improve the quality of published simulation studies.
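The "known answer" advice can be illustrated with a minimal example (our sketch, not the authors' code): simulate samples from a normal distribution, where the true standard error of the mean and the nominal 95% coverage are known in advance, so the empirical SD of the point estimates can be checked against the average model-based SE.

```python
import math
import random
import statistics

def run_sims(n=30, mu=0.0, sigma=1.0, reps=1000, seed=42):
    # Known-answer setting: the true SE of the sample mean is sigma / sqrt(n),
    # and a mean +/- 1.96 * SE interval should cover mu in roughly 95% of reps.
    random.seed(seed)
    ests, ses, covered = [], [], 0
    for _ in range(reps):
        data = [random.gauss(mu, sigma) for _ in range(n)]
        est = statistics.fmean(data)
        se = statistics.stdev(data) / math.sqrt(n)
        ests.append(est)
        ses.append(se)
        if abs(est - mu) <= 1.96 * se:
            covered += 1
    return ests, ses, covered / reps

ests, ses, coverage = run_sims()
# Checks in the spirit of the paper: empirical SD of estimates vs mean
# model-based SE (a scatterplot of ses against ests is the graphical version).
print(statistics.stdev(ests), statistics.fmean(ses), coverage)
```

If the two SE summaries disagree badly, or coverage is far from 95%, something in the data-generating or analysis code deserves scrutiny.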
3. Robertson DS, Choodari-Oskooei B, Dimairo M, Flight L, Pallmann P, Jaki T. Point estimation for adaptive trial designs II: Practical considerations and guidance. Stat Med 2023;42:2496-2520. PMID: 37021359; PMCID: PMC7614609; DOI: 10.1002/sim.9734.
Abstract
In adaptive clinical trials, the conventional end-of-trial point estimate of a treatment effect is prone to bias, that is, a systematic tendency to deviate from its true value. As stated in recent FDA guidance on adaptive designs, it is desirable to report estimates of treatment effects that reduce or remove this bias. However, it may be unclear which of the available estimators are preferable, and their use remains rare in practice. This article is the second in a two-part series that studies the issue of bias in point estimation for adaptive trials. Part I provided a methodological review of approaches to remove or reduce the potential bias in point estimation for adaptive designs. In part II, we discuss how bias can affect standard estimators and assess the negative impact this can have. We review current practice for reporting point estimates and illustrate the computation of different estimators using a real adaptive trial example (including code), which we use as a basis for a simulation study. We show that while on average the values of these estimators can be similar, for a particular trial realization they can give noticeably different values for the estimated treatment effect. Finally, we propose guidelines for researchers around the choice of estimators and the reporting of estimates following an adaptive design. The issue of bias should be considered throughout the whole lifecycle of an adaptive design, with the estimation strategy prespecified in the statistical analysis plan. When available, unbiased or bias-reduced estimates are to be preferred.
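To see the selection bias the authors describe, consider a toy two-stage "pick-the-winner" design (our illustration, not the trial example from the paper): even when both arms have identical true effects, the naive pooled estimate for the selected arm is systematically too large.

```python
import random
import statistics

def selection_bias(n1=50, n2=50, true_effect=0.0, reps=4000, seed=7):
    # Two experimental arms with IDENTICAL true effects. Stage 1 picks the arm
    # with the larger observed mean; stage 2 adds n2 observations for that arm.
    # The naive pooled estimate is biased upward because the arm was selected
    # partly for its lucky stage-1 data.
    random.seed(seed)
    naive = []
    for _ in range(reps):
        stage1 = [statistics.fmean(random.gauss(true_effect, 1.0)
                                   for _ in range(n1))
                  for _ in range(2)]
        best = stage1.index(max(stage1))
        stage2 = statistics.fmean(random.gauss(true_effect, 1.0)
                                  for _ in range(n2))
        naive.append((n1 * stage1[best] + n2 * stage2) / (n1 + n2))
    return statistics.fmean(naive)  # average naive estimate; the truth is 0.0
```

With these (made-up) settings the average naive estimate is around +0.04 standard deviations despite a true effect of zero, which is the kind of bias that the estimators reviewed in Part I aim to remove or reduce.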
Affiliation(s)
- Babak Choodari-Oskooei
- MRC Clinical Trials Unit at UCL, Institute of Clinical Trials and Methodology, University College London, London, UK
- Munya Dimairo
- School of Health and Related Research (ScHARR), University of Sheffield, Sheffield, UK
- Laura Flight
- School of Health and Related Research (ScHARR), University of Sheffield, Sheffield, UK
- Thomas Jaki
- MRC Biostatistics Unit, University of Cambridge, Cambridge, UK
- Faculty of Informatics and Data Science, University of Regensburg, Regensburg, Germany
4. Han L, Arfè A, Trippa L. Sensitivity Analyses of Clinical Trial Designs: Selecting Scenarios and Summarizing Operating Characteristics. Am Stat 2023;78:76-87. PMID: 38680760; PMCID: PMC11052542; DOI: 10.1080/00031305.2023.2216253.
Abstract
The use of simulation-based sensitivity analyses is fundamental for evaluating and comparing candidate designs of future clinical trials. In this context, sensitivity analyses are especially useful to assess the dependence of important design operating characteristics on various unknown parameters. Typical examples of operating characteristics include the likelihood of detecting treatment effects and the average study duration, which depend on parameters that are unknown until after the onset of the clinical study, such as the distributions of the primary outcomes and patient profiles. Two crucial components of sensitivity analyses are (i) the choice of a set of plausible simulation scenarios and (ii) the list of operating characteristics of interest. We propose a new approach for choosing the set of scenarios to be included in a sensitivity analysis. We maximize a utility criterion that formalizes whether a specific set of sensitivity scenarios is adequate to summarize how the operating characteristics of the trial design vary across plausible values of the unknown parameters. Then, we use optimization techniques to select the best set of simulation scenarios (according to the criteria specified by the investigator) to exemplify the operating characteristics of the trial design. We illustrate our proposal in three trial designs.
Affiliation(s)
- Larry Han
- Department of Biostatistics, Harvard T.H. Chan School of Public Health
- Andrea Arfè
- Department of Epidemiology and Biostatistics, Memorial Sloan Kettering Cancer Center
- Lorenzo Trippa
- Department of Biostatistics, Harvard T.H. Chan School of Public Health
- Department of Biostatistics and Computational Biology, Dana-Farber Cancer Institute
5. Granholm A, Kaas-Hansen BS, Lange T, Schjørring OL, Andersen LW, Perner A, Jensen AKG, Møller MH. An overview of methodological considerations regarding adaptive stopping, arm dropping, and randomization in clinical trials. J Clin Epidemiol 2023;153:45-54. PMID: 36400262; DOI: 10.1016/j.jclinepi.2022.11.002.
Abstract
BACKGROUND AND OBJECTIVES: Adaptive features may increase the flexibility and efficiency of clinical trials and improve participants' chances of being allocated to better interventions. Our objective is to provide thorough guidance on key methodological considerations for adaptive clinical trials.
METHODS: We provide an overview of key methodological considerations for clinical trials employing adaptive stopping, adaptive arm dropping, and response-adaptive randomization. We cover the pros and cons of different decisions and provide guidance on using simulation to compare different adaptive trial designs. We focus on Bayesian multi-arm adaptive trials, although the same general considerations apply to frequentist adaptive trials.
RESULTS: We provide guidance on 1) interventions and a possible common control, 2) outcome selection, follow-up duration, and model choice, 3) timing of adaptive analyses, 4) decision rules for adaptive stopping and arm dropping, 5) randomization strategies, 6) performance metrics, their prioritization, and arm selection strategies, and 7) simulations, assessment of performance under different scenarios, and reporting. Finally, we provide an example using a newly developed R simulation engine that may be used to evaluate and compare different adaptive trial designs.
CONCLUSION: This overview may help trialists design better and more transparent adaptive clinical trials and adequately compare them before initiation.
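One of the randomization strategies covered, response-adaptive randomization, is often implemented by allocating in proportion to each arm's posterior probability of being best. A minimal Monte Carlo sketch of that calculation for binary outcomes (our illustration with made-up arm counts; the R simulation engine referenced by the authors is far more elaborate):

```python
import random

def thompson_alloc(successes, failures, draws=5000, seed=0):
    # Estimate each arm's posterior probability of being best by drawing from
    # independent Beta(successes + 1, failures + 1) posteriors; these
    # probabilities can then serve as randomization weights for the next
    # block of participants.
    random.seed(seed)
    wins = [0] * len(successes)
    for _ in range(draws):
        samples = [random.betavariate(s + 1, f + 1)
                   for s, f in zip(successes, failures)]
        wins[samples.index(max(samples))] += 1
    return [w / draws for w in wins]
```

For example, `thompson_alloc([30, 10], [10, 30])` steers nearly all new allocations to the first arm; in practice the weights are usually restricted or tempered, one of the design decisions the paper discusses.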
Affiliation(s)
- Anders Granholm
- Department of Intensive Care, Copenhagen University Hospital - Rigshospitalet, Copenhagen, Denmark
- Benjamin Skov Kaas-Hansen
- Department of Intensive Care, Copenhagen University Hospital - Rigshospitalet, Copenhagen, Denmark; Section of Biostatistics, Department of Public Health, University of Copenhagen, Copenhagen, Denmark
- Theis Lange
- Section of Biostatistics, Department of Public Health, University of Copenhagen, Copenhagen, Denmark
- Olav Lilleholt Schjørring
- Department of Anaesthesia and Intensive Care, Aalborg University Hospital, Aalborg, Denmark; Department of Clinical Medicine, Aalborg University, Aalborg, Denmark
- Lars W Andersen
- Research Center for Emergency Medicine, Department of Clinical Medicine, Aarhus University and Aarhus University Hospital, Aarhus, Denmark; Department of Anesthesiology and Intensive Care, Aarhus University Hospital, Aarhus, Denmark; Prehospital Emergency Medical Services, Central Denmark Region, Aarhus, Denmark
- Anders Perner
- Department of Intensive Care, Copenhagen University Hospital - Rigshospitalet, Copenhagen, Denmark
- Aksel Karl Georg Jensen
- Section of Biostatistics, Department of Public Health, University of Copenhagen, Copenhagen, Denmark
- Morten Hylander Møller
- Department of Intensive Care, Copenhagen University Hospital - Rigshospitalet, Copenhagen, Denmark
6. Bayesian Methods in Human Drug and Biological Products Development in CDER and CBER. Ther Innov Regul Sci 2022;57:436-444. PMID: 36459346; PMCID: PMC9718464; DOI: 10.1007/s43441-022-00483-0.
Abstract
The Center for Drug Evaluation and Research (CDER) and the Center for Biologics Evaluation and Research (CBER) of the U.S. Food and Drug Administration (FDA) have been leaders in protecting and promoting the U.S. public health by helping to ensure that safe and effective drugs and biological products are available in the United States for those who need them. The null hypothesis significance testing approach, along with other considerations, is typically used to demonstrate the effectiveness of a drug or biological product. The Bayesian framework presents an alternative approach to demonstrate the effectiveness of a treatment. This article discusses the Bayesian framework for drug and biological product development, highlights key settings in which Bayesian approaches may be appropriate, and provides recent examples of the use of Bayesian approaches within CDER and CBER.
7. Park JJH, Detry MA, Murthy S, Guyatt G, Mills EJ. How to Use and Interpret the Results of a Platform Trial: Users' Guide to the Medical Literature. JAMA 2022;327:67-74. PMID: 34982138; DOI: 10.1001/jama.2021.22507.
Abstract
Platform trials are randomized clinical trials that allow simultaneous comparison of multiple intervention groups against a single common control group, based on a prespecified interim analysis plan. The platform trial design enables the introduction of new interventions after the trial is initiated, so that multiple interventions can be evaluated in an ongoing manner under a single overarching protocol called a master (or core) protocol. When multiple treatment candidates are available, rapid scientific therapeutic discoveries may be made. Platform trials have important potential advantages in creating an efficient trial infrastructure that can help address critical clinical questions as the evidence evolves. Platform trials have recently been used in investigations of evolving therapies for patients with COVID-19. The purpose of this Users' Guide to the Medical Literature is to describe fundamental concepts of platform trials and master protocols and to review issues in the conduct and interpretation of these studies. This Users' Guide is intended to help clinicians and readers understand articles reporting on interventions evaluated using platform trial designs.
Affiliation(s)
- Jay J H Park
- Division of Experimental Medicine, Department of Medicine, University of British Columbia, Vancouver, Canada
- Department of Health Research Methods, Evidence, and Impact, McMaster University, Hamilton, Ontario, Canada
- Srinivas Murthy
- Department of Pediatrics, Faculty of Medicine, University of British Columbia, Vancouver, Canada
- Gordon Guyatt
- Department of Health Research Methods, Evidence, and Impact, McMaster University, Hamilton, Ontario, Canada
- Edward J Mills
- Department of Health Research Methods, Evidence, and Impact, McMaster University, Hamilton, Ontario, Canada
- Cytel Inc, Vancouver, British Columbia, Canada
8. Park JJH, Ford N, Xavier D, Ashorn P, Grais RF, Bhutta ZA, Goossens H, Thorlund K, Socias ME, Mills EJ. Randomised trials at the level of the individual. Lancet Glob Health 2021;9:e691-e700. PMID: 33865474; DOI: 10.1016/s2214-109x(20)30540-4.
Abstract
In global health research, short-term, small-scale clinical trials with fixed, two-arm trial designs that generally do not allow for major changes throughout the trial are the most common study design. Building on the introductory paper of this Series, this paper discusses data-driven approaches to clinical trial research across several adaptive trial designs, as well as the master protocol framework that can help to harmonise clinical trial research efforts in global health research. We provide a general framework for more efficient trial research, and we discuss the importance of considering different study designs in the planning stage with statistical simulations. We conclude this second Series paper by discussing the methodological and operational complexity of adaptive trial designs and master protocols and the current funding challenges that could limit uptake of these approaches in global health research.
Affiliation(s)
- Jay J H Park
- Department of Experimental Medicine, University of British Columbia, Vancouver, BC, Canada
- Nathan Ford
- Centre for Infectious Disease Epidemiology and Research, School of Public Health and Family Medicine, University of Cape Town, Cape Town, South Africa
- Denis Xavier
- Department of Pharmacology and Division of Clinical Research, St John's Medical College, Bangalore, India
- Per Ashorn
- Faculty of Medicine and Health Technology, Tampere University, Tampere, Finland
- Zulfiqar A Bhutta
- Centre for Global Child Health, Hospital for Sick Children, Toronto, ON, Canada; Institute of Global Health and Development, and Centre of Excellence in Women and Child Health, Aga Khan University, Karachi, Pakistan
- Herman Goossens
- Laboratory of Medical Microbiology, University of Antwerp, Antwerp, Belgium
- Kristian Thorlund
- Department of Health Research Methods, Evidence, and Impact, McMaster University, Hamilton, ON, Canada
- Maria Eugenia Socias
- Fundación Huésped, Buenos Aires, Argentina; British Columbia Centre for Substance Use, Department of Medicine, University of British Columbia, Vancouver, BC, Canada
- Edward J Mills
- Department of Health Research Methods, Evidence, and Impact, McMaster University, Hamilton, ON, Canada; School of Public Health, University of Rwanda, Kigali, Rwanda; Cytel, Vancouver, BC, Canada
9. McMillan G, Mayer C, Tang R, Liu Y, LaVange L, Antonijevic Z, Beckman RA. Planning for the Next Pandemic: Ethics and Innovation Today for Improved Clinical Trials Tomorrow. Stat Biopharm Res 2021;14:22-27. PMID: 37006380; PMCID: PMC10061983; DOI: 10.1080/19466315.2021.1918236.
Abstract
The coronavirus pandemic has brought public attention to the steps required to produce valid scientific clinical research in drug development. Traditional ethical principles that guide clinical research remain the guiding compass for physicians, patients, public health officials, investigators, drug developers and the public. Accelerating the process of delivering safe and effective treatments and vaccines against COVID-19 is a moral imperative. The apparent clash between the regulated system of phased randomized clinical trials and urgent public health need requires leveraging innovation with ethical scientific rigor. We reflect on the Belmont principles of autonomy, beneficence and justice as the pandemic unfolds, and illustrate the role of innovative clinical trial designs in alleviating pandemic challenges. Our discussion highlights selected types of innovative trial design and correlates them with ethical parameters and public health benefits. Details are provided for platform trials and other innovative designs such as basket and umbrella trials, designs leveraging external data sources, multi-stage seamless trials, preplanned control arm data sharing between larger trials, and higher order systems of linked trials coordinated more broadly between individual trials and phases of development, recently introduced conceptually as "PIPELINEs."
Affiliation(s)
- Gianna McMillan
- Bioethics Institute, Loyola Marymount University, Los Angeles, CA
- Rui Tang
- Servier Pharmaceuticals, Boston, MA
- Yi Liu
- Nektar Therapeutics, Data Science and Systems, San Francisco, CA
- Lisa LaVange
- Department of Biostatistics, University of North Carolina, Chapel Hill, NC
- Robert A. Beckman
- Departments of Oncology and of Biostatistics, Bioinformatics, and Biomathematics, Lombardi Comprehensive Cancer Center and Innovation Center for Biomedical Informatics, Georgetown University Medical Center, Washington, DC
10. Sverdlov O, Ryeznik Y, Wong WK. Opportunity for efficiency in clinical development: An overview of adaptive clinical trial designs and innovative machine learning tools, with examples from the cardiovascular field. Contemp Clin Trials 2021;105:106397. PMID: 33845209; DOI: 10.1016/j.cct.2021.106397.
Abstract
Modern data analysis tools and statistical modeling techniques are increasingly used in clinical research to improve diagnosis, estimate disease progression and predict treatment outcomes. What seems less emphasized is the importance of the study design, which can have a serious impact on the study cost, time and statistical efficiency. This paper provides an overview of different types of adaptive designs in clinical trials and their applications to cardiovascular trials. We highlight the recent proliferation of work on adaptive designs over the past two decades, including some recent regulatory guidelines on complex trial designs and master protocols. We also describe the increasing role of machine learning and the use of metaheuristics to construct increasingly complex adaptive designs or to identify interesting features for improved predictions and classifications.
Affiliation(s)
- Oleksandr Sverdlov
- Early Development Biostatistics, Novartis Pharmaceuticals Corporation, USA
- Yevgen Ryeznik
- Department of Pharmaceutical Biosciences, Uppsala University, Sweden
- Weng Kee Wong
- Department of Biostatistics, University of California Los Angeles, USA
11. Hamasaki T, Bretz F. Statistics in Biopharmaceutical Research Best Papers Award. Stat Biopharm Res 2021. DOI: 10.1080/19466315.2021.1912479.
Affiliation(s)
- Frank Bretz
- Clinical Development & Analytics, Novartis Pharma, Basel, Switzerland
- Section for Medical Statistics, Medical University of Vienna, Vienna, Austria
12. van Hoogdalem EJ, van Iersel MT, Winter E, Constant J, Kappler M. Pharmacology-Guided Rule-Based Adaptive Dose Escalation in First-in-Human Studies. Clin Pharmacol Ther 2020;109:1326-1333. PMID: 33150581; DOI: 10.1002/cpt.2101.
Abstract
First-in-human (FIH) studies typically progress through cohorts of fixed, standard size throughout the escalation scheme. This work presents and tests a pharmacology-guided rule-based adaptive dose escalation design that aims at making "best use" of participants in early clinical drug evaluation; it is paper based, not requiring real-time access to computational methods. The design minimizes the number of participants exposed to dose levels with low likelihood of being therapeutically relevant. Using criteria based on dose-limiting adverse event rate and on target exposure or target pharmacodynamics, the design increases the sample size when approaching the dose range of potential clinical relevance. The adaptive escalation design was retrospectively tested on actual data from a sample of 40 recently executed FIH studies with novel small and large molecules, and it was evaluated by simulating trials with three compounds with different therapeutic windows, i.e., representing a promising, unacceptable, and dubious profile. In retrospective evaluation of the adaptive escalation design, none of the cases overshot the actually reported top dose; one case resulted in a top dose that was within 20% under the estimated maximum tolerated dose in the original study. The median reduction of total number of participants per study was 38%. Trial simulations confirmed the retrospective evaluation, showing a similar performance of the adaptive escalation design compared with the conventional 6 + 2 design, at a reduced study size for compounds with a presumed acceptable therapeutic window. The adaptive escalation design was shown to make "best use" of participants in FIH studies without compromising safety.
Affiliation(s)
- John Constant
- PRA Health Sciences, Scientific Affairs, Victoria, British Columbia, Canada
- Martin Kappler
- PRA Health Sciences, Statistical Consulting Services, Levallois-Perret, France
13. Zhan T, Zhang H, Hartford A, Mukhopadhyay S. Modified Goldilocks Design with strict type I error control in confirmatory clinical trials. J Biopharm Stat 2020;30:821-833. PMID: 32297825; DOI: 10.1080/10543406.2020.1744620.
Abstract
Goldilocks Design (GD) utilizes predictive probability to adaptively select a trial's sample size based on accumulating data. In order to control type I error at a desired level for a subset of the null space, extensive simulations at the study design stage are required to choose critical values, which is a challenge for this type of Bayesian adaptive design to be used for confirmatory trials. In this article, we propose a Modified Goldilocks Design (MGD) where type I error is analytically controlled over the entire null space. We do so by applying the conditional invariance principle and a combination test approach on p-values that are obtained from independent cohorts of subjects. Simulation studies show that despite analytic control of the type I error rate, the proposed MGD has power similar to that of the original GD. We further apply it to an example trial with a time-to-event endpoint in oncology.
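The combination-test idea can be sketched as follows: a generic inverse-normal combination of independent one-sided p-values with prespecified weights (our illustrative implementation, not the authors' exact MGD procedure).

```python
import math
from statistics import NormalDist

def inverse_normal_combination(p1, p2, w1=0.5):
    # Combine independent one-sided p-values from two cohorts with
    # prespecified weights (w1 + w2 = 1). Because the weights are fixed in
    # advance, the combined test keeps its level even when cohort 2's sample
    # size is chosen after looking at cohort 1's data, which is the
    # conditional-invariance idea the abstract refers to.
    nd = NormalDist()
    z = (math.sqrt(w1) * nd.inv_cdf(1 - p1)
         + math.sqrt(1 - w1) * nd.inv_cdf(1 - p2))
    return 1 - nd.cdf(z)
```

For example, combining p-values of 0.01 and 0.04 with equal weights gives a combined p-value well below 0.05, while combining two uninformative p-values of 0.5 returns exactly 0.5.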
Affiliation(s)
- Tianyu Zhan
- Data and Statistical Sciences, AbbVie Inc., North Chicago, IL, USA
- Hongtao Zhang
- Global Biometric and Data Sciences, Bristol Myers Squibb, Berkeley Heights, NJ, USA
- Alan Hartford
- Statistical and Quantitative Sciences, Data Sciences Institute, Research and Development, Takeda Pharmaceuticals USA, Inc., Cambridge, MA, USA
14. Ghadessi M, Tang R, Zhou J, Liu R, Wang C, Toyoizumi K, Mei C, Zhang L, Deng CQ, Beckman RA. A roadmap to using historical controls in clinical trials - by Drug Information Association Adaptive Design Scientific Working Group (DIA-ADSWG). Orphanet J Rare Dis 2020;15:69. PMID: 32164754; PMCID: PMC7069184; DOI: 10.1186/s13023-020-1332-x.
Abstract
Historical controls (HCs) can be used for model parameter estimation at the study design phase, for adaptation within a study, or for supplementation or replacement of a control arm. For the latter use, there is currently no practical roadmap from the design to the analysis of a clinical trial that addresses the selection and inclusion of HCs while maintaining scientific validity. This paper provides a comprehensive roadmap for planning, conducting, analyzing and reporting studies using HCs, mainly when a randomized clinical trial is not possible. We review recent applications of HCs in clinical trials, in which, in most cases, either a large treatment effect overcame concerns about bias or the trial targeted a life-threatening disease with no treatment options. In contrast, we address how the evidentiary standard of a trial can be strengthened with optimized study designs and analysis strategies, emphasizing rare and pediatric indications. We highlight the importance of simulation and sensitivity analyses for estimating the range of uncertainty in the estimation of the treatment effect when traditional randomization is not possible. Overall, the paper provides a roadmap for using HCs.
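A common way to borrow from historical controls while guarding against bias is to discount them, e.g. with a power prior. A minimal conjugate sketch for a binomial endpoint (our illustration; in practice the discount weight a0 would be prespecified or explored via the sensitivity analyses the paper recommends):

```python
def power_prior_posterior(x, n, x0, n0, a0):
    # Beta posterior for a response rate after discounting historical data
    # (x0 responders out of n0) by a power-prior weight a0 in [0, 1], starting
    # from an initial Beta(1, 1) prior. a0 = 0 ignores the historical
    # controls entirely; a0 = 1 pools them fully with the current data.
    alpha = 1 + x + a0 * x0
    beta = 1 + (n - x) + a0 * (n0 - x0)
    return alpha, beta

def post_mean(x, n, x0, n0, a0):
    # Posterior mean of the response rate under the power prior.
    a, b = power_prior_posterior(x, n, x0, n0, a0)
    return a / (a + b)
```

Sweeping a0 from 0 to 1 shows how strongly the historical rate pulls the estimate: with 10/50 current responders and 40/100 historical responders, the posterior mean moves from about 0.21 (no borrowing) toward the historical rate of 0.40 as a0 increases.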
Affiliation(s)
- Mercedeh Ghadessi
- Data Science & Analytics, Bayer U.S. LLC, Pharmaceuticals, 100 Bayer Boulevard, Whippany, NJ 07981, USA
- Rui Tang
- Center of Excellence, Methodology and Data Visualization, Biostatistics Department, Servier Pharmaceuticals, 200 Pier Four Blvd, Boston, MA 02210, USA
- Joey Zhou
- Biometrics, Xcovery LLC, Pharmaceuticals, 11780 U.S. Hwy 1 N #202, Palm Beach Gardens, FL 33408, USA
- Rong Liu
- Bristol-Myers Squibb, 300 Connell Drive, 7th, Berkeley Heights, NJ 07922, USA
- Chenkun Wang
- Biostatistics Department, Vertex Pharmaceuticals, Inc, 50 Northern Avenue, Boston, MA 02210, USA
- Kiichiro Toyoizumi
- Biometrics, Shionogi Inc, 300 Campus Drive Florham Park, Florham Park, NJ 07932, USA
- Chaoqun Mei
- Institute for Clinical and Translational Research, Department of Biostatistics and Medical Informatics, School of Medicine and Public Health, University of Wisconsin-Madison, Madison, WI 53726, USA
- Lixia Zhang
- Scipher Medicine, 260 Charles St Path, Waltham, MA 02453, USA
- C. Q. Deng
- United Therapeutic Corp, Research Triangle Park, Durham, NC 27709, USA
- Robert A. Beckman
- Lombardi Comprehensive Cancer Center and Innovation Center for Biomedical Informatics, Georgetown University Medical Center, Washington, DC 20007, USA