1. Ying X, Robinson KA, Ehrhardt S. Re-evaluating the role of pilot trials in informing effect and sample size estimates for full-scale trials: a meta-epidemiological study. BMJ Evid Based Med 2023;28:383-391. [PMID: 37491141] [DOI: 10.1136/bmjebm-2023-112358]
Abstract
BACKGROUND Some have argued that pilot trials have little value for informing the expected effect size of a subsequent large trial. This study aims to empirically evaluate the roles of pilot trials in informing the effect and sample size estimates of a full-scale trial. METHODS We conducted a search in PubMed on 19 February 2022, for all pilot trials published between 2005 and 2018 and their subsequent full-scale trials. We analysed the agreement in results by comparing the direction and magnitude of the effect size in the pilot trial and full-scale trial. Logistic regression was used to explore whether a significant pilot trial and other characteristics were associated with a significant full-scale trial. RESULTS A total of 248 pairs of pilot and full-scale trials were analysed. Full-scale trials with a significant pilot trial were 2.72 times more likely to find a significant result for the primary efficacy outcome than those with a non-significant pilot trial (95% CI 1.52 to 4.86, p=0.001). The association remained significant irrespective of changes made to the trial design. In 73% of the pairs, the pilot trial produced a larger point estimate than the subsequent full-scale trial, but 87% of pairs had a 95% CI estimated by the pilot trial that covered the full-scale trial point estimate. Full-scale trials with a sample size estimated using the SD from the pilot trial were less likely to yield a significant result (OR=0.26, 95% CI 0.10 to 0.65, p=0.004). CONCLUSION Pilot trials can provide strong signals on intervention efficacy. When determining the sample size for full-scale trials, using the CI bounds from the pilot trials instead of the point estimate may improve power estimation.
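One way to implement the conclusion above is to plan the full-scale trial with an upper confidence limit for the pilot SD rather than its point estimate. The sketch below is illustrative only (the Wilson-Hilferty approximation, quantile constants, and all inputs are assumptions, not the paper's method):

```python
import math

def sd_upper_cl(s, n, z=0.841621):
    # One-sided upper confidence limit for sigma at confidence level Phi(z)
    # (z = 0.841621 gives 80%), using (n - 1) s^2 / sigma^2 ~ chi-square(n - 1).
    df = n - 1
    # Wilson-Hilferty approximation to the lower chi-square quantile:
    q = df * (1 - 2 / (9 * df) - z * math.sqrt(2 / (9 * df))) ** 3
    return s * math.sqrt(df / q)

def n_per_arm(delta, sigma):
    # Per-arm sample size for a two-arm trial of a normal outcome,
    # one-sided alpha = 0.025, power = 0.80.
    z_alpha, z_beta = 1.959964, 0.841621
    return math.ceil(2 * ((z_alpha + z_beta) * sigma / delta) ** 2)

# Hypothetical pilot: SD estimate 1.0 from 25 subjects; target difference 0.5.
n_naive = n_per_arm(0.5, 1.0)                   # pilot point estimate
n_cons = n_per_arm(0.5, sd_upper_cl(1.0, 25))   # 80% upper CL for the SD
```

Planning with the upper confidence limit buys insurance against the pilot underestimating the SD, at the cost of a somewhat larger full-scale trial.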
Affiliation(s)
- Xiangji Ying
- Department of Epidemiology, Bloomberg School of Public Health, Johns Hopkins University, Baltimore, Maryland, USA
- Karen A Robinson
- Department of Epidemiology, Bloomberg School of Public Health, Johns Hopkins University, Baltimore, Maryland, USA
- Department of Medicine, School of Medicine, Johns Hopkins University, Baltimore, Maryland, USA
- Section Evidence-Based Practice, Western Norway University of Applied Sciences, Bergen, Norway
- Stephan Ehrhardt
- Department of Epidemiology, Bloomberg School of Public Health, Johns Hopkins University, Baltimore, Maryland, USA
2. Zhang YY, Rong TZ, Li MM. Analytical calculations of various powers assuming normality. Seq Anal 2021. [DOI: 10.1080/07474946.2021.2010411]
Affiliation(s)
- Ying-Ying Zhang
- Department of Statistics and Actuarial Science, College of Mathematics and Statistics, Chongqing University, Chongqing, China
- Chongqing Key Laboratory of Analytic Mathematics and Applications, Chongqing University, Chongqing, China
- Teng-Zhong Rong
- Department of Statistics and Actuarial Science, College of Mathematics and Statistics, Chongqing University, Chongqing, China
- Chongqing Key Laboratory of Analytic Mathematics and Applications, Chongqing University, Chongqing, China
- Man-Man Li
- Department of Statistics and Actuarial Science, College of Mathematics and Statistics, Chongqing University, Chongqing, China
- Chongqing Key Laboratory of Analytic Mathematics and Applications, Chongqing University, Chongqing, China
3. Rothwell JC, Julious SA, Cooper CL. Adjusting for bias in the mean for primary and secondary outcomes when trials are in sequence. Pharm Stat 2021;21:460-475. [PMID: 34860471] [DOI: 10.1002/pst.2180]
Abstract
When designing a clinical trial, a key element of the design is the sample size calculation, which relies on a target or expected difference. The expected difference is often based on observed data from previous studies, and that observed value can be biased: large treatment effects seen in trials are often not replicated in subsequent trials. If such inflated values are used to design subsequent studies, the resulting sample sizes will be too small and the study underpowered, which is unethical. Regression to the mean (RTM) is one explanation for this. If only health technologies that meet a particular continuation criterion (such as p < 0.05 in the first study) progress to a second, confirmatory trial, the observed effect in the second trial is highly likely to be lower than that observed in the first. We show that, when moving from one trial to the next, a truncated normal distribution is inherently imposed on the first study's estimate, resulting in a lower observed effect size in the second trial. A simple adjustment method is proposed based on the mathematical properties of the truncated normal distribution; it was confirmed using simulations in R and compared with previous adjustments. The method can be applied to the observed effect in a trial that is being used in the design of a second, confirmatory trial, yielding a more stable estimate of the 'true' treatment effect. The adjustment accounts for the bias in the primary and secondary endpoints of the first trial, with the magnitude of the bias depending on the power of that study. Tables of results and a worked example are provided to aid implementation. In summary, a bias is introduced when the point estimate from one trial is used to design a second trial. It is recommended that any observed point estimates be used with caution and that the adjustment method developed in this article be applied to substantially reduce this bias.
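The mechanics can be sketched numerically: if the first trial had to clear a significance cutoff, the observed estimate follows a truncated normal distribution, and an adjustment in the spirit of the paper (a sketch, not its exact method; all numbers are hypothetical) inverts the truncated-normal mean:

```python
import math

def _pdf(x):
    # Standard normal density.
    return math.exp(-x * x / 2) / math.sqrt(2 * math.pi)

def _sf(x):
    # Standard normal survival function.
    return 0.5 * math.erfc(x / math.sqrt(2))

def truncated_mean(delta, se, cutoff):
    # E[d_obs | d_obs > cutoff] when d_obs ~ N(delta, se^2):
    # the expected observed effect given the first trial was 'significant'.
    c = (cutoff - delta) / se
    return delta + se * _pdf(c) / _sf(c)

def adjust_effect(d_obs, se, cutoff, tol=1e-8):
    # Bisection (truncated mean is increasing in delta): find the true
    # effect whose truncated mean matches the observed estimate.
    # Requires d_obs > cutoff, i.e. the first trial passed the criterion.
    lo, hi = d_obs - 10 * se, d_obs
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if truncated_mean(mid, se, cutoff) < d_obs:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2
```

For example, an observed effect of 2.5 standard errors that had to exceed a 1.96 cutoff is adjusted downward before being used to size the second trial.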
Affiliation(s)
- Joanne C Rothwell
- Biostatistics, Parexel International, Sheffield, UK
- Design Trials and Statistics, School of Health and Related Research (ScHARR), University of Sheffield, Sheffield, UK
- Steven A Julious
- Design Trials and Statistics, School of Health and Related Research (ScHARR), University of Sheffield, Sheffield, UK
- Cindy L Cooper
- Sheffield Clinical Trials Unit, ScHARR, University of Sheffield, Sheffield, UK
4. Wiklund SJ, Burman CF. Selection bias, investment decisions and treatment effect distributions. Pharm Stat 2021;20:1168-1182. [PMID: 34002467] [PMCID: PMC9290610] [DOI: 10.1002/pst.2132]
Abstract
When making decisions regarding the investment and design for a Phase 3 programme in the development of a new drug, the results from preceding Phase 2 trials are an important source of information. However, only projects in which the Phase 2 results show promising treatment effects will typically be considered for a Phase 3 investment decision. This implies that, for those projects where Phase 3 is pursued, the underlying Phase 2 estimates are subject to selection bias. We will in this article investigate the nature of this selection bias based on a selection of distributions for the treatment effect. We illustrate some properties of Bayesian estimates, providing shrinkage of the Phase 2 estimate to counteract the selection bias. We further give some empirical guidance regarding the choice of prior distribution and comment on the consequences for decision-making in investment and planning for Phase 3 programmes.
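A normal-normal shrinkage estimate of the kind discussed above can be sketched as follows (the prior parameters are illustrative assumptions about the population of candidate drugs, not values from the paper):

```python
def shrunk_estimate(d_obs, se, prior_mean, prior_sd):
    # Posterior mean under a Normal(prior_mean, prior_sd^2) prior for the
    # true effect, with d_obs ~ Normal(effect, se^2): a precision-weighted
    # average that shrinks the Phase 2 estimate toward the prior mean,
    # counteracting the selection bias of only progressing promising results.
    w = (1 / se**2) / (1 / se**2 + 1 / prior_sd**2)
    return w * d_obs + (1 - w) * prior_mean
```

With equal prior and sampling precision, an observed effect of 2.0 is shrunk halfway toward a prior mean of 0, giving 1.0 for Phase 3 planning.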
5. Shi Y, Liu F, Li S, Chen J. Accounting for Pilot Study Uncertainty in Sample Size Determination of Randomized Controlled Trials. Stat Biopharm Res 2020. [DOI: 10.1080/19466315.2020.1831951]
Affiliation(s)
- Yaru Shi
- Biostatistics and Research Decision Sciences, Merck & Co., Inc., North Wales, PA
- Fang Liu
- Biostatistics and Research Decision Sciences, Merck & Co., Inc., North Wales, PA
- Se Li
- Pharmacoepidemiology, Center for Observational and Real-World Evidence, Merck & Co., Inc., West Point, PA
6. Erdmann S, Kirchner M, Götte H, Kieser M. Optimal designs for phase II/III drug development programs including methods for discounting of phase II results. BMC Med Res Methodol 2020;20:253. [PMID: 33036572] [PMCID: PMC7547445] [DOI: 10.1186/s12874-020-01093-w]
Abstract
Background Go/no-go decisions after phase II and the sample size chosen for phase III are usually based on phase II results (e.g., the phase II treatment effect estimate). Because of the decision rule (only promising phase II results lead to phase III), treatment effect estimates from phase II trials that initiate a phase III trial commonly overestimate the true treatment effect, and underpowered phase III trials are the consequence. Optimistic findings may then not be reproduced, leading to the failure of potentially expensive drug development programs; for some disease areas, failure rates as high as 62.5% have been reported. Methods We integrate the ideas of multiplicative and additive adjustment of treatment effect estimates after go decisions into a utility-based framework for optimizing drug development programs. The design of a phase II/III program, i.e., the "right amount of adjustment", the allocation of resources to phases II and III in terms of sample size, and the rule applied to decide whether to stop or to proceed with phase III, influences its success considerably. Given specific drug development program characteristics (e.g., fixed and variable per-patient costs for phases II and III, probable gain in case of market launch), designs that are optimal with respect to the maximal expected utility can be identified by the proposed Bayesian-frequentist approach. The method is illustrated by application to practical examples characteristic of oncology studies. Results In general, our results show that program set-ups using an adjusted treatment effect estimate for phase III planning are superior to "naïve" set-ups with respect to the maximal expected utility. We therefore recommend considering an adjusted phase II treatment effect estimate for the phase III sample size calculation. However, there is no one-size-fits-all design. Conclusion Individual planning for a specific drug development program is necessary to find the optimal design. The optimal choice of design parameters for a specific program can be found with our user-friendly R Shiny application and package (both accessible open-source via [1]).
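The two discounting schemes named above can be illustrated with a toy phase III sample size calculation (the retention factor, additive offset, and design constants are hypothetical, and the formula is the standard two-arm normal approximation rather than the paper's utility optimization):

```python
import math

def phase3_n(d, sd=1.0):
    # Per-arm phase III sample size for a two-arm comparison of normal
    # outcomes with common SD, one-sided alpha = 0.025, power = 0.90.
    z_alpha, z_beta = 1.959964, 1.281552
    return math.ceil(2 * ((z_alpha + z_beta) * sd / d) ** 2)

d2 = 0.5                      # phase II treatment effect estimate
n_naive = phase3_n(d2)        # no adjustment
n_mult = phase3_n(0.8 * d2)   # multiplicative adjustment (retention factor 0.8)
n_add = phase3_n(d2 - 0.15)   # additive adjustment (offset 0.15)
```

Either adjustment deflates the optimistic phase II estimate and therefore enlarges phase III; the framework in the paper trades this extra cost against the improved probability of success.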
Affiliation(s)
- Stella Erdmann
- Institute of Medical Biometry and Informatics, University of Heidelberg, Im Neuenheimer Feld 130.3, D-69120 Heidelberg, Germany
- Marietta Kirchner
- Institute of Medical Biometry and Informatics, University of Heidelberg, Im Neuenheimer Feld 130.3, D-69120 Heidelberg, Germany
- Heiko Götte
- Merck Healthcare KGaA, Frankfurter Str. 250, D-64293 Darmstadt, Germany
- Meinhard Kieser
- Institute of Medical Biometry and Informatics, University of Heidelberg, Im Neuenheimer Feld 130.3, D-69120 Heidelberg, Germany
7. Qu Y, Du Y, Zhang Y, Shen L. Understanding and adjusting for the selection bias from a proof-of-concept study to a more confirmatory study. Stat Med 2020;39:4593-4604. [DOI: 10.1002/sim.8740]
Affiliation(s)
- Yongming Qu
- Department of Biometrics, Eli Lilly and Company, Indianapolis, Indiana, USA
- Yu Du
- Department of Biometrics, Eli Lilly and Company, Indianapolis, Indiana, USA
- Ying Zhang
- Department of Biometrics, Eli Lilly and Company, Indianapolis, Indiana, USA
- Lei Shen
- Department of Biometrics, Eli Lilly and Company, Indianapolis, Indiana, USA
8. Zhang YY, Ting N. Can the Concept Be Proven? Stat Biosci 2020. [DOI: 10.1007/s12561-020-09290-3]
9. Calderazzo S, Wiesenfarth M, Kopp-Schneider A. A decision-theoretic approach to Bayesian clinical trial design and evaluation of robustness to prior-data conflict. Biostatistics 2020;23:328-344. [PMID: 32735010] [PMCID: PMC9118338] [DOI: 10.1093/biostatistics/kxaa027]
Abstract
Bayesian clinical trials allow taking advantage of relevant external information through the elicitation of prior distributions, which influence Bayesian posterior parameter estimates and test decisions. However, incorporation of historical information can have harmful consequences on the trial’s frequentist (conditional) operating characteristics in case of inconsistency between prior information and the newly collected data. A compromise between meaningful incorporation of historical information and strict control of frequentist error rates is therefore often sought. Our aim is thus to review and investigate the rationale and consequences of different approaches to relaxing strict frequentist control of error rates from a Bayesian decision-theoretic viewpoint. In particular, we define an integrated risk which incorporates losses arising from testing, estimation, and sampling. A weighted combination of the integrated risk addends arising from testing and estimation allows moving smoothly between these two targets. Furthermore, we explore different possible elicitations of the test error costs, leading to test decisions based either on posterior probabilities, or solely on Bayes factors. Sensitivity analyses are performed following the convention which makes a distinction between the prior of the data-generating process, and the analysis prior adopted to fit the data. Simulation in the case of normal and binomial outcomes and an application to a one-arm proof-of-concept trial, exemplify how such analysis can be conducted to explore sensitivity of the integrated risk, the operating characteristics, and the optimal sample size, to prior-data conflict. Robust analysis prior specifications, which gradually discount potentially conflicting prior information, are also included for comparison. Guidance with respect to cost elicitation, particularly in the context of a Phase II proof-of-concept trial, is provided.
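One common way to obtain the "robust analysis prior" behavior described above is a mixture prior, sketched here for a one-arm binomial outcome (all prior parameters are hypothetical, and this is a generic mixture-prior construction, not the paper's specific analysis):

```python
import math

def log_beta(a, b):
    # log of the Beta function via log-gamma.
    return math.lgamma(a) + math.lgamma(b) - math.lgamma(a + b)

def mixture_posterior_mean(x, n, w=0.8, a=8.0, b=2.0):
    # Posterior mean of a response rate p given x responses in n patients,
    # under the robust mixture prior w * Beta(a, b) + (1 - w) * Beta(1, 1).
    # Prior-data conflict shifts posterior weight to the vague component,
    # gradually discounting the historical information.
    m_inf = log_beta(a + x, b + n - x) - log_beta(a, b)  # informative marginal
    m_vag = log_beta(1 + x, 1 + n - x)                   # vague marginal
    w_post = w * math.exp(m_inf) / (w * math.exp(m_inf) + (1 - w) * math.exp(m_vag))
    mean_inf = (a + x) / (a + b + n)
    mean_vag = (1 + x) / (2 + n)
    return w_post * mean_inf + (1 - w_post) * mean_vag
```

With data consistent with the Beta(8, 2) prior (8/10 responses) the informative component dominates; with conflicting data (1/10) the posterior mean moves close to the data alone.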
Affiliation(s)
- Silvia Calderazzo
- Division of Biostatistics, German Cancer Research Center, Im Neuenheimer Feld 581, 69120 Heidelberg, Germany
- Manuel Wiesenfarth
- Division of Biostatistics, German Cancer Research Center, Im Neuenheimer Feld 581, 69120 Heidelberg, Germany
- Annette Kopp-Schneider
- Division of Biostatistics, German Cancer Research Center, Im Neuenheimer Feld 581, 69120 Heidelberg, Germany
10. Casula D, Callegaro A, Nakanwagi P, Weynants V, Arora AK. Evaluation of an Adaptive Seamless Design for a Phase II/III Clinical Trial in Recurrent Events Data to Demonstrate Reduction in Number of Acute Exacerbations in Patients With Chronic Obstructive Pulmonary Disease (COPD). Stat Biopharm Res 2020. [DOI: 10.1080/19466315.2020.1764382]
11. Arfè A, Alexander B, Trippa L. Optimality of testing procedures for survival data in the nonproportional hazards setting. Biometrics 2020;77:587-598. [PMID: 32535892] [DOI: 10.1111/biom.13315]
Abstract
Most statistical tests for treatment effects used in randomized clinical trials with survival outcomes are based on the proportional hazards assumption, which often fails in practice. Data from early exploratory studies may provide evidence of nonproportional hazards, which can guide the choice of alternative tests in the design of practice-changing confirmatory trials. We developed a test to detect treatment effects in a late-stage trial, which accounts for the deviations from proportional hazards suggested by early-stage data. Conditional on early-stage data, among all tests that control the frequentist Type I error rate at a fixed α level, our testing procedure maximizes the Bayesian predictive probability that the study will demonstrate the efficacy of the experimental treatment. Hence, the proposed test provides a useful benchmark for other tests commonly used in the presence of nonproportional hazards, for example, weighted log-rank tests. We illustrate this approach in simulations based on data from a published cancer immunotherapy phase III trial.
Affiliation(s)
- Andrea Arfè
- Harvard-MIT Center for Regulatory Science, Harvard Medical School, Boston, Massachusetts
- Brian Alexander
- Department of Data Sciences, Dana-Farber Cancer Institute, Boston, Massachusetts
- Lorenzo Trippa
- Department of Data Sciences, Dana-Farber Cancer Institute, Boston, Massachusetts
12. Kerschbaumer A, Smolen JS, Herkner H, Stefanova T, Chwala E, Aletaha D. Efficacy outcomes in phase 2 and phase 3 randomized controlled trials in rheumatology. Nat Med 2020;26:974-980. [PMID: 32313250] [DOI: 10.1038/s41591-020-0833-4]
Abstract
Phase 3 trials are the mainstay of drug development across medicine but have often not met expectations set by preceding phase 2 studies. A systematic meta-analysis evaluated all randomized controlled, double-blind trials investigating targeted disease-modifying anti-rheumatic drugs in rheumatoid and psoriatic arthritis. Primary outcomes of American College of Rheumatology (ACR) 20 responses were compared by mixed-model logistic regression, including exploration of potential determinants of efficacy overestimation. In rheumatoid arthritis, phase 2 trial outcomes systematically overestimated subsequent phase 3 results (odds ratio comparing ACR20 in phase 2 versus phase 3: 1.39, 95% confidence interval: 1.25-1.57, P < 0.001). Data for psoriatic arthritis trials were similar, but not statistically significant (odds ratio comparing ACR20 in phase 2 versus phase 3: 1.35, 95% confidence interval: 0.94-1.94, P = 0.09). Differences in inclusion criteria largely explained the observed differences in efficacy findings. Our findings have implications for all stakeholders in new therapeutic development and testing, as well as potential ethical implications.
Affiliation(s)
- Andreas Kerschbaumer
- Division of Rheumatology, Department of Internal Medicine III, Medical University of Vienna, Vienna, Austria
- Josef S Smolen
- Division of Rheumatology, Department of Internal Medicine III, Medical University of Vienna, Vienna, Austria
- Harald Herkner
- Department for Emergency Medicine, Medical University of Vienna, Vienna, Austria
- Tijen Stefanova
- Division of Rheumatology, Department of Internal Medicine III, Medical University of Vienna, Vienna, Austria
- Eva Chwala
- University Library, Medical University of Vienna, Vienna, Austria
- Daniel Aletaha
- Division of Rheumatology, Department of Internal Medicine III, Medical University of Vienna, Vienna, Austria
13. Kirby S, Li J, Chuang-Stein C. Selection bias for treatments with positive Phase 2 results. Pharm Stat 2020;19:679-691. [PMID: 32291941] [DOI: 10.1002/pst.2024]
Abstract
In drug development, treatments are most often selected at Phase 2 for further development when an initial trial of a new treatment produces a result that is considered positive. This selection due to a positive result means, however, that an estimator of the treatment effect that does not take account of the selection is likely to overestimate the true treatment effect (i.e., will be biased). This bias can be large, and researchers may face a disappointingly lower estimated treatment effect in further trials. In this paper, we review a number of methods that have been proposed to correct for this bias and introduce three new methods. We present results from applying the various methods to two examples and consider extensions of the examples. We assess the methods with respect to bias of the treatment effect estimate and compare the probabilities that a bias-corrected estimate will exceed a decision threshold. Following previous work, we also compare average power for the situation where a Phase 3 trial is launched given that the bias-corrected observed Phase 2 treatment effect exceeds a launch threshold. Finally, we discuss our findings and the potential application of the bias correction methods.
Affiliation(s)
- Jianjun Li
- Eisai Inc., Woodcliff Lake, New Jersey, USA
14. De Martini D. Empowering phase II clinical trials to reduce phase III failures. Pharm Stat 2019;19:178-186. [PMID: 31729173] [DOI: 10.1002/pst.1980]
Abstract
The large number of failures in phase III clinical trials, which occur at a rate of approximately 45%, is studied herein with respect to possible countermeasures. First, the phenomenon of failures is described numerically. Second, the main reasons for failures are reported, together with some generic improvements suggested in the related literature. This study shows how statistics explain, but do not justify, the high observed failure rate. The rate of unexpected failures due to a lack of efficacy is considered to be at least 10%. Expanding phase II is the simplest and most intuitive way to reduce phase III failures, since it can reduce both phase III false negative findings and launches of phase III trials when the treatment is positive but suboptimal. Moreover, phase II enlargement is discussed in economic terms. As resources for research are often limited, enlarging phase II should be evaluated on a case-by-case basis. Alternative strategies, such as biomarker-based enrichment and adaptive designs, may aid in reducing failures; however, these strategies currently have very low application rates with little likelihood of rapid growth.
15. Kieser M, Kirchner M, Dölger E, Götte H. Optimal planning of phase II/III programs for clinical trials with multiple endpoints. Pharm Stat 2018;17:437-457. [PMID: 29700949] [DOI: 10.1002/pst.1861]
Abstract
Owing to increased costs and competition pressure, drug development becomes more and more challenging. Therefore, there is a strong need for improving efficiency of clinical research by developing and applying methods for quantitative decision making. In this context, the integrated planning for phase II/III programs plays an important role as numerous quantities can be varied that are crucial for cost, benefit, and program success. Recently, a utility-based framework has been proposed for an optimal planning of phase II/III programs that puts the choice of decision boundaries and phase II sample sizes on a quantitative basis. However, this method is restricted to studies with a single time-to-event endpoint. We generalize this procedure to the setting of clinical trials with multiple endpoints and (asymptotically) normally distributed test statistics. Optimal phase II sample sizes and go/no-go decision rules are provided for both the "all-or-none" and "at-least-one" win criteria. Application of the proposed method is illustrated by drug development programs in the fields of Alzheimer disease and oncology.
Affiliation(s)
- Meinhard Kieser
- Institute of Medical Biometry and Informatics, University of Heidelberg, Heidelberg, Germany
- Marietta Kirchner
- Institute of Medical Biometry and Informatics, University of Heidelberg, Heidelberg, Germany
- Eva Dölger
- Institute of Medical Biometry and Informatics, University of Heidelberg, Heidelberg, Germany
16. Shimura M, Maruo K, Gosho M. Conditional estimation using prior information in 2-stage group sequential designs assuming asymptotic normality when the trial terminated early. Pharm Stat 2018;17:400-413. [PMID: 29687592] [DOI: 10.1002/pst.1859]
Abstract
Two-stage designs are widely used to determine whether a clinical trial should be terminated early. In such trials, a maximum likelihood estimate is often adopted to describe the difference in efficacy between the experimental and reference treatments; however, this method is known to display conditional bias. To reduce such bias, a conditional mean-adjusted estimator (CMAE) has been proposed, although the remaining bias may be nonnegligible when a trial is stopped for efficacy at the interim analysis. We propose a new estimator for adjusting the conditional bias of the treatment effect by extending the idea of the CMAE. This estimator is calculated by weighting the maximum likelihood estimate obtained at the interim analysis and the effect size prespecified when calculating the sample size. We evaluate the performance of the proposed estimator through analytical and simulation studies in various settings in which a trial is stopped for efficacy or futility at the interim analysis. We find that the conditional bias of the proposed estimator is smaller than that of the CMAE when the information time at the interim analysis is small. In addition, the mean-squared error of the proposed estimator is also smaller than that of the CMAE. In conclusion, we recommend the use of the proposed estimator for trials that are terminated early for efficacy or futility.
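The weighting idea can be sketched as follows; the linear information-fraction weight below is an illustrative stand-in for the paper's proposal, not its exact formula:

```python
def weighted_interim_estimate(mle_interim, planned_effect, info_fraction):
    # Shrink the interim MLE toward the effect size prespecified when the
    # sample size was calculated. With little information at the interim
    # analysis (small info_fraction) the planned effect dominates, which
    # tempers the conditional bias of estimates from trials stopped early.
    t = info_fraction
    return t * mle_interim + (1 - t) * planned_effect
```

For example, an interim MLE of 0.8 at information time 0.25, with a planned effect of 0.3, yields 0.425; at information time 1.0 the MLE is returned unchanged.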
Affiliation(s)
- Masashi Shimura
- Data Science Department, Taiho Pharmaceutical Co., Ltd., Chiyoda-ku, Tokyo, Japan
- Graduate School of Comprehensive Human Sciences, University of Tsukuba, Tsukuba, Ibaraki, Japan
- Kazushi Maruo
- Translational Medical Center, National Center of Neurology and Psychiatry, Kodaira, Tokyo, Japan
- Masahiko Gosho
- Department of Clinical Trial and Clinical Epidemiology, Faculty of Medicine, University of Tsukuba, Tsukuba, Ibaraki, Japan
17. Miller F, Burman CF. A decision theoretical modeling for Phase III investments and drug licensing. J Biopharm Stat 2017;28:698-721. [PMID: 28920757] [DOI: 10.1080/10543406.2017.1377729]
Abstract
For a new candidate drug to become an approved medicine, several decision points have to be passed. In this article, we focus on two of them: First, based on Phase II data, the commercial sponsor decides to invest (or not) in Phase III. Second, based on the outcome of Phase III, the regulator determines whether the drug should be granted market access. Assuming a population of candidate drugs with a distribution of true efficacy, we optimize the two stakeholders' decisions and study the interdependence between them. The regulator is assumed to seek to optimize the total public health benefit resulting from the efficacy of the drug and a safety penalty. In optimizing the regulatory rules, in terms of minimal required sample size and the Type I error in Phase III, we have to consider how these rules will modify the commercial optimization made by the sponsor. The results indicate that different Type I errors should be used depending on the rarity of the disease.
Affiliation(s)
- Frank Miller
- Department of Statistics, Stockholm University, Stockholm, Sweden
- Carl-Fredrik Burman
- Biometrics & Information Science, AstraZeneca R&D, Mölndal, Sweden
- Department of Mathematical Sciences, Chalmers University of Technology and Göteborg University, Gothenburg, Sweden
18.
Abstract
BACKGROUND Adaptation by design consists in conservatively estimating the phase III sample size on the basis of phase II data; it is also called conservative sample size estimation (CSSE). The usual assumptions are that the effect size is the same in both phases and that phase II data are not used for phase III confirmatory analysis. CSSE has been introduced to increase the rate of successful trials, and it can be applied in most clinical areas. CSSE reduces the probability of underpowered experiments and can improve the overall success probability of phase II and III, but it also increases phase III sample size, increasing the time and cost of experiments. Thus, the balance between higher revenue and greater cost is the issue. METHODS A profit model was built assuming that CSSE was applied and considering income per patient, annual incidence, time on market, market share, phase III success probability, fixed cost of the 2 phases, and cost per patient under treatment. RESULTS Profit turns out to be a random variable depending on phase II sample size and conservativeness. Profit moments are obtained in a closed formula. Profit utility, which is a linear function of profit expectation and volatility, is evaluated in accordance with the modern theory of investment performances. Indications regarding phase II sample size and conservativeness can be derived on the basis of utility, for example, through utility optimization. CONCLUSIONS CSSE can be adopted in many different statistical problems, and consequently the profit evaluations proposed here can be widely applied.
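The CSSE idea above can be sketched as planning phase III on a lower confidence bound of the phase II estimate rather than its point estimate (the quantiles, design constants, and inputs below are illustrative assumptions, not the paper's profit model):

```python
import math

def conservative_effect(d2, se2, z_gamma=0.841621):
    # Lower confidence bound of the phase II estimate at level Phi(z_gamma)
    # (z_gamma = 0.841621 gives an 80% one-sided bound); z_gamma controls
    # the degree of conservativeness.
    return d2 - z_gamma * se2

def phase3_n(d, sd=1.0):
    # Per-arm phase III sample size for a two-arm normal comparison,
    # one-sided alpha = 0.025, power = 0.90.
    z_alpha, z_beta = 1.959964, 1.281552
    return math.ceil(2 * ((z_alpha + z_beta) * sd / d) ** 2)

n_plain = phase3_n(0.5)                           # point-estimate planning
n_csse = phase3_n(conservative_effect(0.5, 0.1))  # conservative planning
```

The conservative plan costs more patients up front but lifts the probability that phase III is adequately powered; the paper's profit model quantifies exactly this trade-off.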
Affiliation(s)
- Daniele De Martini: Dipartimento DiSMeQ, Università degli Studi di Milano-Bicocca, Milan, Italy

19
Kulinskaya E, Huggins R, Dogo SH. Sequential biases in accumulating evidence. Res Synth Methods 2016; 7:294-305. [PMID: 26626562 PMCID: PMC5031232 DOI: 10.1002/jrsm.1185] [Citation(s) in RCA: 8] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/24/2015] [Revised: 07/03/2015] [Accepted: 08/27/2015] [Indexed: 11/10/2022]
Abstract
Whilst it is common in clinical trials to use the results of tests at one phase to decide whether to continue to the next phase and to subsequently design the next phase, we show that this can lead to biased results in evidence synthesis. Two new kinds of bias associated with accumulating evidence, termed 'sequential decision bias' and 'sequential design bias', are identified. Both kinds of bias are the result of making decisions on the usefulness of a new study, or its design, based on the previous studies. Sequential decision bias is determined by the correlation between the value of the current estimated effect and the probability of conducting an additional study. Sequential design bias arises from using the estimated value instead of the clinically relevant value of an effect in sample size calculations. We considered both the fixed-effect and the random-effects models of meta-analysis and demonstrated analytically and by simulations that in both settings the problems due to sequential biases are apparent. According to our simulations, the sequential biases increase with increased heterogeneity. Minimisation of sequential biases arises as a new and important research area necessary for successful evidence-based approaches to the development of science. © 2015 The Authors. Research Synthesis Methods Published by John Wiley & Sons Ltd.
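Sequential design bias is easy to reproduce in a toy two-study fixed-effect meta-analysis; the mechanism, though not the authors' simulation setup, is that sizing study 2 on study 1's estimate makes the inverse-variance weights depend on study 1's error:

```python
import numpy as np
from statistics import NormalDist

rng = np.random.default_rng(1)
z_pow = NormalDist().inv_cdf(0.975) + NormalDist().inv_cdf(0.9)

def pooled_mean(design, mu=0.3, sd=1.0, n1=30, reps=20_000):
    """Toy fixed-effect meta-analysis of two studies. 'estimated' sizes
    study 2 on study 1's estimate (sequential design bias); 'fixed' sizes
    it on the clinically relevant value mu. All numbers are illustrative."""
    est1 = mu + rng.normal(0, sd * np.sqrt(2 / n1), reps)
    delta = np.clip(est1, 0.05, None) if design == "estimated" else mu
    n2 = np.ceil(2 * (z_pow * sd / delta) ** 2)     # per-arm size of study 2
    est2 = mu + rng.normal(0, sd * np.sqrt(2 / n2), reps)
    w1, w2 = n1 / (2 * sd**2), n2 / (2 * sd**2)     # inverse-variance weights
    return float(np.mean((w1 * est1 + w2 * est2) / (w1 + w2)))
```

Here `pooled_mean("fixed")` is essentially unbiased, while `pooled_mean("estimated")` overshoots `mu`: an over-optimistic first study triggers a small, lightly weighted second study and so keeps its own high estimate influential, whereas a pessimistic first study triggers a large second study that pulls the pooled value back towards the truth. The correction is asymmetric, hence the bias.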
Affiliation(s)
- Elena Kulinskaya: School of Computing Sciences, University of East Anglia, Norwich, NR4 7TJ, UK
- Richard Huggins: Department of Mathematics and Statistics, University of Melbourne, Melbourne, Australia
- Samson Henry Dogo: School of Computing Sciences, University of East Anglia, Norwich, NR4 7TJ, UK

20
Faya P, Seaman JW, Stamey JD. Bayesian assurance and sample size determination in the process validation life-cycle. J Biopharm Stat 2016; 27:159-174. [DOI: 10.1080/10543406.2016.1148717] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/22/2022]
Affiliation(s)
- Paul Faya: Allergan, Inc., Parsippany, New Jersey, USA; Department of Statistical Science, Baylor University, Waco, Texas, USA
- John W. Seaman: Department of Statistical Science, Baylor University, Waco, Texas, USA
- James D. Stamey: Department of Statistical Science, Baylor University, Waco, Texas, USA

21
Kirchner M, Kieser M, Götte H, Schüler A. Utility-based optimization of phase II/III programs. Stat Med 2015; 35:305-16. [PMID: 26256550 DOI: 10.1002/sim.6624] [Citation(s) in RCA: 13] [Impact Index Per Article: 1.4] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/12/2014] [Revised: 06/02/2015] [Accepted: 07/26/2015] [Indexed: 11/10/2022]
Abstract
Phase II and phase III trials play a crucial role in drug development programs. They are costly and time consuming and, because of high failure rates in late development stages, at the same time risky investments. Commonly, sample size calculation of phase III is based on the treatment effect observed in phase II. Therefore, planning of phases II and III can be linked. The performance of the phase II/III program crucially depends on the allocation of the resources to phases II and III by appropriate choice of the sample size and the rule applied to decide whether to stop the program after phase II or to proceed. We present methods for a program-wise phase II/III planning that aim at determining optimal phase II sample sizes and go/no-go decisions in a time-to-event setting. Optimization is based on a utility function that takes into account (fixed and variable) costs of the drug development program and potential gains after successful launch. The proposed methods are illustrated by application to a variety of scenarios typically met in oncology drug development.
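The program-wise utility optimization can be mimicked with a crude Monte Carlo. All monetary figures, the decision rule, and the function name below are invented for illustration; the paper works in a time-to-event setting and also includes volatility in the utility:

```python
import numpy as np
from statistics import NormalDist

rng = np.random.default_rng(2)
nd = NormalDist()
z_go = nd.inv_cdf(1 - 0.025)
z_pow = z_go + nd.inv_cdf(0.9)

def expected_utility(n2, go_threshold, true_effect=0.25, sd=1.0,
                     gain=1000.0, cost_fixed=10.0, cost_pp=0.05,
                     reps=50_000):
    """Sketch of a program-wise utility: pay for Phase II with n2 per arm,
    proceed to a Phase III sized on the Phase II estimate only if that
    estimate clears go_threshold, and book `gain` on a significant
    Phase III. Monetary units are arbitrary."""
    est2 = true_effect + rng.normal(0, sd * np.sqrt(2 / n2), reps)
    go = est2 > go_threshold                       # go/no-go decision
    n3 = np.ceil(2 * (z_pow * sd / np.clip(est2, 0.05, None)) ** 2)
    est3 = true_effect + rng.normal(0, sd * np.sqrt(2 / n3), reps)
    success = go & (est3 / (sd * np.sqrt(2 / n3)) > z_go)
    utility = (-cost_fixed - cost_pp * 2 * n2
               + go * (-cost_fixed - cost_pp * 2 * n3)
               + success * gain)
    return float(utility.mean())
```

Sweeping `n2` and `go_threshold` over a grid then yields the kind of optimal Phase II sample size and go/no-go rule the authors derive, with the allocation of resources between the two phases as the decision variable.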
Affiliation(s)
- Marietta Kirchner: Institute of Medical Biometry and Informatics, University of Heidelberg, Heidelberg, Germany
- Meinhard Kieser: Institute of Medical Biometry and Informatics, University of Heidelberg, Heidelberg, Germany

22
Chuang-Stein C, Kirby S. The shrinking or disappearing observed treatment effect. Pharm Stat 2014; 13:277-80. [PMID: 25182453 DOI: 10.1002/pst.1633] [Citation(s) in RCA: 20] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/25/2014] [Revised: 05/22/2014] [Accepted: 07/08/2014] [Indexed: 11/06/2022]
Abstract
It is frequently noted that an initial clinical trial finding was not reproduced in a later trial. This is often met with some surprise. Yet, there is a relatively straightforward reason partially responsible for this observation. In this article, we examine this reason by first reviewing some findings in a recent publication in the Journal of the American Medical Association. To help explain the non-negligible chance of failing to reproduce a previous positive finding, we compare a series of trials to successive diagnostic tests used for identifying a condition. To help explain the suspicion that the treatment effect, when observed in a subsequent trial, seems to have decreased in magnitude, we draw a conceptual analogy between phases II-III development stages and interim analyses of a trial with a group sequential design. Both analogies remind us that what we observed in an early trial could be a false positive or a random high. We discuss statistical sources for these occurrences and discuss why it is important for statisticians to take these into consideration when designing and interpreting trial results.
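The "random high" the authors describe is visible in a two-line simulation (the numbers are arbitrary): condition a z-test on crossing the significance bar, and the surviving estimates systematically overshoot the truth, so a follow-up trial is expected to show a smaller effect.

```python
import numpy as np

rng = np.random.default_rng(3)

true_effect, se, reps = 0.2, 0.1, 200_000
est = true_effect + rng.normal(0, se, reps)     # observed effects
winners = est[est / se > 1.96]                  # trials that "won"
print(true_effect, winners.mean())              # conditional mean exceeds truth
```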
23

24
Wang SJ, Hung HMJ, O'Neill R. Paradigms for adaptive statistical information designs: practical experiences and strategies. Stat Med 2012; 31:3011-23. [PMID: 22927234 DOI: 10.1002/sim.5410] [Citation(s) in RCA: 23] [Impact Index Per Article: 1.9] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/16/2012] [Accepted: 03/16/2012] [Indexed: 11/07/2022]
Abstract
In the last decade or so, interest in adaptive design clinical trials has gradually been directed towards their use in regulatory submissions by pharmaceutical drug sponsors to evaluate investigational new drugs. Methodological advances in adaptive designs have been abundant in the statistical literature since the 1970s. The adaptive design paradigm has been enthusiastically perceived as more efficient and more cost-effective than the fixed design paradigm for drug development. Much of the interest in adaptive designs centres on two-stage studies, where stage 1 is exploratory and stage 2 depends upon stage 1 results, but where the data of both stages are combined to yield statistical evidence for use as that of a pivotal registration trial. It was not until the recent release of the US Food and Drug Administration Draft Guidance for Industry on Adaptive Design Clinical Trials for Drugs and Biologics (2010) that the boundaries of flexibility for adaptive designs were specifically considered for regulatory purposes, including what constitutes exploratory goals and what constitutes the goals of adequate and well-controlled (A&WC) trials (2002). The guidance carefully described these distinctions in an attempt to minimize the confusion between the goals of preliminary learning phases of drug development, which are inherently substantially uncertain, and the definitive inference-based phases of drug development. In this paper, in addition to discussing some aspects of adaptive designs in a confirmatory study setting, we underscore the value of adaptive designs when used in exploratory trials to improve planning of subsequent A&WC trials. One type of adaptation that is receiving attention is the re-estimation of the sample size during the course of the trial. We refer to this type of adaptation as an adaptive statistical information design. Specifically, a case example is used to illustrate how challenging it is to plan a confirmatory adaptive statistical information design. We highlight the substantial risk of planning the sample size for confirmatory trials when the available information is largely uninformative, and set out the advantages of adaptive statistical information designs for planning exploratory trials. Practical experiences and strategies, as lessons learned from more recent adaptive design proposals, are discussed to pinpoint the improved utility of adaptive design clinical trials and their potential to increase the chance of successful drug development.
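In its simplest "statistical information" form, the sample size re-estimation discussed here keeps the planned clinically relevant effect and updates only the nuisance parameter from interim data. A generic sketch (the function name and caps are ours):

```python
import math
from statistics import NormalDist

nd = NormalDist()

def reestimated_n(interim_sd, planned_delta, alpha=0.025, power=0.9,
                  n_min=100, n_max=1000):
    """Mid-trial sample size re-estimation, simplest form: keep the planned
    clinically relevant effect, update only the SD from interim data, and
    cap the per-arm answer between n_min and n_max."""
    z = nd.inv_cdf(1 - alpha) + nd.inv_cdf(power)
    n = math.ceil(2 * (z * interim_sd / planned_delta) ** 2)
    return min(max(n, n_min), n_max)
```

A larger-than-planned interim SD drives the re-estimated size up until the cap binds, which is where the planning risk the authors highlight becomes concrete.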
Affiliation(s)
- Sue-Jane Wang: Office of Biostatistics, Office of Translational Sciences, Center for Drug Evaluation and Research, US Food and Drug Administration, Silver Spring, MD, USA

25
Kirby S, Burke J, Chuang-Stein C, Sin C. Discounting phase 2 results when planning phase 3 clinical trials. Pharm Stat 2012; 11:373-85. [DOI: 10.1002/pst.1521] [Citation(s) in RCA: 39] [Impact Index Per Article: 3.3] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/28/2011] [Revised: 05/08/2012] [Accepted: 05/08/2012] [Indexed: 11/10/2022]
Affiliation(s)
- S. Kirby: Pfizer Limited, Ramsgate Road, Sandwich CT13 9NJ, UK
- C. Sin: Pfizer Limited, Ramsgate Road, Sandwich CT13 9NJ, UK

26
26
|
De Martini D. Stability criteria for the outcomes of statistical tests to assess drug effectiveness with a single study. Pharm Stat 2012; 11:273-9. [PMID: 22422716 DOI: 10.1002/pst.1505] [Citation(s) in RCA: 7] [Impact Index Per Article: 0.6] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/01/2011] [Revised: 12/09/2011] [Accepted: 01/22/2012] [Indexed: 11/09/2022]
Abstract
At least two adequate and well-controlled clinical studies are usually required to support the effectiveness of a treatment. In some circumstances, however, a single study providing strong results may be sufficient. Several statistical stability criteria for assessing whether a single study provides very persuasive results are known. A new criterion is introduced, based on conservative estimation of the reproducibility probability; it also allows statistical tests to be performed by referring directly to the reproducibility probability estimate. These stability criteria are compared numerically and conceptually. This work aims to help both regulatory agencies and pharmaceutical companies decide whether the results of a single study may be sufficient to establish effectiveness.
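A common pointwise estimate of the reproducibility probability for a one-sided z-test, with an optional conservative lower-bound variant in the spirit of the abstract, can be written in a few lines. This is a generic sketch of the idea, not the paper's exact criterion:

```python
from statistics import NormalDist

nd = NormalDist()

def rp_estimate(z_obs, alpha=0.025, gamma=None):
    """Estimated probability that an exact replication of a one-sided
    z-test at level alpha is again significant, treating the observed z
    as the true noncentrality. If gamma is given, a conservative variant
    first shifts z_obs down by the gamma quantile (a lower confidence
    bound on the noncentrality)."""
    if gamma is not None:
        z_obs = z_obs - nd.inv_cdf(gamma)
    return nd.cdf(z_obs - nd.inv_cdf(1 - alpha))
```

A result that just reaches significance has an estimated reproducibility of about one half, which is why stability criteria ask for much stronger evidence before a single study is deemed persuasive.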
27
Chuang-Stein C, Kirby S, French J, Kowalski K, Marshall S, Smith MK, Bycott P, Beltangady M. A Quantitative Approach for Making Go/No-Go Decisions in Drug Development. Drug Inf J 2011. [DOI: 10.1177/009286151104500213] [Citation(s) in RCA: 45] [Impact Index Per Article: 3.5] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/16/2022]
28
De Martini D. Robustness and Corrections for Sample Size Adaptation Strategies Based on Effect Size Estimation. COMMUN STAT-SIMUL C 2011. [DOI: 10.1080/03610918.2011.568152] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.2] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/18/2022]
29
De Martini D. Adapting by calibration the sample size of a phase III trial on the basis of phase II data. Pharm Stat 2011; 10:89-95. [DOI: 10.1002/pst.410] [Citation(s) in RCA: 13] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/11/2022]
30
Wang SJ, Hung HMJ, O'Neill R. Adaptive design clinical trials and trial logistics models in CNS drug development. Eur Neuropsychopharmacol 2011; 21:159-66. [PMID: 20933373 DOI: 10.1016/j.euroneuro.2010.09.003] [Citation(s) in RCA: 18] [Impact Index Per Article: 1.4] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 07/12/2010] [Revised: 09/08/2010] [Accepted: 09/09/2010] [Indexed: 10/19/2022]
Abstract
In central nervous system therapeutic areas, there are general concerns with establishing efficacy that are thought to contribute to the high attrition rate in drug development. For instance, efficacy endpoints are often subjective and highly variable, there is a lack of robust or operational biomarkers to substitute for soft endpoints, and animal models are generally poor, unreliable or unpredictive. To increase the probability of success in a central nervous system drug development program, adaptive design has been considered as an alternative that adds flexibility to conventional fixed designs and is viewed as having the potential to improve the efficiency of drug development processes. Successful implementation of an adaptive design trial also relies on establishing a trustworthy logistics model that ensures the integrity of trial conduct. In accordance with the spirit of the recently released U.S. Food and Drug Administration adaptive design draft guidance document, this paper sets out the critical methodological and regulatory considerations in reviewing an adaptive design proposal and discusses two general types of adaptation, sample size planning and re-estimation, as well as two-stage adaptive designs. Literature examples of adaptive designs in central nervous system trials are used to highlight the principles laid out in the U.S. FDA draft guidance, and four logistics models seen in regulatory adaptive design applications are introduced. In general, complex adaptive designs require simulation studies to assess design performance. For an adequate and well-controlled clinical trial, if a learn-and-confirm adaptive selection approach is considered, the study-wise type I error rate should be adhered to; however, it is controversial to use a simulated type I error rate to demonstrate strong control of the study-wise type I error rate.
Affiliation(s)
- Sue-Jane Wang: Office of Biostatistics, Office of Translational Sciences, Center for Drug Evaluation and Research, U.S. Food and Drug Administration, 10903 New Hampshire Ave., Silver Spring, MD 20993, USA

31
Affiliation(s)
- Daniele De Martini: Dipartimento DIMEQUANT, Università degli Studi di Milano Bicocca, Milano, Italy

32
Wang SJ. Perspectives on the Use of Adaptive Designs in Clinical Trials. Part I. Statistical Considerations and Issues. J Biopharm Stat 2010; 20:1090-7. [DOI: 10.1080/10543406.2010.514446] [Citation(s) in RCA: 11] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/18/2022]
Affiliation(s)
- Sue-Jane Wang: Office of Biostatistics, Office of Translational Sciences, Center for Drug Evaluation and Research, U.S. Food and Drug Administration, Silver Spring, Maryland, USA

33
Wang SJ. The Bias Issue Under the Complete Null With Response Adaptive Randomization: Commentary on “Adaptive and Model-Based Dose-Ranging Trials: Quantitative Evaluation and Recommendation”. Stat Biopharm Res 2010. [DOI: 10.1198/sbr.2010.09054comm2] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.1] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/21/2022]
34
Chuang-Stein C, Kirby S, Hirsch I, Atkinson G. The role of the minimum clinically important difference and its impact on designing a trial. Pharm Stat 2010; 10:250-6. [PMID: 20936625 DOI: 10.1002/pst.459] [Citation(s) in RCA: 31] [Impact Index Per Article: 2.2] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/09/2022]
Abstract
The minimum clinically important difference (MCID) between treatments is recognized as a key concept in the design and interpretation of results from a clinical trial. Yet even assuming such a difference can be derived, it is not necessarily clear how it should be used. In this paper, we consider three possible roles for the MCID. They are: (1) using the MCID to determine the required sample size so that the trial has a pre-specified statistical power to conclude a significant treatment effect when the treatment effect is equal to the MCID; (2) requiring with high probability, the observed treatment effect in a trial, in addition to being statistically significant, to be at least as large as the MCID; (3) demonstrating via hypothesis testing that the effect of the new treatment is at least as large as the MCID. We will examine the implications of the three different possible roles of the MCID on sample size, expectations of a new treatment, and the chance for a successful trial. We also give our opinion on how the MCID should generally be used in the design and interpretation of results from a clinical trial.
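Roles (1) and (2) can be made concrete for a two-arm trial with a normal endpoint; these are standard formulas, not taken from the paper:

```python
import math
from statistics import NormalDist

nd = NormalDist()

def n_per_arm(delta, sd, alpha=0.025, power=0.9):
    """Role (1): per-arm sample size for a two-arm normal endpoint,
    powered at an effect equal to delta (e.g. the MCID)."""
    z = nd.inv_cdf(1 - alpha) + nd.inv_cdf(power)
    return math.ceil(2 * (z * sd / delta) ** 2)

def prob_observed_at_least(true_delta, mcid, sd, n):
    """Role (2): probability the observed effect is at least the MCID
    when the true effect is true_delta, with n patients per arm."""
    se = sd * math.sqrt(2.0 / n)
    return 1.0 - nd.cdf((mcid - true_delta) / se)
```

With sd 1 and an MCID of 0.3, role (1) gives 234 patients per arm, yet if the true effect exactly equals the MCID the observed effect reaches the MCID only half the time; this is why role (2) demands either a larger trial or a true effect well above the MCID.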
35
Guidance for Industry: Adaptive Design Clinical Trials for Drugs and Biologics [excerpts]. Biotechnol Law Rep 2010. [DOI: 10.1089/blr.2010.9977] [Citation(s) in RCA: 4] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/12/2022] Open
36

37
Chow SC, Chang M. Adaptive design methods in clinical trials - a review. Orphanet J Rare Dis 2008; 3:11. [PMID: 18454853 PMCID: PMC2422839 DOI: 10.1186/1750-1172-3-11] [Citation(s) in RCA: 249] [Impact Index Per Article: 15.6] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/28/2008] [Accepted: 05/02/2008] [Indexed: 12/04/2022] Open
Abstract
In recent years, the use of adaptive design methods in clinical research and development, based on accrued data, has become very popular due to their flexibility and efficiency. Based on the adaptations applied, adaptive designs can be classified into three categories: prospective, concurrent (ad hoc), and retrospective adaptive designs. An adaptive design allows modifications to the trial and/or statistical procedures of an ongoing clinical trial. However, there is a concern that the actual patient population after the adaptations could deviate from the originally targeted patient population, and consequently the overall type I error rate (erroneously claiming efficacy for an ineffective drug) may not be controlled. In addition, major adaptations to trial and/or statistical procedures of ongoing trials may result in a totally different trial that is unable to address the scientific/medical questions the trial is intended to answer. In this article, several commonly considered adaptive designs in clinical trials are reviewed. Impacts of ad hoc adaptations (protocol amendments), challenges in by-design (prospective) adaptations, and obstacles to retrospective adaptations are described. Strategies for the use of adaptive design in the clinical development of rare diseases are discussed. Some examples concerning the development of Velcade, intended for multiple myeloma and non-Hodgkin's lymphoma, are given. Practical issues that are commonly encountered when implementing adaptive design methods in clinical trials are also discussed.
38
Wang SJ. Discussion of the "White Paper of the PhRMA Working Group on adaptive dose-ranging designs". J Biopharm Stat 2008; 17:1015-20; discussion 1029-32. [PMID: 18027212 DOI: 10.1080/10543400701643897] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.2] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/22/2022]
Affiliation(s)
- Sue-Jane Wang: Center for Drug Evaluation and Research, US Food and Drug Administration, Rockville, MD 20857-0001, USA