1. Palmer CR. An ethically-motivated, Bayesian, adaptive design clinical trial bringing hope to women with menorrhagia…and warmth to statisticians' hearts. EBioMedicine 2021;69:103461. PMID: 34224974; PMCID: PMC8264100; DOI: 10.1016/j.ebiom.2021.103461.

2. Sim J. Outcome-adaptive randomization in clinical trials: issues of participant welfare and autonomy. Theor Med Bioeth 2019;40:83-101. PMID: 30778720; PMCID: PMC6478640; DOI: 10.1007/s11017-019-09481-0.
Abstract
Outcome-adaptive randomization (OAR) has been proposed as a corrective to certain ethical difficulties inherent in the traditional randomized clinical trial (RCT) using fixed-ratio randomization. In particular, it has been suggested that OAR redresses the balance between individual and collective ethics in favour of the former. In this paper, I examine issues of welfare and autonomy arising in relation to OAR. A central issue in discussions of welfare in OAR is equipoise, and the moral status of OAR is crucially influenced by the way in which this concept is construed. If OAR is based on a model of equipoise that demands strict indifference between competing interventions throughout the trial, such equipoise is disturbed by accruing data favouring one treatment over another; OAR seeks to redress this by weighting randomization to the seemingly superior treatment. However, this is a partial response, as patients continue to be allocated to the inferior therapy. Moreover, it rests upon considerations of aggregate harms and benefits, and does not therefore uphold individual ethics. Issues of fairness also arise, as early and late enrollees are randomized on a different basis. Fixed-ratio randomization represents a fuller and more consistent response to a loss of equipoise, as so construed. With regard to consent, the complexity of OAR poses challenges to adequate disclosure and comprehension. Additionally, OAR does not offer a remedy to the therapeutic misconception (participants' tendency to attribute treatment allocation in an RCT to individual clinical judgments rather than to scientific considerations) and, if anything, accentuates rather than alleviates this misconception. In relation to these issues, OAR fails to offer ethical advantages over fixed-ratio randomization. More broadly, the ethical basis of OAR can be seen to lie more in collective than in individual ethics, and overall it fares worse in this territory than fixed-ratio randomization.
Affiliation(s)
- Julius Sim
- Institute for Primary Care and Health Sciences, Keele University, Staffordshire, ST5 5BG, UK.

3. Norris DC. Dose Titration Algorithm Tuning (DTAT) should supersede 'the' Maximum Tolerated Dose (MTD) in oncology dose-finding trials. F1000Res 2017;6:112. PMID: 28663782; PMCID: PMC5473410; DOI: 10.12688/f1000research.10624.3.
Abstract
Background. Absent adaptive, individualized dose-finding in early-phase oncology trials, subsequent 'confirmatory' Phase III trials risk suboptimal dosing, with resulting loss of statistical power and reduced probability of technical success for the investigational therapy. While progress has been made toward explicitly adaptive dose-finding and quantitative modeling of dose-response relationships, most such work continues to be organized around a concept of 'the' maximum tolerated dose (MTD). The purpose of this paper is to demonstrate concretely how the aim of early-phase trials might be conceived, not as 'dose-finding', but as dose titration algorithm (DTA)-finding. Methods. A Phase I dosing study is simulated, for a notional cytotoxic chemotherapy drug, with neutropenia constituting the critical dose-limiting toxicity. The drug's population pharmacokinetics and myelosuppression dynamics are simulated using published parameter estimates for docetaxel. The amenability of this model to linearization is explored empirically. The properties of a simple DTA targeting a neutrophil nadir of 500 cells/mm³, using a Newton-Raphson heuristic, are explored through simulation in 25 simulated study subjects. Results. Individual-level myelosuppression dynamics in the simulation model approximately linearize under simple transformations of neutrophil concentration and drug dose. The simulated dose titration exhibits largely satisfactory convergence, with great variance in individualized optimal dosing. Some titration courses exhibit overshooting. Conclusions. The large inter-individual variability in simulated optimal dosing underscores the need to replace 'the' MTD with an individualized concept of MTDi. To illustrate this principle, the simplest possible DTA capable of realizing such a concept is demonstrated. Qualitative phenomena observed in this demonstration support discussion of the notion of tuning such algorithms. Although here illustrated specifically in relation to cytotoxic chemotherapy, the DTAT principle appears similarly applicable to Phase I studies of cancer immunotherapy and molecularly targeted agents.
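The titration loop sketched in this abstract can be made concrete. The model below is a deliberately simplified stand-in, not the paper's docetaxel PK/PD simulation: each hypothetical patient's log10 neutrophil nadir is assumed linear in log10 dose (the parameters a, b and sigma are invented for illustration), and a secant-style Newton-Raphson update steers the dose toward the 500 cells/mm³ nadir target, with a per-cycle step clamp to damp the overshooting the abstract mentions.

```python
import math
import random

TARGET_NADIR = 500.0  # cells/mm^3, the nadir target used in the abstract

def simulate_nadir(dose, a, b, sigma):
    """Hypothetical patient model: log10(nadir) is linear in log10(dose),
    plus Gaussian noise. A stand-in for the paper's PK/PD simulation."""
    return 10 ** (a - b * math.log10(dose) + random.gauss(0.0, sigma))

def titrate(a, b, sigma=0.0, d0=50.0, d1=75.0, n_cycles=8, max_step=0.3):
    """Secant-style titration toward TARGET_NADIR on the log-log scale."""
    obs = [(math.log10(d), math.log10(simulate_nadir(d, a, b, sigma)))
           for d in (d0, d1)]
    for _ in range(n_cycles):
        (x0, y0), (x1, y1) = obs[-2], obs[-1]
        # local slope estimate from the last two (log dose, log nadir) points
        slope = (y1 - y0) / (x1 - x0) if x1 != x0 else -1.0
        if slope == 0.0:
            slope = -1.0  # fallback; avoids division by zero under noise
        x_new = x1 + (math.log10(TARGET_NADIR) - y1) / slope
        # clamp the per-cycle change in log dose to damp overshooting
        x_new = min(max(x_new, x1 - max_step), x1 + max_step)
        obs.append((x_new, math.log10(simulate_nadir(10 ** x_new, a, b, sigma))))
    return 10 ** obs[-1][0]
```

With sigma = 0 the scheme converges in a few cycles to the patient's analytic optimum, 10 ** ((a - log10 500) / b); adding noise reproduces the variable, occasionally overshooting trajectories the abstract describes.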

4. Norris DC. Dose Titration Algorithm Tuning (DTAT) should supersede 'the' Maximum Tolerated Dose (MTD) in oncology dose-finding trials. F1000Res 2017;6:112. PMID: 28663782; DOI: 10.12688/f1000research.10624.2. Earlier version of the preceding record; its abstract is identical.

5. Grieve AP. Response-adaptive clinical trials: case studies in the medical literature. Pharm Stat 2016;16:64-86. PMID: 27730735; DOI: 10.1002/pst.1778.
Abstract
The past 15 years have seen many pharmaceutical sponsors consider and implement adaptive designs (ADs) across all phases of drug development. Given their arrival at the turn of the millennium, we might think that they are a recent invention. That is not the case: the earliest idea of an AD predates Bradford Hill's MRC tuberculosis study, appearing in Biometrika in 1933. In this paper, we trace the development of response-adaptive designs, designs in which the allocation to intervention arms depends on the responses of subjects already treated. We describe some statistical details underlying the designs, but our main focus is to describe and comment on ADs from the medical research literature.
Affiliation(s)
- Andrew P Grieve
- Innovation Centre, 3 Globeside Business Park, Marlow, Buckinghamshire, SL7 1HZ, UK

6. Heilig CM, Weijer C. A critical history of individual and collective ethics in the lineage of Lellouch and Schwartz. Clin Trials 2005;2:244-53. PMID: 16279147; DOI: 10.1191/1740774505cn084oa.
Abstract
The notions of individual and collective ethics were first explicitly defined in the biostatistical literature in 1971 to motivate a mathematical solution to a posed ethical dilemma. This paper reviews key antecedents to these concepts and traces explicit references to them over time, primarily in the biostatistical literature. Following a historical exposition of these texts, a critical thematic analysis shows the following: the normative force of these concepts has not been adequately argued. Individual and collective ethics do not solve the problem of how to use accumulating data to inform ethical action. The notions of the “individual” and the “collective” are too vague to prompt clear moral imperatives, especially in difficult cases. These concepts have not been successfully linked to a standard ethical framework. Finally, the paper concludes with the observation that a systematic, comprehensive ethical framework must be identified to fulfill the intuitions behind individual and collective ethics.
Affiliation(s)
- Charles M Heilig
- Division of Reproductive Health, Centers for Disease Control and Prevention, Atlanta, GA 30341-3717, USA.

7. Grieve AP. How to test hypotheses if you must. Pharm Stat 2015;14:139-50. PMID: 25641830; DOI: 10.1002/pst.1667.
Abstract
Drug development is not the only industrial-scientific enterprise subject to government regulations. In some fields of ecology and environmental science, the application of statistical methods is also regulated by ordinance. Over the past 20 years, ecologists and environmental scientists have argued against an unthinking application of null hypothesis significance tests. More recently, Canadian ecologists have suggested a new approach to significance testing, taking account of the costs of both type I and type II errors. In this paper, we investigate the implications of this approach for testing in drug development and demonstrate that its adoption leads directly to the likelihood principle and Bayesian approaches.
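The cost-based view of testing described in this abstract can be illustrated with a toy one-sided z-test. The cost weights k1 and k2 below are hypothetical inputs (the paper's argument, not its numbers): choosing the critical value to minimise k1*alpha + k2*beta moves the cutoff to where the likelihood ratio of the two hypotheses equals the cost ratio, which is the likelihood-principle flavour the abstract points to.

```python
import math

def norm_cdf(x):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def total_cost(c, mu1, k1, k2):
    """Expected cost k1*alpha + k2*beta of a one-sided z-test with critical
    value c, for H0: mu = 0 vs H1: mu = mu1 (test statistic has unit SE)."""
    alpha = 1.0 - norm_cdf(c)   # type I error rate
    beta = norm_cdf(c - mu1)    # type II error rate
    return k1 * alpha + k2 * beta

def optimal_cutoff(mu1, k1, k2):
    """Grid-search the cost-minimising critical value. Analytically the
    minimiser is mu1/2 + log(k1/k2)/mu1, i.e. the point where the
    likelihood ratio of H1 to H0 equals k1/k2."""
    grid = [i / 1000.0 for i in range(-2000, 5001)]
    return min(grid, key=lambda c: total_cost(c, mu1, k1, k2))
```

With mu1 = 2.8 (roughly a trial powered at 80% for a two-sided 5% test) and equal costs, the optimal cutoff is 1.4 rather than the conventional 1.96; quadrupling the cost of a false positive moves it to about 1.9.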
Affiliation(s)
- Andrew P Grieve
- ICON Adaptive Trials Innovation Centre, Icon Plc, Marlow, Buckinghamshire, UK

8. Villar SS, Bowden J, Wason J. Multi-armed Bandit Models for the Optimal Design of Clinical Trials: Benefits and Challenges. Stat Sci 2015;30:199-215. PMID: 27158186; PMCID: PMC4856206; DOI: 10.1214/14-STS504.
Abstract
Multi-armed bandit problems (MABPs) are a special type of optimal control problem well suited to model resource allocation under uncertainty in a wide variety of contexts. Since the first publication of the optimal solution of the classic MABP by a dynamic index rule, the bandit literature quickly diversified and emerged as an active research topic. Across this literature, the use of bandit models to optimally design clinical trials became a typical motivating application, yet little of the resulting theory has ever been used in the actual design and analysis of clinical trials. To this end, we review two MABP decision-theoretic approaches to the optimal allocation of treatments in a clinical trial: the infinite-horizon Bayesian Bernoulli MABP and the finite-horizon variant. These models possess distinct theoretical properties and lead to separate allocation rules in a clinical trial design context. We evaluate their performance compared to other allocation rules, including fixed randomization. Our results indicate that bandit approaches offer significant advantages, in terms of assigning more patients to better treatments, and severe limitations, in terms of their resulting statistical power. We propose a novel bandit-based patient allocation rule that overcomes the issue of low power, thus removing a potential barrier for their use in practice.
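One concrete bandit-type allocation rule, shown here only as a sketch of the general idea (not the Gittins-index or forward-looking rules the authors analyse), is Thompson sampling for a Bernoulli trial: each arm carries a Beta posterior, and each new patient is assigned to the arm whose posterior draw is largest. The response probabilities below are invented for illustration.

```python
import random

def thompson_trial(p_true, n_patients, seed=1):
    """Simulate patient-by-patient allocation by Thompson sampling.
    Each arm keeps a Beta(successes + 1, failures + 1) posterior; the next
    patient goes to the arm with the largest posterior draw."""
    random.seed(seed)
    k = len(p_true)
    succ, fail, assigned = [0] * k, [0] * k, [0] * k
    for _ in range(n_patients):
        draws = [random.betavariate(succ[i] + 1, fail[i] + 1) for i in range(k)]
        arm = max(range(k), key=lambda i: draws[i])
        assigned[arm] += 1
        if random.random() < p_true[arm]:  # simulate the binary response
            succ[arm] += 1
        else:
            fail[arm] += 1
    return assigned, succ

assigned, succ = thompson_trial([0.3, 0.5], 200)
```

Because allocation drifts toward the apparently better arm, the arm sizes end up unbalanced, which is precisely the source of the power loss the abstract discusses and that the authors' modified rule is designed to repair.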
Affiliation(s)
- Sofía S. Villar
- Investigator Statistician at MRC BSU, Cambridge, and Biometrika post-doctoral research fellow
- Jack Bowden
- Senior Investigator Statistician at MRC BSU, Cambridge
- James Wason
- Senior Investigator Statistician at MRC BSU, Cambridge; MRC Biostatistics Unit, Cambridge Institute of Public Health, Forvie Site, Robinson Way, Cambridge Biomedical Campus, Cambridge CB2 0SR, United Kingdom

9. Russu A, van Zwet E, De Nicolao G, Della Pasqua O. Modelling of the outcome of non-inferiority trials by integration of historical data. J Pharmacokinet Pharmacodyn 2011;38:595-612. PMID: 21858724; PMCID: PMC3172410; DOI: 10.1007/s10928-011-9210-8.
Abstract
The approval and differentiation of new compounds in clinical development often demands non-inferiority trials, in which the test drug is compared against a reference treatment. However, non-inferiority trials impose a major operational burden, with serious ethical and scientific implications for the development of new medicines. Traditional approaches make limited use of historical information on placebo and neglect inter-trial variability, relying on the constancy assumption that the control-to-placebo effect size is maintained across trials. We propose a model-based approach that overcomes such limitations and may be used as a tool to explore differentiation during clinical development. Parameter distributions are introduced which reflect the heterogeneity of trials. The method is illustrated using data from impetigo trials. Based on simulation scenarios, this Bayesian technique yields a definite, consistent increase in statistical power over two accepted statistical methods, allowing lower sample size requirements for the assessment of non-inferiority.
Affiliation(s)
- Alberto Russu
- Department of Computer Engineering and Systems Science, University of Pavia, Pavia, Italy
- Erik van Zwet
- Bioinformatics Center of Expertise, LUMC, Leiden, The Netherlands
- Giuseppe De Nicolao
- Department of Computer Engineering and Systems Science, University of Pavia, Pavia, Italy
- Oscar Della Pasqua
- Clinical Pharmacology and Discovery Medicine, GlaxoSmithKline, Stockley Park, UK
- Division of Pharmacology, Leiden/Amsterdam Center for Drug Research, PO Box 9502, 2300 RA Leiden, The Netherlands

10. Thall PF. Bayesian Models and Decision Algorithms for Complex Early Phase Clinical Trials. Stat Sci 2010;25:227-244. PMID: 21318084; DOI: 10.1214/09-STS315.
Abstract
An early phase clinical trial is the first step in evaluating the effects in humans of a potential new anti-disease agent or combination of agents. Usually called "phase I" or "phase I/II" trials, these experiments typically have the nominal scientific goal of determining an acceptable dose, most often based on adverse event probabilities. This arose from a tradition of phase I trials to evaluate cytotoxic agents for treating cancer, although some methods may be applied in other medical settings, such as treatment of stroke or immunological diseases. Most modern statistical designs for early phase trials include model-based, outcome-adaptive decision rules that choose doses for successive patient cohorts based on data from previous patients in the trial. Such designs have seen limited use in clinical practice, however, due to their complexity, the requirement of intensive, computer-based data monitoring, and the medical community's resistance to change. Still, many actual applications of model-based outcome-adaptive designs have been remarkably successful in terms of both patient benefit and scientific outcome. In this paper, I will review several Bayesian early phase trial designs that were tailored to accommodate specific complexities of the treatment regime and patient outcomes in particular clinical settings.
Affiliation(s)
- Peter F Thall
- Department of Biostatistics, University of Texas, M.D. Anderson Cancer Center, Houston, Texas, USA

11. Schellings R, Kessels AG, ter Riet G, Sturmans F, Widdershoven GA, Knottnerus JA. Indications and requirements for the use of prerandomization. J Clin Epidemiol 2009;62:393-9. PMID: 19056237; DOI: 10.1016/j.jclinepi.2008.07.010.
Abstract
BACKGROUND AND OBJECTIVE Although in effectiveness studies the conventional randomized trial, in which informed consent is obtained before randomization, is the first choice, this design is not a panacea for all research questions. To counter contamination problems, prerandomization designs might be an alternative. Prerandomization implies that randomization takes place before informed consent is sought, and because of this, prerandomization designs are controversial among ethicists, health lawyers, methodologists, and clinicians. However, in the Netherlands these designs are becoming more accepted since the Dutch State Secretary of Health, Welfare and Sport decided that, under certain circumstances, prerandomization is admissible and not in conflict with the law. RESULTS Based on well-defined indications and requirements, guidelines for the optimal application of prerandomization designs are presented. Designs in which prerandomization is used are outlined, and methodological considerations useful when conducting trials using conventional or prerandomization designs are discussed, in addition to ethical and judicial aspects. CONCLUSION In certain situations, prerandomization designs make an essential contribution to achieving evidence-based medicine. Banning prerandomization a priori implies that information about the effectiveness of numerous public health and medical interventions will not be forthcoming. Therefore, every design should strike a balance between maximizing the potential for patient autonomy and minimizing the bias caused by contamination. This balance cannot be reached by formulating general rules; instead, an independent group of experts, such as the members of a research ethics committee (REC), should decide whether the balance struck is acceptable.
Affiliation(s)
- Ron Schellings
- Public Health Supervisory Service of the Netherlands, The Inspectorate of Health Care, Den Bosch, the Netherlands.

12. Zhou X, Liu S, Kim ES, Herbst RS, Lee JJ. Bayesian adaptive design for targeted therapy development in lung cancer--a step toward personalized medicine. Clin Trials 2008;5:181-93. PMID: 18559407; DOI: 10.1177/1740774508091815.
Abstract
BACKGROUND With the advancement of biomedicine, many biologically targeted therapies have been developed. These targeted agents, however, may not work for everyone. Biomarker profiles can be used to identify effective targeted therapies. Our goals are to characterize the molecular signature of individual tumors, offer the best-fit targeted therapies to patients in a study, and identify promising agents for future development. METHODS We propose an outcome-based adaptive randomization trial design for patients with advanced stage non-small cell lung cancer. All patients have baseline biopsy samples taken for biomarker assessment prior to randomization to treatments. The primary endpoint of this study is the disease control rate at 8 weeks after randomization. A Bayesian probit model is used to characterize the disease control rate. Patients are adaptively randomized to one of four treatments, with the randomization rate based on the updated disease control rate from the accumulated data in the trial. For each biomarker profile, high-performing treatments have higher randomization rates, and vice versa. An early stopping rule is implemented to suspend low-performing treatments from randomization. RESULTS Based on extensive simulation studies, with a total of 200 evaluable patients, our trial has desirable operating characteristics to: (1) identify effective agents with a high probability; (2) suspend ineffective agents; and (3) treat more patients with effective agents that correspond to their biomarker profiles. Our trial design continues to update and refine the estimates as the trial progresses. LIMITATIONS This biomarker-based trial requires biopsiable tumors and a two-week turnaround time for biomarker profiling before randomization. Additionally, in order to learn from the interim data and adjust the randomization rate accordingly, the outcome-based adaptive randomization design is applicable only to trials whose endpoint can be assessed in a relatively short period of time. CONCLUSION The Bayesian adaptive randomization trial design is a smart, novel, and ethical design. In conjunction with an early stopping rule, it can be used to efficiently identify effective agents, eliminate ineffective ones, and match effective treatments with patients' biomarker profiles. The proposed design is suitable for the development of targeted therapies and provides a rational design for personalized medicine.
Affiliation(s)
- Xian Zhou
- Department of Biostatistics, The University of Texas M.D. Anderson Cancer Center, Houston, Texas 77030, USA

13. Palmer CR, Shahumyan H. Implementing a decision-theoretic design in clinical trials: why and how? Stat Med 2007;26:4939-57. PMID: 17582801; DOI: 10.1002/sim.2949.
Abstract
This paper addresses two main questions: first, why should Bayesian and other innovative, data-dependent design models be put into practice and, secondly, given the past dearth of actual applications, how might one example of such a design be implemented in a genuine example trial? Clinical trials amalgamate theory, practice and ethics, but this last point has become relegated to the background, rather than often taking a more appropriate primary role. Trial practice has evolved but has its roots in R. A. Fisher's randomized agricultural field trials of the 1920s. Reasons for, and consequences of, this are discussed from an ethical standpoint, drawing on an under-used dichotomy introduced by the French authors Lellouch and Schwartz (Int. Statist. Rev. 1971; 39:27-36). Plenty of ethically motivated designs for trials, including Bayesian designs, have been proposed but have found little application thus far. One reason for this is a lack of awareness of such alternative designs among trialists; another is a lack of user-friendly software to allow study simulations. To encourage implementation, a new C++ program called 'Daniel' is introduced, offering much potential to assist the design of today's randomized controlled trials. Daniel evaluates a particular decision-theoretic method suitable for coping with either two or three Bernoulli response treatments, with input features allowing user-specified choices of: patient horizon (the number to be treated before and after the comparative stages of the trial); an arbitrary fixed trial truncation size (to allow ready comparison with traditional designs or to cope with practical constraints); anticipated success rates and a measure of their uncertainty (a matter ignored in standard power calculations); and clinically relevant, and irrelevant, differences in treatment effect sizes. Error probabilities and expected trial durations can be thoroughly explored via simulation, it being better by far to harm 'computer patients' instead of real ones. Suppose the objective in a clinical trial is to select between two treatments using a maximum horizon of 500 patients, when the truly superior treatment is expected to yield a 40 per cent success rate, believed to range between 20 and 60 per cent. Simulation studies show that, to detect a clinically relevant absolute difference of 10 per cent between treatments, the decision-theoretic procedure would treat a mean of 68 pairs of patients (SD 37) before correctly identifying the better treatment 96.7 per cent of the time, an error rate of 3.3 per cent. Having made a recommendation based on these patients, the remaining individuals (on average 364) could either be given the indicated treatment, knowing its choice is optimal for the chosen horizon, or be entered into another, separate clinical trial. For comparison, a fixed sample size trial with the standard 5 per cent level of significance and 80 per cent power to detect a 10 per cent difference requires treating over 700 patients in two groups, with the half allocated to the inferior treatment considerably outnumbering the 68 pairs expected under the decision-theoretic design, and the overall number simply too high for realistic application. In brief, the keys to answering the above 'why?' and 'how?' questions are ethics and software, respectively. Wider implications, both pros and cons, of implementing the particular method described are discussed, with the overall conclusion that, where appropriate, clinical trials are now ready to undergo modernization from the agricultural age to the information age.

14. Guimaraes P, Palesch Y. Power and sample size simulations for Randomized Play-the-Winner rules. Contemp Clin Trials 2007;28:487-99. PMID: 17321219; DOI: 10.1016/j.cct.2007.01.006.
Abstract
Response-adaptive randomization procedures, such as the Randomized Play-the-Winner (RPW) rule, are treatment allocation rules for clinical trials that use available information on treatment outcomes to skew the allocation probability in favor of the treatment performing better thus far in the trial. Such allocation rules are based on the ethically desirable aim of reducing the share of patients allocated to the inferior treatment. This noble intent is, however, offset by statistical and logistical issues. One practical obstacle to implementing the RPW method is the estimation of the required sample size and expected allocation shares; unfortunately, this information is not readily available or easy to calculate. We present simulation results to provide a realistic assessment of the power and sample size required for successful implementation of the RPW rule in a study with a binary primary outcome variable. Additionally, we discuss some practical approaches to sample size determination based on the RPW.
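The simulation approach the authors advocate is straightforward to sketch. The snippet below (success probabilities, trial size and replication count are arbitrary illustration values, not the paper's scenarios) runs the classical RPW(1,1) urn: start with one ball per arm, draw a ball to allocate each patient, return a same-arm ball after a success and an opposite-arm ball after a failure, and then estimates the allocation share and the power of a two-proportion z-test by Monte Carlo.

```python
import math
import random

def rpw_trial(p_a, p_b, n, u=1):
    """One trial under the RPW(u,1) urn: a success adds a ball of the same
    arm, a failure adds a ball of the other arm."""
    urn = {"A": u, "B": u}
    counts = {"A": [0, 0], "B": [0, 0]}  # arm -> [successes, patients]
    for _ in range(n):
        arm = "A" if random.random() < urn["A"] / (urn["A"] + urn["B"]) else "B"
        success = random.random() < (p_a if arm == "A" else p_b)
        counts[arm][1] += 1
        counts[arm][0] += success
        urn[arm if success else ("B" if arm == "A" else "A")] += 1
    return counts

def rpw_power(p_a, p_b, n, reps=2000, z_crit=1.96, seed=7):
    """Monte Carlo power of a two-proportion z-test under RPW allocation,
    plus the mean share of patients allocated to arm A."""
    random.seed(seed)
    rejections, share_a = 0, 0.0
    for _ in range(reps):
        c = rpw_trial(p_a, p_b, n)
        (sa, na), (sb, nb) = c["A"], c["B"]
        share_a += na / n
        if na == 0 or nb == 0:
            continue  # degenerate allocation: cannot test
        pool = (sa + sb) / (na + nb)
        se = math.sqrt(pool * (1 - pool) * (1 / na + 1 / nb))
        if se > 0 and abs(sa / na - sb / nb) / se > z_crit:
            rejections += 1
    return rejections / reps, share_a / reps
```

With p_A = 0.7 vs p_B = 0.5, the urn sends roughly q_B/(q_A + q_B) ≈ 62% of patients to the better arm at the cost of some power relative to 1:1 allocation, which is exactly the trade-off the abstract says is hard to quantify without simulation.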
Affiliation(s)
- Paulo Guimaraes
- Department of Biostatistics, Bioinformatics and Epidemiology, Medical University of South Carolina, 135 Cannon St. Suite 303, Charleston, SC 29425, USA.

15. Gurrin LC, Burton PR. A comparison of the imprecise beta class, the randomized play-the-winner rule and the triangular test for clinical trials with binary responses. Aust N Z J Stat 2007. DOI: 10.1111/j.1467-842x.2006.00456.x.

16. Schellings R, Kessels AG, ter Riet G, Knottnerus JA, Sturmans F. Randomized consent designs in randomized controlled trials: systematic literature search. Contemp Clin Trials 2006;27:320-32. PMID: 16388991; DOI: 10.1016/j.cct.2005.11.009.
Abstract
BACKGROUND Three types of randomized consent designs are distinguished and ranked according to the extent to which participants are informed about treatment options: single-consent (those in the experimental group learn about their assigned treatment), incomplete-double-consent (all participants learn about their assigned treatment), and complete-double-consent (all participants learn about all treatments studied). All are methodologically, ethically, and judicially controversial. Even so, their use is justified if blinding is deemed necessary but impossible to achieve by sham procedures (placebo), and the experimental treatment seems attractive to potential participants. OBJECTIVE The aim of this study is to give a comprehensive overview of the use of randomized consent designs. Data sources are MEDLINE (1/1977-2/2003), EMBASE (1/1984-2/2003), PsycINFO (1/1996-2/2003), the Cochrane Library, and the Science Citation Index database. REVIEW METHODS Studies using a randomized consent design were eligible; cluster randomized trials were excluded. One reviewer selected and data-extracted eligible papers, and a second reviewer independently data-extracted 10% of the papers. Data were extracted on country of study conduct, year of commencement, area of medicine, type of design, reason(s) for use, details of approval by a research ethics committee, the index and reference intervention, nature of endpoints, and details of data collection. Furthermore, for each trial, the rates of non-compliance and loss to follow-up were registered by treatment arm. The three types of randomized consent designs were compared with respect to differences between the rates of non-compliance and loss to follow-up in the separate trial arms. RESULTS Randomized consent designs are seldom used (n=50). When used, they have often been used in the wrong circumstances (misuse). In 65% of the studies, non-compliance in the index group exceeded that in the reference group. Contrary to expectation, trials using the incomplete-double-consent design were associated with significantly higher rates of non-compliance and loss to follow-up in the reference groups than trials employing the other two versions. CONCLUSION Trialists and physicians should be aware of the proper indications for the use of randomized consent designs.
Affiliation(s)
- Ron Schellings
- Public Health Supervisory Service of the Netherlands, the Health Care Inspectorate, The Hague, P.O. Box 90137 5200 MA Den Bosch, The Netherlands.
17
Abstract
A probabilistic explication is offered of equipoise and uncertainty in clinical trials. In order to be useful in the justification of clinical trials, equipoise has to be interpreted in terms of overlapping probability distributions of possible treatment outcomes, rather than point estimates representing expectation values. Uncertainty about treatment outcomes is shown to be a necessary but insufficient condition for the ethical defensibility of clinical trials. Additional requirements are proposed for the nature of that uncertainty. The indecisiveness of our criteria for cautious decision-making under uncertainty creates the leeway that makes clinical trials defensible.
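The abstract's proposal, reading equipoise as overlapping probability distributions of treatment outcomes rather than as equality of point estimates, can be made concrete numerically. The sketch below is not from the paper; the Beta-posterior model, the overlap coefficient as the measure, and all trial counts are illustrative assumptions. It computes how much the posterior distributions for two treatments' success probabilities overlap:

```python
import math

def beta_pdf(x, a, b):
    """Density of the Beta(a, b) distribution at x."""
    if x <= 0.0 or x >= 1.0:
        return 0.0
    log_beta = math.lgamma(a) + math.lgamma(b) - math.lgamma(a + b)
    return math.exp((a - 1) * math.log(x) + (b - 1) * math.log(1 - x) - log_beta)

def overlap_coefficient(a1, b1, a2, b2, n=10_000):
    """Overlap of two Beta densities: integral of min(f, g) over (0, 1), midpoint rule.

    1.0 means identical distributions; values near 0 mean the posteriors
    have essentially separated, i.e. equipoise in this sense is lost.
    """
    h = 1.0 / n
    return sum(min(beta_pdf((i + 0.5) * h, a1, b1),
                   beta_pdf((i + 0.5) * h, a2, b2)) * h
               for i in range(n))

# Hypothetical interim data: 12/20 successes on arm A, 9/20 on arm B,
# each with a uniform Beta(1, 1) prior.
ovl = overlap_coefficient(1 + 12, 1 + 8, 1 + 9, 1 + 11)
```

On these assumed counts the posteriors still overlap substantially, illustrating the abstract's point that accruing data which favour one treatment as a point estimate need not yet destroy distribution-level equipoise.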
Affiliation(s)
- Sven Ove Hansson
- Department of Philosophy and the History of Technology, Royal Institute of Technology, Teknikringen 78, 100 44 Stockholm, Sweden.
18
Neuhäuser M, Bretz F. Adaptive designs based on the truncated product method. BMC Med Res Methodol 2005; 5:30. [PMID: 16171518 PMCID: PMC1242234 DOI: 10.1186/1471-2288-5-30]
Abstract
Background Adaptive designs are becoming increasingly important in clinical research. One approach subdivides the study into several (two or more) stages and combines the p-values of the different stages using Fisher's combination test. Methods As an alternative to Fisher's test, the recently proposed truncated product method (TPM) can be applied to combine the p-values. The TPM uses the product of only those p-values that do not exceed some fixed cut-off value. Here, these two competing analyses are compared. Results When early termination due to insufficient effects is not appropriate, as in dose-response analyses, the TPM increases the probability of stopping the trial early with rejection of the null hypothesis, and thereby decreases the expected total sample size without any loss of power. The TPM turns out to be less advantageous when early termination due to insufficient effects is possible, because it then decreases the probability of stopping the trial early. Conclusion The TPM is recommended over Fisher's combination test whenever early termination due to insufficient effects is not suitable within the adaptive design.
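The two combination rules compared in this abstract can be sketched briefly. Fisher's test combines k stage-wise p-values as -2 Σ ln p_i, which is chi-square with 2k degrees of freedom under the global null; the TPM multiplies only those p-values not exceeding a cut-off τ. The code below is an illustration, not the authors' implementation: the function names and τ = 0.05 are assumptions, and the TPM p-value is assessed by Monte Carlo simulation rather than the analytic null distribution derived in the original TPM paper.

```python
import math
import random

def fisher_combination(pvals):
    """Fisher's combination statistic -2 * sum(ln p_i); chi-square, 2k df, under H0."""
    return -2.0 * sum(math.log(p) for p in pvals)

def tpm_statistic(pvals, tau=0.05):
    """Truncated product statistic W: product of the p-values that do not exceed tau.

    If no p-value falls at or below tau, the (empty) product is 1.0,
    i.e. no evidence against the null from any stage.
    """
    truncated = [p for p in pvals if p <= tau]
    return math.prod(truncated) if truncated else 1.0

def tpm_pvalue(pvals, tau=0.05, n_sim=20_000, seed=0):
    """Monte Carlo p-value for W under the global null, where each p_i ~ Uniform(0, 1).

    Small W (a product of small p-values) is evidence against H0, so the
    p-value is the fraction of simulated W values at or below the observed one.
    """
    rng = random.Random(seed)
    w_obs = tpm_statistic(pvals, tau)
    k = len(pvals)
    hits = sum(
        1 for _ in range(n_sim)
        if tpm_statistic([rng.random() for _ in range(k)], tau) <= w_obs
    )
    return hits / n_sim
```

For example, stage-wise p-values [0.01, 0.04, 0.30] give W = 0.01 × 0.04 = 0.0004, with the non-significant 0.30 excluded by the cut-off; Fisher's statistic, in contrast, is influenced by all three values.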
Affiliation(s)
- Markus Neuhäuser
- Institute for Medical Informatics, Biometry and Epidemiology, University of Duisburg-Essen, Hufelandstr. 55, D-45122 Essen, Germany
- Frank Bretz
- Novartis Pharma AG, WSJ-27.1.005, 4002 Basel, Switzerland
19
Block KI, Cohen AJ, Dobs AS, Ornish D, Tripathy D. The challenges of randomized trials in integrative cancer care. Integr Cancer Ther 2004; 3:112-27. [PMID: 15165498 DOI: 10.1177/1534735404265668]
Affiliation(s)
- Keith I Block
- Block Center for Integrative Cancer Care, 1800 Sherman, Suite 515, Evanston IL 60201, USA.