1. Watson SI, Girling A, Hemming K. Optimal study designs for cluster randomised trials: An overview of methods and results. Stat Methods Med Res 2023; 32:2135-2157. PMID: 37802096; PMCID: PMC10683350; DOI: 10.1177/09622802231202379.
Abstract
There are multiple possible cluster randomised trial designs that vary in when the clusters cross between control and intervention states, when observations are made within clusters, and how many observations are made at each time point. Identifying the most efficient study design is complex though, owing to the correlation between observations within clusters and over time. In this article, we present a review of statistical and computational methods for identifying optimal cluster randomised trial designs. We also adapt methods from the experimental design literature for experimental designs with correlated observations to the cluster trial context. We identify three broad classes of methods: using exact formulae for the treatment effect estimator variance for specific models to derive algorithms or weights for cluster sequences; generalised methods for estimating weights for experimental units; and, combinatorial optimisation algorithms to select an optimal subset of experimental units. We also discuss methods for rounding experimental weights, extensions to non-Gaussian models, and robust optimality. We present results from multiple cluster trial examples that compare the different methods, including determination of the optimal allocation of clusters across a set of cluster sequences and selecting the optimal number of single observations to make in each cluster-period for both Gaussian and non-Gaussian models, and including exchangeable and exponential decay covariance structures.
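The core computation behind all three classes of methods is the variance of the treatment effect estimator for a candidate design. As a minimal illustration (our sketch, not the paper's code), the following computes the model-based generalised-least-squares variance for a cluster-by-period treatment pattern under an exchangeable within-cluster covariance and compares a parallel design with a cluster cross-over; the toy designs, parameter values, and function names are illustrative assumptions.

```python
import numpy as np

def treatment_variance(design, icc, sigma2=1.0, m=10):
    """Model-based GLS variance of the treatment effect.

    design: (clusters x periods) 0/1 intervention indicators; m subjects per
    cluster-period; exchangeable within-cluster covariance, so a cluster-period
    mean has variance icc*sigma2 + (1 - icc)*sigma2/m and any two periods of
    the same cluster share covariance icc*sigma2.
    """
    C, T = design.shape
    V = np.full((T, T), icc * sigma2)                # between-period covariance
    np.fill_diagonal(V, icc * sigma2 + (1 - icc) * sigma2 / m)
    Vinv = np.linalg.inv(V)
    info = np.zeros((T + 1, T + 1))                  # period effects + treatment
    for row in design:
        X = np.hstack([np.eye(T), row[:, None]])     # [period dummies | treatment]
        info += X.T @ Vinv @ X
    return np.linalg.inv(info)[T, T]

parallel = np.array([[0, 0], [1, 1]])   # one control cluster, one intervention
crossover = np.array([[0, 1], [1, 0]])  # each cluster crosses over
print(treatment_variance(parallel, icc=0.05))
print(treatment_variance(crossover, icc=0.05))
```

Under exchangeable correlation the cross-over pattern gives the smaller variance here; the optimisation methods reviewed in the paper automate this kind of comparison over much larger design spaces.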
2. Candel MJJM, van Breukelen GJP. Best (but oft forgotten) practices: Efficient sample sizes for commonly used trial designs. Am J Clin Nutr 2023; 117:1063-1085. PMID: 37270287; DOI: 10.1016/j.ajcnut.2023.02.013.
Abstract
Designing studies such that they have a high level of power to detect an effect or association of interest is an important tool to improve the quality and reproducibility of findings from such studies. Since resources (research subjects, time, and money) are scarce, it is important to obtain sufficient power with minimum use of such resources. For commonly used randomized trials of the treatment effect on a continuous outcome, designs are presented that minimize the number of subjects or the amount of research budget when aiming for a desired power level. This concerns the optimal allocation of subjects to treatments and, in case of nested designs such as cluster-randomized trials and multicenter trials, also the optimal number of centers versus the number of persons per center. Since such optimal designs require knowledge of parameters of the analysis model that are not known in the design stage, in particular outcome variances, maximin designs are presented. These designs guarantee a prespecified power level for plausible ranges of the unknown parameters and minimize research costs for the worst-case values of these parameters. The focus is on a 2-group parallel design, the AB/BA crossover design, and cluster-randomized and multicenter trials with a continuous outcome. How to calculate sample sizes for maximin designs is illustrated for examples from nutrition. Several computer programs that are helpful in calculating sample sizes for optimal and maximin designs are discussed as well as some results on optimal designs for other types of outcomes.
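For the 2-group parallel design with treatment-dependent costs, the budget-optimal allocation follows the classic square-root rule n1/n2 = (sd1/sd2)·sqrt(c2/c1). A small sketch (ours, not the authors' programs) computes this fraction and compares it with the balanced design under a fixed budget:

```python
import math

def optimal_fraction(sd1, sd2, c1, c2):
    """Share of subjects in group 1 that minimises Var = sd1^2/n1 + sd2^2/n2
    under the budget c1*n1 + c2*n2 = B (square-root rule:
    n1/n2 = (sd1/sd2) * sqrt(c2/c1))."""
    r = (sd1 / sd2) * math.sqrt(c2 / c1)
    return r / (1 + r)

def variance_under_budget(f, sd1, sd2, c1, c2, budget):
    """Variance of the mean difference when a fraction f of subjects goes to
    group 1 and the whole budget is spent."""
    n_total = budget / (c1 * f + c2 * (1 - f))
    return sd1 ** 2 / (f * n_total) + sd2 ** 2 / ((1 - f) * n_total)

# Equal variances, group 1 four times as expensive per subject:
f_opt = optimal_fraction(1.0, 1.0, 4.0, 1.0)   # a third of subjects in the costly arm
print(f_opt)
print(variance_under_budget(f_opt, 1.0, 1.0, 4.0, 1.0, 400.0))
print(variance_under_budget(0.5, 1.0, 1.0, 4.0, 1.0, 400.0))  # balanced is worse
```

The maximin designs discussed in the paper guard against the fact that sd1 and sd2 are unknown at the design stage; this sketch assumes they are known.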
Affiliation(s)
- Math J J M Candel
- Department of Methodology and Statistics, Care and Public Health Research Institute (CAPHRI), Maastricht University, Maastricht, Netherlands.
- Gerard J P van Breukelen
- Department of Methodology and Statistics, Care and Public Health Research Institute (CAPHRI), Maastricht University, Maastricht, Netherlands; Department of Methodology and Statistics, Graduate School of Psychology and Neuroscience, Maastricht University, Maastricht, Netherlands.
3. Tian Z, Esserman D, Tong G, Blaha O, Dziura J, Peduzzi P, Li F. Sample size calculation in hierarchical 2×2 factorial trials with unequal cluster sizes. Stat Med 2022; 41:645-664. PMID: 34978097; PMCID: PMC8962918; DOI: 10.1002/sim.9284.
Abstract
Motivated by a suicide prevention trial with hierarchical treatment allocation (cluster-level and individual-level treatments), we address the sample size requirements for testing the treatment effects as well as their interaction. We assume a linear mixed model, within which two types of treatment effect estimands (controlled effect and marginal effect) are defined. For each null hypothesis corresponding to an estimand, we derive sample size formulas based on large-sample z-approximation, and provide finite-sample modifications based on a t-approximation. We relax the equal cluster size assumption and express the sample size formulas as functions of the mean and coefficient of variation of cluster sizes. We show that the sample size requirement for testing the controlled effect of the cluster-level treatment is more sensitive to cluster size variability than that for testing the controlled effect of the individual-level treatment; the same observation holds for testing the marginal effects. In addition, we show that the sample size for testing the interaction effect is proportional to that for testing the controlled or the marginal effect of the individual-level treatment. We conduct extensive simulations to validate the proposed sample size formulas, and find the empirical power agrees well with the predicted power for each test. Furthermore, the t-approximations often provide better control of type I error rate with a small number of clusters. Finally, we illustrate our sample size formulas to design the motivating suicide prevention factorial trial. The proposed methods are implemented in the R package H2x2Factorial.
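The sensitivity to cluster size variability can be illustrated with the widely used approximate design effect 1 + ((CV² + 1)·m̄ − 1)·ρ for a cluster-level comparison. This is a standard approximation, not the paper's exact formulas (those are implemented in the R package H2x2Factorial); the function names and example numbers below are ours.

```python
import math

def design_effect_unequal(mean_m, cv, icc):
    """Approximate design effect with varying cluster sizes:
    1 + ((cv^2 + 1) * mean_m - 1) * icc, where mean_m is the mean cluster
    size, cv its coefficient of variation, icc the intraclass correlation."""
    return 1 + ((cv ** 2 + 1) * mean_m - 1) * icc

def clusters_per_arm(n_individual, mean_m, cv, icc):
    """Inflate an individually randomised sample size by the design effect
    and convert to a number of clusters per arm (rounded up)."""
    return math.ceil(n_individual * design_effect_unequal(mean_m, cv, icc) / mean_m)

# 128 subjects per arm needed under individual randomisation; mean cluster size 20:
print(design_effect_unequal(20, 0.0, 0.05))  # equal cluster sizes: ~1.95
print(design_effect_unequal(20, 0.5, 0.05))  # CV = 0.5 inflates it to ~2.2
print(clusters_per_arm(128, 20, 0.5, 0.05))
```

Raising the CV of cluster sizes from 0 to 0.5 here adds two clusters per arm, in line with the paper's finding that cluster-level comparisons are the most sensitive to size variability.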
Affiliation(s)
- Zizhong Tian
- Department of Biostatistics, Yale University School of Public Health, Connecticut, USA
- Denise Esserman
- Department of Biostatistics, Yale University School of Public Health, Connecticut, USA; Yale Center for Analytical Sciences, Yale University, Connecticut, USA
- Guangyu Tong
- Department of Biostatistics, Yale University School of Public Health, Connecticut, USA; Yale Center for Analytical Sciences, Yale University, Connecticut, USA
- Ondrej Blaha
- Department of Biostatistics, Yale University School of Public Health, Connecticut, USA; Yale Center for Analytical Sciences, Yale University, Connecticut, USA
- James Dziura
- Department of Biostatistics, Yale University School of Public Health, Connecticut, USA; Yale Center for Analytical Sciences, Yale University, Connecticut, USA
- Peter Peduzzi
- Department of Biostatistics, Yale University School of Public Health, Connecticut, USA; Yale Center for Analytical Sciences, Yale University, Connecticut, USA
- Fan Li
- Department of Biostatistics, Yale University School of Public Health, Connecticut, USA; Yale Center for Analytical Sciences, Yale University, Connecticut, USA; Center for Methods in Implementation and Prevention Science, Yale University, Connecticut, USA. Correspondence: Fan Li, PhD, Department of Biostatistics, Yale School of Public Health, New Haven, CT 06510, USA.
4. Optimal allocations for two treatment comparisons within the proportional odds cumulative logits model. PLoS One 2021; 16:e0250119. PMID: 33882086; PMCID: PMC8059828; DOI: 10.1371/journal.pone.0250119.
Abstract
This paper studies optimal treatment allocations for two treatment comparisons when the outcome is ordinal and analyzed by a proportional odds cumulative logits model. The variance of the treatment effect estimator is used as optimality criterion. The optimal design is sought so that this variance is minimal for a given total sample size or a given budget, meaning that the power for the test on treatment effect is maximal, or it is sought so that a required power level is achieved at a minimal total sample size or budget. Results are presented for three, five and seven ordered response categories, three treatment effect sizes and a skewed, bell-shaped or polarized distribution of the response probabilities. The optimal proportion of subjects in the intervention condition decreases with the number of response categories and the costs for the intervention relative to those for the control. The relation between the optimal proportion and effect size depends on the distribution of the response probabilities. The widely used balanced design is not always the most efficient; its efficiency as compared to the optimal design decreases with increasing cost ratio. The optimal design is highly robust to misspecification of the response probabilities and treatment effect size. The optimal design methodology is illustrated using two pharmaceutical examples. A Shiny app is available to find the optimal treatment allocation, to evaluate the efficiency of the balanced design and to study the relation between budget or sample size and power.
5. van Breukelen GJP, Candel MJJM. Efficient design of cluster randomized trials with treatment-dependent costs and treatment-dependent unknown variances. Stat Med 2018; 37:3027-3046. PMID: 29888393; PMCID: PMC6120518; DOI: 10.1002/sim.7824.
Abstract
Cluster randomized trials evaluate the effect of a treatment on persons nested within clusters, where treatment is randomly assigned to clusters. Current equations for the optimal sample size at the cluster and person level assume that the outcome variances and/or the study costs are known and homogeneous between treatment arms. This paper presents efficient yet robust designs for cluster randomized trials with treatment-dependent costs and treatment-dependent unknown variances, and compares these with 2 practical designs. First, the maximin design (MMD) is derived, which maximizes the minimum efficiency (minimizes the maximum sampling variance) of the treatment effect estimator over a range of treatment-to-control variance ratios. The MMD is then compared with the optimal design for homogeneous variances and costs (balanced design), and with that for homogeneous variances and treatment-dependent costs (cost-considered design). The results show that the balanced design is the MMD if the treatment-to-control cost ratio is the same at both design levels (cluster, person) and within the range for the treatment-to-control variance ratio. It still is highly efficient and better than the cost-considered design if the cost ratio is within the range for the squared variance ratio. Outside that range, the cost-considered design is better and highly efficient, but it is not the MMD. An example shows sample size calculation for the MMD, and the computer code (SPSS and R) is provided as supplementary material. The MMD is recommended for trial planning if the study costs are treatment-dependent and homogeneity of variances cannot be assumed.
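The maximin idea can be sketched by brute force (the paper derives the MMD analytically; everything below, including the cost structure, grids, and names, is an illustrative assumption): for each allocation of clusters and cluster sizes that fits the budget, evaluate the treatment effect variance at the endpoints of the assumed variance-ratio range and keep the design whose worst case is smallest.

```python
import itertools

def effect_variance(k1, m1, k2, m2, icc, var1, var2):
    """Variance of the difference in arm means for a two-arm cluster trial:
    each arm contributes var_a * (1 + (m_a - 1) * icc) / (k_a * m_a)."""
    return (var1 * (1 + (m1 - 1) * icc) / (k1 * m1)
            + var2 * (1 + (m2 - 1) * icc) / (k2 * m2))

def maximin_design(budget, cost_cluster, cost_person, icc, ratios,
                   m_grid=range(2, 31)):
    """Brute-force maximin: control variance fixed at 1, treatment variance
    ranging over `ratios`; costs are (control, treatment) pairs per cluster
    and per person. Returns (worst_variance, k1, m1, k2, m2)."""
    best = None
    for m1, m2 in itertools.product(m_grid, repeat=2):
        cpc1 = cost_cluster[0] + m1 * cost_person[0]  # cost per control cluster
        cpc2 = cost_cluster[1] + m2 * cost_person[1]  # cost per treatment cluster
        for k1 in range(2, int(budget // cpc1) + 1):
            k2 = int((budget - k1 * cpc1) // cpc2)    # spend the rest on arm 2
            if k2 < 2:
                continue
            # Variance is linear in the variance ratio, so the worst case
            # sits at an endpoint of the assumed range.
            worst = max(effect_variance(k1, m1, k2, m2, icc, 1.0, r)
                        for r in ratios)
            if best is None or worst < best[0]:
                best = (worst, k1, m1, k2, m2)
    return best

worst, k1, m1, k2, m2 = maximin_design(
    budget=1000.0, cost_cluster=(10.0, 40.0), cost_person=(1.0, 1.0),
    icc=0.05, ratios=(0.5, 2.0))
print(worst, (k1, m1), (k2, m2))
```

The paper's closed-form MMD makes this search unnecessary for the two-arm case, but the brute-force version shows the criterion itself: minimize the maximum sampling variance over the plausible variance ratios.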
Affiliation(s)
- Gerard J P van Breukelen
- Department of Methodology and Statistics, CAPHRI Care and Public Health Research Institute, Maastricht University, PO Box 616, 6200 MD, The Netherlands; Department of Methodology and Statistics, Graduate School of Psychology and Neuroscience, Maastricht University, PO Box 616, 6200 MD, The Netherlands
- Math J J M Candel
- Department of Methodology and Statistics, CAPHRI Care and Public Health Research Institute, Maastricht University, PO Box 616, 6200 MD, The Netherlands