1
Harrall KK, Sauder KA, Glueck DH, Shenkman EA, Muller KE. Using Power Analysis to Choose the Unit of Randomization, Outcome, and Approach for Subgroup Analysis for a Multilevel Randomized Controlled Clinical Trial to Reduce Disparities in Cardiovascular Health. Prev Sci 2024. [PMID: 38767783] [DOI: 10.1007/s11121-024-01673-y]
Abstract
We give examples of three features in the design of randomized controlled clinical trials that can increase power and thus decrease sample size and costs. We consider an example multilevel trial with several levels of clustering. For a fixed number of independent sampling units, we show that power can vary widely with the choice of the level of randomization. We demonstrate that power and interpretability can improve by testing a multivariate outcome rather than an unweighted composite outcome. Finally, we show that a pooled analytic approach, which analyzes data for all subgroups in a single model, improves power for testing the intervention effect compared with a stratified analysis, which fits a separate model for each subgroup. The power results are computed for a proposed prevention research study. The trial plans to randomize adults to either telehealth (intervention) or in-person treatment (control) to reduce cardiovascular risk factors. The trial outcomes will be measures of the Essential Eight, a set of scores for cardiovascular health developed by the American Heart Association that can be combined into a single composite score. The proposed trial is a multilevel study, with outcomes measured on participants, participants nested within providers, providers nested within clinics, and clinics nested within hospitals. Investigators suspect that the intervention effect will be greater in rural participants, who live farther from clinics than urban participants. The results use published, exact analytic methods for power calculations with continuous outcomes. We provide example code for power analyses using validated software.
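The abstract's point that power varies with the level of randomization can be illustrated with the standard design-effect approximation, DEFF = 1 + (m - 1) * ICC, where m is the cluster size. The sketch below is a generic normal-approximation power calculation under hypothetical values (effect size, ICC, and cluster sizes are illustrative assumptions), not the exact multilevel methods or validated software used in the paper.

```python
import math

def normal_cdf(x: float) -> float:
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def power_two_arm(delta: float, n_per_arm: int, icc: float,
                  cluster_size: int) -> float:
    """Approximate power of a two-sided z-test (alpha = 0.05) for a
    standardized mean difference `delta`, inflating the variance by the
    design effect DEFF = 1 + (m - 1) * icc for clusters of size m."""
    z_crit = 1.959963984540054              # z for two-sided alpha = 0.05
    deff = 1.0 + (cluster_size - 1) * icc
    se = math.sqrt(2.0 * deff / n_per_arm)  # Var(diff in means) ~ 2*DEFF/n
    return normal_cdf(delta / se - z_crit)

# Same total sample size, different randomized units (hypothetical values):
print(round(power_two_arm(0.25, 400, 0.05, cluster_size=1), 3))   # individual
print(round(power_two_arm(0.25, 400, 0.05, cluster_size=25), 3))  # cluster
```

With these illustrative inputs, moving randomization from the individual to a 25-person cluster drops approximate power from roughly 0.94 to roughly 0.66 at the same total sample size, which is the qualitative phenomenon the paper quantifies exactly.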
Affiliation(s)
- Kylie K Harrall
- Department of Health Outcomes and Biomedical Informatics, University of Florida College of Medicine, 2004 Mowry Road, Gainesville, FL 32606, USA
- Katherine A Sauder
- Department of Implementation Science, Wake Forest University School of Medicine, 475 Vine Street, Winston-Salem, NC 27101, USA
- Deborah H Glueck
- Department of Pediatrics, University of Colorado School of Medicine, 13123 E. 16th Ave., Aurora, CO 80045, USA
- Elizabeth A Shenkman
- Department of Health Outcomes and Biomedical Informatics, University of Florida College of Medicine, 2004 Mowry Road, Gainesville, FL 32606, USA
- Keith E Muller
- Department of Health Outcomes and Biomedical Informatics, University of Florida College of Medicine, 2004 Mowry Road, Gainesville, FL 32606, USA
2
Blette BS, Halpern SD, Li F, Harhay MO. Assessing treatment effect heterogeneity in the presence of missing effect modifier data in cluster-randomized trials. Stat Methods Med Res 2024; 33:909-927. [PMID: 38567439] [PMCID: PMC11041086] [DOI: 10.1177/09622802241242323]
Abstract
Understanding whether and how treatment effects vary across subgroups is crucial to inform clinical practice and recommendations. Accordingly, the assessment of heterogeneous treatment effects based on pre-specified potential effect modifiers has become a common goal in modern randomized trials. However, when one or more potential effect modifiers are missing, complete-case analysis may lead to bias and under-coverage. While statistical methods for handling missing data have been proposed and compared for individually randomized trials with missing effect modifier data, few guidelines exist for the cluster-randomized setting, where intracluster correlations in the effect modifiers, outcomes, or even missingness mechanisms may introduce further threats to accurate assessment of heterogeneous treatment effects. In this article, the performance of several missing data methods is compared through a simulation study of cluster-randomized trials with a continuous outcome and missing binary effect modifier data, and further illustrated using real data from the Work, Family, and Health Study. Our results suggest that multilevel multiple imputation and Bayesian multilevel multiple imputation perform better than other available methods, and that Bayesian multilevel multiple imputation has lower bias and coverage closer to the nominal level than standard multilevel multiple imputation when there are model specification or compatibility issues.
Affiliation(s)
- Bryan S Blette
- Department of Biostatistics, Vanderbilt University Medical Center, Nashville, TN, USA
- Scott D Halpern
- Department of Biostatistics, Epidemiology, and Informatics, Perelman School of Medicine, University of Pennsylvania, Philadelphia, PA, USA
- Clinical Trials Methods and Outcomes Lab, PAIR (Palliative and Advanced Illness Research) Center, Perelman School of Medicine, University of Pennsylvania, Philadelphia, PA, USA
- Fan Li
- Department of Biostatistics, Yale School of Public Health, New Haven, CT, USA
- Center for Methods in Implementation and Prevention Science, Yale University, New Haven, CT, USA
- Michael O Harhay
- Department of Biostatistics, Epidemiology, and Informatics, Perelman School of Medicine, University of Pennsylvania, Philadelphia, PA, USA
- Clinical Trials Methods and Outcomes Lab, PAIR (Palliative and Advanced Illness Research) Center, Perelman School of Medicine, University of Pennsylvania, Philadelphia, PA, USA
3
Wang X, Chen X, Goldfeld KS, Taljaard M, Li F. Sample size and power calculation for testing treatment effect heterogeneity in cluster randomized crossover designs. Stat Methods Med Res 2024. [PMID: 38689556] [DOI: 10.1177/09622802241247736]
Abstract
The cluster randomized crossover design has been proposed to improve efficiency over the traditional parallel-arm cluster randomized design. While statistical methods have been developed for designing cluster randomized crossover trials, they have exclusively focused on testing the overall average treatment effect, with little attention to differential treatment effects across subpopulations. Recently, interest has grown in understanding whether treatment effects may vary across pre-specified patient subpopulations, such as those defined by demographic or clinical characteristics. In this article, we consider the two-treatment two-period cluster randomized crossover design under either a cross-sectional or closed-cohort sampling scheme, where it is of interest to detect the heterogeneity of treatment effect via an interaction test. Assuming a patterned correlation structure for both the covariate and the outcome, we derive new sample size formulas for testing the heterogeneity of treatment effect with continuous outcomes based on linear mixed models. Our formulas also address unequal cluster sizes and therefore allow us to analytically assess the impact of unequal cluster sizes on the power of the interaction test in cluster randomized crossover designs. We conduct simulations to confirm the accuracy of the proposed methods, and illustrate their application in two real cluster randomized crossover trials.
Affiliation(s)
- Xueqi Wang
- Department of Internal Medicine, Yale School of Medicine, New Haven, CT, USA
- Department of Biostatistics, Yale School of Public Health, New Haven, CT, USA
- Xinyuan Chen
- Department of Mathematics and Statistics, Mississippi State University, Mississippi State, MS, USA
- Keith S Goldfeld
- Division of Biostatistics, Department of Population Health, NYU Grossman School of Medicine, New York, NY, USA
- Monica Taljaard
- Clinical Epidemiology Program, Ottawa Hospital Research Institute, Ottawa, ON, Canada
- School of Epidemiology and Public Health, University of Ottawa, Ottawa, ON, Canada
- Fan Li
- Department of Biostatistics, Yale School of Public Health, New Haven, CT, USA
- Center for Methods in Implementation and Prevention Science, Yale School of Public Health, New Haven, CT, USA
4
Yang C, Berkalieva A, Mazumdar M, Kwon D. Power calculation for detecting interaction effect in cross-sectional stepped-wedge cluster randomized trials: an important tool for disparity research. BMC Med Res Methodol 2024; 24:57. [PMID: 38431550] [DOI: 10.1186/s12874-024-02162-0]
Abstract
BACKGROUND The stepped-wedge cluster randomized trial (SW-CRT) design has become popular in healthcare research. It is an appealing alternative to traditional cluster randomized trials (CRTs) because it can reduce the burden of logistical and ethical problems. Several approaches to sample size determination for the overall treatment effect in SW-CRTs have been proposed. In certain situations, however, we are instead interested in examining the heterogeneity in treatment effect (HTE) between groups, which is equivalent to testing the interaction effect. An important example is the aim to reduce racial disparities through healthcare delivery interventions, where the focus is the interaction between the intervention and race. Sample size determination and power calculation for detecting an interaction effect between the intervention status variable and a key covariate in SW-CRT studies with binary outcomes have not yet been proposed.

METHODS We utilize the generalized estimating equation (GEE) method for detecting HTE. The variance of the estimated interaction effect is approximated based on the GEE method for marginal models. Power is calculated based on the two-sided Wald test. The Kauermann and Carroll (KC) and Mancl and DeRouen (MD) bias corrections are considered alongside uncorrected GEE (GEE-KC and GEE-MD).

RESULTS Among the three approaches, GEE has the largest simulated power and GEE-MD the smallest. Given a cluster size of 120, GEE has over 80% statistical power. A balanced binary covariate (50% prevalence) yields higher simulated power than an unbalanced one (30% prevalence). With an intermediate HTE effect size, only cluster sizes of 100 and 120 reach more than 80% power using GEE, for both correlation structures. With a large HTE effect size, all three approaches exceed 80% power when the cluster size is at least 60. Comparing an increase in cluster size with an increase in the number of clusters, the latter yields a slight gain in simulated power. When the cluster size increases from 20 to 40 with 20 clusters, power increases from 53.1% to 82.1% for GEE, from 50.6% to 79.7% for GEE-KC, and from 48.1% to 77.1% for GEE-MD. When the number of clusters increases from 20 to 40 with a cluster size of 20, power increases from 53.1% to 82.1% for GEE, from 50.6% to 81% for GEE-KC, and from 48.1% to 79.8% for GEE-MD.

CONCLUSIONS We propose three approaches for determining cluster size, given the number of clusters, for detecting the interaction effect in SW-CRTs. GEE and GEE-KC have reasonable operating characteristics for both intermediate and large HTE effect sizes.
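The abstract computes power from a two-sided Wald test of the interaction coefficient. A minimal sketch of that final step, assuming the interaction effect and its standard error are already in hand (both numbers below are hypothetical placeholders, not values or formulas from the paper, whose contribution is the GEE-based variance approximation itself):

```python
import math

def wald_power(beta_int: float, se_int: float) -> float:
    """Approximate power of the two-sided Wald test of H0: beta_int = 0
    at alpha = 0.05, given an assumed interaction effect and its standard
    error (e.g., from some variance approximation)."""
    z_crit = 1.959963984540054  # z for two-sided alpha = 0.05
    phi = lambda x: 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))
    z = abs(beta_int) / se_int
    # Second term covers rejection in the opposite tail; usually negligible.
    return phi(z - z_crit) + phi(-z - z_crit)

# Hypothetical interaction effect on the log-odds scale and its SE:
print(round(wald_power(0.5, 0.17), 3))
```

Shrinking the standard error, whether by adding clusters or enlarging them, is what moves this quantity; the paper's formulas and bias corrections determine how the SE actually depends on those design choices.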
Affiliation(s)
- Chen Yang
- Department of Population Health Science and Policy, Icahn School of Medicine at Mount Sinai, New York, NY, USA
- Institute for Healthcare Delivery Science, Mount Sinai Health System, New York, NY, USA
- Tisch Cancer Institute, Icahn School of Medicine at Mount Sinai, New York, NY, USA
- Asem Berkalieva
- Department of Population Health Science and Policy, Icahn School of Medicine at Mount Sinai, New York, NY, USA
- Institute for Healthcare Delivery Science, Mount Sinai Health System, New York, NY, USA
- Tisch Cancer Institute, Icahn School of Medicine at Mount Sinai, New York, NY, USA
- Madhu Mazumdar
- Department of Population Health Science and Policy, Icahn School of Medicine at Mount Sinai, New York, NY, USA
- Institute for Healthcare Delivery Science, Mount Sinai Health System, New York, NY, USA
- Tisch Cancer Institute, Icahn School of Medicine at Mount Sinai, New York, NY, USA
- Deukwoo Kwon
- Division of Clinical and Translational Sciences, Department of Internal Medicine, University of Texas Health Science Center at Houston, Houston, TX, USA
5
Li F, Chen X, Tian Z, Wang R, Heagerty PJ. Planning stepped wedge cluster randomized trials to detect treatment effect heterogeneity. Stat Med 2024; 43:890-911. [PMID: 38115805] [DOI: 10.1002/sim.9990]
Abstract
The stepped wedge design is a popular research design that enables rigorous evaluation of candidate interventions through a staggered cluster randomization strategy. While analytical methods have been developed for designing stepped wedge trials, the prior focus has been solely on testing the average treatment effect. With growing interest in formal evaluation of the heterogeneity of treatment effects across patient subpopulations, trial planning efforts need appropriate methods to accurately identify sample sizes or design configurations that can generate evidence for both the average treatment effect and variations in subgroup treatment effects. To fill this important gap, this article derives novel variance formulas for confirmatory analyses of treatment effect heterogeneity that are applicable to both cross-sectional and closed-cohort stepped wedge designs. We additionally point out that the same framework can be used for more efficient average treatment effect analyses via covariate adjustment, and that it allows familiar power formulas for average treatment effect analyses to be used. Our results further shed light on optimal allocations of clusters to maximize the weighted precision for assessing both the average and heterogeneous treatment effects. We apply the new methods to the Lumbar Imaging with Reporting of Epidemiology Trial, and carry out a simulation study to validate our new methods.
Affiliation(s)
- Fan Li
- Department of Biostatistics, Yale University School of Public Health, New Haven, Connecticut, USA
- Center for Methods in Implementation and Prevention Science, Yale University School of Public Health, New Haven, Connecticut, USA
- Xinyuan Chen
- Department of Mathematics and Statistics, Mississippi State University, Mississippi State, Mississippi, USA
- Zizhong Tian
- Department of Public Health Sciences, Pennsylvania State University College of Medicine, Hershey, Pennsylvania, USA
- Rui Wang
- Department of Population Medicine, Harvard Pilgrim Health Care Institute and Harvard Medical School, Boston, Massachusetts, USA
- Department of Biostatistics, Harvard T. H. Chan School of Public Health, Boston, Massachusetts, USA
- Patrick J Heagerty
- Department of Biostatistics, University of Washington, Seattle, Washington, USA
6
Tong G, Tong J, Jiang Y, Esserman D, Harhay MO, Warren JL. Hierarchical Bayesian modeling of heterogeneous outcome variance in cluster randomized trials. Clin Trials 2024. [PMID: 38197388] [DOI: 10.1177/17407745231222018]
Abstract
BACKGROUND Heterogeneous outcome correlations across treatment arms and clusters have been increasingly acknowledged in cluster randomized trials with binary endpoints, where analytical methods have been developed to study such heterogeneity. However, cluster-specific outcome variances and correlations have yet to be studied for cluster randomized trials with continuous outcomes.

METHODS This article proposes models, fitted in the Bayesian setting with a hierarchical variance structure, to quantify heterogeneous variances across clusters and explain them with cluster-level covariates when the outcome is continuous. The models can also be extended to analyzing heterogeneous variances in individually randomized group treatment trials, with arm-specific cluster-level covariates, or in partially nested designs. Simulation studies are carried out to validate the performance of the newly introduced models across different settings.

RESULTS Overall, the simulations showed that the newly introduced models perform well, with low bias and approximately 95% coverage for the intraclass correlation coefficients and regression parameters in the variance model. When variances are heterogeneous, our proposed models had improved model fit over models with homogeneous variances. When used to analyze data from the Kerala Diabetes Prevention Program study, our models identified heterogeneous variances and intraclass correlation coefficients across clusters and examined cluster-level characteristics associated with such heterogeneity.

CONCLUSION We propose new hierarchical Bayesian variance models to accommodate cluster-specific variances in cluster randomized trials. The newly developed methods inform the understanding of how an intervention strategy is implemented and disseminated differently across clusters and can help improve future trial design.
Affiliation(s)
- Guangyu Tong
- Department of Internal Medicine, Yale School of Medicine, New Haven, CT, USA
- Department of Biostatistics, Yale School of Public Health, New Haven, CT, USA
- Center for Methods in Implementation and Prevention Science, Yale School of Public Health, New Haven, CT, USA
- Jiaqi Tong
- Department of Biostatistics, Yale School of Public Health, New Haven, CT, USA
- Center for Methods in Implementation and Prevention Science, Yale School of Public Health, New Haven, CT, USA
- Yi Jiang
- Department of Biostatistics, Penn State College of Medicine, Hershey, PA, USA
- Denise Esserman
- Department of Biostatistics, Yale School of Public Health, New Haven, CT, USA
- Yale Center for Analytical Sciences, Yale School of Public Health, New Haven, CT, USA
- Michael O Harhay
- Clinical Trials Methods and Outcomes Lab, Palliative and Advanced Illness Research (PAIR) Center, Perelman School of Medicine, University of Pennsylvania, Philadelphia, PA, USA
- Department of Biostatistics, Epidemiology & Informatics, Perelman School of Medicine, University of Pennsylvania, Philadelphia, PA, USA
- Joshua L Warren
- Department of Biostatistics, Yale School of Public Health, New Haven, CT, USA
7
Maleyeff L, Wang R, Haneuse S, Li F. Sample size requirements for testing treatment effect heterogeneity in cluster randomized trials with binary outcomes. Stat Med 2023; 42:5054-5083. [PMID: 37974475] [PMCID: PMC10659142] [DOI: 10.1002/sim.9901]
Abstract
Cluster randomized trials (CRTs) refer to a popular class of experiments in which randomization is carried out at the group level. While methods have been developed for planning CRTs to study the average treatment effect and, more recently, the heterogeneous treatment effect, development for the latter objective has so far been limited to continuous outcomes. Despite the prevalence of binary outcomes in CRTs, determining the necessary sample size and statistical power for detecting differential treatment effects in CRTs with a binary outcome remains unclear. To address this methodological gap, we develop sample size procedures for testing treatment effect heterogeneity in two-level CRTs under a generalized linear mixed model. Closed-form sample size expressions are derived for a binary effect modifier and, in addition, a computationally efficient Monte Carlo approach is developed for a continuous effect modifier. Extensions to multiple effect modifiers are also discussed. We conduct simulations to examine the accuracy of the proposed sample size methods. We present several numerical illustrations to elucidate features of the proposed formulas and to compare our method to the approximate sample size calculation under a linear mixed model. Finally, we use data from the Strategies and Opportunities to Stop Colon Cancer in Priority Populations (STOP CRC) CRT to illustrate the proposed sample size procedure for testing treatment effect heterogeneity.
Affiliation(s)
- Lara Maleyeff
- Department of Biostatistics, Harvard T. H. Chan School of Public Health, Massachusetts, USA
- Rui Wang
- Department of Biostatistics, Harvard T. H. Chan School of Public Health, Massachusetts, USA
- Department of Population Medicine, Harvard Pilgrim Health Care Institute and Harvard Medical School, Boston, Massachusetts, USA
- Sebastien Haneuse
- Department of Biostatistics, Harvard T. H. Chan School of Public Health, Massachusetts, USA
- Fan Li
- Department of Biostatistics, Yale School of Public Health, New Haven, Connecticut, USA
- Center for Methods in Implementation and Prevention Science, Yale School of Public Health, Connecticut, USA
8
Li F, Chen X, Tian Z, Esserman D, Heagerty PJ, Wang R. Designing three-level cluster randomized trials to assess treatment effect heterogeneity. Biostatistics 2023; 24:833-849. [PMID: 35861621] [PMCID: PMC10583727] [DOI: 10.1093/biostatistics/kxac026]
Abstract
Cluster randomized trials often exhibit a three-level structure, with participants nested in subclusters such as health care providers, and subclusters nested in clusters such as clinics. While the average treatment effect has been the primary focus in planning three-level randomized trials, interest is growing in understanding whether the treatment effect varies among prespecified patient subpopulations, such as those defined by demographics or baseline clinical characteristics. In this article, we derive novel analytical design formulas, based on the asymptotic covariance matrix, for powering confirmatory analyses of treatment effect heterogeneity in three-level trials; the formulas are broadly applicable to the evaluation of cluster-level, subcluster-level, and participant-level effect modifiers and to designs where randomization can be carried out at any level. We characterize a nested exchangeable correlation structure for both the effect modifier and the outcome conditional on the effect modifier, and generate new insights from a study design perspective for conducting analyses of treatment effect heterogeneity based on a linear mixed analysis of covariance model. A simulation study is conducted to validate our new methods, and two real-world trial examples are used for illustration.
Affiliation(s)
- Fan Li
- Department of Biostatistics, Yale University School of Public Health, New Haven, CT 06510, USA
- Xinyuan Chen
- Department of Mathematics and Statistics, Mississippi State University, MS 39762, USA
- Zizhong Tian
- Division of Biostatistics and Bioinformatics, Department of Public Health Sciences, Pennsylvania State University, Hershey, PA 17033, USA
- Denise Esserman
- Department of Biostatistics, Yale University School of Public Health, New Haven, CT 06510, USA
- Patrick J Heagerty
- Department of Biostatistics, University of Washington, Seattle, WA 98195, USA
- Rui Wang
- Department of Biostatistics, Harvard T. H. Chan School of Public Health, Boston, MA 02115, USA
- Department of Population Medicine, Harvard Pilgrim Health Care Institute and Harvard Medical School, Boston, MA 02215, USA
9
Wang X, Goldfeld KS, Taljaard M, Li F. Sample Size Requirements to Test Subgroup-Specific Treatment Effects in Cluster-Randomized Trials. Prev Sci 2023. [PMID: 37816835] [PMCID: PMC11004667] [DOI: 10.1007/s11121-023-01590-6]
Abstract
Cluster-randomized trials (CRTs) allocate intact clusters of participants to treatment or control conditions and are increasingly used to evaluate healthcare delivery interventions. While previous studies have developed sample size methods for testing confirmatory hypotheses of treatment effect heterogeneity in CRTs (i.e., targeting the difference between subgroup-specific treatment effects), sample size methods for testing the subgroup-specific treatment effects themselves have not received adequate attention, despite rising interest in health equity considerations in CRTs. In this article, we develop formal methods for sample size and power analyses for testing subgroup-specific treatment effects in parallel-arm CRTs with a continuous outcome and a binary subgroup variable. We point out that the variances of the subgroup-specific treatment effect estimators and their covariance are given by weighted averages of the variance of the overall average treatment effect estimator and the variance of the heterogeneous treatment effect estimator. This analytical insight facilitates an explicit characterization of the requirements for both the omnibus test and the intersection-union test to achieve the desired level of power. Generalizations that allow for subgroup-specific variance structures are also discussed. We report on a simulation study to validate the proposed sample size methods and demonstrate that the empirical power corresponds well with the predicted power for both tests. The design and setting of the Umea Dementia and Exercise (UMDEX) CRT in older adults are used to illustrate our sample size methods.
Affiliation(s)
- Xueqi Wang
- Department of Biostatistics, Yale School of Public Health, New Haven, CT, USA
- Section of Geriatrics, Department of Internal Medicine, Yale School of Medicine, New Haven, CT, USA
- Keith S Goldfeld
- Division of Biostatistics, Department of Population Health, NYU Grossman School of Medicine, New York, NY, USA
- Monica Taljaard
- Clinical Epidemiology Program, Ottawa Hospital Research Institute, Ottawa, ON, Canada
- School of Epidemiology and Public Health, University of Ottawa, Ottawa, ON, Canada
- Fan Li
- Department of Biostatistics, Yale School of Public Health, New Haven, CT, USA
- Center for Methods in Implementation and Prevention Science, Yale School of Public Health, Suite 200, Room 229, 135 College Street, New Haven, CT 06510, USA
10
Ouyang Y, Hemming K, Li F, Taljaard M. Estimating intra-cluster correlation coefficients for planning longitudinal cluster randomized trials: a tutorial. Int J Epidemiol 2023; 52:1634-1647. [PMID: 37196320] [PMCID: PMC10555741] [DOI: 10.1093/ije/dyad062]
Abstract
It is well known that designing a cluster randomized trial (CRT) requires an advance estimate of the intra-cluster correlation coefficient (ICC). In the case of longitudinal CRTs, where outcomes are assessed repeatedly in each cluster over time, estimates for more complex correlation structures are required. Three common types of correlation structures for longitudinal CRTs are exchangeable, nested/block exchangeable, and exponential decay correlations; the latter two allow the strength of the correlation to weaken over time. Determining sample sizes under these latter two structures requires advance specification of the within-period ICC and the cluster autocorrelation coefficient, as well as the intra-individual autocorrelation coefficient in the case of a cohort design. How to estimate these coefficients is a common challenge for investigators. When appropriate estimates from previously published longitudinal CRTs are not available, one possibility is to re-analyse data from an available trial dataset or to access observational data to estimate these parameters in advance of a trial. In this tutorial, we demonstrate how to estimate correlation parameters under these correlation structures for continuous and binary outcomes. We first introduce the correlation structures and their underlying model assumptions under a mixed-effects regression framework. With practical advice for implementation, we then demonstrate how the correlation parameters can be estimated using examples, and we provide programming code in R, SAS, and Stata. An R Shiny app is available that allows investigators to upload an existing dataset and obtain the estimated correlation parameters. We conclude by identifying some gaps in the literature.
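The tutorial provides code in R, SAS, and Stata. As a language-agnostic sketch of the simplest case it covers, a single ICC for a cross-sectional design with balanced clusters can be estimated with the one-way ANOVA estimator; the toy data below are hypothetical, not from the tutorial, and this sketch does not handle the nested exchangeable or exponential decay structures the tutorial addresses.

```python
from statistics import mean

def anova_icc(clusters: list[list[float]]) -> float:
    """One-way ANOVA estimator of the ICC for balanced clusters:
    sigma2_between = (MSB - MSW) / m, then
    ICC = sigma2_between / (sigma2_between + MSW)."""
    k = len(clusters)           # number of clusters
    m = len(clusters[0])        # common cluster size (balanced design)
    grand = mean(x for c in clusters for x in c)
    cluster_means = [mean(c) for c in clusters]
    msb = m * sum((cm - grand) ** 2 for cm in cluster_means) / (k - 1)
    msw = sum((x - cm) ** 2 for c, cm in zip(clusters, cluster_means)
              for x in c) / (k * (m - 1))
    sigma2_between = max((msb - msw) / m, 0.0)  # truncate negative estimates
    return sigma2_between / (sigma2_between + msw)

# Toy balanced data: 3 clusters of 4 observations each (hypothetical):
data = [[5.0, 5.5, 6.0, 5.5], [7.0, 7.5, 7.0, 6.5], [6.0, 6.5, 6.0, 6.5]]
print(round(anova_icc(data), 3))
```

In practice one would fit a mixed-effects model, as the tutorial does, which also accommodates unbalanced clusters, covariates, and the longitudinal correlation structures described above.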
Affiliation(s)
- Yongdong Ouyang
- Clinical Epidemiology Program, Ottawa Hospital Research Institute, Ottawa, ON, Canada
- School of Epidemiology and Public Health, University of Ottawa, Ottawa, ON, Canada
- Karla Hemming
- Institute of Applied Health Research, The University of Birmingham, Birmingham, UK
- Fan Li
- Department of Biostatistics, Yale School of Public Health, New Haven, CT, USA
- Center for Methods in Implementation and Prevention Science, Yale School of Public Health, New Haven, CT, USA
- Monica Taljaard
- Clinical Epidemiology Program, Ottawa Hospital Research Institute, Ottawa, ON, Canada
- School of Epidemiology and Public Health, University of Ottawa, Ottawa, ON, Canada
11
Ryan MM, Esserman D, Li F. Maximin optimal cluster randomized designs for assessing treatment effect heterogeneity. Stat Med 2023; 42:3764-3785. [PMID: 37339777] [PMCID: PMC10510425] [DOI: 10.1002/sim.9830]
Abstract
Cluster randomized trials (CRTs) are studies where treatment is randomized at the cluster level but outcomes are typically collected at the individual level. When CRTs are employed in pragmatic settings, baseline population characteristics may moderate treatment effects, leading to what are known as heterogeneous treatment effects (HTEs). Pre-specified, hypothesis-driven HTE analyses in CRTs can enable an understanding of how interventions may impact subpopulation outcomes. While closed-form sample size formulas have recently been proposed, assuming known intracluster correlation coefficients (ICCs) for both the covariate and the outcome, guidance on optimal cluster randomized designs to ensure maximum power with pre-specified HTE analyses has not yet been developed. We derive new design formulas to determine the cluster size and number of clusters that achieve the locally optimal design (LOD), minimizing the variance for estimating the HTE parameter given a budget constraint. Because the LODs are based on covariate and outcome ICC values that are usually unknown, we further develop the maximin design for assessing HTE, identifying the combination of design resources that maximizes the relative efficiency of the HTE analysis in the worst-case scenario. In addition, given that the analysis of the average treatment effect is often of primary interest, we also establish optimal designs that accommodate multiple objectives by combining considerations for studying both the average and heterogeneous treatment effects. We illustrate our methods in the context of the Kerala Diabetes Prevention Program CRT, and provide an R Shiny app to facilitate calculation of optimal designs under a wide range of design parameters.
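For context on what a budget-constrained locally optimal design looks like in the simplest setting, the classical result for the average treatment effect gives a closed-form optimal cluster size. This is a sketch of that well-known special case only, not the HTE-specific or maximin designs derived in the paper, and the costs and ICC below are hypothetical.

```python
import math

def optimal_cluster_size(cost_per_cluster: float, cost_per_subject: float,
                         icc: float) -> float:
    """Cluster size m minimizing Var ~ (1 + (m - 1)*icc) / (k*m) for the
    average treatment effect under the budget constraint
    k * (cost_per_cluster + m * cost_per_subject) = B (classical result)."""
    return math.sqrt(cost_per_cluster * (1.0 - icc) / (cost_per_subject * icc))

def clusters_affordable(budget: float, cost_per_cluster: float,
                        cost_per_subject: float, m: int) -> float:
    """Number of clusters a fixed budget buys at cluster size m."""
    return budget / (cost_per_cluster + m * cost_per_subject)

# Hypothetical costs: $1000 to recruit a cluster, $100 per subject:
m_star = optimal_cluster_size(1000.0, 100.0, icc=0.04)
print(round(m_star, 1))                                   # optimal cluster size
print(round(clusters_affordable(50000.0, 1000.0, 100.0, round(m_star)), 1))
```

The design intuition carries over: the larger the ICC, the smaller the optimal cluster size and the more clusters the budget should buy; the paper's maximin approach addresses what to do when the ICCs themselves are uncertain.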
Affiliation(s)
- Mary M. Ryan: Department of Biostatistics, Yale School of Public Health, Connecticut, USA; Yale Center for Analytical Sciences, Yale School of Public Health, Connecticut, USA
- Denise Esserman: Department of Biostatistics, Yale School of Public Health, Connecticut, USA; Yale Center for Analytical Sciences, Yale School of Public Health, Connecticut, USA
- Fan Li: Department of Biostatistics, Yale School of Public Health, Connecticut, USA; Yale Center for Analytical Sciences, Yale School of Public Health, Connecticut, USA; Center for Methods in Implementation and Prevention Science, Yale School of Public Health, Connecticut, USA
12. Tong G, Taljaard M, Li F. Sample size considerations for assessing treatment effect heterogeneity in randomized trials with heterogeneous intracluster correlations and variances. Stat Med 2023; 42:3392-3412. PMID: 37316956. DOI: 10.1002/sim.9811.
Abstract
An important consideration in the design and analysis of randomized trials is the need to account for outcome observations being positively correlated within groups or clusters. Two notable types of designs with this consideration are individually randomized group treatment trials and cluster randomized trials. While sample size methods for testing the average treatment effect are available for both types of designs, methods for detecting treatment effect modification are relatively limited. In this article, we present new sample size formulas for testing treatment effect modification based on either a univariate or multivariate effect modifier in both individually randomized group treatment and cluster randomized trials with a continuous outcome and any type of effect modifier, while accounting for differences across study arms in the outcome variance, the outcome intracluster correlation coefficient (ICC), and the cluster size. We consider cases where the effect modifier is measured at either the individual level or the cluster level, and with a univariate effect modifier our closed-form sample size expressions provide insights into the optimal allocation of groups or clusters to maximize design efficiency. Overall, our results show that the required sample size for testing treatment effect heterogeneity with an individual-level effect modifier can be affected by unequal ICCs and variances between arms, and that accounting for such between-arm heterogeneity can lead to more accurate sample size determination. We use simulations to validate our sample size formulas and illustrate their application in the context of two real trials: an individually randomized group treatment trial (the AWARE study) and a cluster randomized trial (the K-DPP study).
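Editor's note: as a rough illustration of how between-arm heterogeneity in outcome variances and ICCs enters a cluster-randomized sample size calculation, here is a minimal Python sketch of a z-approximation for the average treatment effect with a common cluster size. It is not the authors' formula (their results target effect modification and allow unequal cluster sizes); all parameter names are illustrative.

```python
from math import ceil
from statistics import NormalDist

def clusters_per_arm(delta, m, sd1, sd0, icc1, icc0, alpha=0.05, power=0.8):
    """Clusters per arm (z-approximation) to detect a mean difference `delta`
    with common cluster size m, allowing arm-specific outcome SDs and ICCs."""
    q = NormalDist().inv_cdf
    de1 = 1 + (m - 1) * icc1   # design effect, intervention arm
    de0 = 1 + (m - 1) * icc0   # design effect, control arm
    var_per_cluster = (sd1 ** 2 * de1 + sd0 ** 2 * de0) / m
    return ceil((q(1 - alpha / 2) + q(power)) ** 2 * var_per_cluster / delta ** 2)
```

With m = 1 and no clustering this collapses to the familiar two-sample formula (about 63 per arm for a standardized difference of 0.5 at 80% power); raising either arm's ICC inflates the required number of clusters.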
Affiliation(s)
- Guangyu Tong: Department of Biostatistics, Yale School of Public Health, New Haven, Connecticut, USA
- Monica Taljaard: Clinical Epidemiology Program, Ottawa Hospital Research Institute, Ottawa, Canada; School of Epidemiology and Public Health, University of Ottawa, Ottawa, Canada
- Fan Li: Department of Biostatistics, Yale School of Public Health, New Haven, Connecticut, USA; Center for Methods in Implementation and Prevention Science, Yale School of Public Health, New Haven, Connecticut, USA
13. Tong J, Li F, Harhay MO, Tong G. Accounting for expected attrition in the planning of cluster randomized trials for assessing treatment effect heterogeneity. BMC Med Res Methodol 2023; 23:85. PMID: 37024809. PMCID: PMC10077680. DOI: 10.1186/s12874-023-01887-8. Open access.
Abstract
BACKGROUND Detecting treatment effect heterogeneity is an important objective in cluster randomized trials and implementation research. While sample size procedures for testing the average treatment effect that account for participant attrition under missing completely at random or missing at random assumptions have been developed previously, the impact of attrition on the power for detecting heterogeneous treatment effects in cluster randomized trials remains unknown. METHODS We provide a sample size formula for testing a heterogeneous treatment effect assuming the outcome is missing completely at random. We also propose an efficient Monte Carlo sample size procedure for assessing a heterogeneous treatment effect assuming covariate-dependent outcome missingness (missing at random). We compare our sample size methods with the direct inflation method, which divides the estimated sample size by the mean follow-up rate. We also evaluate our methods through simulation studies and illustrate them with a real-world example. RESULTS Simulation results show that our proposed sample size methods under both missing completely at random and missing at random provide sufficient power for assessing a heterogeneous treatment effect. The proposed sample size methods lead to more accurate sample size estimates than the direct inflation method when the missingness rate is high (e.g., ≥ 30%). Moreover, sample size estimation under both missing completely at random and missing at random is sensitive to the missingness rate, but not to the intracluster correlation coefficient among the missingness indicators. CONCLUSION Our new sample size methods can assist in planning cluster randomized trials that aim to assess a heterogeneous treatment effect when participant attrition is expected.
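Editor's note: the contrast between direct inflation and an attrition-aware recalculation can be sketched in a few lines. This toy version targets the average treatment effect for simplicity (not the HTE parameter the paper addresses), assumes MCAR, and does not reproduce the paper's Monte Carlo procedure for MAR; all inputs are illustrative.

```python
from math import ceil
from statistics import NormalDist

def clusters_complete(delta, m, icc, alpha=0.05, power=0.8):
    """Clusters per arm (z-approximation) assuming complete follow-up."""
    q = NormalDist().inv_cdf
    de = 1 + (m - 1) * icc
    return (q(1 - alpha / 2) + q(power)) ** 2 * 2 * de / (m * delta ** 2)

def direct_inflation(delta, m, icc, follow_up):
    """Naive adjustment: divide the complete-data cluster count by the rate."""
    return ceil(clusters_complete(delta, m, icc) / follow_up)

def mcar_adjusted(delta, m, icc, follow_up):
    """MCAR-style adjustment: recompute with the expected observed cluster
    size m * follow_up, while still recruiting m participants per cluster."""
    return ceil(clusters_complete(delta, m * follow_up, icc))
```

With 40% attrition (follow_up = 0.6), the naive method asks for noticeably more clusters than the MCAR recalculation, consistent with the abstract's finding that direct inflation becomes less accurate as missingness grows.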
Affiliation(s)
- Jiaqi Tong: Department of Biostatistics, Yale School of Public Health, 135 College Street, New Haven, CT 06510, USA
- Fan Li: Department of Biostatistics, Yale School of Public Health, 135 College Street, New Haven, CT 06510, USA; Center for Methods in Implementation and Prevention Science, Yale School of Public Health, New Haven, CT, USA
- Michael O Harhay: Department of Biostatistics, Epidemiology and Informatics, Perelman School of Medicine, University of Pennsylvania, Philadelphia, PA, USA
- Guangyu Tong: Department of Biostatistics, Yale School of Public Health, 135 College Street, New Haven, CT 06510, USA; Center for Methods in Implementation and Prevention Science, Yale School of Public Health, New Haven, CT, USA
14. Kahan BC, Li F, Copas AJ, Harhay MO. Estimands in cluster-randomized trials: choosing analyses that answer the right question. Int J Epidemiol 2023; 52:107-118. PMID: 35834775. PMCID: PMC9908044. DOI: 10.1093/ije/dyac131. Open access.
Abstract
BACKGROUND Cluster-randomized trials (CRTs) involve randomizing groups of individuals (e.g. hospitals, schools or villages) to different interventions. Various approaches exist for analysing CRTs but there has been little discussion around the treatment effects (estimands) targeted by each. METHODS We describe the different estimands that can be addressed through CRTs and demonstrate how choices between different analytic approaches can impact the interpretation of results by fundamentally changing the question being asked, or, equivalently, the target estimand. RESULTS CRTs can address either the participant-average treatment effect (the average treatment effect across participants) or the cluster-average treatment effect (the average treatment effect across clusters). These two estimands can differ when participant outcomes or the treatment effect depends on the cluster size (referred to as 'informative cluster size'), which can occur for reasons such as differences in staffing levels or types of participants between small and large clusters. Furthermore, common estimators, such as mixed-effects models or generalized estimating equations with an exchangeable working correlation structure, can produce biased estimates for both the participant-average and cluster-average treatment effects when cluster size is informative. We describe alternative estimators (independence estimating equations and cluster-level analyses) that are unbiased for CRTs even when informative cluster size is present. CONCLUSION We conclude that careful specification of the estimand at the outset can ensure that the study question being addressed is clear and relevant, and, in turn, that the selected estimator provides an unbiased estimate of the desired quantity.
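Editor's note: the distinction between the two estimands is easy to see numerically. When outcomes depend on cluster size, the participant-average and cluster-average means diverge, as this minimal sketch with hypothetical data shows.

```python
# Hypothetical toy data: one arm's clusters, where the larger cluster has
# systematically lower outcomes (informative cluster size).
clusters = [[1.0, 1.0, 1.0, 1.0], [3.0, 3.0]]

def participant_average(clusters):
    """Average over all participants: large clusters carry more weight."""
    values = [y for cluster in clusters for y in cluster]
    return sum(values) / len(values)

def cluster_average(clusters):
    """Unweighted average of cluster means: every cluster counts equally."""
    return sum(sum(c) / len(c) for c in clusters) / len(clusters)
```

Here `participant_average(clusters)` is 10/6 ≈ 1.67 while `cluster_average(clusters)` is 2.0; the two estimands coincide only when outcomes are unrelated to cluster size.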
Affiliation(s)
- Brennan C Kahan: MRC Clinical Trials Unit at UCL, Institute of Clinical Trials and Methodology, London, UK
- Fan Li: Department of Biostatistics, Yale University School of Public Health, New Haven, CT, USA; Center for Methods in Implementation and Prevention Science, Yale University School of Public Health, New Haven, CT, USA
- Andrew J Copas: MRC Clinical Trials Unit at UCL, Institute of Clinical Trials and Methodology, London, UK
- Michael O Harhay: Clinical Trials Methods and Outcomes Lab, PAIR (Palliative and Advanced Illness Research) Center, Perelman School of Medicine, University of Pennsylvania, Philadelphia, PA, USA; Department of Biostatistics, Epidemiology, and Informatics, Perelman School of Medicine, University of Pennsylvania, Philadelphia, PA, USA
15. Li W, Konstantopoulos S. Power Analysis for Moderator Effects in Longitudinal Cluster Randomized Designs. Educational and Psychological Measurement 2023; 83:116-145. PMID: 36601251. PMCID: PMC9806516. DOI: 10.1177/00131644221077359.
Abstract
Cluster randomized controlled trials often incorporate a longitudinal component where, for example, students are followed over time and student outcomes are measured repeatedly. Besides examining how intervention effects induce changes in outcomes, researchers are sometimes also interested in exploring whether intervention effects on outcomes are modified by moderator variables at the individual (e.g., gender, race/ethnicity) and/or the cluster level (e.g., school urbanicity) over time. This study provides methods for statistical power analysis of moderator effects in two- and three-level longitudinal cluster randomized designs. Power computations take into account clustering effects, the number of measurement occasions, the impact of sample sizes at different levels, covariate effects, and the variance of the moderator variable. Illustrative examples are offered to demonstrate the applicability of the methods.
Affiliation(s)
- Wei Li: University of Florida, Gainesville, USA
16. Nicholls SG, Al‐Jaishi AA, Niznick H, Carroll K, Madani MT, Peak KD, Madani L, Nevins P, Adisso L, Li F, Weijer C, Mitchell SL, Welch V, Quiñones AR, Taljaard M. Health equity considerations in pragmatic trials in Alzheimer's and dementia disease: Results from a methodological review. Alzheimer's & Dementia (Amsterdam, Netherlands) 2023; 15:e12392. PMID: 36777091. PMCID: PMC9899766. DOI: 10.1002/dad2.12392.
Abstract
Introduction: To improve dementia care delivery for persons across all backgrounds, it is imperative that health equity is integrated into pragmatic trials. Methods: We reviewed 62 pragmatic trials of people with dementia published 2014 to 2019. We assessed health equity in the objectives; design, conduct, and analysis; and reporting using PROGRESS-Plus, which stands for Place of residence, Race/ethnicity, Occupation, Gender/sex, Religion, Education, Socioeconomic status, Social capital, and other factors such as age and disability. Results: Two (3.2%) trials incorporated equity considerations into their objectives; nine (14.5%) engaged with communities; four (6.5%) described steps to increase enrollment from equity-relevant groups. Almost all trials (59, 95.2%) assessed baseline balance for at least one PROGRESS-Plus characteristic, but only 10 (16.1%) presented subgroup analyses across such characteristics. Differential recruitment, attrition, implementation, adherence, and applicability across PROGRESS-Plus were seldom discussed. Discussion: Ongoing and future pragmatic trials should more rigorously integrate equity considerations in their design, conduct, and reporting. Highlights: Few pragmatic trials are explicitly designed to inform equity-relevant objectives. Few pragmatic trials take steps to increase enrollment from equity-relevant groups. Disaggregated results across equity-relevant groups are seldom reported. Adherence to existing tools (e.g., IMPACT Best Practices, CONSORT-Equity) is key.
Affiliation(s)
- Stuart G. Nicholls: Clinical Epidemiology Program, Ottawa Hospital Research Institute, Ottawa, Ontario, Canada
- Ahmed A. Al‐Jaishi: Clinical Epidemiology Program, Ottawa Hospital Research Institute, Ottawa, Ontario, Canada
- Harrison Niznick: Clinical Epidemiology Program, Ottawa Hospital Research Institute, Ottawa, Ontario, Canada
- Kelly Carroll: Clinical Epidemiology Program, Ottawa Hospital Research Institute, Ottawa, Ontario, Canada
- Katherine D. Peak: Department of Family Medicine, Oregon Health & Science University, Portland, Oregon, USA
- Leen Madani: Bruyère Research Institute and University of Ottawa, Ottawa, Ontario, Canada
- Pascale Nevins: Clinical Epidemiology Program, Ottawa Hospital Research Institute, Ottawa, Ontario, Canada
- Lionel Adisso: VITAM – Centre de recherche en santé durable, Department of Social and Preventive Medicine, Faculty of Medicine, Université Laval, Quebec, Canada
- Fan Li: Department of Biostatistics, Yale University School of Public Health, New Haven, Connecticut, USA
- Charles Weijer: Departments of Medicine, Epidemiology & Biostatistics, and Philosophy, Western University, London, Ontario, Canada
- Susan L. Mitchell: Hebrew SeniorLife, Marcus Institute for Aging Research, Boston, Massachusetts, USA
- Vivian Welch: Bruyère Research Institute and School of Epidemiology and Public Health, University of Ottawa, Ottawa, Ontario, Canada
- Ana R. Quiñones: Department of Family Medicine, Oregon Health & Science University, Portland, Oregon, USA
- Monica Taljaard: Clinical Epidemiology Program, Ottawa Hospital Research Institute, and School of Epidemiology and Public Health, University of Ottawa, Ottawa, Ontario, Canada
17. Jago R, Salway R, House D, Beets M, Lubans DR, Woods C, de Vocht F. Rethinking children's physical activity interventions at school: A new context-specific approach. Front Public Health 2023; 11:1149883. PMID: 37124783. PMCID: PMC10133698. DOI: 10.3389/fpubh.2023.1149883. Open access.
Abstract
Physical activity is important for children's health. However, evidence suggests that many children and adults do not meet international physical activity recommendations. Current school-based interventions have had limited effect on physical activity, and alternative approaches are needed. Context, which includes school setting, ethos, staff, and sociodemographic factors, is a key and largely ignored contributor to the effectiveness of school-based physical activity interventions, acting through several interacting pathways. Conceptualization: Current programs focus on tightly constructed content that ignores the context in which the program will be delivered, thereby limiting effectiveness. We propose a move away from uniform interventions that maximize internal validity toward a flexible approach that enables schools to tailor content to their specific context. Evaluation designs: Evaluation of context-specific interventions should explicitly consider context. This is challenging in cluster randomized controlled trial designs, so alternative designs such as natural experiments and stepped-wedge designs warrant further consideration. Primary outcome: A collective focus on average minutes of moderate-to-vigorous intensity physical activity may not always be the most appropriate choice; a wider range of outcomes may improve children's physical activity and health in the long term. In this paper, we argue that greater consideration of school context is key in the design and analysis of school-based physical activity interventions and may help overcome existing limitations in the design of effective interventions and thus progress the field. While this focus on context-specific interventions and evaluation is untested, we hope to stimulate debate of the key issues to improve future physical activity intervention development and implementation.
Affiliation(s)
- Russell Jago: Centre for Exercise, Nutrition and Health Sciences, School for Policy Studies, University of Bristol, Bristol, United Kingdom; Population Health Sciences, Bristol Medical School, University of Bristol, Bristol, United Kingdom; The National Institute for Health Research, Applied Research Collaboration West (NIHR ARC West), University Hospitals Bristol and Weston NHS Foundation Trust, Bristol, United Kingdom; NIHR Bristol Biomedical Research Centre, University Hospitals Bristol and Weston NHS Foundation Trust and University of Bristol, Bristol, United Kingdom. *Correspondence: Russell Jago,
- Ruth Salway: Centre for Exercise, Nutrition and Health Sciences, School for Policy Studies, University of Bristol, Bristol, United Kingdom; Population Health Sciences, Bristol Medical School, University of Bristol, Bristol, United Kingdom
- Danielle House: Centre for Exercise, Nutrition and Health Sciences, School for Policy Studies, University of Bristol, Bristol, United Kingdom
- Michael Beets: Arnold School of Public Health, University of South Carolina, Columbia, SC, United States
- David Revalds Lubans: Centre for Active Living and Learning, College of Human and Social Futures, University of Newcastle, Callaghan, NSW, Australia; Hunter Medical Research Institute, Newcastle, NSW, Australia; Faculty of Sport and Health Sciences, University of Jyväskylä, Jyväskylä, Finland
- Catherine Woods: Physical Activity for Health Research Centre, Health Research Institute, University of Limerick, Limerick, Ireland
- Frank de Vocht: Population Health Sciences, Bristol Medical School, University of Bristol, Bristol, United Kingdom; The National Institute for Health Research, Applied Research Collaboration West (NIHR ARC West), University Hospitals Bristol and Weston NHS Foundation Trust, Bristol, United Kingdom
18. Copas A, Murray DM, Roberts JN. Thirteenth annual UPenn conference on statistical issues in clinical trials: Cluster-randomized clinical trials-opportunities and challenges (afternoon panel session). Clin Trials 2022; 19:422-431. PMID: 35924779. DOI: 10.1177/17407745221101284.
Affiliation(s)
- Jeffrey N Roberts: U.S. Food & Drug Administration, Silver Spring, MD, USA; Merck & Co., Inc., Rahway, NJ, USA
19
|
Tian Z, Esserman D, Tong G, Blaha O, Dziura J, Peduzzi P, Li F. Sample size calculation in hierarchical 2×2 factorial trials with unequal cluster sizes. Stat Med 2022; 41:645-664. [PMID: 34978097 PMCID: PMC8962918 DOI: 10.1002/sim.9284] [Citation(s) in RCA: 4] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/04/2021] [Revised: 11/19/2021] [Accepted: 11/25/2021] [Indexed: 11/08/2022]
Abstract
Motivated by a suicide prevention trial with hierarchical treatment allocation (cluster-level and individual-level treatments), we address the sample size requirements for testing the treatment effects as well as their interaction. We assume a linear mixed model, within which two types of treatment effect estimands (controlled effect and marginal effect) are defined. For each null hypothesis corresponding to an estimand, we derive sample size formulas based on large-sample z-approximation, and provide finite-sample modifications based on a t-approximation. We relax the equal cluster size assumption and express the sample size formulas as functions of the mean and coefficient of variation of cluster sizes. We show that the sample size requirement for testing the controlled effect of the cluster-level treatment is more sensitive to cluster size variability than that for testing the controlled effect of the individual-level treatment; the same observation holds for testing the marginal effects. In addition, we show that the sample size for testing the interaction effect is proportional to that for testing the controlled or the marginal effect of the individual-level treatment. We conduct extensive simulations to validate the proposed sample size formulas, and find the empirical power agrees well with the predicted power for each test. Furthermore, the t-approximations often provide better control of type I error rate with a small number of clusters. Finally, we illustrate our sample size formulas to design the motivating suicide prevention factorial trial. The proposed methods are implemented in the R package H2x2Factorial.
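Editor's note: the abstract's point that t-approximations matter with few clusters can be sketched for a simple parallel-arm cluster design. The degrees-of-freedom convention here (total clusters minus 2) and the first-order Cornish-Fisher approximation to the t quantile are simplifying assumptions of this sketch, not the paper's exact finite-sample modification or its factorial-design formulas.

```python
from math import ceil
from statistics import NormalDist

def t_quantile(p, df):
    """First-order Cornish-Fisher approximation to the Student-t quantile."""
    z = NormalDist().inv_cdf(p)
    return z + (z ** 3 + z) / (4 * df)

def _clusters(qa, qb, delta, m, icc):
    de = 1 + (m - 1) * icc                 # standard design effect
    return (qa + qb) ** 2 * 2 * de / (m * delta ** 2)

def clusters_z(delta, m, icc, alpha=0.05, power=0.8):
    """Clusters per arm from the large-sample z-approximation."""
    q = NormalDist().inv_cdf
    return ceil(_clusters(q(1 - alpha / 2), q(power), delta, m, icc))

def clusters_t(delta, m, icc, alpha=0.05, power=0.8):
    """t-modification: iterate because the quantiles depend on df,
    which in turn depends on the answer."""
    K = clusters_z(delta, m, icc, alpha, power)
    while True:
        df = max(2 * K - 2, 1)             # assumed df convention
        need = ceil(_clusters(t_quantile(1 - alpha / 2, df),
                              t_quantile(power, df), delta, m, icc))
        if need <= K:
            return K
        K = need
```

With few clusters the t-based answer is at least as large as the z-based one, reflecting the heavier tails of the t distribution.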
Affiliation(s)
- Zizhong Tian: Department of Biostatistics, Yale University School of Public Health, Connecticut, USA
- Denise Esserman: Department of Biostatistics, Yale University School of Public Health, Connecticut, USA; Yale Center for Analytical Sciences, Yale University, Connecticut, USA
- Guangyu Tong: Department of Biostatistics, Yale University School of Public Health, Connecticut, USA; Yale Center for Analytical Sciences, Yale University, Connecticut, USA
- Ondrej Blaha: Department of Biostatistics, Yale University School of Public Health, Connecticut, USA; Yale Center for Analytical Sciences, Yale University, Connecticut, USA
- James Dziura: Department of Biostatistics, Yale University School of Public Health, Connecticut, USA; Yale Center for Analytical Sciences, Yale University, Connecticut, USA
- Peter Peduzzi: Department of Biostatistics, Yale University School of Public Health, Connecticut, USA; Yale Center for Analytical Sciences, Yale University, Connecticut, USA
- Fan Li: Department of Biostatistics, Yale University School of Public Health, Connecticut, USA; Yale Center for Analytical Sciences, Yale University, Connecticut, USA; Center for Methods in Implementation and Prevention Science, Yale University, Connecticut, USA. Correspondence: Fan Li, PhD, Department of Biostatistics, Yale School of Public Health, New Haven, CT 06510, USA
20. Tong G, Esserman D, Li F. Accounting for unequal cluster sizes in designing cluster randomized trials to detect treatment effect heterogeneity. Stat Med 2021; 41:1376-1396. PMID: 34923655. DOI: 10.1002/sim.9283.
Abstract
Unequal cluster sizes are common in cluster randomized trials (CRTs). While there are a number of previous investigations studying the impact of unequal cluster sizes on the power for testing the average treatment effect in CRTs, little is known about the impact of unequal cluster sizes on the power for testing the heterogeneous treatment effect (HTE) in CRTs. In this work, we expand the sample size procedures for studying HTE in CRTs to accommodate cluster size variation under the linear mixed model framework. Through analytical derivation and graphical exploration, we show that the sample size for the HTE with an individual-level effect modifier is less affected by unequal cluster sizes than with a cluster-level effect modifier. The impact of cluster size variability jointly depends on the mean and coefficient of variation of cluster sizes, covariate intraclass correlation coefficient (ICC) and the conditional outcome ICC. In addition, we demonstrate that the HTE-motivated analysis of covariance framework can be used for analyzing the average treatment effect, and offer a more efficient sample size procedure for studying the average treatment effect adjusting for the effect modifier. We use simulations to confirm the accuracy of the proposed sample size procedures for both the average treatment effect and HTE in CRTs. Extensions to multivariate effect modifiers are provided and our procedure is illustrated in the context of the Strategies to Reduce Injuries and Develop Confidence in Elders trial.
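Editor's note: for context, the way cluster size variability typically enters sample size calculations for the average treatment effect can be sketched with one widely used (Eldridge-style) variance-inflation factor that depends on the mean and coefficient of variation (CV) of cluster sizes. The paper's HTE-specific formulas, which depend jointly on the covariate and outcome ICCs, are not reproduced here.

```python
def design_effect_unequal(mean_size, cv, icc):
    """Variance inflation factor for the average treatment effect under
    unequal cluster sizes: with cv = 0 it reduces to the familiar
    equal-cluster-size design effect 1 + (m - 1) * icc."""
    return 1 + ((cv ** 2 + 1) * mean_size - 1) * icc
```

Multiplying an unclustered sample size by this factor approximates the required size under clustering; the inflation grows with both the mean cluster size and the CV of cluster sizes.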
Affiliation(s)
- Guangyu Tong: Department of Biostatistics, Yale School of Public Health, New Haven, Connecticut, USA; Yale Center for Analytical Sciences, Yale School of Public Health, New Haven, Connecticut, USA
- Denise Esserman: Department of Biostatistics, Yale School of Public Health, New Haven, Connecticut, USA; Yale Center for Analytical Sciences, Yale School of Public Health, New Haven, Connecticut, USA
- Fan Li: Department of Biostatistics, Yale School of Public Health, New Haven, Connecticut, USA; Yale Center for Analytical Sciences, Yale School of Public Health, New Haven, Connecticut, USA; Center for Methods in Implementation and Prevention Science, Yale School of Public Health, New Haven, Connecticut, USA