1. De Santis F, Gubbiotti S. On the distribution of the power function for the scale parameter of exponential families. Stat Med 2024;43:1973-1992. PMID: 38634314. DOI: 10.1002/sim.10043.
Abstract
The expected value of the standard power function of a test, computed with respect to a design prior distribution, is often used to evaluate the probability of success of an experiment. However, looking only at the expected value can be limiting. Instead, the whole probability distribution of the power function induced by the design prior can be exploited. In this article we consider one-sided testing for the scale parameter of exponential families and we derive general unifying expressions for the cumulative distribution and density functions of the random power. Sample size determination criteria based on alternative summaries of these functions are discussed. The study sheds light on the relevance of the choice of the design prior in order to construct a successful experiment.
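The idea of treating power as a random quantity can be sketched in a few lines. The example below uses a hypothetical one-sided z-test for a normal mean, not the paper's exponential-family setting; the design prior, sample size, and thresholds are all assumed for illustration. The point is that the mean of the power distribution and the probability of reaching a given power level tell different stories.

```python
# Sketch: draw the effect from a design prior and look at the whole
# distribution of the resulting power, not just its mean.
import math, random

def norm_cdf(x):
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def power(theta, n, sigma=1.0):
    z_alpha = 1.959963984540054  # 97.5% normal quantile
    return norm_cdf(math.sqrt(n) * theta / sigma - z_alpha)

random.seed(1)
n = 50
# assumed design prior: theta ~ N(0.4, 0.15^2)
draws = [power(random.gauss(0.4, 0.15), n) for _ in range(20000)]
mean_power = sum(draws) / len(draws)                     # "probability of success"
prob_power_ge_80 = sum(d >= 0.8 for d in draws) / len(draws)
print(round(mean_power, 3), round(prob_power_ge_80, 3))
```

Under these assumed settings the average power looks adequate, yet only about half of the prior-plausible effects actually yield power of at least 80%, which is exactly the kind of information a single expected value hides.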
Affiliation(s)
- Fulvio De Santis: Dipartimento di Scienze Statistiche, Sapienza University of Rome, Rome, Italy
- Stefania Gubbiotti: Dipartimento di Scienze Statistiche, Sapienza University of Rome, Rome, Italy
2. Pöhlmann A, Brunner E, Konietschke F. Sample size planning for rank-based multiple contrast tests. Biom J 2024;66:e2300240. PMID: 38637304. DOI: 10.1002/bimj.202300240.
Abstract
Rank methods are well-established tools for comparing two or multiple (independent) groups. However, statistical planning methods for computing the required sample size(s) to detect a specific alternative with predefined power are lacking. In the present paper, we develop numerical algorithms for sample size planning of pseudo-rank-based multiple contrast tests. We discuss the treatment effects and different ways to approximate variance parameters within the estimation scheme. We further compare pairwise with global rank methods in detail. Extensive simulation studies show that the sample size estimators are accurate. A real data example illustrates the application of the methods.
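The paper's algorithms are not reproduced here, but the general simulation idea behind rank-based sample size planning can be sketched: estimate the power of a rank test by Monte Carlo at a candidate sample size and increase it until a target is reached. Everything below (two groups, a normal shift alternative, the normal approximation to the rank-sum test, the step size) is an assumption for illustration.

```python
# Monte Carlo sample size search for a two-sample Wilcoxon rank-sum test.
import math, random

def ranksum_p(x, y):
    # two-sided p-value via the normal approximation (continuous data, no ties)
    n1, n2 = len(x), len(y)
    order = sorted(range(n1 + n2), key=(x + y).__getitem__)
    rank_of = {idx: r + 1 for r, idx in enumerate(order)}
    w = sum(rank_of[i] for i in range(n1))          # rank sum of sample x
    mu = n1 * (n1 + n2 + 1) / 2
    sd = math.sqrt(n1 * n2 * (n1 + n2 + 1) / 12)
    z = (w - mu) / sd
    return 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))

def mc_power(n, shift, reps=400):
    hits = 0
    for _ in range(reps):
        x = [random.gauss(0, 1) for _ in range(n)]
        y = [random.gauss(shift, 1) for _ in range(n)]
        hits += ranksum_p(x, y) < 0.05
    return hits / reps

random.seed(7)
n = 10
while mc_power(n, shift=0.8) < 0.80:  # assumed location shift of 0.8 SD
    n += 5
print("n per group ~", n)
```

A crude grid search like this converges near the familiar two-sample answer for a 0.8-SD shift; the paper's contribution is doing this properly for pseudo-rank-based multiple contrasts rather than a single pairwise test.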
Affiliation(s)
- Anna Pöhlmann: Institute of Biometry and Clinical Epidemiology, Charité - Universitätsmedizin Berlin, corporate member of Freie Universität Berlin and Humboldt-Universität zu Berlin, Berlin, Germany
- Edgar Brunner: Department of Medical Statistics, University Medical Center Göttingen, Göttingen, Germany
- Frank Konietschke: Institute of Biometry and Clinical Epidemiology, Charité - Universitätsmedizin Berlin, corporate member of Freie Universität Berlin and Humboldt-Universität zu Berlin, Berlin, Germany
3. Qiu SF, Lei J, Poon WY, Tang ML, Wong RS, Tao JR. Sample size determination for interval estimation of the prevalence of a sensitive attribute under non-randomized response models. Br J Math Stat Psychol 2024. PMID: 38409814. DOI: 10.1111/bmsp.12338.
Abstract
In surveys with sensitive questions, a sufficient number of participants should be included to adequately address the research interest. In this paper, sample size formulas/iterative algorithms are developed from the perspective of controlling the confidence interval width of the prevalence of a sensitive attribute under four non-randomized response models: the crosswise model, parallel model, Poisson item count technique model and negative binomial item count technique model. In contrast to the conventional approach for sample size determination, our sample size formulas/algorithms explicitly incorporate an assurance probability of controlling the width of a confidence interval within the pre-specified range. The performance of the proposed methods is evaluated with respect to the empirical coverage probability, empirical assurance probability and confidence interval width. Simulation results show that all formulas/algorithms are effective and hence are recommended for practical applications. A real example is used to illustrate the proposed methods.
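The role of the assurance probability can be illustrated with a Monte Carlo sketch for the crosswise model alone (all settings assumed; the paper gives exact formulas and algorithms). Under that model, each respondent answers "yes" with probability lam = pi*p + (1-pi)*(1-p) for a known statement probability p, and the Wald interval for pi has width 2*z*sqrt(lam_hat*(1-lam_hat)/n)/|2p-1|. A sample size tuned so the *expected* width equals the target leaves the realized width above target in roughly half of surveys:

```python
# Simulated distribution of the realized CI width under the crosswise model.
import math, random

Z = 1.959963984540054  # 97.5% normal quantile

def simulate_widths(n, pi, p, reps=4000):
    lam = pi * p + (1 - pi) * (1 - p)
    widths = []
    for _ in range(reps):
        yes = sum(random.random() < lam for _ in range(n))
        lam_hat = yes / n
        widths.append(2 * Z * math.sqrt(lam_hat * (1 - lam_hat) / n)
                      / abs(2 * p - 1))
    return widths

random.seed(2)
n = 1291  # gives expected width ~0.10 at pi = 0.1, p = 0.75 (assumed values)
widths = simulate_widths(n, pi=0.10, p=0.75)
share_ok = sum(v <= 0.10 for v in widths) / len(widths)
print(round(share_ok, 3))  # roughly half the surveys meet the target width
```

This is exactly why the paper's formulas build in an assurance probability: to push that share up to a chosen level rather than leaving it near 50%.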
Affiliation(s)
- Shi-Fang Qiu: Department of Statistics, Chongqing University of Technology, Chongqing, China
- Jie Lei: Department of Statistics, Chongqing University of Technology, Chongqing, China
- Wai-Yin Poon: Department of Statistics, The Chinese University of Hong Kong, Hong Kong, China
- Man-Lai Tang: Centre of Data Innovation Research, Department of Physics, Astronomy & Mathematics, School of Physics, Engineering & Computer Science, University of Hertfordshire, College Lane, Hatfield, UK
- Ricky S Wong: Business School, University of Hertfordshire, Hatfield, UK
- Ji-Ran Tao: Department of Applied Mathematics, The Hong Kong Polytechnic University, Hong Kong, China
4. Pöhlmann A, Konietschke F. Sample size planning for multiple contrast tests. Biom J 2023;65:e2200081. PMID: 37667451. DOI: 10.1002/bimj.202200081.
Abstract
Sample size calculations for two (independent) samples are well established and applied in (pre-)clinical research. When planning several samples, which is common in, for example, preclinical studies, sample size planning tools based on analysis of variance methods are available. Since the underlying effect sizes of these methods are often hard to interpret and to provide for the sample size planning, we employ multiple contrast test procedures for sample size computations in both parametric (under normality assumption) and nonparametric designs using Steel-type tests. Since the exact distributions of the test statistics are unknown under the alternative and variance heterogeneity, we use approximate solutions. Furthermore, since no closed formula for the sample size is available, we use numerical approximations for their computation. Extensive simulation studies are finally conducted to assess the quality of the approximations. It turns out that the methods are accurate in the sense that the multiple contrast test procedures reach the target power to detect the alternative of interest with the sample size computed. The developed procedures are a valuable tool to plan (pre-)clinical trials with several samples and are easily accessible in publicly available software.
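Since the test statistics' exact distributions are unknown, both the critical value and the power of a multiple contrast test can be approximated by simulation. The sketch below does this for a many-to-one (Dunnett-type) max-t statistic under variance heterogeneity; the group count, effect sizes, and candidate sample size are assumptions for illustration, not the paper's procedure.

```python
# Simulated power of a max-t many-to-one comparison, with the critical
# value itself obtained by simulation under the global null.
import math, random

def max_t(groups):
    # Welch-type t statistics of each treatment arm vs the control (groups[0])
    c = groups[0]
    mc = sum(c) / len(c)
    vc = sum((v - mc) ** 2 for v in c) / (len(c) - 1)
    ts = []
    for g in groups[1:]:
        mg = sum(g) / len(g)
        vg = sum((v - mg) ** 2 for v in g) / (len(g) - 1)
        ts.append((mg - mc) / math.sqrt(vg / len(g) + vc / len(c)))
    return max(ts)

def simulate(n, deltas, reps=2000):
    under_h0, under_h1 = [], []
    for _ in range(reps):
        h0 = [[random.gauss(0, 1) for _ in range(n)] for _ in range(len(deltas) + 1)]
        under_h0.append(max_t(h0))
        h1 = [[random.gauss(0, 1) for _ in range(n)]]
        h1 += [[random.gauss(d, 1) for _ in range(n)] for d in deltas]
        under_h1.append(max_t(h1))
    crit = sorted(under_h0)[int(0.95 * reps)]  # simulated 5% critical value
    return sum(t > crit for t in under_h1) / reps

random.seed(3)
pw = simulate(n=40, deltas=[0.5, 0.7])  # two doses vs control, assumed effects
print(round(pw, 3))
```

Wrapping this power evaluation in a root search over n is the numerical approximation the abstract alludes to; the paper replaces the crude simulated critical value with multivariate t approximations.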
Affiliation(s)
- Anna Pöhlmann: Charité - Universitätsmedizin Berlin, corporate member of Freie Universität Berlin and Humboldt-Universität zu Berlin, Institute of Biometry and Clinical Epidemiology, Berlin, Germany
- Frank Konietschke: Charité - Universitätsmedizin Berlin, corporate member of Freie Universität Berlin and Humboldt-Universität zu Berlin, Institute of Biometry and Clinical Epidemiology, Berlin, Germany
5. Benjamini Y, Heller R, Krieger A, Rosset S. Discussion on "Optimal test procedures for multiple hypotheses controlling the familywise expected loss" by Willi Maurer, Frank Bretz, and Xiaolei Xun. Biometrics 2023;79:2794-2797. PMID: 38115576. DOI: 10.1111/biom.13906.
Abstract
We discuss three issues. In the first part, we discuss the criterion emphasized by Maurer, Bretz, and Xun, warning that it modifies the per-comparison error rate and therefore does not address the concerns raised by multiple testing. In the second part, we strengthen the optimality results developed in the paper, based on our recent results. In the third part, we highlight the potentially important role that the use of weights may have in practice and discuss the difficulties in assigning weights that convey importance in the gain and loss functions, especially as it pertains to multiple endpoints.
Affiliation(s)
- Yoav Benjamini: Department of Statistics and Operations Research, Tel-Aviv University, Israel
- Ruth Heller: Department of Statistics and Operations Research, Tel-Aviv University, Israel
- Abba Krieger: Department of Statistics, University of Pennsylvania, Philadelphia, Pennsylvania, USA
- Saharon Rosset: Department of Statistics and Operations Research, Tel-Aviv University, Israel
6. Liu F, Zhao Q, Rodgers AJ, Mehrotra DV. Calculation of Phase 2 dose-finding study sample size for reliable Phase 3 dose selection. Pharm Stat 2023;22:1076-1088. PMID: 37550963. DOI: 10.1002/pst.2330.
Abstract
Sample sizes of Phase 2 dose-finding studies, usually determined based on a power requirement to detect a significant dose-response relationship, will generally not provide adequate precision for Phase 3 target dose selection. We propose to calculate the sample size of a dose-finding study based on the probability of successfully identifying the target dose within an acceptable range (e.g., 80%-120% of the target) using the multiple comparison and modeling procedure (MCP-Mod). With the proposed approach, different design options for the Phase 2 dose-finding study can also be compared. Due to inherent uncertainty around an assumed true dose-response relationship, sensitivity analyses to assess the robustness of the sample size calculations to deviations from modeling assumptions are recommended. Planning for a hypothetical Phase 2 dose-finding study is used to illustrate the main points. Code for the proposed approach is available at https://github.com/happysundae/posMCPMod.
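The "probability of successful dose selection" metric can be approximated by brute-force simulation, which the sketch below does for a single assumed Emax truth with a grid-profiled least-squares fit. This is a simplification, not the MCP-Mod implementation in the authors' repository; the dose grid, effect sizes, noise level, and target effect are all assumptions.

```python
# For each simulated trial, fit an Emax curve to the observed dose-group
# means, read off the estimated target dose, and check whether it lands
# within 80%-120% of the true target dose.
import random

DOSES = [0, 0.05, 0.2, 0.6, 1.0]
E0, EMAX, ED50, SD, DELTA = 0.0, 1.0, 0.2, 0.7, 0.6   # assumed truth

def target_dose(emax, ed50, delta=DELTA):
    # smallest dose with effect over placebo of at least delta
    return delta * ed50 / (emax - delta) if emax > delta else float("inf")

def fit_emax(means):
    # profile ED50 on a grid; E0 and Emax by least squares given ED50
    best = None
    for k in range(1, 120):
        ed50 = 0.01 * k
        x = [d / (ed50 + d) for d in DOSES]
        mx = sum(x) / len(x); my = sum(means) / len(means)
        sxx = sum((xi - mx) ** 2 for xi in x)
        sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, means))
        emax = sxy / sxx
        e0 = my - emax * mx
        rss = sum((yi - e0 - emax * xi) ** 2 for xi, yi in zip(x, means))
        if best is None or rss < best[0]:
            best = (rss, emax, ed50)
    return best[1], best[2]

def prob_success(n, reps=500):
    true_td = target_dose(EMAX, ED50)
    ok = 0
    for _ in range(reps):
        means = [E0 + EMAX * d / (ED50 + d) + random.gauss(0, SD / n ** 0.5)
                 for d in DOSES]
        emax, ed50 = fit_emax(means)
        td = target_dose(emax, ed50)
        ok += 0.8 * true_td <= td <= 1.2 * true_td
    return ok / reps

random.seed(11)
p30, p150 = prob_success(n=30), prob_success(n=150)
print(p30, p150)
```

Even in this toy version, the success probability climbs much more slowly with n than conventional power does, which is the paper's central observation.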
Affiliation(s)
- Fang Liu: Biostatistics and Research Decision Sciences, Merck & Co., Inc., Rahway, New Jersey, USA
- Qing Zhao: Biostatistics and Research Decision Sciences, Merck & Co., Inc., Rahway, New Jersey, USA
- Anthony J Rodgers: Biostatistics and Research Decision Sciences, Merck & Co., Inc., Rahway, New Jersey, USA
- Devan V Mehrotra: Biostatistics and Research Decision Sciences, Merck & Co., Inc., Rahway, New Jersey, USA
7
Abstract
A central goal in designing clinical trials is to find the test that maximizes power (or equivalently minimizes required sample size) for finding a false null hypothesis subject to the constraint of type I error. When there is more than one test, such as in clinical trials with multiple endpoints, the issues of optimal design and optimal procedures become more complex. In this paper, we address the question of how such optimal tests should be defined and how they can be found. We review different notions of power and how they relate to study goals, and also consider the requirements of type I error control and the nature of the procedures. This leads us to an explicit optimization problem with objective and constraints that describe its specific desiderata. We present a complete solution for deriving optimal procedures for two hypotheses, which have desired monotonicity properties, and are computationally simple. For some of the optimization formulations this yields optimal procedures that are identical to existing procedures, such as Hommel's procedure or the procedure of Bittman et al. (2009), while for other cases it yields completely novel and more powerful procedures than existing ones. We demonstrate the nature of our novel procedures and their improved power extensively in a simulation and on the APEX study (Cohen et al., 2016).
Affiliation(s)
- Ruth Heller: Department of Statistics and Operations Research, Tel-Aviv University, Tel Aviv, Israel
- Abba Krieger: Department of Statistics, University of Pennsylvania, Philadelphia, Pennsylvania, USA
- Saharon Rosset: Department of Statistics and Operations Research, Tel-Aviv University, Tel Aviv, Israel
8. Tishkovskaya SV, Sutton CJ, Thomas LH, Watkins CL. Determining the sample size for a cluster-randomised trial using knowledge elicitation: Bayesian hierarchical modelling of the intracluster correlation coefficient. Clin Trials 2023;20:293-306. PMID: 37036110. PMCID: PMC10262340. DOI: 10.1177/17407745231164569.
Abstract
BACKGROUND The intracluster correlation coefficient is a key input parameter for sample size determination in cluster-randomised trials. Sample size is very sensitive to small differences in the intracluster correlation coefficient, so it is vital to have a robust intracluster correlation coefficient estimate. This is often problematic because either a relevant intracluster correlation coefficient estimate is not available or the available estimate is imprecise due to being based on small-scale studies with low numbers of clusters. Misspecification may lead to an underpowered or inefficiently large and potentially unethical trial. METHODS We apply a Bayesian approach to produce an intracluster correlation coefficient estimate and hence a proposed sample size for a planned cluster-randomised trial of the effectiveness of a systematic voiding programme for post-stroke incontinence. A Bayesian hierarchical model is used to combine intracluster correlation coefficient estimates from other relevant trials, making use of the wealth of intracluster correlation coefficient information available in published research. We employ a knowledge elicitation process to assess the relevance of each intracluster correlation coefficient estimate to the planned trial setting. The team of expert reviewers assigned relevance weights to each study, and to each outcome within the study, hence informing the parameters of the Bayesian model. To measure the performance of experts, agreement and reliability methods were applied. RESULTS The 34 intracluster correlation coefficient estimates extracted from 16 previously published trials were combined in the Bayesian hierarchical model using aggregated relevance weights elicited from the experts. The intracluster correlation coefficients available from external sources were used to construct a posterior distribution of the targeted intracluster correlation coefficient, which was summarised as a posterior median with a 95% credible interval, informing researchers about the range of plausible sample size values. The estimated intracluster correlation coefficient determined a sample size of between 450 (25 clusters) and 480 (20 clusters), compared to 500-600 from a classical approach. The use of quantiles, and other parameters, from the estimated posterior distribution is illustrated and the impact on sample size described. CONCLUSION By accounting for uncertainty in an unknown intracluster correlation coefficient, trials can be designed with a more robust sample size. The approach presented provides the possibility of incorporating intracluster correlation coefficients from various cluster-randomised trial settings which can differ from the planned study, with the difference being accounted for in the modelling. By using expert knowledge to elicit relevance weights and synthesising the externally available intracluster correlation coefficient estimates, information is used more efficiently than in a classical approach, where the intracluster correlation coefficient estimates tend to be less robust and overly conservative. The intracluster correlation coefficient estimate constructed is likely to produce a smaller sample size on average than the conventional strategy of choosing a conservative intracluster correlation coefficient estimate. This may therefore result in substantial savings of time and resources.
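The sensitivity of sample size to the intracluster correlation coefficient that motivates this work comes straight from the textbook design-effect formula, 1 + (m - 1)*rho for clusters of size m. A minimal sketch (illustrative numbers, not the trial's):

```python
# Inflate an individually randomized sample size by the design effect.
import math

def crt_sample_size(n_individual, cluster_size, icc):
    # design effect: variance inflation from randomizing intact clusters
    deff = 1 + (cluster_size - 1) * icc
    n_total = math.ceil(n_individual * deff)
    clusters_per_arm = math.ceil(n_total / (2 * cluster_size))
    return n_total, 2 * clusters_per_arm

# 420 participants needed under individual randomization, clusters of 20;
# comparing ICC = 0.02 with ICC = 0 shows how sensitive the design is
print(crt_sample_size(420, 20, 0.02))
print(crt_sample_size(420, 20, 0.0))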
Affiliation(s)
- Svetlana V Tishkovskaya: Lancashire Clinical Trials Unit, Faculty of Health and Care, University of Central Lancashire, Preston, UK
- Chris J Sutton: Centre for Biostatistics, Division of Population Health, Health Services Research & Primary Care, School of Health Sciences, Faculty of Biology, Medicine and Health, The University of Manchester, Manchester, UK
- Lois H Thomas: Faculty of Allied Health and Wellbeing, University of Central Lancashire, Preston, UK
- Caroline L Watkins: Lancashire Clinical Trials Unit, Faculty of Health and Care, University of Central Lancashire, Preston, UK
9. Qiu SF, Tang ML, Tao JR, Wong RS. Sample size determination for interval estimation of the prevalence of a sensitive attribute under randomized response models. Psychometrika 2022;87:1361-1389. PMID: 35306631. PMCID: PMC9636124. DOI: 10.1007/s11336-022-09854-w.
Abstract
Studies with sensitive questions should include a sufficient number of respondents to adequately address the research interest. While studies with an inadequate number of respondents may not yield significant conclusions, studies with an excess of respondents waste the investigators' budget. Determining the required number of participants is therefore an important step in survey sampling. In this article, we derive sample size formulas based on confidence interval estimation of prevalence for four randomized response models, namely Warner's randomized response model, the unrelated question model, the item count technique model and the cheater detection model. Specifically, our sample size formulas control, with a given assurance probability, the width of a confidence interval within the planned range. Simulation results demonstrate that all formulas are accurate in terms of empirical coverage probabilities and empirical assurance probabilities. All formulas are illustrated using a real-life application about the use of unethical tactics in negotiation.
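For Warner's model specifically, the price of privacy protection is easy to see in a Wald-interval sketch (all design values assumed; the paper's formulas additionally build in the assurance probability). With design probability p, a "yes" occurs with lam = pi*p + (1-pi)*(1-p), and Var(pi_hat) = lam*(1-lam)/(n*(2p-1)^2), so the required n blows up as p approaches 1/2:

```python
# Wald-style sample sizes for a target CI width under Warner's model,
# compared with direct questioning.
import math

Z = 1.959963984540054  # 97.5% normal quantile

def warner_n(pi, p, width):
    lam = pi * p + (1 - pi) * (1 - p)
    return math.ceil((2 * Z) ** 2 * lam * (1 - lam)
                     / ((2 * p - 1) ** 2 * width ** 2))

def direct_n(pi, width):
    # same width target when the question can be asked directly
    return math.ceil((2 * Z) ** 2 * pi * (1 - pi) / width ** 2)

pi, width = 0.15, 0.10  # assumed prevalence and target CI width
for p in (0.9, 0.8, 0.7):
    print(p, warner_n(pi, p, width))
print("direct:", direct_n(pi, width))
```

More privacy (p closer to 1/2) means a larger variance inflation factor 1/(2p-1)^2, hence many more respondents than direct questioning would need.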
Affiliation(s)
- Shi-Fang Qiu: Department of Statistics, Chongqing University of Technology, Chongqing, 400054, China
- Man-Lai Tang: Department of Mathematics, Brunel University London, Uxbridge, UB8 3PH, UK
- Ji-Ran Tao: School of Mathematics and Statistics, Beijing Institute of Technology, Beijing, 100081, China
- Ricky S. Wong: Business School, University of Hertfordshire, Hatfield, Hertfordshire, AL10 9EU, UK
10. Yang S, Moerbeek M, Taljaard M, Li F. Power analysis for cluster randomized trials with continuous co-primary endpoints. Biometrics 2022. PMID: 35531926. DOI: 10.1111/biom.13692.
Abstract
Pragmatic trials evaluating health care interventions often adopt cluster randomization due to scientific or logistical considerations. Systematic reviews have shown that co-primary endpoints are not uncommon in pragmatic trials but are seldom recognized in sample size or power calculations. While methods for power analysis based on K (K ≥ 2) binary co-primary endpoints are available for cluster randomized trials (CRTs), to our knowledge, methods for continuous co-primary endpoints are not yet available. Assuming a multivariate linear mixed model that accounts for multiple types of intraclass correlation coefficients among the observations in each cluster, we derive the closed-form joint distribution of K treatment effect estimators to facilitate sample size and power determination with different types of null hypotheses under equal cluster sizes. We characterize the relationship between the power of each test and different types of correlation parameters. We further relax the equal cluster size assumption and approximate the joint distribution of the K treatment effect estimators through the mean and coefficient of variation of cluster sizes. Our simulation studies with a finite number of clusters indicate that the power predicted by our method agrees well with the empirical power when the parameters in the multivariate linear mixed model are estimated via the expectation-maximization algorithm. An application to a real CRT is presented to illustrate the proposed method.
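The basic "all K endpoints significant" power calculation can be sketched by Monte Carlo for K = 2 correlated test statistics (the paper derives this joint distribution in closed form for the cluster-randomized setting; the noncentrality and correlation below are assumed):

```python
# Joint power of requiring both one-sided tests to be significant, for
# correlated z statistics generated via a Cholesky-style construction.
import math, random

Z_CRIT = 1.959963984540054

def joint_power(ncp1, ncp2, rho, reps=100000):
    hits = 0
    for _ in range(reps):
        e1 = random.gauss(0, 1)
        e2 = rho * e1 + math.sqrt(1 - rho ** 2) * random.gauss(0, 1)
        hits += (ncp1 + e1 > Z_CRIT) and (ncp2 + e2 > Z_CRIT)
    return hits / reps

random.seed(5)
ncp = 2.8  # per-endpoint noncentrality giving roughly 80% marginal power
p_indep = joint_power(ncp, ncp, 0.0)
p_corr = joint_power(ncp, ncp, 0.9)
print(round(p_indep, 3), round(p_corr, 3))
```

Ignoring the co-primary requirement overstates power: with two independent endpoints the joint power is roughly the product of the marginal powers, and positive correlation between endpoints recovers only part of that loss.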
Affiliation(s)
- Siyun Yang: Department of Biostatistics and Bioinformatics, Duke University School of Medicine, Durham, NC, USA
- Mirjam Moerbeek: Department of Methodology and Statistics, Utrecht University, Utrecht, The Netherlands
- Monica Taljaard: Clinical Epidemiology Program, Ottawa Hospital Research Institute, Ottawa, ON, Canada; School of Epidemiology and Public Health, University of Ottawa, Ottawa, Ontario, Canada
- Fan Li: Department of Biostatistics, Yale School of Public Health, New Haven, USA; Center for Methods in Implementation and Prevention Science, Yale University, New Haven, USA
11. De Santis F, Gubbiotti S. Borrowing historical information for non-inferiority trials on Covid-19 vaccines. Int J Biostat 2022. PMID: 35472295. DOI: 10.1515/ijb-2021-0120.
Abstract
Non-inferiority vaccine trials compare new candidates to active controls that provide clinically significant protection against a disease. Bayesian statistics allows one to exploit pre-experimental information available from previous studies to increase precision and reduce costs. Here, historical knowledge is incorporated into the analysis through a power prior that dynamically regulates the degree of information borrowing. We examine non-inferiority tests based on credible intervals for the unknown effect difference between two vaccines on the log odds ratio scale, with an application to new Covid-19 vaccines. We explore the frequentist properties of the method and we address the sample size determination problem.
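The power-prior mechanism is easiest to see in the conjugate beta-binomial case with a fixed discounting weight a0 in [0, 1] (the paper uses a dynamically regulated weight and works on the log odds ratio scale; the counts below are illustrative, not the Covid-19 vaccine data):

```python
# Historical data (y0 successes of n0) enter the beta posterior
# downweighted by a0: a0 = 0 ignores them, a0 = 1 pools them fully.
def power_prior_posterior(y, n, y0, n0, a0, a=1.0, b=1.0):
    # Beta(a, b) initial prior; returns the beta posterior's parameters
    return a + y + a0 * y0, b + (n - y) + a0 * (n0 - y0)

def beta_mean_var(a, b):
    m = a / (a + b)
    return m, m * (1 - m) / (a + b + 1)

for a0 in (0.0, 0.5, 1.0):
    ap, bp = power_prior_posterior(y=45, n=100, y0=60, n0=120, a0=a0)
    m, v = beta_mean_var(ap, bp)
    print(a0, round(m, 4), round(v, 6))
```

Increasing a0 shrinks the posterior variance (more effective sample size) while pulling the posterior mean toward the historical rate, which is precisely the precision-for-robustness trade-off that borrowing entails.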
Affiliation(s)
- Fulvio De Santis: Dipartimento di Scienze Statistiche, Sapienza University of Rome, Roma, Italy
- Stefania Gubbiotti: Dipartimento di Scienze Statistiche, Sapienza University of Rome, Roma, Italy
12. Tang N, Yu B. Bayesian sample size determination in a three-arm non-inferiority trial with binary endpoints. J Biopharm Stat 2022;32:768-788. PMID: 35213275. DOI: 10.1080/10543406.2022.2030748.
Abstract
A three-arm non-inferiority trial including a test treatment, a reference treatment, and a placebo is recommended to assess the assay sensitivity and internal validity of a trial when applicable. Existing methods for designing and analyzing three-arm trials with binary endpoints are mainly developed from a frequentist viewpoint, and these methods largely depend on large-sample theory. To alleviate this problem, we propose two fully Bayesian approaches, the posterior variance approach and the Bayes factor approach, to determine the sample size required in a three-arm non-inferiority trial with binary endpoints. Simulation studies are conducted to investigate the performance of the proposed Bayesian methods, and an example illustrates the proposed methodologies. The Bayes factor method always leads to smaller sample sizes than the posterior variance method; utilizing historical data can reduce the required sample size; the simultaneous test requires a larger sample size to achieve the desired power than the non-inferiority test; and the selection of the hyperparameters has a relatively large effect on the required sample size. When only the posterior variance needs to be controlled, the posterior variance criterion is a simple and effective option for obtaining a rough result. When data from a previous clinical trial are available, the Bayes factor criterion is recommended in practical applications.
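The posterior variance criterion can be sketched for a single binomial arm (a simplification of the paper's three-arm setup; the uniform prior and tolerance are assumed): choose the smallest n whose preposterior expected posterior variance, averaged over the beta-binomial prior predictive, drops below a tolerance.

```python
# Smallest n with E[Var(theta | y)] <= eps under a Beta(1, 1) prior,
# averaging the posterior variance over the prior-predictive distribution.
import math

def log_beta(a, b):
    return math.lgamma(a) + math.lgamma(b) - math.lgamma(a + b)

def expected_post_var(n, a=1.0, b=1.0):
    total = 0.0
    for y in range(n + 1):
        # beta-binomial log pmf of y
        logp = (math.log(math.comb(n, y))
                + log_beta(a + y, b + n - y) - log_beta(a, b))
        ap, bp = a + y, b + n - y
        post_var = ap * bp / ((ap + bp) ** 2 * (ap + bp + 1))
        total += math.exp(logp) * post_var
    return total

eps = 0.002  # assumed tolerance on the average posterior variance
n = 1
while expected_post_var(n) > eps:
    n += 1
print(n)
```

For the conjugate case the sum has the closed form ab / ((a+b)(a+b+1)(a+b+n)), so the numeric search and the formula agree, which is a handy check on the implementation.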
Affiliation(s)
- Niansheng Tang: Yunnan Key Laboratory of Statistical Modeling and Data Analysis, Yunnan University, Kunming, P. R. China
- Bin Yu: Yunnan Key Laboratory of Statistical Modeling and Data Analysis, Yunnan University, Kunming, P. R. China
13. Zhang L, Gilbert PB, Capparelli E, Huang Y. Simulation-based pharmacokinetics sampling design for evaluating correlates of prevention efficacy of passive HIV monoclonal antibody prophylaxis. Stat Biopharm Res 2022;14:611-625. PMID: 36684526. PMCID: PMC9856202. DOI: 10.1080/19466315.2021.1919196.
Abstract
We address sampling design of population pharmacokinetics (popPK) experiments in the context of two ongoing phase 2b efficacy trials that evaluate the efficacy of VRC01 (vs. placebo) in reducing the rate of HIV infection among 4625 participants. Blood samples are collected at up to 22 study visits from all participants for immediate HIV diagnosis as the primary trial outcome, and stored for future outcome-dependent marker measurements. A key secondary objective of the trials is to evaluate correlates of prevention efficacy among a sub-cohort of VRC01 recipients in terms of whether the current value of VRC01 serum concentration is associated with the instantaneous rate of HIV infection. To accomplish this, concentrations on a daily grid are estimated via non-linear mixed effects popPK modeling of observed 4-weekly concentrations. Given the impracticality of measuring concentrations in all stored blood samples, we devised a simulation-based sampling design framework to evaluate the impact of sub-cohort sample sizes (m) and sampling schemes of time-points on the accuracy and precision of the popPK model parameters. We accounted for specific study schedules and heterogeneity in participants' characteristics and study adherence patterns. We found that with m = 120, reasonably unbiased and consistent estimates of most fixed and random effect terms could be obtained without complete sampling of all 22 time-points, even under low study adherence (about half of the 4-weekly visits missing per participant). The described simulation framework is not only novel in its application to popPK sampling design for studying correlates of prevention efficacy in a subcohort of the parent trial, but also flexible in accommodating real-life study setup options, and can be generalized to other single- or multiple-dose PK sampling design settings.
Affiliation(s)
- Lily Zhang: Vaccine and Infectious Disease Division, Fred Hutchinson Cancer Research Center, Seattle, USA
- Peter B. Gilbert: Vaccine and Infectious Disease Division, Fred Hutchinson Cancer Research Center, Seattle, USA; Department of Biostatistics, University of Washington, Seattle, USA
- Yunda Huang: Vaccine and Infectious Disease Division, Fred Hutchinson Cancer Research Center, Seattle, USA; Department of Global Health, University of Washington, Seattle, USA
14. De Santis F, Gubbiotti S. Joint control of consensus and evidence in Bayesian design of clinical trials. Biom J 2021;64:681-695. PMID: 34889467. DOI: 10.1002/bimj.202100035.
Abstract
In Bayesian inference, prior distributions formalize preexperimental information and uncertainty on model parameters. Sometimes different sources of knowledge are available, possibly leading to divergent posterior distributions and inferences. Research has been recently devoted to the development of sample size criteria that guarantee agreement of posterior information in terms of credible intervals when multiple priors are available. In these articles, the goals of reaching consensus and evidence are typically kept separated. Adopting a Bayesian performance-based approach, the present article proposes new sample size criteria for superiority trials that jointly control the achievement of both minimal evidence and consensus, measured by appropriate functions of the posterior distributions. We develop both an average criterion and a more stringent criterion that accounts for the entire predictive distributions of the selected measures of minimal evidence and consensus. Methods are developed and illustrated via simulation for trials involving binary outcomes. A real clinical trial example on Covid-19 vaccine data is presented.
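A toy version of the two targets can be written down for binary outcomes with beta posteriors and a normal approximation to their credible intervals (the priors, thresholds, and the particular consensus measure below are assumptions for illustration, not the paper's criteria): "evidence" asks every posterior's lower 95% bound to clear a reference value, and "consensus" asks the posterior means from the rival priors to agree within a tolerance.

```python
# Check evidence and consensus across posteriors from divergent priors.
import math

Z = 1.959963984540054

def beta_post(a, b, y, n):
    # normal approximation to the Beta(a + y, b + n - y) posterior
    ap, bp = a + y, b + n - y
    m = ap / (ap + bp)
    s = math.sqrt(m * (1 - m) / (ap + bp + 1))
    return m, s

def evidence_and_consensus(y, n, priors, theta0=0.5, tol=0.05):
    posts = [beta_post(a, b, y, n) for a, b in priors]
    evidence = all(m - Z * s > theta0 for m, s in posts)
    means = [m for m, _ in posts]
    consensus = max(means) - min(means) < tol
    return evidence, consensus

priors = [(8, 2), (2, 8)]   # an enthusiastic and a skeptical prior
print(evidence_and_consensus(70, 100, priors))
print(evidence_and_consensus(350, 500, priors))
```

With the smaller trial both posteriors already support the alternative but still disagree noticeably; the larger trial achieves both goals, which is the kind of joint requirement the paper's sample size criteria formalize predictively.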
Affiliation(s)
- Fulvio De Santis: Department of Statistics, Sapienza University of Rome, Rome, Italy
15. Andersen LAC, Palstrøm NB, Diederichsen A, Lindholt JS, Rasmussen LM, Beck HC. Determining plasma protein variation parameters as a prerequisite for biomarker studies - a TMT-based LC-MSMS proteome investigation. Proteomes 2021;9:47. PMID: 34941812. PMCID: PMC8707687. DOI: 10.3390/proteomes9040047.
Abstract
Specific plasma proteins serve as valuable markers for various diseases and are in many cases routinely measured in clinical laboratories by fully automated systems. For safe diagnostics and monitoring using these markers, it is important to ensure an analytical quality in line with clinical needs. For this purpose, information on the analytical and the biological variation of the measured plasma protein, also in the context of the discovery and validation of novel disease protein biomarkers, is important, particularly in relation to sample size calculations in clinical studies. Nevertheless, information on the biological variation of the majority of medium-to-high-abundance plasma proteins is largely absent. In this study, we hypothesized that it is possible to generate data on inter-individual biological variation in combination with analytical variation for several hundred abundant plasma proteins by applying LC-MS/MS with relative quantification using isobaric tagging (10-plex TMT labeling) to plasma samples. Using this analytical proteomic approach, we analyzed 42 plasma samples prepared in duplicate and estimated the technical, inter-individual biological, and total variation of 265 of the most abundant proteins present in human plasma, thereby creating the prerequisites for power analysis and sample size determination in future clinical proteomics studies. Our results demonstrated that only five samples per group may provide sufficient statistical power for most of the analyzed proteins if relative changes in abundance >1.5-fold are expected. Seventeen of the measured proteins are present in the European Federation of Clinical Chemistry and Laboratory Medicine (EFLM) Biological Variation Database and show biological CVs remarkably similar to those listed in the EFLM database, suggesting that the proteomically determined variation estimates are useful for large-scale determination of plasma protein variation.
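The headline claim above (five samples per group sufficing for >1.5-fold changes) can be sanity-checked with a standard normal-approximation power calculation. A minimal sketch, assuming log-normally distributed abundances whose log-scale standard deviation is approximated by the total CV; the 20% CV is chosen purely for illustration and is not a value taken from the study:

```python
import math
from statistics import NormalDist

def approx_power(fold_change, cv, n_per_group, alpha=0.05):
    """Normal-approximation power of a two-sample comparison of mean
    log-abundances; the log-scale SD is approximated by the total CV."""
    nd = NormalDist()
    delta = math.log(fold_change)           # effect size on the log scale
    se = cv * math.sqrt(2.0 / n_per_group)  # SE of the difference in means
    return nd.cdf(delta / se - nd.inv_cdf(1 - alpha / 2))

power_n5 = approx_power(1.5, 0.20, 5)   # five samples per group
power_n3 = approx_power(1.5, 0.20, 3)
```

With these illustrative inputs the approximation already exceeds 80% power at n = 5, consistent with the abstract's conclusion for proteins of moderate variability.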
Collapse
Affiliation(s)
| | - Nicolai Bjødstrup Palstrøm
- Department of Clinical Biochemistry and Pharmacology, Odense University Hospital, DK-5000 Odense, Denmark; (N.B.P.); (L.M.R.)
- Center for Clinical Proteomics (CCP), Odense University Hospital, DK-5000 Odense, Denmark
| | - Axel Diederichsen
- Center for Individualized Medicine in Arterial Diseases (CIMA), Odense University Hospital, DK-5000 Odense, Denmark; (A.D.); (J.S.L.)
- Department of Cardiology, Odense University Hospital, DK-5000 Odense, Denmark
| | - Jes Sanddal Lindholt
- Center for Individualized Medicine in Arterial Diseases (CIMA), Odense University Hospital, DK-5000 Odense, Denmark; (A.D.); (J.S.L.)
- Department of Cardiothoracic and Vascular Surgery, Odense University Hospital, DK-5000 Odense, Denmark
| | - Lars Melholt Rasmussen
- Department of Clinical Biochemistry and Pharmacology, Odense University Hospital, DK-5000 Odense, Denmark; (N.B.P.); (L.M.R.)
- Center for Clinical Proteomics (CCP), Odense University Hospital, DK-5000 Odense, Denmark
- Center for Individualized Medicine in Arterial Diseases (CIMA), Odense University Hospital, DK-5000 Odense, Denmark; (A.D.); (J.S.L.)
| | - Hans Christian Beck
- Department of Clinical Biochemistry and Pharmacology, Odense University Hospital, DK-5000 Odense, Denmark; (N.B.P.); (L.M.R.)
- Center for Clinical Proteomics (CCP), Odense University Hospital, DK-5000 Odense, Denmark
- Center for Individualized Medicine in Arterial Diseases (CIMA), Odense University Hospital, DK-5000 Odense, Denmark; (A.D.); (J.S.L.)
- Correspondence: ; Tel.: +45-29-647-470
| |
Collapse
|
16
|
Kelcey B, Xie Y, Spybrook J, Dong N. Power and Sample Size Determination for Multilevel Mediation in Three-Level Cluster-Randomized Trials. Multivariate Behav Res 2021; 56:496-513. [PMID: 32293929 DOI: 10.1080/00273171.2020.1738910] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/11/2023]
Abstract
Mediation analyses supply a principal lens to probe the pathways through which a treatment acts upon an outcome because they can dismantle and test the core components of treatments and examine how these components function as a coordinated system or theory of action. Experimental evaluation of mediation effects in addition to total effects has become increasingly common, but the literature offers only limited guidance on how to plan mediation studies with multi-tiered hierarchical or clustered structures. In this study, we provide methods for computing the power to detect mediation effects in three-level cluster-randomized designs that examine individual- (level one), intermediate- (level two), or cluster-level (level three) mediators. We assess the methods using a simulation and provide examples of a three-level clinic-randomized study (individuals nested within therapists nested within clinics) probing an individual-, intermediate-, or cluster-level mediator using the R package PowerUpR and its Shiny application.
Collapse
Affiliation(s)
- Ben Kelcey
- College of Education, Criminal Justice, Human Services and Information Technology, University of Cincinnati
| | - Yanli Xie
- College of Education, Criminal Justice, Human Services and Information Technology, University of Cincinnati
| | - Jessaca Spybrook
- College of Education, Criminal Justice, Human Services and Information Technology, Western Michigan University
| | - Nianbo Dong
- College of Education, University of North Carolina Chapel Hill
| |
Collapse
|
17
|
De Santis F, Gubbiotti S. Sample Size Requirements for Calibrated Approximate Credible Intervals for Proportions in Clinical Trials. Int J Environ Res Public Health 2021; 18:ijerph18020595. [PMID: 33445651 PMCID: PMC7827664 DOI: 10.3390/ijerph18020595] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 11/01/2020] [Revised: 01/04/2021] [Accepted: 01/08/2021] [Indexed: 11/18/2022]
Abstract
In Bayesian analysis of clinical trials data, credible intervals are widely used for inference on unknown parameters of interest, such as treatment effects or differences in treatment effects. Highest Posterior Density (HPD) sets are often used because they guarantee the shortest length. In most standard problems, closed-form expressions for exact HPD intervals do not exist, but they are available for intervals based on the normal approximation of the posterior distribution. For small sample sizes, approximate intervals may not be calibrated in terms of posterior probability, but as the sample size increases their posterior probability tends to the correct credible level and they become closer and closer to the exact sets. The article proposes a predictive analysis to select the sample sizes needed to obtain approximate intervals calibrated at a prespecified level. Examples are given for interval estimation of proportions and log-odds.
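The calibration issue described here can be illustrated numerically: for a binomial proportion with a Beta posterior, build the normal-approximation credible interval and then estimate, by Monte Carlo, how much posterior probability it actually contains. A minimal sketch with illustrative Beta posteriors (not the paper's predictive criterion):

```python
import random
from statistics import NormalDist

def approx_interval_coverage(a, b, level=0.95, draws=200_000, seed=1):
    """Posterior probability actually attained by the normal-approximation
    credible interval of a Beta(a, b) posterior, estimated by Monte Carlo."""
    mean = a / (a + b)
    sd = (a * b / ((a + b) ** 2 * (a + b + 1))) ** 0.5
    z = NormalDist().inv_cdf((1 + level) / 2)
    lo, hi = mean - z * sd, mean + z * sd
    rng = random.Random(seed)
    return sum(lo <= rng.betavariate(a, b) <= hi for _ in range(draws)) / draws

small_n = approx_interval_coverage(3, 7)     # small-sample posterior
large_n = approx_interval_coverage(30, 70)   # ten times as much data
```

With ten times the data, the attained posterior probability of the approximate interval moves visibly closer to the nominal 95% level, which is exactly the large-sample behavior the abstract describes.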
Collapse
|
18
|
Abstract
Bayesian statistics has been widely utilized as an approach that can incorporate prior knowledge into statistical inference. Tolerance intervals (TIs) are the most commonly used statistical methods for product quality assurance. There are two main Bayesian approaches for calculating statistical tolerance intervals: those of Hamada and Wolfinger. A simulation-based approach was implemented to compare two-sided Wolfinger, Hamada, and frequentist tolerance intervals that control the probability content at a specified level of confidence. As sample sizes increase, Hamada TIs become more conservative than the frequentist intervals, while Wolfinger TIs become more liberal. To address this issue, we propose an empirically weighted Bayesian TI approach that is a compromise between the Hamada and Wolfinger approaches. The proposed Bayesian TIs result in narrower limits in certain scenarios while keeping the confidence content coverage comparable to the frequentist intervals.
Collapse
Affiliation(s)
- Hong Tran
- Product Quality Management, Janssen Pharmaceuticals, Titusville, New Jersey, USA
| |
Collapse
|
19
|
Chiang C, Chen CT, Hsiao CF. Use of a two-sided tolerance interval in the design and evaluation of biosimilarity in clinical studies. Pharm Stat 2020; 20:175-184. [PMID: 32869921 DOI: 10.1002/pst.2065] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/21/2019] [Revised: 06/30/2020] [Accepted: 08/11/2020] [Indexed: 11/06/2022]
Abstract
In assessing biosimilarity between two products, the question to ask is always "How similar is similar?" Traditionally, the equivalence of the means between products is the primary consideration in a clinical trial. This study suggests an alternative assessment for testing a certain percentage of the population of differences lying within a prespecified interval. In doing so, the accuracy and precision are assessed simultaneously by judging whether a two-sided tolerance interval falls within a prespecified acceptance range. We further derive an asymptotic distribution of the tolerance limits to determine the sample size for achieving a targeted level of power. Our numerical study shows that the proposed two-sided tolerance interval test controls the type I error rate and provides sufficient power. A real example is presented to illustrate our proposed approach.
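As a concrete reference point for what "a two-sided tolerance interval falls within a prespecified acceptance range" means operationally, here is a sketch using Howe's classical approximation to the two-sided normal tolerance factor, with a Wilson-Hilferty chi-square quantile. The acceptance limits and data summaries are invented for illustration, and this frequentist construction is not the authors' asymptotic derivation:

```python
from statistics import NormalDist

def chi2_quantile(p, df):
    """Wilson-Hilferty approximation to the chi-square quantile function."""
    z = NormalDist().inv_cdf(p)
    c = 2.0 / (9.0 * df)
    return df * (1.0 - c + z * c ** 0.5) ** 3

def howe_k(n, content=0.95, confidence=0.95):
    """Howe's approximate two-sided normal tolerance factor."""
    z = NormalDist().inv_cdf((1 + content) / 2)
    chi2 = chi2_quantile(1 - confidence, n - 1)
    return z * ((n - 1) * (1 + 1 / n) / chi2) ** 0.5

def within_acceptance(mean, sd, n, lower, upper):
    """Does the (content, confidence) tolerance interval sit in the range?"""
    k = howe_k(n)
    return lower <= mean - k * sd and mean + k * sd <= upper

k30 = howe_k(30)                                  # tabulated value is about 2.55
ok = within_acceptance(0.0, 0.05, 30, -0.2, 0.2)  # illustrative numbers
```

The biosimilarity decision then reduces to the boolean check above; the paper's contribution is the asymptotic distribution of these limits, which yields the sample size for a targeted power.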
Collapse
Affiliation(s)
- Chieh Chiang
- Institute of Population Health Sciences, National Health Research Institutes, Zhunan, Taiwan
| | | | - Chin-Fu Hsiao
- Institute of Population Health Sciences, National Health Research Institutes, Zhunan, Taiwan
| |
Collapse
|
20
|
Abstract
This article investigates the homogeneity testing problem for binomial proportions with stratified partially validated data obtained by a double-sampling method with two fallible classifiers. Several test procedures, including the weighted least-squares test with/without log-transformation, logit-transformation, and double log-transformation, and the likelihood ratio test and score test, are developed to test homogeneity under two models distinguished by the conditional independence assumption on the two classifiers. Simulation results show that the score test performs better than the other tests in the sense that its empirical size is generally controlled around the nominal level, and it is hence recommended for practical applications. The other tests also perform well when both the binomial proportions and the sample sizes are not small. Approximate sample sizes based on the score test, the likelihood ratio test, and the weighted least-squares test with double log-transformation are generally accurate in terms of the empirical power and type I error rate at the estimated sample sizes, and are hence recommended. An example from a malaria study illustrates the proposed methodologies.
Collapse
Affiliation(s)
- Shi-Fang Qiu
- Department of Statistics, Chongqing University of Technology, Chongqing, China
| | - Qi-Xiang Fu
- Department of Statistics, Chongqing University of Technology, Chongqing, China
| |
Collapse
|
21
|
Feißt M, Krisam J, Kieser M. Incorporating historical two-arm data in clinical trials with binary outcome: A practical approach. Pharm Stat 2020; 19:662-678. [PMID: 32227680 DOI: 10.1002/pst.2023] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/12/2018] [Revised: 03/03/2020] [Accepted: 03/18/2020] [Indexed: 12/24/2022]
Abstract
The feasibility of a new clinical trial may be increased by incorporating historical data of previous trials. In the particular case where only data from a single historical trial are available, there exists no clear recommendation in the literature regarding the most favorable approach. A main problem of the incorporation of historical data is the possible inflation of the type I error rate. A way to control this type of error is the so-called power prior approach. This Bayesian method does not "borrow" the full historical information but uses a parameter 0 ≤ δ ≤ 1 to determine the amount of borrowed data. Based on the methodology of the power prior, we propose a frequentist framework that allows incorporation of historical data from both arms of two-armed trials with binary outcome, while simultaneously controlling the type I error rate. It is shown that for any specific trial scenario a value δ > 0 can be determined such that the type I error rate falls below the prespecified significance level. The magnitude of this value of δ depends on the characteristics of the data observed in the historical trial. Conditionally on these characteristics, an increase in power as compared to a trial without borrowing may result. Similarly, we propose methods for reducing the required sample size. The results are discussed and compared to those obtained in a Bayesian framework. Application is illustrated by a clinical trial example.
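The power prior update itself is simple for a binary endpoint: the historical likelihood enters the conjugate Beta prior raised to the power δ. A minimal sketch of this Bayesian building block (the paper's contribution, a frequentist calibration of δ, is not reproduced here; all counts are invented):

```python
def power_prior_posterior(y, n, y0, n0, delta, a=1.0, b=1.0):
    """Beta posterior for a binomial proportion when the historical
    likelihood (y0 successes out of n0) is downweighted by delta in [0, 1]."""
    return a + delta * y0 + y, b + delta * (n0 - y0) + (n - y)

def posterior_mean(a_post, b_post):
    return a_post / (a_post + b_post)

no_borrow   = posterior_mean(*power_prior_posterior(12, 40, 30, 60, 0.0))
half_borrow = posterior_mean(*power_prior_posterior(12, 40, 30, 60, 0.5))
full_borrow = posterior_mean(*power_prior_posterior(12, 40, 30, 60, 1.0))
```

δ = 0 ignores the historical arm entirely and δ = 1 pools it fully; intermediate values trade the power gain from borrowing against the type I error inflation the abstract discusses.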
Collapse
Affiliation(s)
- Manuel Feißt
- Institute of Medical Biometry and Informatics, University of Heidelberg, Heidelberg, Germany
| | - Johannes Krisam
- Institute of Medical Biometry and Informatics, University of Heidelberg, Heidelberg, Germany
| | - Meinhard Kieser
- Institute of Medical Biometry and Informatics, University of Heidelberg, Heidelberg, Germany
| |
Collapse
|
22
|
Pagel PS, Lazicki TJ, Izquierdo DA, Boettcher BT, Tawil JN, Freed JK. Characteristics associated with Publication of Randomized Controlled Trials in the Journal of Cardiothoracic and Vascular Anesthesia: A 15-Year Analysis, 2004-2018. J Cardiothorac Vasc Anesth 2019; 34:857-864. [PMID: 31836407 DOI: 10.1053/j.jvca.2019.11.025] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.2] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 10/17/2019] [Revised: 11/11/2019] [Accepted: 11/18/2019] [Indexed: 11/11/2022]
Abstract
Randomized controlled trials (RCTs) provide important data to guide clinical decisions. Publication bias may limit the applicability of RCTs because many clinical investigators prefer to submit and journals more selectively accept studies with positive results. The authors tested the hypothesis that positive RCTs published in the Journal of Cardiothoracic and Vascular Anesthesia were more likely to be associated with factors known to predict publication of positive versus negative RCTs in other journals. This observational study was an internet analysis of all issues of Journal of Cardiothoracic and Vascular Anesthesia from 2004-2018. Each issue was searched to identify human RCTs. The numbers of centers and enrolled patients in each RCT were tabulated. The corresponding author determined the country of origin (United States v international). A trial was "positive" or "negative" based on rejection or confirmation of the null hypothesis, respectively, for the primary outcome variable or the majority of measured outcomes if a primary outcome was not identified. The presence or absence of a hypothesis, randomization methodology, sample size calculation, and blinded research design was recorded. Registration in a public database, Consolidated Statements of Reporting Trials (CONSORT) guideline compliance, and the source of funding also were determined. The number of citations for each RCT was determined by using Google Scholar; the citation rate was calculated as the ratio of the number of total citations and the duration in years since the trial's original publication. A total of 296 RCTs were identified, of which 58.8% reported positive results. Most RCTs were single center, relatively small, and international in origin. Total citations/RCT decreased over time, but citations/year did not. The percentage of RCTs that identified a randomization method, were registered, or followed CONSORT guidelines increased in a time-dependent manner. 
No differences in any factors associated with publication of RCTs were observed when positive and negative trials were compared. The Journal of Cardiothoracic and Vascular Anesthesia publishes more positive than negative RCTs, but factors that have been previously associated with RCT publication in other journals were similar between groups.
Collapse
Affiliation(s)
- Paul S Pagel
- Anesthesia Service, Clement J. Zablocki Veterans Affairs Medical Center, Milwaukee, WI.
| | - Timothy J Lazicki
- Department of Anesthesiology, Medical College of Wisconsin, Milwaukee, WI
| | - David A Izquierdo
- Department of Anesthesiology, Medical College of Wisconsin, Milwaukee, WI
| | - Brent T Boettcher
- Department of Anesthesiology, Medical College of Wisconsin, Milwaukee, WI
| | - Justin N Tawil
- Department of Anesthesiology, Medical College of Wisconsin, Milwaukee, WI
| | - Julie K Freed
- Department of Anesthesiology, Medical College of Wisconsin, Milwaukee, WI
| |
Collapse
|
23
|
Nagase M, Ueda S, Higashimori M, Ichikawa K, Dunyak J, Al-Huniti N. Optimal designs for regional bridging studies using the Bayesian power prior method. Pharm Stat 2019; 19:22-30. [PMID: 31448511 DOI: 10.1002/pst.1967] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/21/2018] [Revised: 05/08/2019] [Accepted: 07/01/2019] [Indexed: 11/09/2022]
Abstract
As described in the ICH E5 guidelines, a bridging study is an additional study executed in a new geographical region or subpopulation to link, or "build a bridge," from global clinical trial outcomes to the new region. The regulatory and scientific goals of a bridging study are to evaluate potential subpopulation differences while minimizing duplication of studies and meeting unmet medical needs expeditiously. Use of historical data (borrowing) from global studies is an attractive approach to meeting these conflicting goals. Here, we propose a practical and relevant approach to guide the optimal borrowing rate (percent of subjects in earlier studies) and the number of subjects in the new regional bridging study. We address the limitations in global/regional exchangeability through use of a Bayesian power prior method and then optimize the bridging study design from a return-on-investment viewpoint. The method is demonstrated using clinical data from global and Japanese trials of dapagliflozin for type 2 diabetes.
Collapse
Affiliation(s)
- Mario Nagase
- Clinical Pharmacology & Safety Sciences, R&D, AstraZeneca, Boston, USA
| | - Shinya Ueda
- Quantitative Clinical Pharmacology, Global Medicines Development, AstraZeneca K.K., Osaka, Japan
| | - Mitsuo Higashimori
- Quantitative Clinical Pharmacology, Global Medicines Development, AstraZeneca K.K., Osaka, Japan
| | - Katsuomi Ichikawa
- Quantitative Clinical Pharmacology, Global Medicines Development, AstraZeneca K.K., Osaka, Japan
| | - James Dunyak
- Clinical Pharmacology & Safety Sciences, R&D, AstraZeneca, Boston, USA
| | - Nidal Al-Huniti
- Clinical Pharmacology & Safety Sciences, R&D, AstraZeneca, Boston, USA
| |
Collapse
|
24
|
Ring A, Lang B, Kazaroho C, Labes D, Schall R, Schütz H. Sample size determination in bioequivalence studies using statistical assurance. Br J Clin Pharmacol 2019; 85:2369-2377. [PMID: 31276603 DOI: 10.1111/bcp.14055] [Citation(s) in RCA: 13] [Impact Index Per Article: 2.6] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/10/2018] [Revised: 05/28/2019] [Accepted: 06/03/2019] [Indexed: 12/20/2022] Open
Abstract
AIMS Bioequivalence (BE) trials aim to demonstrate that the 90% confidence interval of the T/R-ratio of the pharmacokinetic metrics between two formulations (test [T] and reference [R]) of a drug is fully included in the acceptance interval [0.80, 1.25]. Traditionally, the sample size of BE trials is based on a power calculation using the intrasubject coefficient of variation (CV) and the T/R-ratio of the metrics. Since the exact value of the T/R-ratio is not known prior to the trial, it is often assumed that the difference between the treatments does not exceed 5%. Hence, uncertainty about the T/R-ratio is expressed by a fixed value in the sample size calculation. We propose to characterise the uncertainty about the T/R-ratio by a (normal) distribution for the log(T/R-ratio), with an assumed mean of log θ = 0.00 (i.e. θ = 1.00) and a standard deviation σu, which quantifies the uncertainty. Evaluating this distribution leads to the statistical assurance of the BE trial. METHODS The assurance of a clinical trial can be derived by integrating the power over the distribution of the input parameters, in this case the assumed distribution of the log(T/R)-ratio. Because it is an average power, the assurance can be interpreted as a measure of the probability of success that does not depend on a specific assumed value for the log(T/R)-ratio. The relationship between power and assurance is analysed by comparing the numerical outcomes. RESULTS Using the assurance concept, values of the standard deviation for the distribution of potential log(T/R)-ratios can be chosen to reflect the magnitude of uncertainty. For most practical cases (i.e. when 0.95 ≤ θ ≤ 1.05), the sample size is not, or only slightly, changed when σu = |log(θ)|. CONCLUSION The advantage of deriving the assurance for BE trials is that uncertainty is directly expressed as a parameter of variability.
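The assurance computation described in METHODS reduces to averaging TOST power over the assumed N(0, σu²) distribution of log(T/R). A minimal Monte Carlo sketch, assuming the standard error of the estimated log-ratio is known and fixed; the values 0.06 and σu = 0.05 are illustrative, not those of the paper:

```python
import math
import random
from statistics import NormalDist

ND = NormalDist()

def tost_power(log_ratio, se, alpha=0.05, limit=math.log(1.25)):
    """Approximate power of the two one-sided tests (normal quantiles)
    when the true log(T/R) equals log_ratio."""
    z = ND.inv_cdf(1 - alpha)
    p = ND.cdf((limit - log_ratio) / se - z) - ND.cdf((-limit - log_ratio) / se + z)
    return max(p, 0.0)

def assurance(se, sigma_u, draws=100_000, seed=7):
    """Average TOST power over log(T/R) ~ N(0, sigma_u^2)."""
    rng = random.Random(seed)
    return sum(tost_power(rng.gauss(0.0, sigma_u), se) for _ in range(draws)) / draws

fixed_power = tost_power(0.0, 0.06)   # power at exactly theta = 1.00
avg_power = assurance(0.06, 0.05)     # assurance under uncertainty sigma_u
```

Because power is maximal at θ = 1, the assurance is always below the conventional fixed-ratio power, which is the sense in which it gives a more honest probability of success.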
Collapse
Affiliation(s)
- A Ring
- University of the Free State, Bloemfontein, South Africa.,medac, Wedel, Germany
| | - B Lang
- Boehringer Ingelheim, Biberach, Germany
| | | | | | - R Schall
- University of the Free State, Bloemfontein, South Africa.,IQVIA Biostatistics, Bloemfontein, South Africa
| | | |
Collapse
|
25
|
Joseph L, Bélisle P. Bayesian consensus-based sample size criteria for binomial proportions. Stat Med 2019; 38:4566-4573. [PMID: 31297825 DOI: 10.1002/sim.8316] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/14/2018] [Revised: 04/26/2019] [Accepted: 06/14/2019] [Indexed: 11/06/2022]
Abstract
Many sample size criteria exist. These include power calculations and methods based on confidence interval widths from a frequentist viewpoint, and Bayesian methods based on credible interval widths or decision theory. Bayesian methods account for the inherent uncertainty of inputs to sample size calculations through the use of prior information rather than the point estimates typically used by frequentist methods. However, the choice of prior density can be problematic because there will almost always be different appreciations of the past evidence. Such differences can be accommodated a priori by robust methods for Bayesian design, for example, using mixtures or ϵ-contaminated priors. This would then ensure that the prior class includes divergent opinions. However, one may prefer to report several posterior densities arising from a "community of priors," which cover the range of plausible prior densities, rather than forming a single class of priors. To date, however, there are no corresponding sample size methods that specifically account for a community of prior densities in the sense of ensuring a large-enough sample size for the data to sufficiently overwhelm the priors to ensure consensus across widely divergent prior views. In this paper, we develop methods that account for the variability in prior opinions by providing the sample size required to induce posterior agreement to a prespecified degree. Prototypic examples for one- and two-sample binomial outcomes are included. We compare sample sizes from criteria that consider a family of priors to those that would result from previous interval-based Bayesian criteria.
Collapse
Affiliation(s)
- Lawrence Joseph
- Department of Epidemiology, Biostatistics and Occupational Health, McGill University, Montreal, QC, Canada
| | - Patrick Bélisle
- Division of Clinical Epidemiology, McGill University Health Centre, Montreal, QC, Canada
| |
Collapse
|
26
|
Psioda MA, Ibrahim JG. Bayesian design of a survival trial with a cured fraction using historical data. Stat Med 2018; 37:3814-3831. [PMID: 29938817 DOI: 10.1002/sim.7846] [Citation(s) in RCA: 7] [Impact Index Per Article: 1.2] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/29/2016] [Revised: 02/14/2018] [Accepted: 05/12/2018] [Indexed: 11/06/2022]
Abstract
In this paper, we develop a general Bayesian clinical trial design methodology, tailored for time-to-event trials with a cured fraction in scenarios where a previously completed clinical trial is available to inform the design and analysis of the new trial. Our methodology provides a conceptually appealing and computationally feasible framework that allows one to construct a fixed, maximally informative prior a priori while simultaneously identifying the minimum sample size required for the new trial so that the design has high power and reasonable type I error control from a Bayesian perspective. This strategy is particularly well suited for scenarios where adaptive borrowing approaches are not practical due to the nature of the trial, complexity of the model, or the source of the prior information. Control of a Bayesian type I error rate offers a sensible balance between wanting to use high-quality information in the design and analysis of future trials while still controlling type I errors in an equitable way. Moreover, sample size determination based on our Bayesian view of power can lead to a more adequately sized trial by virtue of taking into account all the uncertainty in the treatment effect. We demonstrate our methodology by designing a cancer clinical trial in high-risk melanoma.
Collapse
Affiliation(s)
- Matthew A Psioda
- Department of Biostatistics, University of North Carolina, Chapel Hill, 27599, North Carolina, USA
| | - Joseph G Ibrahim
- Department of Biostatistics, University of North Carolina, Chapel Hill, 27599, North Carolina, USA
| |
Collapse
|
27
|
Qiu SF, Zeng XS, Tang ML, Poon WY. Test procedure and sample size determination for a proportion study using a double-sampling scheme with two fallible classifiers. Stat Methods Med Res 2017; 28:1019-1043. [PMID: 29233082 DOI: 10.1177/0962280217744239] [Citation(s) in RCA: 4] [Impact Index Per Article: 0.6] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/16/2022]
Abstract
Double sampling is usually applied to collect necessary information for situations in which an infallible classifier is available for validating a subset of the sample that has already been classified by a fallible classifier. Inference procedures have previously been developed based on the partially validated data obtained by the double-sampling process. However, it could happen in practice that such an infallible classifier or gold standard does not exist. In this article, we consider the case in which both classifiers are fallible and propose asymptotic and approximate unconditional test procedures based on six test statistics for a population proportion, and five approximate sample size formulas based on the recommended test procedures under two models. Our results suggest that both asymptotic and approximate unconditional procedures based on the score statistic perform satisfactorily for small to large sample sizes and are highly recommended. When the sample size is moderate or large, asymptotic procedures based on the Wald statistic with the variance estimated under the null hypothesis, the likelihood ratio statistic, and the log- and logit-transformation statistics under both models generally perform well and are hence recommended. The approximate unconditional procedures based on the log-transformation statistic under Model I, and the Wald statistic with the variance estimated under the null hypothesis and the log- and logit-transformation statistics under Model II, are recommended when the sample size is small. In general, sample size formulae based on the Wald statistic with the variance estimated under the null hypothesis, the likelihood ratio statistic, and the score statistic are recommended in practical applications. The applicability of the proposed methods is illustrated by a real-data example.
Collapse
Affiliation(s)
- Shi-Fang Qiu
- Department of Statistics, Chongqing University of Technology, Chongqing, China
| | - Xiao-Song Zeng
- Department of Statistics, Chongqing University of Technology, Chongqing, China
| | - Man-Lai Tang
- Department of Mathematics and Statistics, Hang Seng Management College, Hong Kong, China
| | - Wai-Yin Poon
- Department of Statistics, The Chinese University of Hong Kong, Hong Kong, China
| |
Collapse
|
28
|
Yan F, Robert M, Li Y. Statistical methods and common problems in medical or biomedical science research. Int J Physiol Pathophysiol Pharmacol 2017; 9:157-163. [PMID: 29209453 PMCID: PMC5698693] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [Key Words] [Grants] [Subscribe] [Scholar Register] [Received: 09/22/2017] [Accepted: 10/16/2017] [Indexed: 06/07/2023]
Abstract
Statistical thinking is crucial for studies in medical and biomedical areas. There are several pitfalls in the use of statistics in these areas, involving experimental design, data collection, data analysis, and data interpretation. This review describes basic statistical design problems in biomedical or medical studies and directs basic scientists toward better use of statistical thinking. Its contents are based on the previous literature and our daily statistical support work. It covers sample size determination and sample allocation at the experimental design stage, numerical and graphical data summarization, and statistical test methods, as well as the related common errors at the design and analysis stages. The literature and our daily support work show that misunderstanding and misuse of statistical concepts and statistical test methods are significant problems. These include ignoring the sample size and data distribution, incorrect summary measures, wrong statistical test methods (especially for repeated measures), ignoring the assumptions of the t-test or ANOVA, and failing to adjust for multiple comparisons. This review intends to help researchers in basic medical or biomedical areas to enhance statistical thinking and make fewer errors in the design and analysis of their studies.
Collapse
Affiliation(s)
- Fengxia Yan
- Department of Community Health and Preventive Medicine, Morehouse School of Medicine, Atlanta, GA, USA
| | - Mayberry Robert
- Department of Community Health and Preventive Medicine, Morehouse School of Medicine, Atlanta, GA, USA
| | | |
Collapse
|
29
|
Chiang C, Hsiao CF. An approach for sample size determination of average bioequivalence based on interval estimation. Stat Med 2017; 36:1068-1082. [PMID: 28070984 DOI: 10.1002/sim.7202] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.1] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/08/2016] [Revised: 11/21/2016] [Accepted: 12/01/2016] [Indexed: 11/06/2022]
Abstract
In 1992, the US Food and Drug Administration declared that two drugs demonstrate average bioequivalence (ABE) if the log-transformed mean difference of pharmacokinetic responses lies in (-0.223, 0.223). The most widely used approach for assessing ABE is the two one-sided tests procedure. More specifically, ABE is concluded when a 100(1 - 2α)% confidence interval for the mean difference falls within (-0.223, 0.223). Bioequivalence studies are usually conducted with a crossover design; however, when a drug has a long half-life, a parallel design may be preferred. In this study, a two-sided interval estimate, such as Satterthwaite's, Cochran-Cox's, or Howe's approximation, is used for assessing parallel ABE. We show that the asymptotic joint distribution of the lower and upper confidence limits is bivariate normal, and thus the sample size can be calculated based on the asymptotic power so that the confidence interval falls within (-0.223, 0.223). Simulation studies also show that the proposed method achieves sufficient empirical power. A real example is provided to illustrate the proposed method. Copyright © 2017 John Wiley & Sons, Ltd.
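The interval-inclusion criterion above lends itself to a simple normal-approximation sizing sketch. The function below is an illustration only, not the authors' Satterthwaite/Cochran-Cox/Howe-based method: it assumes a known common log-scale SD `sigma` and a true mean difference `delta` (both hypothetical inputs), and sizes each arm so that the 100(1 - 2α)% confidence interval is expected to fall within (-0.223, 0.223) with the stated power.

```python
from math import ceil, sqrt
from statistics import NormalDist

def abe_sample_size_per_arm(sigma, delta=0.0, limit=0.223,
                            alpha=0.05, power=0.80):
    """Approximate per-arm n for a parallel-group ABE study under a
    normal approximation: the CI for the log-scale mean difference
    must fall within (-limit, limit) with the requested power."""
    z = NormalDist().inv_cdf
    z_a = z(1 - alpha)
    z_b = z(1 - (1 - power) / 2)   # split beta across the two bounds
    se_unit = sqrt(2.0) * sigma    # SE of the difference when n = 1 per arm
    n = ((z_a + z_b) * se_unit / (limit - abs(delta))) ** 2
    return ceil(n)
```

For example, with a log-scale SD of 0.3 and a true difference of 0.05, the sketch gives roughly 50 subjects per arm; the required n grows quickly as `delta` approaches the equivalence limit.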
Collapse
Affiliation(s)
- Chieh Chiang
- Institute of Population Health Sciences, National Health Research Institutes, Zhunan, Taiwan
| | - Chin-Fu Hsiao
- Institute of Population Health Sciences, National Health Research Institutes, Zhunan, Taiwan
| |
Collapse
|
30
|
Usami S. Generalized sample size determination formulas for investigating contextual effects by a three-level random intercept model. Psychometrika 2017; 82:133-157. [PMID: 27804079 DOI: 10.1007/s11336-016-9532-y] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 10/04/2014] [Revised: 07/09/2016] [Indexed: 06/06/2023]
Abstract
Behavioral and psychological researchers have shown strong interest in investigating contextual effects (i.e., the influences of combinations of individual- and group-level predictors on individual-level outcomes). The present research provides generalized formulas for determining the sample size needed to investigate contextual effects, according to the desired level of statistical power as well as the width of the confidence interval. These formulas are derived within a three-level random intercept model that includes one predictor/contextual variable at each level, so as to simultaneously cover the various kinds of contextual effects in which researchers may be interested. The relative influences of the indices included in the formulas on the standard errors of contextual effect estimates are investigated with the aim of further simplifying sample size determination procedures. In addition, simulation studies are performed to investigate the finite-sample behavior of the calculated statistical power, showing that sample sizes estimated from the derived formulas can be both positively and negatively biased, due to the complex effects of unreliability of the contextual variables, multicollinearity, and violation of the assumption of known variances. Thus, it is advisable to compare estimated sample sizes under various specifications of the indices and to evaluate their potential bias, as illustrated in the example.
Collapse
|
31
|
Abstract
In conventional frequentist power analysis, one often uses an effect size estimate, treats it as if it were the true value, and ignores uncertainty in the effect size estimate for the analysis. The resulting sample sizes can vary dramatically depending on the chosen effect size value. To resolve the problem, we propose a hybrid Bayesian power analysis procedure that models uncertainty in the effect size estimates from a meta-analysis. We use observed effect sizes and prior distributions to obtain the posterior distribution of the effect size and model parameters. Then, we simulate effect sizes from the obtained posterior distribution. For each simulated effect size, we obtain a power value. With an estimated power distribution for a given sample size, we can estimate the probability of reaching a power level or higher and the expected power. With a range of planned sample sizes, we can generate a power assurance curve. Both the conventional frequentist and our Bayesian procedures were applied to conduct prospective power analyses for two meta-analysis examples (testing standardized mean differences in example 1 and Pearson's correlations in example 2). The advantages of our proposed procedure are demonstrated and discussed.
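The simulate-then-average recipe in this abstract can be sketched in a few lines. This is a hedged illustration, not the authors' procedure: instead of a full meta-analytic posterior it draws the standardized effect size from a simple normal distribution N(d_hat, se_d), and it approximates the power of a two-sample comparison with a z-test formula; all names and defaults are illustrative.

```python
import random
from math import sqrt
from statistics import NormalDist

def assurance(n_per_group, d_hat, se_d, alpha=0.05, n_sim=5000, seed=1):
    """Assurance-style expected power: draw the standardized effect
    size from N(d_hat, se_d), a stand-in for a meta-analytic posterior,
    and average two-sample z-test power over the draws."""
    rng = random.Random(seed)
    nd = NormalDist()
    z_crit = nd.inv_cdf(1 - alpha / 2)
    total = 0.0
    for _ in range(n_sim):
        d = rng.gauss(d_hat, se_d)            # simulated true effect size
        ncp = d * sqrt(n_per_group / 2.0)     # noncentrality of the z-test
        total += nd.cdf(ncp - z_crit) + nd.cdf(-ncp - z_crit)
    return total / n_sim
```

Evaluating this over a range of `n_per_group` values traces out a power assurance curve; with `se_d = 0` it collapses to the conventional fixed-effect-size power.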
Collapse
Affiliation(s)
- Han Du
- University of Notre Dame
| | | |
Collapse
|
32
|
Abstract
Cross-sectional prevalent cohort designs have drawn considerable interest in studies of the association between risk factors and a time-to-event outcome. The sampling scheme of such designs gives rise to length-biased data that require a specialized analysis strategy but can improve study efficiency. Power and sample size calculation methods are, however, lacking for studies with a prevalent cohort design, and using formulas developed for traditional survival data may overestimate the sample size. We derive sample size formulas appropriate for the design of cross-sectional prevalent cohort studies, under the assumptions of an exponentially distributed event time and uniform follow-up. We perform numerical and simulation studies to compare the sample size requirements for achieving the same power between prevalent cohort and incident cohort designs. We also use a large prospective prevalent cohort study to demonstrate the procedure. With a rigorous design and proper analysis tools, the prospective prevalent cohort design can be more efficient than the incident cohort design with the same total sample size and study duration.
Collapse
Affiliation(s)
- Hao Liu
- Division of Biostatistics, Dan L. Duncan Cancer Center, Baylor College of Medicine, Houston, USA
| | - Yu Shen
- Department of Biostatistics, The University of Texas MD Anderson Cancer Center, Houston, USA
| | - Jing Ning
- Department of Biostatistics, The University of Texas MD Anderson Cancer Center, Houston, USA
| | - Jing Qin
- Biostatistics Research Branch, National Institute of Allergy and Infectious Diseases, National Institutes of Health, Bethesda, USA
| |
Collapse
|
33
|
Delorme P, de Micheaux PL, Liquet B, Riou J. Type-II generalized family-wise error rate formulas with application to sample size determination. Stat Med 2016; 35:2687-714. [PMID: 26914402 DOI: 10.1002/sim.6909] [Citation(s) in RCA: 9] [Impact Index Per Article: 1.1] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/21/2015] [Revised: 01/13/2016] [Accepted: 01/16/2016] [Indexed: 11/12/2022]
Abstract
Multiple endpoints are increasingly used in clinical trials. The significance of some of these clinical trials is established if at least r null hypotheses are rejected among the m that are simultaneously tested. The usual approach in multiple hypothesis testing is to control the family-wise error rate, which is defined as the probability that at least one type-I error is made. More recently, the q-generalized family-wise error rate has been introduced to control the probability of making at least q false rejections. For procedures controlling this global type-I error rate, we define a type-II r-generalized family-wise error rate, which is directly related to the r-power, defined as the probability of rejecting at least r false null hypotheses. We obtain very general power formulas that can be used to compute the sample size for single-step and stepwise procedures. These are implemented in our R package rPowerSampleSize, available on CRAN, making them directly accessible to end users. Complexities of the formulas are presented to give insight into computation time issues. A comparison with a Monte Carlo strategy is also presented. We compute sample sizes for two clinical trials involving multiple endpoints: one designed to investigate the effectiveness of a drug against acute heart failure and the other the immunogenicity of a vaccine strategy against pneumococcus. Copyright © 2016 John Wiley & Sons, Ltd.
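Alongside the closed-form expressions implemented in rPowerSampleSize, the Monte Carlo strategy the authors compare against can be sketched directly. The toy below assumes independent endpoints and a single-step Bonferroni procedure; the function name, defaults, and two-sample z-statistic setup are illustrative and are not taken from the package.

```python
import random
from math import sqrt
from statistics import NormalDist

def r_power_bonferroni(effects, n, r, alpha=0.05, n_sim=5000, seed=2):
    """Monte Carlo r-power: probability that a single-step Bonferroni
    procedure rejects at least r of the m endpoint hypotheses, with
    independent endpoints and standardized effect sizes `effects`."""
    rng = random.Random(seed)
    nd = NormalDist()
    m = len(effects)
    z_crit = nd.inv_cdf(1 - alpha / (2 * m))     # Bonferroni threshold
    ncps = [d * sqrt(n / 2.0) for d in effects]  # two-sample z noncentrality
    hits = 0
    for _ in range(n_sim):
        rejected = sum(abs(rng.gauss(ncp, 1.0)) > z_crit for ncp in ncps)
        if rejected >= r:
            hits += 1
    return hits / n_sim
```

The sample size for a target r-power is then found by increasing `n` until the estimate crosses the desired level; the paper's closed-form approach avoids exactly this simulation loop.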
Collapse
Affiliation(s)
- Phillipe Delorme
- Département de Mathématiques et de Statistique, Université de Montreal, 2920 Chemin de la Tour, Montréal, H3T 1J4, Québec, Canada
| | - Pierre Lafaye de Micheaux
- Département de Mathématiques et de Statistique, Université de Montréal, 2920 Chemin de la Tour, Montréal, H3T 1J4, Québec, Canada; CREST-ENSAI, Campus de Ker-Lann, Rue Blaise Pascal, BP 37203, Bruz Cedex, 35172, France
| | - Benoit Liquet
- Laboratoire de Mathématiques et de leurs Applications, Université de Pau et des Pays de l'Adour, UMR CNRS 5142, Pau, France; ARC Centre of Excellence for Mathematical and Statistical Frontiers, Queensland University of Technology (QUT), Brisbane, Australia
| | | |
Collapse
|
34
|
Abstract
The concept of controlling familywise type I and type II errors at the same time is essentially an integrated process for dealing with multiplicity issues in clinical trials. The process selects a multiple testing procedure (MTP) that controls the familywise type I error and calculates the per-hypothesis sample size such that the "studywise power" is maintained at the desired level. The power of a study can be defined in several ways, depending on the objective. In this article, we provide general guidance on how to select an MTP and calculate the sample size simultaneously. We introduce the concept of strong and weak control of the familywise type II error and of the generalized familywise type II error. We also propose the novel Bonferroni+ and optimal Bonferroni+ procedures to allocate the per-hypothesis type II error. We demonstrate the value of the proposed work, as it cannot be replaced by simple simulations. A real clinical trial is discussed throughout the article as an example.
Collapse
Affiliation(s)
- Bushi Wang
- Biostatistics & Data Sciences, Boehringer Ingelheim Pharmaceuticals, Inc., Ridgefield, Connecticut, USA
| | - Naitee Ting
- Biostatistics & Data Sciences, Boehringer Ingelheim Pharmaceuticals, Inc., Ridgefield, Connecticut, USA
| |
Collapse
|
35
|
Wiedermann CJ, Wiedermann W. Beautiful small: Misleading large randomized controlled trials? The example of colloids for volume resuscitation. J Anaesthesiol Clin Pharmacol 2015; 31:394-400. [PMID: 26330723 PMCID: PMC4541191 DOI: 10.4103/0970-9185.161680] [Citation(s) in RCA: 6] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/30/2022] Open
Abstract
In anesthesia and intensive care, treatment benefits that were claimed on the basis of small or modest-sized trials have repeatedly failed to be confirmed in large randomized controlled trials. A well-designed small trial in a homogeneous patient population with high event rates can yield conclusive results; however, patient populations in anesthesia and intensive care are typically heterogeneous because of comorbidities. The anticipated effects of therapeutic interventions on relevant endpoints are generally small. For regulatory purposes, trials are required to demonstrate efficacy on clinically important endpoints, and must therefore be large, because clinically important study endpoints such as death, sepsis, or pneumonia are dichotomous and occur infrequently. The rarer the endpoint events in the study population, that is, the lower the signal-to-noise ratio, the larger the trials must be to prevent random events from being overemphasized. In addition to trial design, sample size determination on the basis of event rates, clinically meaningful risk ratio reductions, and the actual numbers of patients studied are among the most important characteristics to consider when interpreting study results. Trial size is a critical determinant of the generalizability of study results to larger or general patient populations. Typical characteristics of small single-center studies responsible for their known fragility include low variability of outcome measures for surrogate parameters and selective publication and reporting. For anesthesiology and intensive care medicine, findings in volume resuscitation research on intravenous infusion of colloids exemplify this, since both the safety of albumin infusion and the adverse effects of the artificial colloid hydroxyethyl starch have been confirmed only in large trials.
Collapse
Affiliation(s)
- Christian J Wiedermann
- Department of Internal Medicine, Central Hospital of Bolzano, Teaching Hospital of the Medical University of Innsbruck, Bolzano, Italy
| | - Wolfgang Wiedermann
- Department of Psychology, Unit of Quantitative Methods, University of Vienna, Vienna, Austria
- Department of Educational, School and Counseling Psychology, College of Education, University of Missouri, Columbia, MO, USA
| |
Collapse
|
36
|
Abstract
Although the sample size for simple logistic regression can be readily determined using currently available methods, the sample size calculation for multiple logistic regression requires some additional information, such as the coefficient of determination (R²) of a covariate of interest with the other covariates, which is often unavailable in practice. The response variable of logistic regression follows a logit-normal distribution, which can be generated from a logistic transformation of a normal distribution. Using this property of logistic regression, we propose new methods of determining the sample size for simple and multiple logistic regressions using a normal transformation of outcome measures. Simulation studies and a motivating example show several advantages of the proposed methods over the existing methods: (i) no need for R² for multiple logistic regression, (ii) available interim or group-sequential designs, and (iii) a much smaller required sample size.
Collapse
Affiliation(s)
- Seongho Kim
- Biostatistics Core, Karmanos Cancer Institute, Wayne State University, Detroit, MI 48201, USA; Department of Oncology, School of Medicine, Wayne State University, Detroit, MI 48201, USA
| | - Elisabeth Heath
- Department of Oncology, School of Medicine, Wayne State University, Detroit, MI 48201, USA
| | - Lance Heilbrun
- Biostatistics Core, Karmanos Cancer Institute, Wayne State University, Detroit, MI 48201, USA; Department of Oncology, School of Medicine, Wayne State University, Detroit, MI 48201, USA
| |
Collapse
|
37
|
Abstract
The number of samples needed to identify significant effects is a key question in biomedical studies, with consequences for experimental designs, costs, and potential discoveries. In metabolic phenotyping studies, sample size determination remains a complex step. This is due particularly to the multiple hypothesis-testing framework and the top-down, hypothesis-free approach, with no a priori known metabolic target. Until now, there was no standard procedure available for this purpose. In this review, we discuss sample size estimation procedures for metabolic phenotyping studies. We release an automated implementation of the Data-driven Sample size Determination (DSD) algorithm for MATLAB and GNU Octave; the original research concerning DSD was published elsewhere. DSD allows the determination of an optimized sample size in metabolic phenotyping studies. The procedure uses analytical data from only a small pilot cohort to generate an expanded data set. The statistical recoupling of variables procedure is used to identify metabolic variables, and their intensity distributions are estimated by kernel smoothing or log-normal density fitting. Statistically significant metabolic variations are evaluated using the Benjamini-Yekutieli correction and processed for data sets of various sizes. Optimal sample size determination is achieved in the context of biomarker discovery (at least one statistically significant variation) or metabolic exploration (a maximum of statistically significant variations). The DSD toolbox is encoded in MATLAB R2008A (MathWorks, Natick, MA) for kernel and log-normal estimates, and in GNU Octave for log-normal estimates (kernel density estimates are not robust enough in GNU Octave). It is available at http://www.prabi.fr/redmine/projects/dsd/repository, with a tutorial at http://www.prabi.fr/redmine/projects/dsd/wiki.
Collapse
|
38
|
Lin Y, Kwong KS, Cheung SH, Poon WY. Step-up testing procedure for multiple comparisons with a control for a latent variable model with ordered categorical responses. Stat Med 2014; 33:3629-38. [PMID: 24757077 DOI: 10.1002/sim.6190] [Citation(s) in RCA: 4] [Impact Index Per Article: 0.4] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/13/2013] [Revised: 01/08/2014] [Accepted: 04/07/2014] [Indexed: 11/11/2022]
Abstract
In clinical studies, multiple comparisons of several treatments to a control with ordered categorical responses are often encountered. A popular statistical approach to analyzing the data is to use the logistic regression model with the proportional odds assumption. As discussed in several recent research papers, if the proportional odds assumption fails to hold, the undesirable consequence of an inflated familywise type I error rate may affect the validity of the clinical findings. To remedy the problem, a more flexible approach that uses the latent normal model with single-step and stepwise testing procedures has been recently proposed. In this paper, we introduce a step-up procedure that uses the correlation structure of test statistics under the latent normal model. A simulation study demonstrates the superiority of the proposed procedure to all existing testing procedures. Based on the proposed step-up procedure, we derive an algorithm that enables the determination of the total sample size and the sample size allocation scheme with a pre-determined level of test power before the onset of a clinical trial. A clinical example is presented to illustrate our proposed method.
Collapse
Affiliation(s)
- Yueqiong Lin
- School of Economics and Management, Fuzhou University, Fuzhou, China
| | | | | | | |
Collapse
|
39
|
La Belle JT, Engelschall E, Lan K, Shah P, Saez N, Maxwell S, Adamson T, Abou-Eid M, McAferty K, Patel DR, Cook CB. A Disposable Tear Glucose Biosensor-Part 4: Preliminary Animal Model Study Assessing Efficacy, Safety, and Feasibility. J Diabetes Sci Technol 2014; 8:109-116. [PMID: 24876546 PMCID: PMC4454120 DOI: 10.1177/1932296813511741] [Citation(s) in RCA: 13] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Indexed: 11/15/2022]
Abstract
OBJECTIVE A prototype tear glucose (TG) sensor was tested in New Zealand white rabbits to assess eye irritation, the lag time between blood glucose (BG) and TG, and the correlation of TG with BG. METHODS A total of 4 animals were used. Eye irritation was monitored by Lissamine green dye and analyzed using image analysis software. Lag time was assessed following an oral glucose load while recording TG and BG readings. TG and BG were plotted against one another to form a correlation diagram, using a Yellow Springs Instrument (YSI) analyzer and self-monitoring of blood glucose as the reference measurements. Finally, TG levels were calculated using analytically derived expressions. RESULTS Over 12 months of repeated testing, little to no eye irritation was detected. TG fluctuations over time visually appeared to trace the same pattern as BG, with an average lag time of 13 minutes. TG levels calculated from the device current measurements ranged from 4 to 20 mg/dL and correlated linearly with BG levels of 75-160 mg/dL (TG = 0.1723 BG - 7.9448 mg/dL; R2 = .7544). CONCLUSION The first steps were taken toward preliminary development of a sensor for self-monitoring of tear glucose (SMTG). No conjunctival irritation in any of the animals was noted. The lag time between TG and BG was noticeable, but quantitative modeling of the lag time was deemed unnecessary in this study. Measured currents from the sensors and the calculated TG showed promising correlation with BG levels. Previous analytical benchmarking showed BG and TG levels consistent with other literature.
Collapse
Affiliation(s)
- Jeffrey T La Belle
- Biodesign Institute, Arizona State University, Tempe, AZ, USA; Harrington Program of Biomedical Engineering, School of Biological and Health Systems Engineering, Arizona State University, Tempe, AZ, USA
| | - Erica Engelschall
- Biodesign Institute, Arizona State University, Tempe, AZ, USA; Harrington Program of Biomedical Engineering, School of Biological and Health Systems Engineering, Arizona State University, Tempe, AZ, USA
| | - Kenneth Lan
- Biodesign Institute, Arizona State University, Tempe, AZ, USA; Harrington Program of Biomedical Engineering, School of Biological and Health Systems Engineering, Arizona State University, Tempe, AZ, USA
| | - Pankti Shah
- Biodesign Institute, Arizona State University, Tempe, AZ, USA; Harrington Program of Biomedical Engineering, School of Biological and Health Systems Engineering, Arizona State University, Tempe, AZ, USA
| | - Neil Saez
- Biodesign Institute, Arizona State University, Tempe, AZ, USA; Harrington Program of Biomedical Engineering, School of Biological and Health Systems Engineering, Arizona State University, Tempe, AZ, USA
| | - Stephanie Maxwell
- Biodesign Institute, Arizona State University, Tempe, AZ, USA; Harrington Program of Biomedical Engineering, School of Biological and Health Systems Engineering, Arizona State University, Tempe, AZ, USA
| | - Teagan Adamson
- Biodesign Institute, Arizona State University, Tempe, AZ, USA; Harrington Program of Biomedical Engineering, School of Biological and Health Systems Engineering, Arizona State University, Tempe, AZ, USA
| | - Michelle Abou-Eid
- Biodesign Institute, Arizona State University, Tempe, AZ, USA; Harrington Program of Biomedical Engineering, School of Biological and Health Systems Engineering, Arizona State University, Tempe, AZ, USA
| | - Kenyon McAferty
- Biodesign Institute, Arizona State University, Tempe, AZ, USA; Harrington Program of Biomedical Engineering, School of Biological and Health Systems Engineering, Arizona State University, Tempe, AZ, USA
| | | | - Curtiss B Cook
- Divisions of Endocrinology and of Preventive, Occupational, and Aerospace Medicine, Mayo Clinic, Scottsdale, AZ, USA
| |
Collapse
|
40
|
Abstract
Odds ratios are frequently used for estimating the effect of an exposure on the probability of disease in case-control studies. In planning such studies, methods for sample size determination are required to ensure sufficient accuracy in estimating odds ratios once the data are collected. Often, the exposure used in epidemiologic studies is not perfectly ascertained. This can arise from recall bias, the use of a proxy exposure measurement, uncertain work exposure history, and laboratory or other errors. The resulting misclassification can have large impacts on the accuracy and precision of estimators, and specialized estimation techniques have been developed to adjust for these biases. However, much less work has been done to account for the anticipated decrease in the precision of estimators at the design stage. Here, we develop methods for sample size determination for odds ratios in the presence of exposure misclassification by using several interval-based Bayesian criteria. Using a series of prototypical examples, we compare sample size requirements after adjustment for misclassification with those required when this problem is ignored. We illustrate the methods by planning a case-control study of the effect of late introduction of peanut into the diet of children on the subsequent development of peanut allergy.
Collapse
|
41
|
Welton NJ, Madan JJ, Caldwell DM, Peters TJ, Ades AE. Expected value of sample information for multi-arm cluster randomized trials with binary outcomes. Med Decis Making 2013; 34:352-65. [PMID: 24085289 DOI: 10.1177/0272989x13501229] [Citation(s) in RCA: 15] [Impact Index Per Article: 1.4] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/16/2022]
Abstract
Expected value of sample information (EVSI) measures the anticipated net benefit gained from conducting new research with a specific design to add to the evidence on which reimbursement decisions are made. Cluster randomized trials raise specific issues for EVSI calculations because 1) a hierarchical model is necessary to account for between-cluster variability when incorporating new evidence and 2) heterogeneity between clusters needs to be carefully characterized in the cost-effectiveness analysis model. Multi-arm trials provide parameter estimates that are correlated, which needs to be accounted for in EVSI calculations. Furthermore, EVSI is computationally intensive when the net benefit function is nonlinear, due to the need for an inner-simulation step. We develop a method for the computation of EVSI that avoids the inner simulation step for cluster randomized multi-arm trials with a binary outcome, where the net benefit function is linear in the probability of an event but nonlinear in the log-odds ratio parameters. We motivate and illustrate the method with an example of a cluster randomized 2 × 2 factorial trial for interventions to increase attendance at breast screening in the UK, using a previously reported cost-effectiveness model. We highlight assumptions made in our approach, extensions to individually randomized trials and inclusion of covariates, and areas for further developments. We discuss computation time, the research-design space, and the ethical implications of an EVSI approach. We suggest that EVSI is a practical and appropriate tool for the design of cluster randomized trials.
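For intuition only, EVSI in its simplest form (a single binomial parameter, no clustering, no correlated arms, a net benefit linear in the event probability) reduces to a short preposterior simulation. This sketch is far removed from the cluster-randomized multi-arm setting of the paper; the prior, gain, and cost values are all illustrative assumptions.

```python
import random

def evsi_single_p(a, b, n_new, value=1000.0, cost=400.0,
                  n_sim=5000, seed=4):
    """Toy EVSI: with p ~ Beta(a, b) and net benefit value*p - cost for
    adopting (0 for not adopting), compute the expected gain from first
    observing n_new Bernoulli outcomes before deciding."""
    rng = random.Random(seed)
    prior_mean = a / (a + b)
    nb_now = max(value * prior_mean - cost, 0.0)   # best decision today
    total = 0.0
    for _ in range(n_sim):
        p = rng.betavariate(a, b)                  # predictive draw of p
        x = sum(rng.random() < p for _ in range(n_new))
        post_mean = (a + x) / (a + b + n_new)      # conjugate Beta update
        total += max(value * post_mean - cost, 0.0)
    return total / n_sim - nb_now
```

Because the net benefit here is linear in p, no inner simulation is needed, which is exactly the computational saving the authors engineer for the nonlinear log-odds-ratio case.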
Collapse
Affiliation(s)
- Nicky J Welton
- School of Social and Community Medicine, University of Bristol, Bristol, UK (NJW, JJM, DMC, AEA)
| | - Jason J Madan
- School of Social and Community Medicine, University of Bristol, Bristol, UK (NJW, JJM, DMC, AEA)
| | - Deborah M Caldwell
- School of Social and Community Medicine, University of Bristol, Bristol, UK (NJW, JJM, DMC, AEA)
| | - Tim J Peters
- School of Clinical Sciences, University of Bristol, Bristol, UK (TJP)
| | - Anthony E Ades
- School of Social and Community Medicine, University of Bristol, Bristol, UK (NJW, JJM, DMC, AEA)
| |
Collapse
|
42
|
Li CI, Su PF, Guo Y, Shyr Y. Sample size calculation for differential expression analysis of RNA-seq data under Poisson distribution. Int J Comput Biol Drug Des 2013; 6:358-75. [PMID: 24088268 PMCID: PMC3874726 DOI: 10.1504/ijcbdd.2013.056830] [Citation(s) in RCA: 17] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 11/21/2022]
Abstract
Sample size determination is an important issue in the experimental design of biomedical research. Because of the complexity of RNA-seq experiments, however, the field currently lacks a sample size method widely applicable to differential expression studies utilising RNA-seq technology. In this report, we propose several methods for sample size calculation for single-gene differential expression analysis of RNA-seq data under Poisson distribution. These methods are then extended to multiple genes, with consideration for addressing the multiple testing problem by controlling false discovery rate. Moreover, most of the proposed methods allow for closed-form sample size formulas with specification of the desired minimum fold change and minimum average read count, and thus are not computationally intensive. Simulation studies to evaluate the performance of the proposed sample size formulas are presented; the results indicate that our methods work well, with achievement of desired power. Finally, our sample size calculation methods are applied to three real RNA-seq data sets.
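A generic Wald-style version of a closed-form Poisson sample size calculation (not the paper's exact expressions, which also involve minimum fold change and minimum average read count specifications) can be written as follows; the Bonferroni-style `m` argument is a crude stand-in for the false discovery rate control the authors use, and all names are illustrative.

```python
from math import ceil, sqrt
from statistics import NormalDist

def poisson_sample_size(mu0, fold, alpha=0.05, power=0.80, m=1):
    """Per-group sample size for detecting a `fold`-change between two
    Poisson mean read counts, via a normal (Wald) approximation to the
    difference of sample means, with alpha split across m tests."""
    z = NormalDist().inv_cdf
    mu1 = mu0 * fold
    z_a = z(1 - alpha / (2 * m))
    z_b = z(power)
    n = (z_a + z_b) ** 2 * (mu0 + mu1) / (mu1 - mu0) ** 2
    return ceil(n)
```

As the abstract notes for the real formulas, larger fold changes and higher average counts both shrink the required number of samples, while multiplicity adjustment inflates it.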
Collapse
Affiliation(s)
- Chung-I Li
- Center for Quantitative Sciences, Vanderbilt University, 571 Preston Building, Nashville, TN, USA
| | - Pei-Fang Su
- Center for Quantitative Sciences, Vanderbilt University, 571 Preston Building, Nashville, TN, USA
| | - Yan Guo
- Center for Quantitative Sciences, Vanderbilt University, 571 Preston Building, Nashville, TN, USA
| | - Yu Shyr
- Center for Quantitative Sciences, Vanderbilt University, 571 Preston Building, Nashville, TN, USA
| |
Collapse
|
43
|
Pezeshk H, Nematollahi N, Maroufy V, Marriott P, Gittins J. Bayesian sample size calculation for estimation of the difference between two binomial proportions. Stat Methods Med Res 2011; 22:598-611. [PMID: 21436190 DOI: 10.1177/0962280211399562] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.2] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/15/2022]
Abstract
In this study, we discuss a decision theoretic or fully Bayesian approach to the sample size question in clinical trials with binary responses. Data are assumed to come from two binomial distributions. A Dirichlet distribution is assumed to describe prior knowledge of the two success probabilities p1 and p2. The parameter of interest is p = p1 - p2. The optimal size of the trial is obtained by maximising the expected net benefit function. The methodology presented in this article extends previous work by the assumption of dependent prior distributions for p1 and p2.
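The maximise-expected-net-benefit step can be illustrated with a small simulation. This toy uses independent Beta priors rather than the dependent Dirichlet prior of the paper, defines "success" as a positive posterior-mean difference, and assumes an arbitrary gain/cost structure; it is a sketch of the decision-theoretic idea, not the authors' method.

```python
import random

def optimal_trial_size(a1, b1, a2, b2, gain=10000.0, cost=5.0,
                       margin=0.0, sizes=(20, 40, 60, 80, 100),
                       n_sim=500, seed=3):
    """Toy decision-theoretic sizing: draw p1, p2 from Beta priors,
    simulate a trial of n per arm, call it a success if the posterior
    mean difference exceeds `margin`, and maximise
    gain * P(success) - cost * 2n over the candidate sizes."""
    rng = random.Random(seed)
    best = None
    for n in sizes:
        wins = 0
        for _ in range(n_sim):
            p1 = rng.betavariate(a1, b1)
            p2 = rng.betavariate(a2, b2)
            x1 = sum(rng.random() < p1 for _ in range(n))
            x2 = sum(rng.random() < p2 for _ in range(n))
            # posterior means under the same Beta priors
            post_diff = (a1 + x1) / (a1 + b1 + n) - (a2 + x2) / (a2 + b2 + n)
            wins += post_diff > margin
        enb = gain * wins / n_sim - cost * 2 * n
        if best is None or enb > best[1]:
            best = (n, enb)
    return best
```

The expected net benefit typically rises and then falls as n grows, since extra precision eventually stops paying for the extra sampling cost; the optimum is the peak of that curve.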
Collapse
Affiliation(s)
- Hamid Pezeshk
- School of Mathematics, Statistics and Computer Science and Center of Excellence in Biomathematics, University of Tehran, Tehran, Iran
| | | | | | | | | |
Collapse
|