1. Bayesian analysis of joint quantile regression for multi-response longitudinal data with application to primary biliary cirrhosis sequential cohort study. Stat Methods Med Res 2024. PMID: 38676359. DOI: 10.1177/09622802241247725.
Abstract
This article proposes a Bayesian approach for jointly estimating marginal conditional quantiles of multi-response longitudinal data using a multivariate mixed-effects model. The multivariate asymmetric Laplace distribution is employed to construct the working likelihood of the model. Penalization priors on the regression parameters are incorporated into the working likelihood to conduct Bayesian high-dimensional inference. A Markov chain Monte Carlo algorithm is used to obtain the full conditional posterior distributions of all parameters and latent variables. Monte Carlo simulations are conducted to evaluate the finite-sample performance of the proposed joint quantile regression approach. Finally, we analyze a longitudinal medical dataset from the primary biliary cirrhosis sequential cohort study to illustrate a real application of the proposed modeling method.
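The asymmetric Laplace working likelihood mentioned in this abstract is the standard device that turns quantile estimation into a likelihood problem: maximizing it over the location parameter is equivalent to minimizing the quantile check loss. A minimal univariate sketch (the paper's model is multivariate with random effects, which is not reproduced here):

```python
import math

def check_loss(u, tau):
    """Quantile check loss rho_tau(u) = u * (tau - 1{u < 0})."""
    return u * (tau - (1.0 if u < 0 else 0.0))

def ald_loglik(y, mu, sigma, tau):
    """Log-density of the univariate asymmetric Laplace distribution,
    f(y) = tau*(1 - tau)/sigma * exp(-rho_tau((y - mu)/sigma)):
    maximizing it over mu is the same as minimizing the check loss."""
    return math.log(tau * (1 - tau) / sigma) - check_loss((y - mu) / sigma, tau)

def sample_quantile_by_loss(ys, tau):
    """The observation minimizing the summed check loss is an
    empirical tau-quantile of the sample."""
    return min(ys, key=lambda m: sum(check_loss(y - m, tau) for y in ys))

print(sample_quantile_by_loss([1.0, 2.0, 3.0, 4.0, 100.0], 0.5))  # 3.0
```

For τ = 0.5 the check loss is half the absolute error, so the minimizer is the sample median even in the presence of the outlier; other τ values tilt the loss and pick out other quantiles.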
2. Sample size determination for interval estimation of the prevalence of a sensitive attribute under non-randomized response models. The British Journal of Mathematical and Statistical Psychology 2024. PMID: 38409814. DOI: 10.1111/bmsp.12338.
Abstract
Surveys with sensitive questions should include enough participants to adequately address the research question. In this paper, sample size formulas and iterative algorithms are developed from the perspective of controlling the confidence interval width of the prevalence of a sensitive attribute under four non-randomized response models: the crosswise model, the parallel model, the Poisson item count technique model and the negative binomial item count technique model. In contrast to the conventional approach to sample size determination, our formulas and algorithms explicitly incorporate an assurance probability that the width of the confidence interval falls within the pre-specified range. The performance of the proposed methods is evaluated with respect to empirical coverage probability, empirical assurance probability and confidence width. Simulation results show that all formulas and algorithms are effective and hence recommended for practical applications. A real example illustrates the proposed methods.
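For intuition, the width-controlled sample size under the crosswise model (one of the four designs listed) can be sketched from the Wald variance of the crosswise estimator. This is only the plain precision-based version; the assurance-probability refinement that is the paper's actual contribution is not reproduced, and the inputs below are illustrative:

```python
import math
from statistics import NormalDist

def crosswise_sample_size(pi, p, width, conf=0.95):
    """Sample size making the Wald CI for the sensitive prevalence pi no
    wider than `width` under the crosswise model, where each respondent
    reports whether their sensitive status matches an innocuous statement
    of known prevalence p (p != 0.5), so P("match") = pi*p + (1-pi)*(1-p).
    Precision-based sketch only; no assurance-probability adjustment."""
    z = NormalDist().inv_cdf(1 - (1 - conf) / 2)
    lam = pi * p + (1 - pi) * (1 - p)
    unit_var = lam * (1 - lam) / (2 * p - 1) ** 2   # n * Var(pi_hat)
    return math.ceil(4 * z ** 2 * unit_var / width ** 2)

# Illustrative inputs: guessed prevalence 10%, innocuous p = 0.75,
# target total CI width 0.10
print(crosswise_sample_size(0.10, 0.75, 0.10))  # 1291
```

Note how the design inflates the required n relative to a direct question: the `(2p - 1)**2` divisor grows the variance as p approaches 0.5, the price paid for privacy protection.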
3. [Bibliometric and visual analysis of pneumoconiosis based on CiteSpace]. Zhonghua Lao Dong Wei Sheng Zhi Ye Bing Za Zhi (Chinese Journal of Industrial Hygiene and Occupational Diseases) 2024; 42:34-41. PMID: 38311947. DOI: 10.3760/cma.j.cn121094-20220630-000350.
Abstract
Objective: To conduct a bibliometric and visual analysis of the Chinese and English literature on pneumoconiosis using CiteSpace, in order to understand the research landscape, trends and hotspots of pneumoconiosis and provide a reference for further research. Methods: In August 2022, the CNKI (China National Knowledge Infrastructure) database and the Web of Science Core Collection database were used as data sources for literature retrieval. CiteSpace 5.8.R3c software was used to analyze author and institutional cooperation, keyword co-occurrence, keyword clustering and keyword emergence. Results: A total of 4 726 Chinese articles and 2 490 English articles on pneumoconiosis were included. The annual volume of Chinese publications showed a fluctuating downward trend, while that of English publications showed a fluctuating upward trend. The Institute of Labor Health and Occupational Disease of the Chinese Academy of Preventive Medical Sciences and the Institute of Occupational Health and Poisoning Control of the Chinese Center for Disease Control and Prevention had the highest publication volume (55 articles) in the Chinese institutional cooperation network; the National Institute for Occupational Safety and Health (NIOSH) in the United States had the highest publication volume (153 articles) in the English institutional cooperation network. The keyword co-occurrence, clustering and emergence analyses show that the Chinese literature focuses more on clinical research on pneumoconiosis, while the English literature focuses more on experimental research on the pathogenesis of pneumoconiosis. Conclusion: In pneumoconiosis research, experimental studies of pathogenesis and clinical studies are the main research hotspots.
4. A Bayesian multistage spatio-temporally dependent model for spatial clustering and variable selection. Stat Med 2023; 42:4794-4823. PMID: 37652405. DOI: 10.1002/sim.9889.
Abstract
In spatio-temporal epidemiological analysis, it is critically important to identify the significant covariates and estimate their associated time-varying effects on the health outcome. Due to the heterogeneity of spatio-temporal data, the subset of important covariates may vary across space, and the temporal trends of covariate effects may differ locally. However, many spatial models neglect these potential local variation patterns, leading to inappropriate inference. Thus, this article proposes a flexible Bayesian hierarchical model that simultaneously identifies spatial clusters of regression coefficients with common temporal trends, selects significant covariates for each spatial group by introducing binary entry parameters, and estimates spatio-temporally varying disease risks. A multistage strategy is employed to reduce the confounding bias caused by spatially structured random components. A simulation study demonstrates that the proposed method outperforms several alternatives under different assessment criteria. The methodology is motivated by two important case studies. The first concerns low birth weight incidence data in 159 counties of Georgia, USA, for the years 2007 to 2018, and investigates the time-varying effects of potential contributing covariates in different cluster regions. The second concerns circulatory disease risks across 323 local authorities in England over 10 years and explores the underlying spatial clusters and associated important risk factors.
5. Accelerating L1-penalized expectation maximization algorithm for latent variable selection in multidimensional two-parameter logistic models. PLoS One 2023; 18:e0279918. PMID: 36649269. PMCID: PMC9844851. DOI: 10.1371/journal.pone.0279918.
Abstract
One of the main concerns in multidimensional item response theory (MIRT) is to detect the relationship between observed items and latent traits, which is typically addressed by exploratory analysis and factor rotation techniques. Recently, an EM-based L1-penalized log-likelihood method (EML1) was proposed as a vital alternative to factor rotation. Based on the observed test response data, EML1 can yield a sparse and interpretable estimate of the loading matrix. However, EML1 suffers from a high computational burden. In this paper, we apply the coordinate descent algorithm to optimize a new weighted log-likelihood, and consequently propose an improved EML1 (IEML1) that is more than 30 times faster than EML1. The performance of IEML1 is evaluated through simulation studies, and an application to a real data set related to the Eysenck Personality Questionnaire demonstrates our methodology.
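The coordinate descent step behind methods like IEML1 relies on the closed-form soft-thresholding update for L1-penalized objectives. A generic sketch on a quadratic loss (the paper applies the same idea to a weighted log-likelihood inside EM, which is more involved):

```python
def soft_threshold(z, g):
    """S(z, g) = sign(z)*max(|z| - g, 0): the closed-form minimizer of
    0.5*(b - z)**2 + g*|b|, which drives L1 coordinate descent."""
    if z > g:
        return z - g
    if z < -g:
        return z + g
    return 0.0

def lasso_cd(X, y, lam, n_iter=100):
    """Cyclic coordinate descent for (1/2n)*||y - X b||^2 + lam*||b||_1.
    Plain-list implementation for illustration only."""
    n, p = len(X), len(X[0])
    b = [0.0] * p
    for _ in range(n_iter):
        for j in range(p):
            # partial residuals with coordinate j's contribution removed
            r = [y[i] - sum(X[i][k] * b[k] for k in range(p) if k != j)
                 for i in range(n)]
            zj = sum(X[i][j] * r[i] for i in range(n)) / n
            vj = sum(X[i][j] ** 2 for i in range(n)) / n
            b[j] = soft_threshold(zj, lam) / vj
    return b

# Orthogonal toy design: feature 0 is strongly active, feature 1 is noise
b = lasso_cd([[1, 0], [-1, 0], [0, 1], [0, -1]], [2.0, -2.0, 0.1, -0.1], 0.2)
print(b)  # [1.6, 0.0]
```

The noise coefficient is thresholded exactly to zero, which is the sparsity mechanism that makes the estimated loading matrix interpretable.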
6. Sample Size Determination for Interval Estimation of the Prevalence of a Sensitive Attribute Under Randomized Response Models. Psychometrika 2022; 87:1361-1389. PMID: 35306631. PMCID: PMC9636124. DOI: 10.1007/s11336-022-09854-w.
Abstract
Studies with sensitive questions should include a sufficient number of respondents to adequately address the research question. While studies with too few respondents may not yield significant conclusions, studies with an excess of respondents waste the investigators' budget. Determining the required number of participants is therefore an important step in survey sampling. In this article, we derive sample size formulas based on confidence interval estimation of prevalence for four randomized response models: Warner's randomized response model, the unrelated question model, the item count technique model and the cheater detection model. Specifically, our formulas control, with a given assurance probability, the width of the confidence interval within the planned range. Simulation results demonstrate that all formulas are accurate in terms of empirical coverage probabilities and empirical assurance probabilities. The formulas are illustrated with a real-life application concerning the use of unethical tactics in negotiation.
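As a concrete illustration of the first design, Warner's estimator and its Wald interval can be written in a few lines; the paper's contribution is choosing n so that such an interval is narrow enough with high assurance. The counts below are made up:

```python
from statistics import NormalDist

def warner_ci(n_yes, n, p, conf=0.95):
    """Point estimate and Wald CI for the sensitive prevalence pi under
    Warner's randomized response design: each respondent answers the
    sensitive statement with probability p and its negation otherwise,
    so P("yes") = p*pi + (1 - p)*(1 - pi).  Sketch only -- the paper
    chooses n so that intervals like this have a pre-specified width
    with a given assurance probability."""
    z = NormalDist().inv_cdf(1 - (1 - conf) / 2)
    lam = n_yes / n
    pi_hat = (lam - (1 - p)) / (2 * p - 1)
    se = (lam * (1 - lam) / n) ** 0.5 / abs(2 * p - 1)
    return pi_hat, (pi_hat - z * se, pi_hat + z * se)

# Made-up data: 360 "yes" answers from 1000 respondents, design p = 0.7
pi_hat, (lo, hi) = warner_ci(360, 1000, 0.7)
print(round(pi_hat, 3))  # 0.15
```

The interval is wider than a direct-question interval for the same n, which is exactly why randomized-response designs need their own sample size formulas.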
7. [Influence of age on advanced neoplasia detection in colorectal cancer screening in population at high risk]. Zhonghua Liu Xing Bing Xue Za Zhi (Chinese Journal of Epidemiology) 2022; 43:1282-1287. PMID: 35981991. DOI: 10.3760/cma.j.cn112338-20211220-01002.
Abstract
Objective: To compare the detection rate of advanced neoplasia and the number of people needing endoscopy in colorectal cancer screening started at different ages in a high-risk population. Methods: Based on the screening project for early diagnosis and treatment of colorectal cancer in Jiashan county, Zhejiang province, two rounds of colorectal cancer screening were conducted between January 2007 and December 2020. After excluding participants who were not at high risk or had incomplete information, 27 130 and 31 205 participants were enrolled in rounds one and two, respectively. Spline analysis based on a generalized additive model was used to describe the trend of the detection rate of advanced neoplasia with age. The detection rate and the number of people needing endoscopy were calculated for starting ages of 50, 45 and 40 years, and differences in detection rates were tested with the χ2 goodness-of-fit test. Results: A total of 21 077 (77.69%) participants in round one and 25 249 (80.91%) in round two received endoscopy, among whom 1 097 (detection rate: 52.05‰) and 1 151 (detection rate: 45.59‰), respectively, had advanced neoplasia (cancers and advanced adenomas). The detection rate increased significantly with age, and the rate in round one was significantly higher than that in round two (P<0.05). The overall detection rates of advanced neoplasia for starting ages of 50, 45 and 40 years were 61.11‰, 56.14‰ and 52.05‰ in round one, and 49.10‰, 46.75‰ and 45.59‰ in round two, respectively. The rates were significantly higher for a starting age of 50 years than for a starting age of 40 years in both rounds (P<0.05). The numbers of people needing endoscopy to detect one advanced neoplasia for starting ages of 50, 45 and 40 years were 17, 18 and 20 in round one, and 21, 22 and 22 in round two.
Conclusions: The detection rate of advanced neoplasia increased with age. Starting screening at a younger age might decrease the detection rate and increase the number of people needing endoscopy, although the differences were limited.
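The "number of people needing endoscopy" figures in this abstract are simply the rounded-up reciprocals of the per-mille detection rates, which is easy to verify:

```python
import math

def number_needed_to_endoscope(rate_permille):
    """Endoscopies needed to detect one advanced neoplasia: the
    rounded-up reciprocal of the detection rate (given per 1000)."""
    return math.ceil(1000 / rate_permille)

# Detection rates (per mille) reported for starting ages 50, 45, 40:
print([number_needed_to_endoscope(r) for r in [61.11, 56.14, 52.05]])  # [17, 18, 20]
print([number_needed_to_endoscope(r) for r in [49.10, 46.75, 45.59]])  # [21, 22, 22]
```

Both lists reproduce the round-one and round-two figures reported in the abstract.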
9. Exploring the risk factors of COVID-19 Delta variant in the USA based on Bayesian spatio-temporal analysis. Transbound Emerg Dis 2022; 69:e2731-e2744. PMID: 35751843. PMCID: PMC9349916. DOI: 10.1111/tbed.14623.
Abstract
The transmission of the coronavirus disease 2019 (COVID-19) epidemic is a global emergency, worsened by the genetic mutations of SARS-CoV-2. However, to date, few statistical studies have examined COVID-19 spread patterns in terms of variant cases. Hence, this paper aims to explore the risk factors associated with the Delta variant, the most contagious strain of COVID-19. The study collected state-level COVID-19 Delta variant cases in the United States during a 12-week period and included potential environmental, socioeconomic, and public prevention factors as independent variables. Instead of regarding the covariate effects as constant, this paper proposes a flexible Bayesian hierarchical model with spatio-temporally varying coefficients to account for data heterogeneity. The method enables us to cluster the states into distinctive groups based on the temporal trends of the coefficients and simultaneously identify significant risk factors for each cluster. The findings contribute novel insight into the dynamics of covariate effects on the COVID-19 Delta variant over space and time, which could help the government develop targeted prevention measures for vulnerable regions based on the selected risk factors.
10. Confidence interval construction for proportion difference from partially validated series with two fallible classifiers. J Biopharm Stat 2022; 32:871-896. PMID: 35536693. DOI: 10.1080/10543406.2022.2058527.
Abstract
This article investigates the construction of confidence intervals (CIs) for the difference of proportions from two independent partially validated series under the double-sampling scheme in which both classifiers are fallible. Several CIs based on the method of variance estimates recovery, which combines confidence limits from asymptotic, bootstrap, and Bayesian methods for two independent binomial proportions, are developed under two models. Simulation results show that, under the independence model, all CIs except the bootstrap percentile-t CI and the Bayesian credible interval with a uniform prior generally perform well, and under the dependence model all CIs generally perform well; these intervals are recommended. Two examples illustrate the methodologies.
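The variance estimates recovery (MOVER) idea the abstract refers to combines one-sample confidence limits into a two-sample interval. A sketch for the fully observed two-proportion case using Wilson limits (the paper's setting, with fallible classifiers and double sampling, is substantially more involved):

```python
from statistics import NormalDist

def wilson_ci(x, n, conf=0.95):
    """Wilson score interval for one binomial proportion."""
    z = NormalDist().inv_cdf(1 - (1 - conf) / 2)
    p = x / n
    denom = 1 + z * z / n
    center = (p + z * z / (2 * n)) / denom
    half = z * (p * (1 - p) / n + z * z / (4 * n * n)) ** 0.5 / denom
    return center - half, center + half

def mover_diff_ci(x1, n1, x2, n2, conf=0.95):
    """MOVER interval for p1 - p2: recover variance estimates from the
    one-sample Wilson limits and recombine them.  This shows the
    variance-estimates-recovery idea for fully observed data only."""
    p1, p2 = x1 / n1, x2 / n2
    l1, u1 = wilson_ci(x1, n1, conf)
    l2, u2 = wilson_ci(x2, n2, conf)
    d = p1 - p2
    lower = d - ((p1 - l1) ** 2 + (u2 - p2) ** 2) ** 0.5
    upper = d + ((u1 - p1) ** 2 + (p2 - l2) ** 2) ** 0.5
    return lower, upper

# Made-up counts: 45/100 vs 30/100
lo, hi = mover_diff_ci(45, 100, 30, 100)
print(round(lo, 3), round(hi, 3))
```

The asymmetry of the Wilson limits carries over into the difference interval, which is what makes MOVER intervals behave better than the plain Wald difference near the boundary.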
11. Latent variable selection in multidimensional item response theory models using the expectation model selection algorithm. The British Journal of Mathematical and Statistical Psychology 2022; 75:363-394. PMID: 34918834. DOI: 10.1111/bmsp.12261.
Abstract
The aim of latent variable selection in multidimensional item response theory (MIRT) models is to identify the latent traits probed by the items of a multidimensional test. In this paper, the expectation model selection (EMS) algorithm proposed by Jiang et al. (2015) is applied to minimize the Bayesian information criterion (BIC) for latent variable selection in MIRT models with a known number of latent traits. Under mild assumptions, we prove the numerical convergence of the EMS algorithm for model selection by minimizing the BIC of observed data in the presence of missing data. For the identification of MIRT models, we assume that the variances of all latent traits are unity and that each latent trait has an item related only to it. Under this identifiability assumption, the convergence of the EMS algorithm for latent variable selection in multidimensional two-parameter logistic (M2PL) models can be verified. We give an efficient implementation of the EMS algorithm for M2PL models. Simulation studies show that the EMS outperforms EM-based L1 regularization in terms of correctly selected latent variables and computation time. The EMS algorithm is applied to a real data set related to the Eysenck Personality Questionnaire.
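The selection criterion the EMS algorithm minimizes is the ordinary BIC. Stripped of the EM machinery and the search over sparse loading matrices, the comparison step looks like this (toy log-likelihood values, not from the paper):

```python
import math

def bic(loglik, n_params, n_obs):
    """Bayesian information criterion: -2*loglik + k*log(n)."""
    return -2.0 * loglik + n_params * math.log(n_obs)

def select_by_bic(candidates, n_obs):
    """Return the name of the candidate (name, loglik, n_params) with the
    smallest BIC -- the criterion the EMS algorithm minimizes while
    searching over sparse loading structures."""
    return min(candidates, key=lambda c: bic(c[1], c[2], n_obs))[0]

# Toy numbers: a dense loading matrix fits slightly better but pays a
# much larger complexity penalty than a sparse one
models = [("dense", -1200.0, 40), ("sparse", -1210.0, 12)]
print(select_by_bic(models, 1000))  # sparse
```

The log(n) penalty is what favours sparse loading structures even when the denser model has a somewhat higher likelihood.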
12. Nonparametric quantile regression with missing data using local estimating equations. J Nonparametr Stat 2022. DOI: 10.1080/10485252.2022.2026353.
13. Correction to: Bayesian joint inference for multivariate quantile regression model with L1/2 penalty. Comput Stat 2021. DOI: 10.1007/s00180-021-01168-2.
14. Likelihood-based methods for the zero-one-two inflated Poisson model with applications to biomedicine. J Stat Comput Simul 2021. DOI: 10.1080/00949655.2021.1970162.
15. [Association between sleep and prevalence of hypertension in elderly population]. Zhonghua Liu Xing Bing Xue Za Zhi (Chinese Journal of Epidemiology) 2021; 42:1188-1193. PMID: 34814529. DOI: 10.3760/cma.j.cn112338-20200512-00713.
Abstract
Objective: To explore the association between sleep duration, sleep quality and the prevalence of hypertension in the elderly aged 65 years and above. Methods: This study was conducted among community-dwelling elderly people in Yiwu, China from April to July 2019; participants were recruited through physical examinations in the hospital. Face-to-face interviews were performed to obtain basic information. Sleep duration and sleep quality were evaluated by the Pittsburgh Sleep Quality Index (PSQI). Associations between sleep duration, sleep quality and hypertension were evaluated by multivariate logistic regression analysis. Results: A total of 3 169 elderly persons aged ≥65 years were included in the study. The overall prevalence of hypertension was 50.8%. The elderly with very poor sleep quality and short sleep duration accounted for 22.4% and 28.5%, respectively. After adjusting for demographic characteristics, socioeconomic status, lifestyle and health status, the OR of hypertension for the elderly with very poor sleep quality was 1.42 (95%CI: 1.12-1.80) compared with those with very good sleep quality. Compared with the elderly with sleep duration of 6-7 h a night, the OR of hypertension for those with sleep duration <6 h was 1.37 (95%CI: 1.15-1.65). As sleep quality decreased, the risk of hypertension increased, and a U-shaped association was found between sleep duration and the risk of hypertension. Subgroup analyses showed that these associations existed in both men and women, but were only significant in the elderly aged <75 years. Conclusion: Poor sleep quality and short sleep duration were associated with the risk of hypertension in the elderly.
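Wald-type odds-ratio intervals like those reported here come from exponentiating a logistic coefficient and its standard-error band, which also yields a quick consistency check on reported values (a Wald OR should sit near the geometric mean of its limits):

```python
import math

def or_from_beta(beta, se, z=1.96):
    """Odds ratio with Wald CI from a logistic-regression coefficient."""
    return math.exp(beta), math.exp(beta - z * se), math.exp(beta + z * se)

def check_reported_or(or_hat, lo, hi, rel_tol=0.01):
    """A Wald-type OR should be close to the geometric mean of its
    confidence limits; handy as a plausibility check on reported values."""
    return math.isclose(or_hat, math.sqrt(lo * hi), rel_tol=rel_tol)

# The entry reports OR 1.42 (95% CI: 1.12-1.80) for very poor sleep quality
print(check_reported_or(1.42, 1.12, 1.80))  # True
```

Both ORs reported in this abstract pass the check, consistent with Wald intervals on the log-odds scale.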
16. A new multivariate t distribution with variant tail weights and its application in robust regression analysis. J Appl Stat 2021; 49:2629-2656. DOI: 10.1080/02664763.2021.1913106.
17. Analysis of the Spread of COVID-19 in the USA with a Spatio-Temporal Multivariate Time Series Model. Int J Environ Res Public Health 2021; 18:E774. PMID: 33477576. PMCID: PMC7831328. DOI: 10.3390/ijerph18020774.
Abstract
With the rapid spread of the pandemic caused by coronavirus disease 2019 (COVID-19), the virus has already led to considerable mortality and morbidity worldwide, as well as having a severe impact on economic development. In this article, we analyze the state-level correlation between COVID-19 risk and weather/climate factors in the USA. For this purpose, we consider a spatio-temporal multivariate time series model under a hierarchical framework, which is especially suitable for describing the virus transmission tendency across a geographic area over time. Briefly, our model decomposes the COVID-19 risk into: (i) an autoregressive component that describes the within-state COVID-19 risk effect; (ii) a spatio-temporal component that describes the across-state COVID-19 risk effect; (iii) an exogenous component that includes other factors (e.g., weather/climate) that could indicate future epidemic development risk; and (iv) an endemic component that captures a function of time and other predictors, mainly for individual states. Our results indicate that maximum temperature, minimum temperature, humidity, the percentage of cloud coverage, and the columnar density of total atmospheric ozone have a strong association with the COVID-19 pandemic in many states. In particular, maximum temperature, minimum temperature, and the columnar density of total atmospheric ozone demonstrate statistically significant associations with the tendency of COVID-19 spread in almost all states. Furthermore, our transmission tendency analysis suggests that community-level transmission has been relatively mitigated in the USA, and that the daily confirmed cases within a state are dominated by the earlier daily confirmed cases within that state rather than by other factors, which implies that states such as Texas, California, and Florida, with large numbers of confirmed cases, still need strategies like stay-at-home orders to prevent another outbreak.
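The decomposition described above can be made concrete with the one-step conditional mean of an endemic-epidemic-style count model: a within-state autoregressive term, a neighbour-weighted spatial term, and an endemic offset. This is only the illustrative shape of such a model; in the paper the exogenous weather/climate covariates enter these components, and the weight matrix and parameters below are made up:

```python
def next_mean(y_prev, W, lam, phi, nu):
    """One-step conditional mean for each unit i:
    lam[i]*y_prev[i]                (within-unit autoregression)
    + phi[i]*sum_j W[j][i]*y_prev[j] (neighbour-driven spatial term)
    + nu[i]                          (endemic offset)."""
    n = len(y_prev)
    return [lam[i] * y_prev[i]
            + phi[i] * sum(W[j][i] * y_prev[j] for j in range(n))
            + nu[i]
            for i in range(n)]

# Two "states", each the other's neighbour; made-up parameters
mu = next_mean([10.0, 20.0], [[0.0, 1.0], [1.0, 0.0]],
               [0.5, 0.5], [0.1, 0.1], [1.0, 1.0])
print(mu)  # [8.0, 12.0]
```

The relative sizes of the autoregressive and spatial terms are what the transmission-tendency analysis in the abstract compares: here the within-unit term dominates, mirroring the paper's finding that a state's cases are driven mainly by its own earlier cases.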
18. [Association between lifestyle-related factors and colorectal adenoma]. Zhonghua Liu Xing Bing Xue Za Zhi (Chinese Journal of Epidemiology) 2021; 41:1649-1654. PMID: 33297621. DOI: 10.3760/cma.j.cn112338-20200414-00572.
Abstract
Objective: To explore the association between lifestyle-related factors and colorectal adenoma. Methods: Based on the Screening Project of Early Diagnosis and Treatment of Colorectal Cancer in Jiashan county, Zhejiang province, from August 2012 to March 2018, questionnaire records and colonoscopic diagnoses were collected from participants with positive results in the primary screening stage. According to the colonoscopy findings, 11 232 controls without any colorectal disease and 3 895 cases with colorectal adenoma were included in the study. Multivariate logistic regression models were used to analyze the association between lifestyle-related factors and colorectal adenoma. Results: After adjusting for possible confounding factors, multivariate logistic regression analysis showed that smoking, alcohol drinking and obesity were positively related to the risk of colorectal adenoma, with ORs (95%CIs) of 1.38 (1.24-1.54), 1.37 (1.24-1.51) and 1.38 (1.20-1.59), respectively. However, regular aspirin intake was negatively related to the risk of colorectal adenoma (OR=0.65, 95%CI: 0.53-0.80). After stratification by sex and age, the associations between smoking, alcohol drinking and colorectal adenoma were statistically significant in males, and the association between regular aspirin intake and colorectal adenoma was statistically significant in older participants (aged 60 years and older). Conclusion: Smoking, alcohol drinking, regular aspirin intake and obesity were associated with colorectal adenoma.
19. [Prevalence and influencing factors of job burnout among hospital staff: a cross-sectional study]. Zhonghua Lao Dong Wei Sheng Zhi Ye Bing Za Zhi (Chinese Journal of Industrial Hygiene and Occupational Diseases) 2020; 38:594-597. PMID: 32892587. DOI: 10.3760/cma.j.cn121094-20200107-00019.
Abstract
Objective: To explore the influencing factors of job burnout among medical staff and provide a reference for formulating intervention measures. Methods: From November to December 2018, a cross-sectional questionnaire survey was conducted among medical staff in a general hospital. A total of 1 193 questionnaires were distributed and 939 were returned (recovery rate 78.7%), of which 891 were valid (effective rate 94.9%). The Social Support Rating Scale (SSRS) was used to evaluate social support, and the Maslach Burnout Inventory-General Survey (MBI-GS) was used to evaluate job burnout. Univariate analysis was performed with the chi-square test and Fisher's exact test, and the influencing factors of job burnout were explored with unordered multinomial logistic regression. Results: The average age was (27.47±4.22) years, and females accounted for 71.5% (637/891). The overall detection rate of job burnout was 46.6%. The scores of emotional exhaustion, cynicism and reduced sense of achievement were (10.10±3.75), (6.14±3.43) and (17.91±4.13), respectively. Multinomial logistic regression analysis showed that, compared with those without job burnout, respondents who were young, had worked for 1-3 years, slept ≤6 hours on average, or had poor social support were more likely to have mild job burnout (OR=0.91, 0.40, 2.25, 2.38; P<0.05); females and those with a high night-shift frequency in the past year, average sleep ≤6 h, or poor social support were more likely to have moderate to severe job burnout (OR=1.59, 2.94, 4.01, 2.40, 3.66; P<0.05). Conclusion: Corresponding measures should be taken to reduce job burnout and improve work efficiency.
20. Kernel density-based likelihood ratio tests for linear regression models. Stat Med 2020; 40:119-132. PMID: 33015853. DOI: 10.1002/sim.8765.
Abstract
In this article, we develop a so-called profile likelihood ratio test (PLRT) based on the estimated error density for the multiple linear regression model. Unlike the existing likelihood ratio test (LRT), our proposed PLRT does not require any specification of the error distribution. The asymptotic properties are developed and the Wilks phenomenon is studied. Simulation studies are conducted to examine the performance of the PLRT. It is observed that our proposed PLRT generally outperforms the existing LRT, the empirical likelihood ratio test and the weighted profile likelihood ratio test, in the sense that (i) its type I error rates are closer to the prespecified nominal level; (ii) it generally has higher power; (iii) it performs satisfactorily when moments of the error do not exist (e.g., the Cauchy distribution); and (iv) it has a higher probability of selecting the correct model in the multiple testing problem. A mammalian eye gene expression dataset and a concrete compressive strength dataset are analyzed to illustrate our methodologies.
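The key ingredient of the PLRT is replacing a parametric error density with a kernel estimate, so a likelihood ratio can be formed without specifying the error distribution. An illustrative version of that plug-in construction (the paper's actual statistic and its asymptotics are developed rigorously; this only conveys the idea):

```python
import math

def kde_loglik(residuals):
    """Log-likelihood of residuals under a Gaussian kernel density
    estimate of the error density (Silverman's rule bandwidth),
    evaluated at the residuals themselves."""
    n = len(residuals)
    mean = sum(residuals) / n
    sd = (sum((r - mean) ** 2 for r in residuals) / (n - 1)) ** 0.5
    h = 1.06 * sd * n ** (-0.2)
    ll = 0.0
    for r in residuals:
        dens = sum(math.exp(-0.5 * ((r - s) / h) ** 2) for s in residuals)
        ll += math.log(dens / (n * h * math.sqrt(2 * math.pi)))
    return ll

def plrt_stat(resid_null, resid_full):
    """Likelihood-ratio-type statistic 2*(l_full - l_null) built from
    kernel-estimated error densities: large values favour the full model."""
    return 2.0 * (kde_loglik(resid_full) - kde_loglik(resid_null))

full = [-0.10, 0.05, -0.02, 0.08, 0.00, -0.06, 0.03, 0.10]
null = [5 * r for r in full]          # the reduced fit leaves bigger residuals
print(plrt_stat(null, full) > 0)      # True
```

Because the density is estimated from the residuals themselves, the construction works even when the error law is heavy-tailed, which is the scenario behind property (iii) above.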
Collapse
|
21
|
Poisson item count techniques with noncompliance. Stat Med 2020; 39:4480-4498. [PMID: 32909318 DOI: 10.1002/sim.8736] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/20/2019] [Revised: 07/22/2020] [Accepted: 07/25/2020] [Indexed: 11/11/2022]
Abstract
The Poisson item count technique (PICT) is a survey method recently developed to elicit respondents' truthful answers to sensitive questions. It simplifies the well-known item count technique (ICT) by replacing a list of independent innocuous questions in known proportions with a single innocuous counting question. However, both ICT and PICT rely on the strong "no design effect" assumption (ie, respondents give the same answers to the innocuous items regardless of the absence or presence of the sensitive item in the list) and the "no liar" assumption (ie, all respondents give truthful answers). To address the problem of self-protective behavior and provide more reliable analyses, we introduce a noncompliance parameter into the existing PICT. Based on the survey design of the PICT, we consider more practical model assumptions and develop the corresponding statistical inferences. Simulation studies are conducted to evaluate the performance of our method. Finally, a real example of automobile insurance fraud is used to demonstrate our method.
Collapse
|
22
|
Variable selection for ultra-high dimensional quantile regression with missing data and measurement error. Stat Methods Med Res 2020; 30:129-150. [PMID: 32746735 DOI: 10.1177/0962280220941533] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/15/2022]
Abstract
In this paper, we consider variable selection for ultra-high dimensional quantile regression model with missing data and measurement errors in covariates. Specifically, we correct the bias in the loss function caused by measurement error by applying the orthogonal quantile regression approach and remove the bias caused by missing data using the inverse probability weighting. A nonconvex Atan penalized estimation method is proposed for simultaneous variable selection and estimation. With the proper choice of the regularization parameter and under some relaxed conditions, we show that the proposed estimate enjoys the oracle properties. The choice of smoothing parameters is also discussed. The performance of the proposed variable selection procedure is assessed by Monte Carlo simulation studies. We further demonstrate the proposed procedure with a breast cancer data set.
Collapse
|
23
|
General composite quantile regression: Theory and methods. COMMUN STAT-THEOR M 2020. [DOI: 10.1080/03610926.2019.1568493] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/27/2022]
|
24
|
Variable Screening for Near Infrared (NIR) Spectroscopy Data Based on Ridge Partial Least Squares Regression. Comb Chem High Throughput Screen 2020; 23:740-756. [PMID: 32342803 DOI: 10.2174/1386207323666200428114823] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/22/2019] [Revised: 01/17/2020] [Accepted: 02/29/2020] [Indexed: 11/22/2022]
Abstract
AIM AND OBJECTIVE Near Infrared (NIR) spectroscopy data are characterized by a few dozen to many thousands of samples and highly correlated variables. Quantitative analysis of such data usually requires combining analytical methods with variable selection or screening methods. Commonly used variable screening methods fail to recover the true model when (i) some of the variables are highly correlated and (ii) the sample size is smaller than the number of relevant variables. In these cases, Partial Least Squares (PLS) regression-based approaches can be useful alternatives. MATERIALS AND METHODS In this research, a fast variable screening strategy, namely preconditioned screening for ridge partial least squares regression (PSRPLS), is proposed for modelling NIR spectroscopy data with high-dimensional and highly correlated covariates. Under rather mild assumptions, we prove that, via the Puffer transformation, the proposed approach transforms the problem of variable screening with highly correlated predictor variables into one with weakly correlated covariates at little extra computational cost. RESULTS We show that the proposed method leads to theoretically consistent model selection. Four simulation studies and two real examples are analyzed to illustrate the effectiveness of the approach. CONCLUSION By introducing the Puffer transformation, the high-correlation problem can be mitigated by the PSRPLS procedure we construct. By employing RPLS regression, the approach becomes simpler and more computationally efficient in coping with situations where the model size is larger than the sample size, while maintaining high prediction precision.
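The Puffer transformation invoked above has a compact linear-algebra form. The sketch below illustrates only that generic preconditioning step (not the full PSRPLS procedure) on hypothetical simulated data: with the thin SVD X = UDVᵀ, premultiplying by F = UD⁻¹Uᵀ turns X into UVᵀ, whose Gram matrix is exactly the identity when n ≥ p and X has full rank.

```python
import numpy as np

rng = np.random.default_rng(1)
n, p = 100, 10

# Highly correlated design: all columns share one latent factor plus small noise
z = rng.standard_normal((n, 1))
X = z + 0.1 * rng.standard_normal((n, p))

# Puffer transformation: with thin SVD X = U D V^T, set F = U D^{-1} U^T
U, d, Vt = np.linalg.svd(X, full_matrices=False)
F = U @ np.diag(1.0 / d) @ U.T
X_tilde = F @ X  # algebraically equal to U V^T

# Columns are now exactly decorrelated: X_tilde^T X_tilde = I (n >= p, full rank)
gram = X_tilde.T @ X_tilde
print(np.max(np.abs(gram - np.eye(p))))
```

In a regression setting the same F is applied to the response as well (ỹ = Fy), so screening can then proceed as if the covariates were only weakly correlated.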
Collapse
|
25
|
A novel MM algorithm and the mode-sharing method in Bayesian computation for the analysis of general incomplete categorical data. Comput Stat Data Anal 2019. [DOI: 10.1016/j.csda.2019.04.012] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.2] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/26/2022]
|
26
|
Comparison of disease prevalence in two populations under double-sampling scheme with two fallible classifiers. J Appl Stat 2019; 47:1375-1401. [DOI: 10.1080/02664763.2019.1679727] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/25/2022]
|
27
|
Variable selection in competing risks models based on quantile regression. Stat Med 2019; 38:4670-4685. [PMID: 31359443 DOI: 10.1002/sim.8326] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.2] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/02/2018] [Revised: 06/20/2019] [Accepted: 06/21/2019] [Indexed: 01/09/2023]
Abstract
The proportional subdistribution hazard regression model has been widely used by clinical researchers for analyzing competing risks data. It is well known that quantile regression provides a more comprehensive alternative to model how covariates influence not only the location but also the entire conditional distribution. In this paper, we develop variable selection procedures based on penalized estimating equations for competing risks quantile regression. Asymptotic properties of the proposed estimators including consistency and oracle properties are established. Monte Carlo simulation studies are conducted, confirming that the proposed methods are efficient. A bone marrow transplant data set is analyzed to demonstrate our methodologies.
Collapse
|
28
|
Abstract
This paper considers the quantile regression model with both individual fixed effects and time period effects for general spatial panel data. Fixed effects quantile regression estimators based on the instrumental variable method are proposed, and their asymptotic properties are developed. Simulations are conducted to study the performance of the proposed method, and we illustrate our methodologies using a cigarette demand data set.
Collapse
|
29
|
Construction of confidence intervals for the risk differences in stratified design with correlated bilateral data. J Biopharm Stat 2019; 29:446-467. [PMID: 30933654 DOI: 10.1080/10543406.2019.1579222] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.4] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/27/2022]
Abstract
A stratified study is often designed to adjust for a confounding effect or for the effect of different centers/groups in two treatments or diagnostic tests, and the risk difference is one of the most frequently used indices in comparing the efficiency of two treatments or diagnostic tests. This article presents five simultaneous confidence intervals (CIs) for risk differences in stratified bilateral designs accounting for intraclass correlation, and develops seven CIs for the common risk difference under the homogeneity assumption. The performance of the CIs is evaluated with respect to empirical coverage probability, empirical confidence width and the ratio of mesial noncoverage probability to noncoverage probability under various scenarios. Empirical results show that the Wald simultaneous CI, the Haldane simultaneous CI, the score simultaneous CI based on the Bonferroni method and the simultaneous CI based on the bootstrap-resampling method perform satisfactorily and are hence recommended for applications; the CI based on the weighted-least-squares (WLS) estimator, the CIs based on the Mantel-Haenszel estimator, the CI based on the Cochran statistic and the CI based on the score statistic for the common risk difference behave well even under small sample sizes. A real data example is used to demonstrate the proposed methodologies.
Collapse
|
30
|
Testing high-dimensional normality based on classical skewness and Kurtosis with a possible small sample size. COMMUN STAT-THEOR M 2018. [DOI: 10.1080/03610926.2018.1520882] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/27/2022]
|
31
|
Efficient Robust Estimation for Linear Models with Missing Response at Random. Scand Stat Theory Appl 2018; 45:366-381. [PMID: 30078929 DOI: 10.1111/sjos.12296] [Citation(s) in RCA: 6] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/30/2022]
Abstract
Coefficient estimation in linear regression models with missing data is routinely done in the mean regression framework. However, mean regression theory breaks down if the error variance is infinite. In addition, correct specification of the likelihood function for existing imputation approaches is often challenging in practice, especially for skewed data. In this paper, we develop a novel composite quantile regression and a weighted quantile average estimation procedure for parameter estimation in linear regression models when some responses are missing at random. Instead of imputing a missing response by randomly drawing from its conditional distribution, we propose to impute both missing and observed responses by their estimated conditional quantiles given the observed data, and to use parametrically estimated propensity scores to weight the check functions that define a regression parameter. Both estimation procedures are resistant to heavy-tailed errors or outliers in the response and achieve good robustness and efficiency. Moreover, we propose adaptive penalization methods to simultaneously select significant variables and estimate unknown parameters. Asymptotic properties of the proposed estimators are carefully investigated. An efficient algorithm is developed for fast implementation of the proposed methodologies. We also discuss a model selection criterion, based on an ICQ-type statistic, for selecting the penalty parameters. The performance of the proposed methods is illustrated via simulated and real data sets.
Collapse
|
32
|
A new multivariate zero-adjusted Poisson model with applications to biomedicine. Biom J 2018; 61:1340-1370. [PMID: 29799138 DOI: 10.1002/bimj.201700144] [Citation(s) in RCA: 4] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/17/2017] [Revised: 12/01/2017] [Accepted: 02/27/2018] [Indexed: 11/12/2022]
Abstract
Although advances have recently been made in modeling multivariate count data, existing models still have several limitations: (i) the multivariate Poisson log-normal model (Aitchison and Ho, 1989) cannot fit multivariate count data with excess zero-vectors; (ii) the multivariate zero-inflated Poisson (ZIP) distribution (Li et al., 1999) cannot model zero-truncated/deflated count data and is difficult to apply in high-dimensional cases; (iii) the Type I multivariate zero-adjusted Poisson (ZAP) distribution (Tian et al., 2017) can only model multivariate count data with a special correlation structure in which the correlations between components are all positive or all negative. In this paper, we first introduce a new multivariate ZAP distribution, based on a multivariate Poisson distribution, which allows a more flexible dependency structure between components; that is, some of the correlation coefficients can be positive while others are negative. We then derive its important distributional properties and provide efficient statistical inference methods for the multivariate ZAP model with or without covariates. Two real data examples in biomedicine are used to illustrate the proposed methods.
Collapse
|
33
|
Test procedure and sample size determination for a proportion study using a double-sampling scheme with two fallible classifiers. Stat Methods Med Res 2017; 28:1019-1043. [PMID: 29233082 DOI: 10.1177/0962280217744239] [Citation(s) in RCA: 4] [Impact Index Per Article: 0.6] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/16/2022]
Abstract
Double sampling is usually applied to collect necessary information in situations where an infallible classifier is available for validating a subset of the sample that has already been classified by a fallible classifier. Inference procedures have previously been developed based on the partially validated data obtained by the double-sampling process. In practice, however, such an infallible classifier or gold standard may not exist. In this article, we consider the case in which both classifiers are fallible and propose asymptotic and approximate unconditional test procedures based on six test statistics for a population proportion, together with five approximate sample size formulas based on the recommended test procedures under two models. Our results suggest that both asymptotic and approximate unconditional procedures based on the score statistic perform satisfactorily for small to large sample sizes and are highly recommended. When the sample size is moderate or large, asymptotic procedures based on the Wald statistic with the variance estimated under the null hypothesis, the likelihood ratio statistic, and the log- and logit-transformation statistics under both models generally perform well and are hence recommended. The approximate unconditional procedures based on the log-transformation statistic under Model I, and on the Wald statistic with the variance estimated under the null hypothesis and the log- and logit-transformation statistics under Model II, are recommended when the sample size is small. In general, sample size formulae based on the Wald statistic with the variance estimated under the null hypothesis, the likelihood ratio statistic and the score statistic are recommended in practical applications. The applicability of the proposed methods is illustrated by a real-data example.
Collapse
|
34
|
|
35
|
A profile likelihood approach for longitudinal data analysis. Biometrics 2017; 74:220-228. [DOI: 10.1111/biom.12712] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.4] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/01/2016] [Revised: 04/01/2017] [Accepted: 04/01/2017] [Indexed: 11/30/2022]
|
36
|
Poisson–Poisson item count techniques for surveys with sensitive discrete quantitative data. Stat Pap (Berl) 2017. [DOI: 10.1007/s00362-017-0895-7] [Citation(s) in RCA: 4] [Impact Index Per Article: 0.6] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/25/2022]
|
37
|
Confidence intervals for an ordinal effect size measure based on partially validated series. Comput Stat Data Anal 2016. [DOI: 10.1016/j.csda.2016.05.006] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.1] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/30/2022]
|
38
|
Confidence intervals for proportion difference from two independent partially validated series. Stat Methods Med Res 2016; 25:2250-2273. [DOI: 10.1177/0962280213519718] [Citation(s) in RCA: 4] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/15/2022]
Abstract
Partially validated series are common when a gold-standard test is too expensive to be applied to all subjects, and a fallible device is accordingly used to measure the presence of a characteristic of interest. In this article, confidence interval construction for the proportion difference between two independent partially validated series is studied. Ten confidence intervals based on the method of variance estimates recovery (MOVER) are proposed, each using the confidence limits for the two independent binomial proportions obtained by the asymptotic, logit-transformation, Agresti–Coull and Bayesian methods. The performance of the proposed confidence intervals and of three likelihood-based intervals available in the literature is compared with respect to empirical coverage probability, confidence width and the ratio of mesial non-coverage to non-coverage probability. Our empirical results show that (1) all confidence intervals exhibit good performance in large samples; (2) confidence intervals based on MOVER, combining the confidence limits for binomial proportions obtained by the Wilson, Agresti–Coull, logit-transformation and Bayesian (with three priors) methods, perform satisfactorily from small to large samples and hence can be recommended for practical applications. Two real data sets are analysed to illustrate the proposed methods.
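As an illustration of the MOVER idea the abstract builds on, the sketch below combines Wilson limits for two independent binomial proportions into an interval for their difference. This is the standard fully observed two-sample case; the paper's intervals additionally handle partially validated series. The function names and example counts are hypothetical.

```python
import math

def wilson_ci(x, n, z=1.96):
    """Wilson score interval for a binomial proportion x/n."""
    p = x / n
    centre = (p + z * z / (2 * n)) / (1 + z * z / n)
    half = z * math.sqrt(p * (1 - p) / n + z * z / (4 * n * n)) / (1 + z * z / n)
    return centre - half, centre + half

def mover_diff_ci(x1, n1, x2, n2, z=1.96):
    """MOVER interval for p1 - p2, recovered from the single-proportion limits."""
    p1, p2 = x1 / n1, x2 / n2
    l1, u1 = wilson_ci(x1, n1, z)
    l2, u2 = wilson_ci(x2, n2, z)
    d = p1 - p2
    lower = d - math.sqrt((p1 - l1) ** 2 + (u2 - p2) ** 2)
    upper = d + math.sqrt((u1 - p1) ** 2 + (p2 - l2) ** 2)
    return lower, upper

print(mover_diff_ci(45, 100, 30, 100))
```

The same recovery step works with any single-proportion limits (Agresti–Coull, logit-transformation, Bayesian), which is how the ten variants in the abstract arise.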
Collapse
|
39
|
Confidence interval construction for the Youden index based on partially validated series. Comput Stat Data Anal 2015. [DOI: 10.1016/j.csda.2014.11.013] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.2] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/25/2022]
|
40
|
A Localized Implementation of the Iterative Proportional Scaling Procedure for Gaussian Graphical Models. J Comput Graph Stat 2015. [DOI: 10.1080/10618600.2014.900499] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.2] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/25/2022]
|
41
|
|
42
|
Confidence interval construction for the difference between two correlated proportions with missing observations. J Biopharm Stat 2015; 26:323-38. [DOI: 10.1080/10543406.2014.1000544] [Citation(s) in RCA: 6] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/24/2022]
|
43
|
Testing non-inferiority of a new treatment in three-arm clinical trials with binary endpoints. BMC Med Res Methodol 2014; 14:134. [PMID: 25524326 PMCID: PMC4277823 DOI: 10.1186/1471-2288-14-134] [Citation(s) in RCA: 7] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/04/2014] [Accepted: 12/12/2014] [Indexed: 12/24/2022] Open
Abstract
BACKGROUND A two-arm non-inferiority trial without a placebo is usually adopted to demonstrate that an experimental treatment is not worse than a reference treatment by a small pre-specified non-inferiority margin due to ethical concerns. Selection of the non-inferiority margin and establishment of assay sensitivity are two major issues in the design, analysis and interpretation for two-arm non-inferiority trials. Alternatively, a three-arm non-inferiority clinical trial including a placebo is usually conducted to assess the assay sensitivity and internal validity of a trial. Recently, some large-sample approaches have been developed to assess the non-inferiority of a new treatment based on the three-arm trial design. However, these methods behave badly with small sample sizes in the three arms. This manuscript aims to develop some reliable small-sample methods to test three-arm non-inferiority. METHODS Saddlepoint approximation, exact and approximate unconditional, and bootstrap-resampling methods are developed to calculate p-values of the Wald-type, score and likelihood ratio tests. Simulation studies are conducted to evaluate their performance in terms of type I error rate and power. RESULTS Our empirical results show that the saddlepoint approximation method generally behaves better than the asymptotic method based on the Wald-type test statistic. For small sample sizes, approximate unconditional and bootstrap-resampling methods based on the score test statistic perform better in the sense that their corresponding type I error rates are generally closer to the prespecified nominal level than those of other test procedures. CONCLUSIONS Both approximate unconditional and bootstrap-resampling test procedures based on the score test statistic are generally recommended for three-arm non-inferiority trials with binary outcomes.
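As a hedged illustration of the bootstrap-resampling idea, the sketch below computes a bootstrap p-value for a two-sample score test of equal proportions by resampling both arms under the pooled null estimate. This is only a simplified two-arm analogue: the paper's procedures involve three arms, a non-inferiority margin and restricted null estimation, none of which are reproduced here.

```python
import numpy as np

def score_stat(x1, n1, x2, n2):
    """Two-sample score (pooled-variance) z statistic for H0: p1 = p2."""
    p1, p2 = x1 / n1, x2 / n2
    pbar = (x1 + x2) / (n1 + n2)
    se = np.sqrt(pbar * (1 - pbar) * (1 / n1 + 1 / n2))
    return (p1 - p2) / se

def bootstrap_pvalue(x1, n1, x2, n2, B=5000, seed=0):
    """Bootstrap p-value: resample both arms from the pooled H0 estimate."""
    rng = np.random.default_rng(seed)
    t_obs = score_stat(x1, n1, x2, n2)
    pbar = (x1 + x2) / (n1 + n2)
    t_star = score_stat(rng.binomial(n1, pbar, B), n1,
                        rng.binomial(n2, pbar, B), n2)
    return float(np.mean(np.abs(t_star) >= abs(t_obs)))

p = bootstrap_pvalue(12, 40, 20, 40)
```

Resampling under the null is what keeps the type I error close to the nominal level in small samples, where the asymptotic reference distribution is unreliable.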
Collapse
|
44
|
Poisson and negative binomial item count techniques for surveys with sensitive question. Stat Methods Med Res 2014; 26:931-947. [DOI: 10.1177/0962280214563345] [Citation(s) in RCA: 10] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/15/2022]
Abstract
Although the item count technique is useful in surveys with sensitive questions, the privacy of respondents who possess the sensitive characteristic of interest may not be well protected due to a defect in its original design. In this article, we propose two new survey designs (namely the Poisson item count technique and the negative binomial item count technique) which replace the several independent Bernoulli random variables required by the original item count technique with a single Poisson or negative binomial random variable, respectively. The proposed models not only provide closed-form variance estimates and confidence intervals within [0, 1] for the sensitive proportion, but also simplify the survey design of the original item count technique. Most importantly, the new designs do not leak respondents' privacy. Empirical results show that the proposed techniques perform satisfactorily in the sense that they yield accurate parameter estimates and confidence intervals.
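A minimal simulation sketch of the Poisson item count design, under an assumed parameterization not taken from the paper: the control group reports an innocuous count Y ~ Poisson(λ), the treatment group reports Y plus a Bernoulli(π) sensitive-status indicator, and the sensitive proportion is estimated by the difference of sample means.

```python
import numpy as np

rng = np.random.default_rng(0)
lam, pi_true, n = 3.0, 0.30, 20000  # hypothetical design parameters

# Control group: innocuous Poisson count only
y_control = rng.poisson(lam, n)
# Treatment group: innocuous count plus the (hidden) sensitive indicator
y_treat = rng.poisson(lam, n) + rng.binomial(1, pi_true, n)

# Method-of-moments estimate of the sensitive proportion
pi_hat = y_treat.mean() - y_control.mean()
# Closed-form variance estimate and a 95% Wald interval clipped to [0, 1]
se = np.sqrt(y_treat.var(ddof=1) / n + y_control.var(ddof=1) / n)
lo, hi = max(0.0, pi_hat - 1.96 * se), min(1.0, pi_hat + 1.96 * se)
print(round(pi_hat, 3), round(lo, 3), round(hi, 3))
```

Because each respondent reports only a single aggregated count, an individual answer never reveals the sensitive status, which is the privacy property claimed above.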
Collapse
|
45
|
Confidence-interval construction for rate ratio in matched-pair studies with incomplete data. J Biopharm Stat 2014; 24:546-68. [PMID: 24697611 DOI: 10.1080/10543406.2014.888438] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/25/2022]
Abstract
Matched-pair designs are often used in clinical trials to increase the efficiency of establishing equivalence between two treatments with binary outcomes. In this article, we consider such a design based on the rate ratio in the presence of incomplete data; the rate ratio is one of the most frequently used indices for comparing the efficiency of two treatments in clinical trials. We propose 10 confidence-interval estimators for the rate ratio in incomplete matched-pair designs. A hybrid method that recovers the variance estimates required for the rate ratio from the confidence limits for single proportions is proposed; notably, confidence intervals based on this hybrid method have closed-form solutions. The performance of the proposed confidence intervals is evaluated with respect to exact coverage probability, expected confidence interval width, and distal and mesial noncoverage probabilities. The results show that the hybrid Agresti-Coull confidence interval based on Fieller's theorem performs satisfactorily for small to moderate sample sizes. Two real examples from clinical trials are used to illustrate the proposed confidence intervals.
Collapse
|
46
|
Testing homogeneity of proportion ratios for stratified correlated bilateral data in two-arm randomized clinical trials. Stat Med 2014; 33:4370-86. [DOI: 10.1002/sim.6244] [Citation(s) in RCA: 9] [Impact Index Per Article: 0.9] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/05/2013] [Revised: 04/23/2014] [Accepted: 05/26/2014] [Indexed: 11/06/2022]
|
47
|
Decellularized kidney scaffold-mediated renal regeneration. Biomaterials 2014; 35:6822-8. [PMID: 24855960 DOI: 10.1016/j.biomaterials.2014.04.074] [Citation(s) in RCA: 77] [Impact Index Per Article: 7.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/08/2014] [Accepted: 04/22/2014] [Indexed: 01/04/2023]
Abstract
Renal regeneration approaches offer great potential for the treatment of chronic kidney disease, but their availability remains limited by the clinical challenges they pose. In the present study, we used continuous detergent perfusion to generate decellularized (DC) rat kidney scaffolds. The scaffolds retained intact vascular trees and overall architecture, along with significant concentrations of various cytokines, but lost all cellular components. To evaluate its potential in renal function recovery, DC scaffold tissue was grafted onto partially nephrectomized rat kidneys. An increase of renal size was found, and regenerated renal parenchyma cells were observed in the repair area containing the grafted scaffold. In addition, the number of nestin-positive renal progenitor cells was markedly higher in scaffold-grafted kidneys compared to controls. Moreover, radionuclide scan analysis showed significant recovery of renal functions at 6 weeks post-implantation. Our results provide further evidence to show that DC kidney scaffolds could be used to promote renal recovery in the treatment of chronic kidney disease.
Collapse
|
48
|
Abstract
Collecting representative data on sensitive issues has long been problematic and challenging in public health prevalence investigations (e.g. non-suicidal self-injury), medical research (e.g. drug habits), social issue studies (e.g. history of child abuse), and their interdisciplinary studies (e.g. premarital sexual intercourse). Alternative data collection techniques that can validly be adopted to study sensitive questions have therefore become more important and necessary. As an alternative to the famous Warner randomized response model, the non-randomized response triangular model has recently been developed to encourage participants to provide truthful responses in surveys involving sensitive questions. Unfortunately, both randomized and non-randomized response models can underestimate the proportion of subjects with the sensitive characteristic, as some respondents do not believe that these techniques protect their anonymity. Some authors have accordingly hypothesized that lack of trust and noncompliance should be highest among those who have the most to lose and the least use for the anonymity these techniques provide. Some researchers have noted the existence of noncompliance and proposed new models to measure it in order to obtain reliable information. However, all proposed methods were based on randomized response models, which require randomizing devices, restrict the survey to face-to-face interviews and lack reproducibility. Taking noncompliance into consideration, we introduce new non-randomized response techniques in which no covariate is required. Asymptotic properties of the proposed estimates of the sensitive characteristic and of the noncompliance probabilities are developed. Our proposed techniques are empirically shown to yield accurate estimates of both the sensitive and the noncompliance probabilities. A real example about premarital sex among university students is used to demonstrate our methodologies.
Collapse
|
49
|
Flexible non-randomized response models for survey with sensitive question. Stat Med 2014; 33:918-29. [DOI: 10.1002/sim.5999] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/14/2012] [Revised: 07/15/2013] [Accepted: 09/03/2013] [Indexed: 11/11/2022]
|
50
|
|