1. Jones RD, Peng C, Odom L, Moody H, Eswaran H. Use of Cellular-Enabled Glucometer for Diabetes Management in High-Risk Pregnancy. Telemedicine Reports 2023;4:307-316. [PMID: 37908627] [PMCID: PMC10615046] [DOI: 10.1089/tmr.2023.0033]
Abstract
Background: Type 1 and type 2 diabetes during pregnancy require intensive glucose monitoring to ensure optimal health outcomes for mothers and infants. Standard practice has patients monitoring their glucose four to six times a day using a standard glucometer and a paper diary. Remote patient monitoring (RPM) offers an alternative method for diabetes management. This study aimed to measure patient satisfaction with, and the feasibility of, a cellular-enabled RPM device for glucose management in pregnancies complicated by type 1 or type 2 diabetes. Methods: In a mixed-methods pilot study, 59 pregnant women with type 1 or type 2 diabetes were given a cellular-enabled iGlucose glucometer. Participants completed a pre-survey, used the device for 30 days, and then completed a post-survey and a semi-structured interview. Results: Participants were divided into two groups based on duration of device use: high-use (>50 days) and low-use (≤50 days). A significant difference (p < 0.0001) in Appraisal of Diabetes scores between the pre- and post-survey was seen in both groups, indicating that use of the iGlucose glucometer significantly improved participants' appraisal of their diabetes. There was a significant pre-post difference (p = 0.0409) in General Life Satisfaction in the high-use group, indicating that the iGlucose glucometer significantly improved participants' life satisfaction when used for an extended period. Participants in all groups scored high on system usability and reported positive associations with iGlucose use. Conclusion: Cellular-enabled RPM glucometers are a valuable tool for the management of type 1 and type 2 diabetes mellitus during pregnancy.
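The results above describe paired pre/post comparisons of participant scores. As a rough illustration only, with made-up numbers and not the study's data or the authors' analysis code, such a paired comparison could be run as follows:

```python
# Illustrative only: a paired pre/post comparison on made-up Appraisal of Diabetes
# scores. These are not the study's data, and this is not the authors' analysis code.
from scipy import stats

pre_scores = [25, 28, 22, 30, 27, 24, 26, 29]    # hypothetical pre-survey scores
post_scores = [20, 24, 19, 26, 22, 21, 23, 25]   # hypothetical post-survey scores (paired)

t_stat, p_value = stats.ttest_rel(pre_scores, post_scores)
print(f"paired t = {t_stat:.2f}, p = {p_value:.4f}")
```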
Affiliation(s)
- Rebecca D. Jones, University of Arkansas for Medical Sciences, Little Rock, Arkansas, USA
- Cheng Peng, University of Arkansas for Medical Sciences, Little Rock, Arkansas, USA
- Lettie Odom, University of Arkansas for Medical Sciences, Little Rock, Arkansas, USA
- Heather Moody, University of Arkansas for Medical Sciences, Little Rock, Arkansas, USA
- Hari Eswaran, University of Arkansas for Medical Sciences, Little Rock, Arkansas, USA
2. Chen Y, Li P, Rao JNK, Wu C. Pseudo empirical likelihood inference for nonprobability survey samples. Can J Stat 2022. [DOI: 10.1002/cjs.11708]
Affiliation(s)
- Yilin Chen, Child Health Evaluative Sciences, The Hospital for Sick Children, Toronto, Ontario, Canada
- Pengfei Li, Department of Statistics and Actuarial Science, University of Waterloo, Waterloo, Ontario, Canada
- J. N. K. Rao, School of Mathematics and Statistics, Carleton University, Ottawa, Ontario, Canada
- Changbao Wu, Department of Statistics and Actuarial Science, University of Waterloo, Waterloo, Ontario, Canada
3. Efficient edit rule implication for nominal and ordinal data. Inf Sci (N Y) 2022. [DOI: 10.1016/j.ins.2021.12.114]
4. D'Alberto R, Raggi M. How much reliable are the integrated ‘live’ data? A validation strategy proposal for the non-parametric micro statistical matching. J Appl Stat 2021;48:322-348. [DOI: 10.1080/02664763.2020.1724272]
Affiliation(s)
- Riccardo D'Alberto, Department of Statistical Sciences ‘P. Fortunati’, University of Bologna, Bologna (BO), Italy
- Meri Raggi, Department of Statistical Sciences ‘P. Fortunati’, University of Bologna, Bologna (BO), Italy
5. Nuño MM, Gillen DL. On estimation in the nested case-control design under nonproportional hazards. Scand Stat Theory Appl 2021. [DOI: 10.1111/sjos.12510]
Affiliation(s)
- Michelle M. Nuño, Department of Preventive Medicine, University of Southern California, Los Angeles, California, USA; Children's Oncology Group, Monrovia, California, USA
- Daniel L. Gillen, Department of Statistics, University of California, Irvine, Irvine, California, USA
6. Kim J, Tam S. Data Integration by Combining Big Data and Survey Sample Data for Finite Population Inference. Int Stat Rev 2020. [DOI: 10.1111/insr.12434]
Affiliation(s)
- Jae-Kwang Kim, Department of Statistics, Iowa State University, Ames, Iowa, USA
- Siu-Ming Tam, Methodology Division, Australian Bureau of Statistics, Canberra, Australia; School of Mathematics and Statistics, University of Wollongong, Wollongong, New South Wales, Australia
7. Nuño MM, Gillen DL. Robust estimation in the nested case-control design under a misspecified covariate functional form. Stat Med 2020;40:299-311. [PMID: 33105514] [DOI: 10.1002/sim.8775]
Abstract
The Cox proportional hazards model is typically used to analyze time-to-event data. If the event of interest is rare and covariates are difficult or expensive to collect, the nested case-control (NCC) design provides consistent estimates at reduced cost with minimal impact on precision, provided the model is specified correctly. If our scientific goal is to conduct inference regarding an association of interest, it is essential that we specify the model a priori to avoid multiple testing bias. We cannot, however, be certain that all assumptions will be satisfied, so it is important to consider the robustness of the NCC design under model misspecification. In this manuscript, we show that in finite sample settings where the functional form of a covariate of interest is misspecified, the estimates resulting from the partial likelihood estimator under the NCC design depend on the number of controls sampled at each event time. To account for this dependency, we propose an estimator that recovers the results obtained using the full cohort, where full covariate information is available for all study participants. We demonstrate the utility of our estimator in simulation studies and present its theoretical properties. We end by applying our estimator to motivating data from the Alzheimer's Disease Neuroimaging Initiative.
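For readers unfamiliar with the design, the sketch below simulates a cohort, draws a nested case-control sample with a fixed number of controls per event time, and fits the standard conditional logistic (partial likelihood) estimator via statsmodels' ConditionalLogit, assumed available in statsmodels 0.10 or later. It is illustrative only; it is not the robust estimator proposed in the paper, and all data, variable names, and parameter values are invented.

```python
# Sketch of nested case-control (NCC) sampling and the standard conditional
# logistic fit; this is NOT the robust estimator proposed in the paper.
import numpy as np
import pandas as pd
from statsmodels.discrete.conditional_models import ConditionalLogit

rng = np.random.default_rng(0)
n, m = 2000, 2                                   # cohort size, controls per case
x = rng.normal(size=n)                           # covariate of interest
t = rng.exponential(scale=np.exp(-0.5 * x))      # event times, true log-hazard ratio 0.5
c = rng.exponential(scale=2.0, size=n)           # censoring times
time, event = np.minimum(t, c), (t <= c)

records = []
for i in np.flatnonzero(event):                  # loop over observed cases
    risk_set = np.flatnonzero(time >= time[i])   # subjects still at risk at the event time
    candidates = risk_set[risk_set != i]
    if candidates.size == 0:
        continue
    controls = rng.choice(candidates, size=min(m, candidates.size), replace=False)
    for idx, is_case in [(i, 1)] + [(j, 0) for j in controls]:
        records.append({"set": i, "case": is_case, "x": x[idx]})

ncc = pd.DataFrame(records)
fit = ConditionalLogit(ncc["case"], ncc[["x"]], groups=ncc["set"]).fit()
print(fit.params)                                # estimate should be near 0.5
```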
Affiliation(s)
- Michelle M. Nuño, Department of Preventive Medicine, University of Southern California, Los Angeles, California, USA; Children's Oncology Group, Monrovia, California, USA
- Daniel L. Gillen, Department of Statistics, University of California, Irvine, Irvine, California, USA
8. Visengeriyeva L, Abedjan Z. Anatomy of Metadata for Data Curation. ACM Journal of Data and Information Quality 2020. [DOI: 10.1145/3371925]
Abstract
Real-world datasets often suffer from various data quality problems. Several data cleaning solutions have been proposed so far. However, data cleaning remains a manual and iterative task that requires domain and technical expertise. Exploiting metadata promises to improve the tedious process of data preparation, because data errors are detectable through metadata. This article investigates the intrinsic connection between metadata and data errors. In this work, we establish a mapping that reflects the connection between data quality issues and extractable metadata, using qualitative and quantitative techniques. Additionally, we present a taxonomy based on a closed grammar that covers all existing metadata and allows the composition of novel types of metadata. We provide a case study to show the practical application of the grammar for generating new metadata for data quality assessment.
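As a loose illustration of the general idea, and not the authors' grammar or taxonomy, the sketch below extracts a few simple pieces of column-level metadata with pandas and flags columns whose metadata hints at possible quality issues; the dataset, thresholds, and flag logic are all invented for the example.

```python
# Illustrative sketch: profile simple column-level metadata and use it to flag
# columns that may carry data quality problems. Thresholds are arbitrary.
import pandas as pd

def profile_metadata(df: pd.DataFrame) -> pd.DataFrame:
    """Collect basic, extractable metadata for each column."""
    rows = []
    for col in df.columns:
        s = df[col]
        rows.append({
            "column": col,
            "dtype": str(s.dtype),
            "null_rate": s.isna().mean(),
            "distinct": s.nunique(dropna=True),
            "constant": s.nunique(dropna=True) <= 1,
        })
    return pd.DataFrame(rows)

data = pd.DataFrame({
    "age": [34, 29, 31, 41, 29],
    "country": ["NL", "NL", "NL", "NL", "NL"],    # suspiciously constant
    "zip": ["1012", "10A2", "1013", None, "1014"],  # one missing, one malformed
})

meta = profile_metadata(data)
# Flag columns whose metadata suggests possible errors.
suspicious = meta[(meta["null_rate"] > 0.1) | meta["constant"]]
print(suspicious)
```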
9. Ambagna JJ, Dury S, Dop MC. Estimating trends in prevalence of undernourishment: advantages of using HCES over the FAO approach in a case study from Cameroon. Food Secur 2019. [DOI: 10.1007/s12571-018-00884-w]
10. Manrique-Vallier D, Reiter JP. Bayesian Simultaneous Edit and Imputation for Multivariate Categorical Data. J Am Stat Assoc 2018. [DOI: 10.1080/01621459.2016.1231612]
11. Oberski DL, Kirchner A, Eckman S, Kreuter F. Evaluating the Quality of Survey and Administrative Data with Generalized Multitrait-Multimethod Models. J Am Stat Assoc 2018. [DOI: 10.1080/01621459.2017.1302338]
Affiliation(s)
- D. L. Oberski, Department of Methodology & Statistics, Utrecht University, Utrecht, The Netherlands
- A. Kirchner, Survey Research Division, RTI International, NC; University of Nebraska, Lincoln, NE
- S. Eckman, Survey Research Division, RTI International, NC
- F. Kreuter, Statistical Methods Group, Institute for Employment Research, Nürnberg, Germany; School of Social Science, University of Mannheim, Mannheim, Germany; Joint Program in Survey Methodology, University of Maryland, College Park, MD
12. Wang J, Tang N. Dependable Data Repairing with Fixing Rules. ACM Journal of Data and Information Quality 2017. [DOI: 10.1145/3041761]
Abstract
One of the main challenges that data-cleaning systems face is to automatically identify and repair data errors in a dependable manner. Though data dependencies (also known as integrity constraints) have been widely studied to capture errors in data, automated and dependable data repairing on these errors has remained a notoriously difficult problem. In this work, we introduce an automated approach for dependably repairing data errors, based on a novel class of fixing rules. A fixing rule contains an evidence pattern, a set of negative patterns, and a fact value. The heart of fixing rules is deterministic: given a tuple, the evidence pattern and the negative patterns of a fixing rule are combined to precisely capture which attribute is wrong, and the fact indicates how to correct this error. We study several fundamental problems associated with fixing rules and establish their complexity. We develop efficient algorithms to check whether a set of fixing rules is consistent and discuss approaches to resolve inconsistent fixing rules. We also devise efficient algorithms for repairing data errors using fixing rules. Moreover, we discuss approaches on how to generate a large number of fixing rules from examples or available knowledge bases. We experimentally demonstrate that our techniques outperform other automated algorithms in terms of the accuracy of repairing data errors, using both real-life and synthetic data.
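A minimal sketch of that rule structure, illustrative only and not the paper's implementation or its rule language, might look like this in Python; the country/capital example values are hypothetical:

```python
# Minimal sketch of a fixing rule as described above: if a tuple matches the
# evidence pattern and its value on the target attribute is one of the known
# wrong values (negative patterns), deterministically write the fact value.
from dataclasses import dataclass

@dataclass
class FixingRule:
    evidence: dict      # attribute -> required value (the evidence pattern)
    target: str         # attribute that this rule can repair
    negative: set       # wrong values for the target attribute (negative patterns)
    fact: str           # correct value to write

    def apply(self, tup: dict) -> bool:
        """Repair `tup` in place; return True if a repair was made."""
        if all(tup.get(a) == v for a, v in self.evidence.items()) \
                and tup.get(self.target) in self.negative:
            tup[self.target] = self.fact
            return True
        return False

# Hypothetical example values: a tuple claiming capital "Shanghai" for country
# "China" is deterministically repaired to "Beijing".
rule = FixingRule(evidence={"country": "China"}, target="capital",
                  negative={"Shanghai", "Hongkong"}, fact="Beijing")
record = {"country": "China", "capital": "Shanghai"}
rule.apply(record)
print(record)   # {'country': 'China', 'capital': 'Beijing'}
```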
Affiliation(s)
- Jiannan Wang, School of Computing Science, Simon Fraser University, Burnaby, Canada
- Nan Tang, Qatar Computing Research Institute, HBKU, Qatar
13. Kim HJ, Reiter JP, Karr AF. Simultaneous edit-imputation and disclosure limitation for business establishment data. J Appl Stat 2016. [DOI: 10.1080/02664763.2016.1267123]
Affiliation(s)
- Hang J. Kim, Department of Mathematical Sciences, University of Cincinnati, Cincinnati, OH, USA
- Jerome P. Reiter, Department of Statistical Science, Duke University, Durham, NC, USA
- Alan F. Karr, RTI International, Research Triangle Park, NC, USA
14.
Abstract
Enterprise archives are inevitably affected by the presence of data quality problems (also called glitches). This article proposes the application of a new method to analyze the quality of datasets stored in the tables of a database, with no knowledge of the semantics of the data and without the need to define repositories of rules. The proposed method is based on proper revisions of different approaches for outlier detection that are combined to boost overall performance and accuracy. A novel transformation algorithm is conceived that treats the items in database tables as data points in real coordinate space of n dimensions, so that fields containing dates and fields containing text are processed to calculate distances between those data points. The implementation of an iterative approach ensures that global and local outliers are discovered even if they are subject, primarily in datasets with multiple outliers or clusters of outliers, to masking and swamping effects. The application of the method to a set of archives, some of which have been studied extensively in the literature, provides very promising experimental results and outperforms the application of a single technique. Finally, a list of future research directions is highlighted.
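The sketch below illustrates the general flavor of this approach rather than the paper's actual algorithm: rows with date and text fields are mapped to numeric vectors, and outliers are flagged iteratively with a robust (median/MAD) score so that one extreme row is less likely to mask others. The data, encodings, and threshold are invented for the example.

```python
# Rough illustration only: map rows with date and text fields to points in R^n,
# then flag outliers iteratively with a robust median/MAD score.
import numpy as np
import pandas as pd

df = pd.DataFrame({
    "signup": pd.to_datetime(["2020-01-03", "2020-01-10", "2020-01-04",
                              "2020-01-07", "2020-01-05", "1970-01-01"]),
    "comment": ["ok", "fine!", "good stuff", "thanks a lot", "see you", "x" * 500],
    "amount": [10.0, 12.5, 11.0, 14.0, 9.5, 9000.0],
})

# Simple transformation: dates -> ordinal days, text -> length, numerics unchanged.
X = np.column_stack([
    df["signup"].map(lambda d: d.toordinal()).to_numpy(float),
    df["comment"].str.len().to_numpy(float),
    df["amount"].to_numpy(float),
])

outliers, remaining = [], list(range(len(X)))
while remaining:
    sub = X[remaining]
    med = np.median(sub, axis=0)
    mad = np.median(np.abs(sub - med), axis=0)
    z = np.abs(sub - med) / (1.4826 * mad + 1e-9)   # robust z-scores per field
    if z.max() < 3.0:                               # stop when nothing looks extreme
        break
    worst = int(np.argmax(z.max(axis=1)))           # most anomalous remaining row
    outliers.append(remaining.pop(worst))

print("flagged rows:", sorted(outliers))            # expected: [5]
```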
15. Kim HJ, Cox LH, Karr AF, Reiter JP, Wang Q. Simultaneous Edit-Imputation for Continuous Microdata. J Am Stat Assoc 2015. [DOI: 10.1080/01621459.2015.1040881]
16.
Affiliation(s)
- Marco Puts, statistical researcher at Statistics Netherlands, with a focus on Big Data processing and methodology; his special interests lie in the field of artificial intelligence and data filtering
- Piet Daas, senior methodologist and project leader for Big Data research at Statistics Netherlands; his main fields of expertise are the statistical analysis and methodology of Big Data, with specific attention to selectivity and quality
- Ton de Waal, senior methodologist at Statistics Netherlands and professor of data integration at Tilburg University; his main fields of expertise are statistical data editing, imputation, and data integration
17.
18. Li X, Liu J, Duan N, Jiang H, Girgis R, Lieberman J. Cumulative sojourn time in longitudinal studies: a sequential imputation method to handle missing health state data due to dropout. Stat Med 2014;33:2030-47. [PMID: 24918241] [DOI: 10.1002/sim.6090]
Abstract
Missing data are ubiquitous in longitudinal studies. In this paper, we propose an imputation procedure to handle dropouts in longitudinal studies. By taking advantage of the monotone missing pattern resulting from dropouts, our imputation procedure can be carried out sequentially, which substantially reduces the computational complexity. In addition, at each step of the sequential imputation, we set up a model selection mechanism that chooses between a parametric model and a nonparametric model to impute each missing observation. Unlike usual model selection procedures that aim at finding a single model fitting the entire data set well, our model selection procedure is customized to find a suitable model for the prediction of each missing observation.
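A minimal sketch of the sequential idea, on simulated data, is shown below: because dropout makes the missingness monotone, each visit can be imputed from the previous (observed or already imputed) visit, left to right. The per-observation choice between a parametric and a nonparametric model described in the paper is omitted; a single linear model per visit stands in for it, and all names and parameter values are illustrative.

```python
# Sketch of sequential imputation under a monotone dropout pattern (illustrative;
# the paper's per-observation parametric/nonparametric model selection is omitted).
import numpy as np

rng = np.random.default_rng(1)
n, T = 200, 5
y = np.empty((n, T))
y[:, 0] = rng.normal(size=n)
for t in range(1, T):                         # an AR(1)-style longitudinal outcome
    y[:, t] = 0.8 * y[:, t - 1] + rng.normal(scale=0.5, size=n)

dropout = rng.integers(2, T + 1, size=n)      # visit at which each subject drops out
obs = y.copy()
for i, d in enumerate(dropout):
    obs[i, d:] = np.nan                       # monotone missingness: nothing after dropout

imp = obs.copy()
for t in range(1, T):                         # impute visit by visit, left to right
    miss = np.isnan(imp[:, t])
    if not miss.any():
        continue
    X = np.column_stack([np.ones(n), imp[:, t - 1]])   # previous visit (observed or imputed)
    beta, *_ = np.linalg.lstsq(X[~miss], imp[~miss, t], rcond=None)
    imp[miss, t] = X[miss] @ beta             # plug-in prediction for dropped-out subjects

print("remaining missing values:", np.isnan(imp).sum())  # 0 after sequential imputation
```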
19. Pannekoek J, Shlomo N, De Waal T. Calibrated imputation of numerical data under linear edit restrictions. Ann Appl Stat 2013. [DOI: 10.1214/13-aoas664]
20.
21.
Abstract
Hot deck imputation is a method for handling missing data in which each missing value is replaced with an observed response from a "similar" unit. Although the hot deck is used extensively in practice, its theory is not as well developed as that of other imputation methods. We have found that no consensus exists as to the best way to apply the hot deck and obtain inferences from the completed data set. Here we review different forms of the hot deck and existing research on its statistical properties. We describe applications of the hot deck currently in use, including the U.S. Census Bureau's hot deck for the Current Population Survey (CPS). We also provide an extended example of variations of the hot deck applied to the third National Health and Nutrition Examination Survey (NHANES III). Some potential areas for future research are highlighted.
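As a toy illustration of the basic idea, and not the Census Bureau's CPS procedure or any variant recommended in the review, the sketch below imputes missing incomes by drawing donors at random from the same adjustment cell; the data and cell definitions are made up.

```python
# Minimal hot deck sketch: each missing income is replaced by an observed value
# drawn from a donor in the same cell, here defined by region and age group.
import numpy as np
import pandas as pd

rng = np.random.default_rng(42)
df = pd.DataFrame({
    "region":    ["NE", "NE", "NE", "SO", "SO", "SO", "SO", "WE"],
    "age_group": ["30s", "30s", "30s", "40s", "40s", "40s", "40s", "30s"],
    "income":    [52.0, np.nan, 48.0, 61.0, np.nan, 64.0, 59.0, 45.0],
})

def hot_deck(group: pd.Series) -> pd.Series:
    """Fill missing values in one cell with random draws from the cell's donors."""
    donors = group.dropna()
    if donors.empty:                 # no donor in this cell: leave values missing
        return group
    fills = rng.choice(donors.to_numpy(), size=group.isna().sum(), replace=True)
    out = group.copy()
    out[group.isna()] = fills
    return out

df["income_imputed"] = df.groupby(["region", "age_group"])["income"].transform(hot_deck)
print(df)
```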
Affiliation(s)
- Rebecca R. Andridge, Division of Biostatistics, The Ohio State University, Columbus, OH 43210, USA
22.
23. López C. Improvements over the duplicate performance method for outlier detection in categorical multivariate surveys. Stat Method Appl-Ger 1996. [DOI: 10.1007/bf02589173]
24. Microdata, Macrodata and Metadata. Comput Stat 1992. [DOI: 10.1007/978-3-642-48678-4_41]
25.
26. Marinez YN, McMahan CA, Barnwell GM, Wigodsky HS. Ensuring data quality in medical research through an integrated data management system. Stat Med 1984;3:101-11. [PMID: 6463448] [DOI: 10.1002/sim.4780030204]
Abstract
An effective data management system ensures high-quality research data by making certain that the study design is properly executed. This paper presents the components of a data management system and describes procedures to use in each component to obtain high-quality data. We discuss the interrelationships among the components of the data management system and the relationship of the data management system to other parts of the research project. We identify underlying principles for the design and implementation of a data management system that ensures high-quality data.
27.