1
Elahi C, Adil SM, Sakita F, Mmbaga BT, Rocha TAH, Fuller A, Haglund MM, Vissoci JRN, Staton C. Corticosteroid Randomization after Significant Head Injury and International Mission for Prognosis and Clinical Trials in Traumatic Brain Injury Models Compared with a Machine Learning-Based Predictive Model from Tanzania. J Neurotrauma 2021; 39:151-158. [PMID: 33980030 DOI: 10.1089/neu.2020.7483]
Abstract
Hospitals in low- and middle-income countries (LMICs) could benefit from decision support technologies to reduce time to triage, diagnosis, and surgery for patients with traumatic brain injury (TBI). Corticosteroid Randomization after Significant Head Injury (CRASH) and International Mission for Prognosis and Clinical Trials in Traumatic Brain Injury (IMPACT) are robust examples of TBI prognostic models, although they have yet to be validated in Sub-Saharan Africa (SSA). Moreover, machine learning and improved data quality in LMICs provide an opportunity to develop context-specific, and potentially more accurate, prognostic models. We aim to externally validate CRASH and IMPACT on our TBI registry and compare their performance with that of a locally derived model (from the Kilimanjaro Christian Medical Center [KCMC]). We developed a machine learning-based prognostic model from a TBI registry collected at a regional referral hospital in Moshi, Tanzania. We also used the core CRASH and IMPACT online risk calculators to generate risk scores for each patient. We compared the discrimination (area under the curve [AUC]) and calibration before and after Platt scaling (Brier score, Hosmer-Lemeshow test, and calibration plots) for CRASH, IMPACT, and the KCMC model. The outcome of interest was unfavorable in-hospital outcome, defined as a Glasgow Outcome Scale score of 1-3. There were 2972 patients included in the TBI registry, of whom 11% had an unfavorable outcome. The AUCs for the KCMC model, CRASH, and IMPACT were 0.919, 0.876, and 0.821, respectively. Prior to Platt scaling, CRASH was the best-calibrated model (χ2 = 68.1), followed by IMPACT (χ2 = 380.9) and KCMC (χ2 = 1025.6). We provide the first SSA validation of the core CRASH and IMPACT models. The KCMC model had better discrimination than either of these. CRASH had the best calibration, although all model predictions could be successfully calibrated. The top-performing models, KCMC and CRASH, were both developed using LMIC data, suggesting that locally derived models may outperform models imported from different contexts of care. Further work is needed to externally validate the KCMC model.
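The validation workflow this abstract describes, comparing external risk scores on discrimination (AUC) and calibration (Brier score, Hosmer-Lemeshow) and then recalibrating them with Platt scaling, can be sketched roughly as follows. This is not the authors' code: the simulated registry, the decile-based Hosmer-Lemeshow implementation, and all variable names are illustrative assumptions.

```python
# Hypothetical sketch of the recalibration workflow summarized in the abstract:
# external-model risk scores (e.g., CRASH/IMPACT probabilities) are compared
# against observed outcomes via AUC, Brier score, and a Hosmer-Lemeshow statistic,
# then re-calibrated with Platt scaling. All data below are simulated.
import numpy as np
from scipy.special import logit
from scipy.stats import chi2
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score, brier_score_loss

rng = np.random.default_rng(0)
n = 2972                                   # registry size reported in the abstract
y = rng.binomial(1, 0.11, size=n)          # ~11% unfavorable outcomes (illustrative)
p_model = np.clip(rng.beta(2, 8, size=n) + 0.25 * y, 1e-4, 1 - 1e-4)  # fake risk scores

def hosmer_lemeshow(y_true, p_pred, groups=10):
    """Hosmer-Lemeshow chi-square over deciles of predicted risk."""
    order = np.argsort(p_pred)
    chi_sq = 0.0
    for bin_idx in np.array_split(order, groups):
        obs = y_true[bin_idx].sum()          # observed events in the risk decile
        exp = p_pred[bin_idx].sum()          # expected events from the model
        n_g = len(bin_idx)
        chi_sq += (obs - exp) ** 2 / (exp * (1 - exp / n_g) + 1e-12)
    return chi_sq, chi2.sf(chi_sq, groups - 2)

# Discrimination and calibration before recalibration
print("AUC   :", roc_auc_score(y, p_model))
print("Brier :", brier_score_loss(y, p_model))
print("H-L   :", hosmer_lemeshow(y, p_model))

# Platt scaling: logistic regression on the logit of the original risk score
platt = LogisticRegression(C=1e6).fit(logit(p_model).reshape(-1, 1), y)
p_cal = platt.predict_proba(logit(p_model).reshape(-1, 1))[:, 1]
print("Brier after Platt scaling:", brier_score_loss(y, p_cal))
print("H-L after Platt scaling  :", hosmer_lemeshow(y, p_cal))
```

In practice the Platt-scaling model would be fit on a development split and evaluated on held-out patients rather than on the same data, as done here only for brevity.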
Affiliation(s)
- Cyrus Elahi
- Barrow Neurological Institute, Phoenix, Arizona, USA
- Division of Neurosurgery and Neurology, Department of Neurosurgery, and Department of Surgery, Duke University Medical Center, Durham, North Carolina, USA
- Syed M Adil
- Division of Neurosurgery and Neurology, Department of Neurosurgery, and Department of Surgery, Duke University Medical Center, Durham, North Carolina, USA
- Francis Sakita
- Emergency Department, Kilimanjaro Christian Medical Center, Moshi, Tanzania
- Blandina T Mmbaga
- Kilimanjaro Christian Medical University College, Moshi, Tanzania
- Kilimanjaro Clinical Research Institute, Kilimanjaro Christian Medical Center, Moshi, Tanzania
- Anthony Fuller
- Division of Neurosurgery and Neurology, Department of Neurosurgery, and Department of Surgery, Duke University Medical Center, Durham, North Carolina, USA
- Duke Global Health Institute, Duke University, Durham, North Carolina, USA
- Michael M Haglund
- Division of Neurosurgery and Neurology, Department of Neurosurgery, and Department of Surgery, Duke University Medical Center, Durham, North Carolina, USA
- Duke Global Health Institute, Duke University, Durham, North Carolina, USA
- João Ricardo Nickenig Vissoci
- Duke Global Health Institute, Duke University, Durham, North Carolina, USA
- Division of Emergency Medicine, Department of Surgery, Duke University Medical Center, Durham, North Carolina, USA
- Catherine Staton
- Duke Global Health Institute, Duke University, Durham, North Carolina, USA
- Division of Emergency Medicine, Department of Surgery, Duke University Medical Center, Durham, North Carolina, USA
3
Abstract
Introduction: To identify phenotypes of type 1 diabetes based on glucose curves from continuous glucose monitoring (CGM), using functional data (FD) analysis to account for longitudinal glucose patterns. We present a reliable prediction model that can accurately predict glycemic levels from past CGM sensor data and the real-time risk of hypo-/hyperglycemia for individuals with type 1 diabetes.
Methods: A longitudinal cohort study of 443 type 1 diabetes patients with CGM data from a completed trial. Sparse functional principal component (FPC) analysis, an FD analysis approach, was used to identify phenotypes of type 1 diabetes glycemic variation. We employed a nonstationary stochastic linear mixed-effects model (LME) that accommodates between-patient and within-patient heterogeneity to predict glycemic levels and the real-time risk of hypo-/hyperglycemia by creating specific target functions for these excursions.
Results: The majority of the variation (73%) in glucose trajectories was explained by the first two FPCs. Higher-order variation in the CGM profiles occurred during weeknights, although variation was higher on weekends. The model has low prediction errors and yields accurate predictions for both glucose levels and the real-time risk of glycemic excursions.
Conclusions: By identifying these distinct longitudinal patterns as phenotypes, interventions can be targeted to optimize type 1 diabetes management for subgroups at the highest risk for compromised long-term outcomes such as cardiac disease or stroke. Further, the estimated change/variability in an individual's glucose trajectory can be used to establish clinically meaningful and patient-specific thresholds that, when coupled with probabilistic predictive inference, provide a useful medical-monitoring tool.
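As a rough illustration of the functional-PCA step described above, the sketch below runs a discretized FPCA (ordinary PCA on glucose curves sampled on a common time grid) and reads off per-patient FPC scores as phenotype coordinates. The data are simulated and the approach is a simplification of the paper's method: truly sparse or irregular CGM sampling, and the LME forecasting step, would require dedicated sparse-FPCA and mixed-model tooling not shown here.

```python
# Minimal illustration of the functional-PCA idea behind the phenotyping step:
# CGM glucose curves on a common time grid are decomposed into a mean curve plus
# a few principal component functions, and each patient's FPC scores summarize
# their pattern of glycemic variation. Data, dimensions, and names are simulated.
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(1)
n_patients, n_times = 443, 288             # 443 patients, 5-min samples over 24 h
t = np.linspace(0, 24, n_times)

# Simulated glucose curves (mg/dL): a shared daily cycle plus patient-specific variation
mean_curve = 140 + 20 * np.sin(2 * np.pi * (t - 8) / 24)
scores_true = rng.normal(0, [25, 10], size=(n_patients, 2))
basis = np.vstack([np.sin(2 * np.pi * t / 24), np.cos(4 * np.pi * t / 24)])
curves = mean_curve + scores_true @ basis + rng.normal(0, 5, (n_patients, n_times))

# Discretized FPCA = PCA on the curve matrix; the components approximate the
# principal component functions, and explained_variance_ratio_ gives the share
# of glucose-trajectory variation captured by each one.
fpca = PCA(n_components=2).fit(curves)
fpc_scores = fpca.transform(curves)         # per-patient phenotype coordinates
print("variance explained by first two FPCs:",
      fpca.explained_variance_ratio_.sum())

# Patients with extreme FPC scores flag candidate high-variability phenotypes
extreme = np.argsort(np.abs(fpc_scores[:, 0]))[-10:]
print("most extreme patients on FPC1:", extreme)
```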
4
Bernert RA, Hilberg AM, Melia R, Kim JP, Shah NH, Abnousi F. Artificial Intelligence and Suicide Prevention: A Systematic Review of Machine Learning Investigations. Int J Environ Res Public Health 2020; 17:E5929. [PMID: 32824149 PMCID: PMC7460360 DOI: 10.3390/ijerph17165929]
Abstract
Suicide is a leading cause of death that defies prediction and challenges prevention efforts worldwide. Artificial intelligence (AI) and machine learning (ML) have emerged as a means of investigating large datasets to enhance risk detection. A systematic review of ML investigations evaluating suicidal behaviors was conducted using PubMed/MEDLINE, PsycINFO, Web of Science, and EMBASE, employing search strings and MeSH terms relevant to suicide and AI. Databases were supplemented by hand-search techniques and Google Scholar. Inclusion criteria were: (1) journal article available in English, (2) original investigation, (3) employment of AI/ML, and (4) evaluation of a suicide risk outcome. N = 594 records were identified based on the abstract search, along with 25 hand-searched reports. N = 461 reports remained after duplicates were removed, and n = 316 were excluded after abstract screening. Of the n = 149 full-text articles assessed for eligibility, n = 87 were included for quantitative synthesis and grouped according to suicide behavior outcome. Reports varied widely in methodology and outcomes. Results suggest high levels of risk classification accuracy (>90%) and area under the curve (AUC) in the prediction of suicidal behaviors. We report key findings and central limitations in the use of AI/ML frameworks to guide additional research, which hold the potential to impact suicide on a broad scale.
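For context on the headline metrics the review aggregates (classification accuracy and AUC for predicting suicidal behaviors), the sketch below shows how such figures are typically obtained with cross-validation. The dataset, features, and model are entirely hypothetical and are not drawn from any of the reviewed studies.

```python
# Illustrative sketch of how classification accuracy and AUC for an ML suicide-risk
# model are typically computed. Data, features, and model choice are hypothetical.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import StratifiedKFold, cross_validate

rng = np.random.default_rng(2)
n, d = 5000, 20
X = rng.normal(size=(n, d))                       # hypothetical EHR-style features
risk = 1 / (1 + np.exp(-(X[:, 0] + 0.5 * X[:, 1] - 4)))
y = rng.binomial(1, risk)                         # rare positive outcome (~2-3%)

cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)
res = cross_validate(GradientBoostingClassifier(), X, y, cv=cv,
                     scoring=["accuracy", "roc_auc"])

print("accuracy:", res["test_accuracy"].mean())   # can look high purely from the low base rate
print("AUC     :", res["test_roc_auc"].mean())    # threshold-free discrimination measure
```

With rare outcomes, accuracy alone can appear high simply because of the base rate, which is one reason AUC is usually reported alongside it when comparing such models.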
Affiliation(s)
- Rebecca A. Bernert
- Stanford Suicide Prevention Research Laboratory, Department of Psychiatry and Behavioral Sciences, Stanford University School of Medicine, Stanford, CA 94304, USA
- Amanda M. Hilberg
- Stanford Suicide Prevention Research Laboratory, Department of Psychiatry and Behavioral Sciences, Stanford University School of Medicine, Stanford, CA 94304, USA
- Ruth Melia
- Stanford Suicide Prevention Research Laboratory, Department of Psychiatry and Behavioral Sciences, Stanford University School of Medicine, Stanford, CA 94304, USA
- Department of Psychology, National University of Ireland, Galway, Ireland
- Jane Paik Kim
- Stanford Suicide Prevention Research Laboratory, Department of Psychiatry and Behavioral Sciences, Stanford University School of Medicine, Stanford, CA 94304, USA
- Nigam H. Shah
- Department of Medicine, Center for Biomedical Informatics Research, Stanford University School of Medicine, Stanford, CA 94304, USA
- Informatics, Stanford Center for Clinical and Translational Research and Education (Spectrum), Stanford University, Stanford, CA 94304, USA
- Freddy Abnousi
- Facebook, Menlo Park, CA 94025, USA
- Yale University School of Medicine, New Haven, CT 06510, USA