Chatham AH, Bradley ED, Troiani V, Beiler DL, Christy P, Schirle L, Sanchez-Roige S, Samuels DC, Jeffery AD. Automating the Addiction Behaviors Checklist for Problematic Opioid Use Identification.
JAMA Psychiatry 2025:2832298. [PMID: 40202749; PMCID: PMC11983290; DOI: 10.1001/jamapsychiatry.2025.0424]
[Received: 10/11/2024] [Accepted: 01/21/2025] [Indexed: 04/10/2025]
Abstract
Importance
Individuals whose chronic pain is managed with opioids are at high risk of developing an opioid use disorder. Electronic health records (EHR) allow large-scale studies to identify a continuum of problematic opioid use, including opioid use disorder. Traditionally, this is done through diagnostic codes, which are often unreliable and underused.
Objective
To determine whether regular expressions, an interpretable natural language processing technique, could automate a validated clinical tool (Addiction Behaviors Checklist) to identify problematic opioid use.
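As a rough illustration of what a regex-based automation of a behavioral checklist can look like, a minimal sketch follows; the patterns, item names, and example note are hypothetical stand-ins and are not the expressions developed in the study.

import re

# Hypothetical patterns loosely modeled on Addiction Behaviors Checklist themes;
# the study's actual regular expressions are not reproduced here.
ABC_PATTERNS = {
    "early_refill": re.compile(r"\bearly (refill|request)s?\b", re.IGNORECASE),
    "lost_or_stolen": re.compile(r"\b(lost|stolen) (medication|prescription|pills?)\b", re.IGNORECASE),
    "dose_escalation": re.compile(r"\b(ran out early|taking more than prescribed)\b", re.IGNORECASE),
}

def score_note(note_text: str) -> int:
    """Count how many checklist-style behaviors are documented in one clinical note."""
    return sum(bool(p.search(note_text)) for p in ABC_PATTERNS.values())

print(score_note("Patient reports lost medication and requested an early refill."))  # -> 2

Because each match maps back to a named checklist item, this style of extraction stays interpretable: a reviewer can see exactly which phrase triggered which behavior.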
Design, Setting, and Participants
This cross-sectional study reports on a retrospective cohort with data analyzed from 2021 through 2023. The approach was evaluated against a blinded, manually reviewed holdout test set and validated against an independent test set at a separate institution. The study used data from Vanderbilt University Medical Center's Synthetic Derivative, a deidentified version of the EHR for research purposes. This cohort comprised 8063 individuals with chronic pain, defined by diagnostic codes on at least 2 days. The study team collected free-text notes, demographics, and diagnostic codes and performed an external validation with 100 individuals with chronic pain from Geisinger, recruited from an interventional pain clinic cohort.
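The cohort rule described above (chronic pain diagnostic codes on at least 2 distinct days) can be sketched as follows; the table layout, column names, and code set are assumed for illustration and are not the study's actual phenotype definition.

import pandas as pd

# Hypothetical diagnosis table: one row per (patient, date, ICD code).
dx = pd.DataFrame({
    "person_id": [1, 1, 2, 3, 3],
    "dx_date": pd.to_datetime(["2021-01-05", "2021-03-10", "2021-02-01", "2022-06-01", "2022-06-01"]),
    "icd_code": ["G89.29", "G89.29", "G89.4", "G89.29", "M54.5"],
})

CHRONIC_PAIN_CODES = {"G89.29", "G89.4", "M54.5"}  # illustrative code set only

pain_dx = dx[dx["icd_code"].isin(CHRONIC_PAIN_CODES)]
# Require chronic pain codes on at least 2 distinct days per person.
days_per_person = pain_dx.groupby("person_id")["dx_date"].nunique()
cohort_ids = days_per_person[days_per_person >= 2].index.tolist()
print(cohort_ids)  # -> [1]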
Main Outcomes and Measures
The primary outcome was the performance of the automated method in identifying patients demonstrating problematic opioid use, compared with manual medical record review and with opioid use disorder diagnostic codes. Methods were evaluated with F1 scores (a single value that combines sensitivity and positive predictive value at a single threshold) and areas under the curve (a single value that combines sensitivity and specificity across multiple thresholds).
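For reference, both metrics can be computed with standard tooling; the labels and scores below are toy values, not study data.

from sklearn.metrics import f1_score, roc_auc_score

# Toy example only: 1 = problematic opioid use per manual chart review.
y_true = [1, 0, 1, 1, 0, 0, 1, 0]
y_pred = [1, 0, 0, 1, 0, 1, 1, 0]   # binary calls at a single threshold (for F1)
y_score = [0.9, 0.2, 0.4, 0.8, 0.1, 0.6, 0.7, 0.3]  # continuous scores (for AUC)

# F1 = 2 * PPV * sensitivity / (PPV + sensitivity)
print(f1_score(y_true, y_pred))        # -> 0.75
print(roc_auc_score(y_true, y_score))  # -> 0.9375 in this toy example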
Results
Among the 8063 patients in the primary site (5081 female [63%] and 2982 male [37%]; mean [SD] age, 56 [16] years) and 100 patients in the validation site (57 female [57%] and 43 male [43%]; mean [SD] age, 54 [13] years), the automated approach outperformed diagnostic codes based on F1 scores (0.73; 95% CI, 0.62-0.83 vs 0.08; 95% CI, 0.00-0.19 at the primary site and 0.70; 95% CI, 0.50-0.85 vs 0.29; 95% CI, 0.07-0.50 at the validation site) and areas under the curve (0.82; 95% CI, 0.73-0.89 vs 0.52; 95% CI, 0.50-0.55 at the primary site and 0.86; 95% CI, 0.76-0.94 vs 0.59; 95% CI, 0.50-0.67 at the validation site).
Conclusions
This automated data extraction technique may facilitate earlier identification of people who are at risk for or are experiencing problematic opioid use and create new opportunities for studying the long-term sequelae of opioid pain management.