1
Stein JD, Zhou Y, Andrews CA, Kim JE, Addis V, Bixler J, Grove N, McMillan B, Munir SZ, Pershing S, Schultz JS, Stagg BC, Wang SY, Woreta F. Using Natural Language Processing to Identify Different Lens Pathology in Electronic Health Records. Am J Ophthalmol 2024;262:153-160. PMID: 38296152; PMCID: PMC11098689; DOI: 10.1016/j.ajo.2024.01.030. Received 28 Aug 2023; revised 21 Jan 2024; accepted 22 Jan 2024.
Abstract
PURPOSE Nearly all published ophthalmology-related Big Data studies rely exclusively on International Classification of Diseases (ICD) billing codes to identify patients with particular ocular conditions. However, inaccurate or nonspecific codes may be used. We assessed whether natural language processing (NLP), as an alternative approach, could more accurately identify lens pathology. DESIGN Database study comparing the accuracy of NLP versus ICD billing codes at properly identifying lens pathology. METHODS We developed an NLP algorithm capable of searching free-text lens exam data in the electronic health record (EHR) to identify the type(s) of cataract present, cataract density, presence of intraocular lenses, and other lens pathology. We applied our algorithm to 17.5 million lens exam records in the Sight Outcomes Research Collaborative (SOURCE) repository. We selected 4314 unique lens-exam entries and asked 11 clinicians to assess whether all pathology present in the entries had been correctly identified in the NLP algorithm output. The algorithm's sensitivity at accurately identifying lens pathology was compared with that of the ICD codes. RESULTS The NLP algorithm correctly identified all lens pathology present in 4104 of the 4314 lens-exam entries (95.1%). For less common lens pathology, algorithm findings were corroborated by reviewing clinicians for 100% of mentions of pseudoexfoliation material and 99.7% of mentions of phimosis, subluxation, and synechia. Sensitivity at identifying lens pathology was better for NLP (0.98 [0.96-0.99]) than for billing codes (0.49 [0.46-0.53]). CONCLUSIONS Our NLP algorithm identifies and classifies lens abnormalities routinely documented by eye-care professionals with high accuracy. Such algorithms will help researchers to properly identify and classify ocular pathology, broadening the scope of feasible research using real-world data.
Affiliation(s)
- Joshua D Stein
- From the W.K. Kellogg Eye Center, Department of Ophthalmology and Visual Sciences, University of Michigan, Ann Arbor, Michigan, USA (J.D.S., Y.Z., C.A.A., J.B.); Department of Health Management and Policy, University of Michigan School of Public Health, Ann Arbor, Michigan, USA (J.D.S.).
- Yunshu Zhou
- From the W.K. Kellogg Eye Center, Department of Ophthalmology and Visual Sciences, University of Michigan, Ann Arbor, Michigan, USA (J.D.S., Y.Z., C.A.A., J.B.)
- Chris A Andrews
- From the W.K. Kellogg Eye Center, Department of Ophthalmology and Visual Sciences, University of Michigan, Ann Arbor, Michigan, USA (J.D.S., Y.Z., C.A.A., J.B.)
- Judy E Kim
- Department of Ophthalmology and Visual Sciences, Medical College of Wisconsin, Milwaukee, Wisconsin, USA (J.E.K.)
- Victoria Addis
- Department of Ophthalmology, University of Pennsylvania, Philadelphia, Pennsylvania, USA (V.A.)
- Jill Bixler
- From the W.K. Kellogg Eye Center, Department of Ophthalmology and Visual Sciences, University of Michigan, Ann Arbor, Michigan, USA (J.D.S., Y.Z., C.A.A., J.B.)
- Nathan Grove
- Department of Ophthalmology, University of Colorado School of Medicine, Aurora, Colorado, USA (N.G.)
- Brian McMillan
- Department of Ophthalmology and Visual Sciences, West Virginia University, Morgantown, West Virginia, USA (B.M.)
- Saleha Z Munir
- Department of Ophthalmology and Visual Sciences, University of Maryland School of Medicine, Baltimore, Maryland, USA (S.Z.M.)
- Suzann Pershing
- Byers Eye Institute at Stanford, Department of Ophthalmology, Stanford University, Stanford, California, USA (S.P., S.Y.W.); VA Palo Alto Health Care System, Palo Alto, California, USA (S.P.)
- Jeffrey S Schultz
- Department of Ophthalmology, Montefiore Medical Center, New York, New York, USA (J.S.S.)
- Brian C Stagg
- Department of Ophthalmology, University of Utah, Salt Lake City, Utah, USA (B.C.S.)
- Sophia Y Wang
- Byers Eye Institute at Stanford, Department of Ophthalmology, Stanford University, Stanford, California, USA (S.P., S.Y.W.)
- Fasika Woreta
- Department of Ophthalmology, Johns Hopkins University School of Medicine, Baltimore, Maryland, USA (F.W.)
2
Hwang TS, Thomas M, Hribar M, Chen A, White E. The Impact of Documentation Workflow on the Accuracy of the Coded Diagnoses in the Electronic Health Record. Ophthalmol Sci 2024;4:100409. PMID: 38054107; PMCID: PMC10694743; DOI: 10.1016/j.xops.2023.100409. Received 1 Aug 2023; revised 15 Sep 2023; accepted 29 Sep 2023.
Abstract
Objective To determine the impact of documentation workflow on the accuracy of coded diagnoses in electronic health records (EHRs). Design Cross-sectional study. Participants All patients who completed visits at the Casey Eye Institute Retina Division faculty clinic between April 7, 2022 and April 13, 2022. Main Outcome Measures Agreement between coded diagnoses and clinical notes. Methods We assessed the rate of agreement between the diagnoses in the clinical notes and the coded diagnoses in the EHR using manual review, and examined the impact of the documentation workflow on the rate of agreement in an academic retina practice. Results Of 202 visits by 8 physicians, 78% (range, 22%-100%) showed agreement between the coded diagnoses and the clinical notes. When physicians integrated diagnosis code entry with note composition, the rate of agreement was 87.9% (range, 62%-100%). For those who entered the diagnosis codes separately from writing notes, the agreement was 44.4% (range, 22%-50%; P < 0.0001). Conclusion The visit-specific agreement between the coded diagnosis and the progress note can vary widely by workflow. The workflow and EHR design may be an important part of understanding and improving the quality of EHR data. Financial Disclosures Proprietary or commercial disclosure may be found in the Footnotes and Disclosures at the end of this article.
Affiliation(s)
- Thomas S. Hwang
- Casey Eye Institute, Oregon Health and Science University, Portland, OR
- Merina Thomas
- Casey Eye Institute, Oregon Health and Science University, Portland, OR
- Michelle Hribar
- Casey Eye Institute, Oregon Health and Science University, Portland, OR
- Department of Medical Informatics and Clinical Epidemiology, Oregon Health and Science University, Portland, OR
- Aiyin Chen
- Casey Eye Institute, Oregon Health and Science University, Portland, OR
- Elizabeth White
- Casey Eye Institute, Oregon Health and Science University, Portland, OR