1. Jervis-Rademeyer H, Gautam S, Cornell S, Khan J, Wilanowski D, Musselman KE, Noonan VK, Wolfe DL, Baldini R, Kennedy S, Ho C. Development of a functional electrical stimulation cycling toolkit for spinal cord injury rehabilitation in acute care hospitals: A participatory action approach. PLoS One 2025; 20:e0316296. PMID: 39928663; PMCID: PMC11809891; DOI: 10.1371/journal.pone.0316296.
Abstract
The purpose of our study was to develop a toolkit to facilitate the implementation of functional electrical stimulation (FES) cycling for persons with a newly acquired spinal cord injury (SCI) in the acute care inpatient hospital setting. Researchers and community members used a participatory action research approach to co-create the toolkit. We held two focus groups to develop drafts, a third meeting to provide feedback, and a fourth meeting to evaluate the toolkit and determine dissemination strategies. Toolkit development followed the Planning, Action, Reflection, Evaluation cycle, using an iterative design informed by focus group and toolkit consultant (SC) feedback. Focus group discussions included FES cycling champions (JK, DW) who led acute care implementation. Focus group members, recruited through purposive sampling, had to 1) have an understanding of FES cycling in acute care for SCI and 2) represent one of these groups: individual living with SCI, social support, hospital manager, clinician, therapist, researcher, and/or acute care FES cycling champion. Twelve individuals took part in four focus groups to develop a toolkit designed to facilitate implementation of FES cycling in SCI acute care in Edmonton, Alberta. Group members included an individual with lived experience, three acute-care occupational or physical therapists, three acute-care hospital managers, and five researchers; two physical therapists also identified as clinical FES cycling champions. Following an inductive content analysis, we identified four main themes: 1) health care provider toolkit content and categories, 2) health care provider toolkit end product, 3) collaborations between groups and institutions, and 4) infrastructure. Interested parties who use FES cycling in acute care for SCI rehabilitation agree that toolkits should target the appropriate group, be specific to the acute care setting, and provide information for a smooth transition in care.
Affiliation(s)
- Srijana Gautam: Department of Medicine, University of Alberta, Edmonton, Alberta, Canada
- Stephanie Cornell: Parkwood Institute, St. Joseph’s Health Care London, London, Ontario, Canada; Lawson Health Research Institute, London, Ontario, Canada
- Janelle Khan: Royal Alexandra Hospital, Alberta Health Services, Edmonton, Alberta, Canada
- Danielle Wilanowski: University of Alberta Hospital, Alberta Health Services, Edmonton, Alberta, Canada
- Kristin E. Musselman: Department of Physical Therapy and Rehabilitation Sciences Institute, Temerty Faculty of Medicine, University of Toronto, Toronto, Ontario, Canada; KITE Research Institute, Toronto Rehabilitation Institute-University Health Network, Toronto, Ontario, Canada
- Dalton L. Wolfe: Parkwood Institute, St. Joseph’s Health Care London, London, Ontario, Canada; Lawson Health Research Institute, London, Ontario, Canada; University of Western Ontario, London, Ontario, Canada
- Steven Kennedy: Department of Medicine, University of Alberta, Edmonton, Alberta, Canada
- Chester Ho: Department of Medicine, University of Alberta, Edmonton, Alberta, Canada; Glenrose Hospital, Edmonton, Alberta, Canada
2. Levy DA, Jordan HS, Lalor JP, Smirnova JK, Hu W, Liu W, Yu H. Individual Factors That Affect Laypeople's Understanding of Definitions of Medical Jargon. Health Policy and Technology 2024; 13:100932. PMID: 39650577; PMCID: PMC11618823; DOI: 10.1016/j.hlpt.2024.100932.
Abstract
OBJECTIVE Patients have difficulty understanding medical jargon in electronic health record (EHR) notes. Lay definitions can improve patient comprehension, which is the goal of the NoteAid project. We assessed whether the NoteAid definitions are understandable to laypeople and whether understandability differs by layperson characteristics. METHODS Definitions for jargon terms were written for laypeople at a 4th-to-7th-grade reading level. 300 definitions were randomly sampled from a corpus of approximately 30,000 definitions, and 280 laypeople (crowdsource workers) were recruited; each rated the understandability of 20 definitions on a 5-point scale. Using a generalized estimating equation (GEE) model, we analyzed the relationship between understandability and age, sex, race/ethnicity, education level, native language, health literacy, and definition writer. RESULTS Overall, 81.1% (95% CI: 76.5%-85.7%) of the laypeople reported that the definitions were understandable. Males were less likely than females to report understanding the definitions (OR: 0.73, 95% CI: 0.63-0.84). Asians, Hispanics, and those who marked their race/ethnicity as "other" were more likely than whites to report understanding the definitions (Asians: OR: 1.43, 95% CI: 1.17-1.73; Hispanics: OR: 1.86, 95% CI: 1.33-2.59; other: OR: 2.48, 95% CI: 1.65-3.74). Laypeople whose native language was not English were less likely to report understanding the definitions (OR: 0.51, 95% CI: 0.36-0.74), as were laypeople with lower health literacy (health literacy score 3: OR: 0.51, 95% CI: 0.43-0.62; score 4: OR: 0.40, 95% CI: 0.29-0.55). CONCLUSION Understandability of the definitions among laypeople was high. There were statistically significant racial/ethnic differences in self-reported understandability, even after controlling for multiple demographic factors.
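As a minimal illustration of how odds ratios like those reported above are computed, here is a crude, unadjusted sketch from a 2x2 table. Note that the study itself fit a GEE model that accounts for repeated ratings and adjusts for covariates; this sketch is only the textbook calculation, and the counts below are invented, not the study's data:

```python
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Odds ratio and Wald 95% CI from a 2x2 table:
    a = group 1 with outcome,    b = group 1 without outcome,
    c = group 2 with outcome,    d = group 2 without outcome."""
    or_ = (a * d) / (b * c)
    # Standard error of log(OR) by the Woolf method
    se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
    lo = math.exp(math.log(or_) - z * se)
    hi = math.exp(math.log(or_) + z * se)
    return or_, lo, hi

# Hypothetical counts: 90/100 ratings "understandable" in one group
# versus 80/100 in the reference group.
or_, lo, hi = odds_ratio_ci(90, 10, 80, 20)
print(f"OR = {or_:.2f} (95% CI {lo:.2f}-{hi:.2f})")  # OR = 2.25
```

A CI whose interval excludes 1 corresponds to the statistically significant differences the abstract describes.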
Affiliation(s)
- David A. Levy: Department of Computer Science, University of Massachusetts Lowell, Lowell, MA
- Harmon S. Jordan: Assistant Professor, Tufts University School of Medicine, Boston, MA
- John P. Lalor: Department of IT, Analytics, and Operations, Mendoza College of Business, University of Notre Dame, Notre Dame, IN
- Wen Hu: Center for Biomedical and Health Research in Data Sciences, University of Massachusetts Lowell, Lowell, MA
- Weisong Liu: Center for Biomedical and Health Research in Data Sciences, University of Massachusetts Lowell, Lowell, MA
- Hong Yu: Center for Biomedical and Health Research in Data Sciences, University of Massachusetts Lowell, Lowell, MA; Manning College of Information and Computer Sciences, University of Massachusetts Amherst, Amherst, MA; Center for Healthcare Organization & Implementation Research, Veterans Affairs Bedford Healthcare System, Bedford, MA; Miner School of Computer and Information Sciences, University of Massachusetts Lowell, Lowell, MA
3. Lalor JP, Levy DA, Jordan HS, Hu W, Smirnova JK, Yu H. Evaluating Expert-Layperson Agreement in Identifying Jargon Terms in Electronic Health Record Notes: Observational Study. J Med Internet Res 2024; 26:e49704. PMID: 39405109; PMCID: PMC11522659; DOI: 10.2196/49704.
Abstract
BACKGROUND Studies have shown that patients have difficulty understanding medical jargon in electronic health record (EHR) notes, particularly patients with low health literacy. In creating the NoteAid dictionary of medical jargon for patients, a panel of medical experts selected terms they perceived as needing definitions for patients. OBJECTIVE This study aims to determine whether experts and laypeople agree on what constitutes medical jargon. METHODS Using an observational study design, we compared the ability of medical experts and laypeople to identify medical jargon in EHR notes. The laypeople were recruited from Amazon Mechanical Turk. Participants were shown 20 sentences from EHR notes, which contained 325 potential jargon terms as identified by the medical experts. We collected demographic information about the laypeople's age, sex, race or ethnicity, education, native language, and health literacy; health literacy was measured with the Single Item Literacy Screener. Our evaluation metrics were the proportion of terms rated as jargon, sensitivity, specificity, Fleiss κ for agreement among medical experts and among laypeople, and the Kendall rank correlation statistic between the medical experts and laypeople. We performed subgroup analyses by layperson characteristics and fit a beta regression model with a logit link to examine the association between layperson characteristics and whether a term was classified as jargon. RESULTS The average proportion of terms identified as jargon by the medical experts was 59% (1150/1950, 95% CI 56.1%-61.8%), and the average proportion identified as jargon by the laypeople overall was 25.6% (22,480/87,750, 95% CI 25%-26.2%). There was good agreement among medical experts (Fleiss κ=0.781, 95% CI 0.753-0.809) and fair agreement among laypeople (Fleiss κ=0.590, 95% CI 0.589-0.591). The beta regression model had a pseudo-R2 of 0.071, indicating that demographic characteristics explained very little of the variability in the proportion of terms identified as jargon by laypeople. Using laypeople's identification of jargon as the gold standard, the medical experts had high sensitivity (91.7%, 95% CI 90.1%-93.3%) and specificity (88.2%, 95% CI 86%-90.5%) in identifying jargon terms. CONCLUSIONS To ensure coverage of possible jargon terms, the medical experts were deliberately inclusive in selecting terms. The fair agreement among laypeople shows that this inclusiveness is needed, as opinions vary among laypeople about what counts as jargon. We showed that medical experts can accurately identify jargon terms for annotation that would be useful for laypeople.
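The inter-rater agreement statistic used above, Fleiss' κ, can be computed with a short, self-contained sketch. This is a generic implementation for a table of per-item category counts, not the authors' code, and the toy rating table below is invented for illustration:

```python
def fleiss_kappa(ratings):
    """Fleiss' kappa for a table where ratings[i][j] is the number of
    raters assigning item i to category j (same rater count per item)."""
    n_items = len(ratings)
    n_raters = sum(ratings[0])
    total = n_items * n_raters
    # Mean per-item observed agreement, P-bar
    p_bar = sum(
        (sum(c * c for c in row) - n_raters) / (n_raters * (n_raters - 1))
        for row in ratings
    ) / n_items
    # Chance agreement P_e from overall category proportions
    p_e = sum(
        (sum(row[j] for row in ratings) / total) ** 2
        for j in range(len(ratings[0]))
    )
    return (p_bar - p_e) / (1 - p_e)

# 3 raters label 4 terms as jargon (column 0) or not jargon (column 1).
table = [[3, 0], [2, 1], [0, 3], [1, 2]]
print(round(fleiss_kappa(table), 3))  # → 0.333
```

κ near 1 indicates strong agreement beyond chance, which is how the expert (0.781) versus layperson (0.590) values above are read.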
Affiliation(s)
- John P Lalor: Department of Information Technology, Analytics, and Operations, Mendoza College of Business, University of Notre Dame, Notre Dame, IN, United States
- David A Levy: Center for Biomedical and Health Research in Data Sciences, Miner School of Computer and Information Sciences, University of Massachusetts Lowell, Lowell, MA, United States
- Harmon S Jordan: Tufts University School of Medicine, Boston, MA, United States
- Wen Hu: Center for Biomedical and Health Research in Data Sciences, Miner School of Computer and Information Sciences, University of Massachusetts Lowell, Lowell, MA, United States; Center for Healthcare Organization & Implementation Research, Veterans Affairs Bedford Healthcare System, Bedford, MA, United States
- Hong Yu: Center for Biomedical and Health Research in Data Sciences, Miner School of Computer and Information Sciences, University of Massachusetts Lowell, Lowell, MA, United States; Center for Healthcare Organization & Implementation Research, Veterans Affairs Bedford Healthcare System, Bedford, MA, United States; Manning College of Information and Computer Sciences, University of Massachusetts Amherst, Amherst, MA, United States
4. Deeb M, Gangadhar A, Rabindranath M, Rao K, Brudno M, Sidhu A, Wang B, Bhat M. The emerging role of generative artificial intelligence in transplant medicine. Am J Transplant 2024; 24:1724-1730. PMID: 38901561; DOI: 10.1016/j.ajt.2024.06.009.
Abstract
Generative artificial intelligence (AI), a subset of machine learning that creates new content based on training data, has witnessed tremendous advances in recent years. Practical applications have been identified in health care in general, and there is significant opportunity in transplant medicine for generative AI to simplify tasks in research, medical education, and clinical practice. In addition, patients stand to benefit from patient education that can be more readily provided by generative AI applications. This review aims to catalyze the development and adoption of generative AI in transplantation by introducing basic AI and generative AI concepts to the transplant clinician and summarizing its current and potential applications within the field. We provide an overview of applications for the clinician, researcher, educator, and patient. We also highlight the challenges involved in bringing these applications to the bedside and the need for ongoing refinement of generative AI applications to sustainably augment the transplantation field.
Affiliation(s)
- Maya Deeb: Ajmera Transplant Program, University Health Network, Toronto, Ontario, Canada; Division of Gastroenterology and Hepatology, Department of Medicine, University of Toronto, Toronto, Ontario, Canada
- Anirudh Gangadhar: Ajmera Transplant Program, University Health Network, Toronto, Ontario, Canada
- Khyathi Rao: Ajmera Transplant Program, University Health Network, Toronto, Ontario, Canada
- Michael Brudno: DATA Team, University Health Network, Toronto, Ontario, Canada
- Aman Sidhu: Ajmera Transplant Program, University Health Network, Toronto, Ontario, Canada
- Bo Wang: DATA Team, University Health Network, Toronto, Ontario, Canada
- Mamatha Bhat: Ajmera Transplant Program, University Health Network, Toronto, Ontario, Canada; Division of Gastroenterology and Hepatology, Department of Medicine, University of Toronto, Toronto, Ontario, Canada
5. Perkins SW, Muste JC, Alam TA, Singh RP. Improving Clinical Documentation with Artificial Intelligence: A Systematic Review. Perspectives in Health Information Management 2024; 21:1g. PMID: 40134897; PMCID: PMC11605376.
Abstract
Clinicians dedicate significant time to clinical documentation, incurring opportunity cost. Artificial intelligence (AI) tools promise to improve documentation quality and efficiency. This systematic review surveys peer-reviewed AI tools to understand how AI may reduce that opportunity cost. The PubMed, Embase, Scopus, and Web of Science databases were queried for original, English-language research studies published during or before July 2024 that report the development, application, and validation of an AI tool for improving clinical documentation. Of 673 candidate studies, 129 were included. AI tools improve documentation by structuring data, annotating notes, evaluating quality, identifying trends, and detecting errors. Other AI-enabled tools assist clinicians in real time during office visits, but moderate accuracy precludes broad implementation. While a highly accurate end-to-end AI documentation assistant has not yet been reported in the peer-reviewed literature, existing techniques such as structuring data offer targeted improvements to clinical documentation workflows.
6. Perkins SW, Muste JC, Alam T, Singh RP. Improving Clinical Documentation with Artificial Intelligence: A Systematic Review. Perspectives in Health Information Management 2024; 21:1d. PMID: 40134899; PMCID: PMC11605373.
7. Nacht CL, Jacobson N, Shiyanbola O, Smith CA, Hoonakker PL, Coller RJ, Dean SM, Sklansky DJ, Smith W, Sprackling CM, Kelly MM. Perception of Physicians' Notes Among Parents of Different Health Literacy Levels. Hosp Pediatr 2024; 14:108-115. PMID: 38173406; PMCID: PMC10823185; DOI: 10.1542/hpeds.2023-007240.
Abstract
OBJECTIVES To explore the benefits and challenges of accessing physicians' notes during pediatric hospitalization across parents of different health literacy levels. METHODS For this secondary analysis, we used semi-structured interviews conducted with 28 parents on their impressions of having access to their child's care team notes on a bedside table. Three researchers used thematic analysis to develop a codebook, coded interview data, and identified themes. Parent interviews and respective themes were then dichotomized into proficient or limited health literacy groups and compared. RESULTS Nine themes were identified in this secondary analysis: 6 benefits and 3 challenges. All parents identified more benefits than challenges, including that the notes served as a recap of information and memory aid and increased autonomy, empowerment, and advocacy for their child. Both groups disliked receiving bad news in notes before face-to-face communication. Parents with proficient literacy reported that notes allowed them to check information accuracy, but that notes may not be as beneficial for parents with lower health literacy. Parents with limited literacy uniquely identified limited comprehension of medical terms but indicated that notes facilitated their understanding of their child's condition, increased their appreciation for their health care team, and decreased their anxiety, stress, and worry. CONCLUSIONS Parents with limited health literacy uniquely reported that notes improved their understanding of their child's care and decreased (rather than increased) worry. Reducing medical terminology may be one equitable way to increase note accessibility for parents across the health literacy spectrum.
Affiliation(s)
- Carrie L. Nacht: School of Public Health, San Diego State University, San Diego, California
- Nora Jacobson: Institute for Clinical and Translational Research and School of Nursing
- Peter L.T. Hoonakker: Wisconsin Institute for Health Systems Engineering, University of Wisconsin-Madison, Madison, Wisconsin
- Ryan J. Coller: Department of Pediatrics, University of Wisconsin School of Medicine and Public Health, Madison, Wisconsin
- Daniel J. Sklansky: Department of Pediatrics, University of Wisconsin School of Medicine and Public Health, Madison, Wisconsin
- Carley M. Sprackling: Department of Pediatrics, University of Wisconsin School of Medicine and Public Health, Madison, Wisconsin
- Michelle M. Kelly: Department of Pediatrics, University of Wisconsin School of Medicine and Public Health, Madison, Wisconsin
8. Tomlin HR, Wissing M, Tanikella S, Kaur P, Tabas L. Challenges and Opportunities for Professional Medical Publications Writers to Contribute to Plain Language Summaries (PLS) in an AI/ML Environment - A Consumer Health Informatics Systematic Review. AMIA Annu Symp Proc 2024; 2023:709-717. PMID: 38222388; PMCID: PMC10785924.
Abstract
Professional medical publications writers (PMWs) perform a wide range of biomedical writing activities that recently include translating biomedical publications into plain language summaries (PLS). The consumer health informatics (CHI) literature consistently describes the importance of incorporating health literacy principles in any natural language processing (NLP) application designed to communicate medical information to lay audiences, particularly patients. In this stepwise systematic review, we searched PubMed-indexed literature for CHI NLP-based applications that have the potential to assist PMWs in developing text-based PLS. Results showed that available applications are limited to patient portals and other technologies used to communicate medical text and reports from electronic health records. PMWs can apply the lessons learned from CHI NLP-based applications to supervise the development of tools specific to text simplification and summarization for PLS of biomedical publications.
Affiliation(s)
- Holly R Tomlin: Certara Synchrogenix, Wilmington, DE, USA; Consumer Health Informatics Lab (CHIL), Section of Biostatistics and Data Sciences, Yale School of Medicine, New Haven, CT; Weill Cornell Medicine, Department of Population Health Sciences, Division of Health Analytics, New York, NY
9. Lalor JP, Wu H, Mazor KM, Yu H. Evaluating the efficacy of NoteAid on EHR note comprehension among US Veterans through Amazon Mechanical Turk. Int J Med Inform 2023; 172:105006. PMID: 36780789; PMCID: PMC9992155; DOI: 10.1016/j.ijmedinf.2023.105006.
Abstract
OBJECTIVE Low health literacy is a concern among US Veterans. In this study, we evaluated NoteAid, a system that provides lay definitions for medical jargon terms in EHR notes to help Veterans comprehend those notes. We expected that low initial scores for Veterans would be improved by using NoteAid. MATERIALS AND METHODS We recruited Veterans from the Amazon Mechanical Turk crowd work platform (MTurk), along with non-Veterans from MTurk as a control group for comparison. We randomly split the recruited Veteran participants into control and intervention groups, and recruited non-Veteran participants into mutually exclusive control or intervention tasks on the MTurk platform. We showed participants de-identified EHR notes and asked them to answer comprehension questions related to the notes. Participants in the intervention group received EHR note content processed with NoteAid; NoteAid was not available to participants in the control group. RESULTS We recruited 94 Veterans and 181 non-Veterans. NoteAid led to a significant improvement for non-Veterans but not for Veterans. Comparing Veterans recruited via MTurk with non-Veterans recruited via MTurk, we found that without NoteAid, Veterans had significantly higher raw scores than non-Veterans; this difference was not significant with NoteAid. DISCUSSION That Veterans outperform a comparable population of non-Veterans is a surprising outcome. Without NoteAid, Veterans' scores on the test were already high, minimizing the ability of an intervention such as NoteAid to improve performance. The health literacy of Veterans has been an open question; we show here that Veterans score higher than a comparable non-Veteran population. CONCLUSION Veterans on MTurk do not see improved scores when using NoteAid, but they already score high on the test, significantly higher than non-Veterans. When evaluating NoteAid, population specifics need to be considered, as performance may vary across groups. Future work should investigate the effectiveness of NoteAid with local Veterans and develop a more difficult test to assess groups with higher health literacy.
Affiliation(s)
- John P Lalor: Department of Information Technology, Analytics, and Operations, Mendoza College of Business, University of Notre Dame, Notre Dame, IN, US
- Hao Wu: Department of Psychology and Human Development, Peabody College, Vanderbilt University, Nashville, TN, US
- Kathleen M Mazor: Department of Medicine, University of Massachusetts Medical School, Worcester, MA, US; Meyers Primary Care Institute, University of Massachusetts Medical School/Reliant Medical Group/Fallon Health, Worcester, MA, US
- Hong Yu: Department of Computer Science, University of Massachusetts Lowell, Lowell, MA, US; College of Information and Computer Sciences, University of Massachusetts Amherst, Amherst, MA, US; Center for Healthcare Organization and Implementation Research, Bedford Veterans Affairs Medical Center, Bedford, MA, US
10. Keloth VK, Zhou S, Lindemann L, Zheng L, Elhanan G, Einstein AJ, Geller J, Perl Y. Mining of EHR for interface terminology concepts for annotating EHRs of COVID patients. BMC Med Inform Decis Mak 2023; 23:40. PMID: 36829139; PMCID: PMC9951157; DOI: 10.1186/s12911-023-02136-0.
Abstract
BACKGROUND Two years into the COVID-19 pandemic, with more than five million deaths worldwide, the healthcare establishment continues to struggle with every new wave resulting from a new coronavirus variant. Research has demonstrated that there are variations in the symptoms, and even in the order of symptom presentation, in COVID-19 patients infected by different SARS-CoV-2 variants (e.g., Alpha and Omicron). Textual data in the form of admission notes and physician notes in Electronic Health Records (EHRs) is rich in information regarding symptoms and their order of presentation. Unstructured EHR data is often underutilized in research because the annotations needed to automatically extract useful information from the extensive volumes of available text are lacking. METHODS We present the design of a COVID Interface Terminology (CIT): not just a generic COVID-19 terminology, but one serving the specific purpose of enabling automatic annotation of the EHRs of COVID-19 patients. CIT was constructed by integrating existing COVID-related ontologies and mining additional fine-granularity concepts from clinical notes. The iterative mining approach used the techniques of 'anchoring' and 'concatenation' to identify potential fine-granularity concepts to add to the CIT. We also tested the generalizability of our approach on a hold-out dataset and compared the annotation coverage to the coverage obtained for the dataset used to build the CIT. RESULTS Our experiments demonstrate that this approach yields higher annotation coverage than existing ontologies such as SNOMED CT and the Coronavirus Infectious Disease Ontology (CIDO): the final version of CIT achieved about 20% more coverage than SNOMED CT and 50% more than CIDO. In the future, the concepts mined and added to CIT could be used as training data for machine learning models that mine even more concepts into CIT, further increasing annotation coverage. CONCLUSION We demonstrated the construction of a COVID interface terminology that can be used to automatically annotate the EHRs of COVID-19 patients. The techniques presented can identify frequently documented fine-granularity concepts that are missing from other ontologies, thereby increasing annotation coverage.
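The 'anchoring' plus 'concatenation' idea can be illustrated with a toy sketch: extend a known anchor term with an adjacent word and keep extensions that recur often enough to be candidate fine-granularity concepts. This is only a rough approximation of the general technique, not the CIT pipeline itself; the notes, anchor set, and threshold below are invented:

```python
from collections import Counter

def candidate_concepts(notes, anchors, min_count=2):
    """Toy 'anchoring' + 'concatenation' sketch: for each occurrence of
    an anchor term, record the two-word phrases formed with its left and
    right neighbors, then keep phrases seen at least min_count times."""
    counts = Counter()
    for note in notes:
        tokens = note.lower().split()
        for i, tok in enumerate(tokens):
            if tok in anchors:
                if i + 1 < len(tokens):
                    counts[f"{tok} {tokens[i + 1]}"] += 1
                if i > 0:
                    counts[f"{tokens[i - 1]} {tok}"] += 1
    return [phrase for phrase, c in counts.items() if c >= min_count]

notes = [
    "patient reports persistent dry cough and fever",
    "dry cough with intermittent fever noted",
    "worsening dry cough since admission",
]
print(candidate_concepts(notes, {"cough", "fever"}))  # → ['dry cough']
```

A real pipeline would add normalization, longer concatenations, and expert review before admitting a candidate into the terminology.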
Affiliation(s)
- Vipina K Keloth: School of Biomedical Informatics, University of Texas Health Science Center at Houston, Houston, TX, USA
- Shuxin Zhou: Department of Computer Science, New Jersey Institute of Technology, Newark, NJ, USA
- Luke Lindemann: School of Medicine and Health Sciences, The George Washington University, Washington, DC, USA
- Ling Zheng: Computer Science and Software Engineering Department, Monmouth University, West Long Branch, NJ, USA
- Gai Elhanan: Renown Institute for Health Innovation, Desert Research Institute, Reno, NV, USA
- Andrew J Einstein: Cardiology Division, Department of Medicine, Columbia University Irving Medical Center, New York, NY, USA; Department of Radiology, Columbia University Irving Medical Center, New York, NY, USA
- James Geller: Department of Computer Science, New Jersey Institute of Technology, Newark, NJ, USA
- Yehoshua Perl: Department of Computer Science, New Jersey Institute of Technology, Newark, NJ, USA
11. Li X, Zhang Y, Jin J, Sun F, Li N, Liang S. A model of integrating convolution and BiGRU dual-channel mechanism for Chinese medical text classifications. PLoS One 2023; 18:e0282824. PMID: 36928266; PMCID: PMC10019650; DOI: 10.1371/journal.pone.0282824.
Abstract
Many Chinese patients now seek treatment advice through social networking platforms, and Chinese medical text is information-rich, containing many medical nomenclatures and symptom descriptions. Building an intelligent model that automatically classifies the text submitted by patients and recommends the correct department is therefore important. To address insufficient feature extraction from Chinese medical text and low classification accuracy, this paper proposes a dual-channel Chinese medical text classification model. The model extracts features from Chinese medical text at different granularities, obtains effective feature information comprehensively and accurately, and recommends departments for patients according to the classification result. One channel focuses on medical nomenclatures, symptoms, and other words related to hospital departments, assigns them different weights, computes feature vectors with convolution kernels of different sizes, and produces a local text representation. The other channel uses a BiGRU network and an attention mechanism to produce a text representation that highlights the important information of the whole sentence, that is, a global text representation. Finally, the model combines the representation vectors of the two channels through a fully connected layer and classifies with a Softmax classifier. Experimental results show that the accuracy, recall, and F1-score of the model improve on the baseline models by 10.65%, 8.94%, and 11.62% on average, demonstrating better performance and robustness.
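The final fusion step described above (concatenating the two channel representations, applying one fully connected layer, and taking a Softmax) can be sketched in plain Python. This shows only the schematic of the classification head with made-up vectors and random weights; in the actual model the convolution and BiGRU channels are learned end to end:

```python
import math
import random

def softmax(z):
    """Numerically stable softmax over a list of logits."""
    m = max(z)
    exps = [math.exp(v - m) for v in z]
    s = sum(exps)
    return [e / s for e in exps]

def fuse_and_classify(local_vec, global_vec, weights, bias):
    """Concatenate the two channel representations, apply one fully
    connected layer, and return softmax class probabilities."""
    fused = local_vec + global_vec  # list concatenation = vector concat
    logits = [
        sum(w * x for w, x in zip(row, fused)) + b
        for row, b in zip(weights, bias)
    ]
    return softmax(logits)

random.seed(0)
n_classes, dim = 3, 8  # e.g., 3 departments; 8 = two 4-dim channels
weights = [[random.uniform(-1, 1) for _ in range(dim)] for _ in range(n_classes)]
bias = [0.0] * n_classes
local_vec = [0.2, -0.1, 0.5, 0.3]   # stand-in for the convolution channel
global_vec = [0.1, 0.4, -0.2, 0.0]  # stand-in for the BiGRU+attention channel
probs = fuse_and_classify(local_vec, global_vec, weights, bias)
print([round(p, 3) for p in probs])  # class probabilities summing to 1
```

The department recommended to the patient would be the index of the largest probability.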
Affiliation(s)
- Xiaoli Li
- School of Software, Henan University, Kaifeng, China
- Yuying Zhang
- School of Software, Henan University, Kaifeng, China
- Jiangyong Jin
- School of Software, Henan University, Kaifeng, China
- Fuqi Sun
- School of Software, Henan University, Kaifeng, China
- Na Li
- School of Digital Arts and Communication, Shandong University of Art & Design, Jinan, China
- Shengbin Liang
- School of Software, Henan University, Kaifeng, China
- Institute for Data Engineering and Science, University of Saint Joseph, Macao, China
12
Combi C, Amico B, Bellazzi R, Holzinger A, Moore JH, Zitnik M, Holmes JH. A manifesto on explainability for artificial intelligence in medicine. Artif Intell Med 2022; 133:102423. [PMID: 36328669] [DOI: 10.1016/j.artmed.2022.102423]
Abstract
The rapid increase of interest in, and use of, artificial intelligence (AI) in computer applications has raised a parallel concern about its ability (or lack thereof) to provide understandable, or explainable, output to users. This concern is especially legitimate in biomedical contexts, where patient safety is of paramount importance. This position paper brings together seven researchers working in the field with different roles and perspectives, to explore in depth the concept of explainable AI, or XAI, offering a functional definition and conceptual framework or model that can be used when considering XAI. This is followed by a series of desiderata for attaining explainability in AI, each of which touches upon a key domain in biomedicine.
Affiliation(s)
- Jason H Moore
- Cedars-Sinai Medical Center, West Hollywood, CA, USA
- Marinka Zitnik
- Harvard Medical School and Broad Institute of MIT & Harvard, MA, USA
- John H Holmes
- University of Pennsylvania Perelman School of Medicine, Philadelphia, PA, USA
13
Kujala S, Hörhammer I, Väyrynen A, Holmroos M, Nättiaho-Rönnholm M, Hägglund M, Johansen MA. Patients' Experiences of Web-Based Access to Electronic Health Records in Finland: Cross-sectional Survey. J Med Internet Res 2022; 24:e37438. [PMID: 35666563] [PMCID: PMC9210208] [DOI: 10.2196/37438]
Abstract
Background Patient portals that provide access to electronic health records offer a means for patients to better understand and self-manage their health. Yet, patient access to electronic health records raises many concerns among physicians, and little is known about the use practices and experiences of patients who access their electronic health records via a mature patient portal that has been available for citizens for over five years. Objective We aimed to identify patients’ experiences using a national patient portal to access their electronic health records. In particular, we focused on understanding usability-related perceptions and the benefits and challenges of reading clinical notes written by health care professionals. Methods Data were collected from 3135 patient users of the Finnish My Kanta patient portal through a web-based survey in June 2021 (response rate: 0.7%). Patients received an invitation to complete the questionnaire when they logged out of the patient portal. Respondents were asked to rate the usability of the patient portal, and the ratings were used to calculate approximations of the System Usability Scale score. Patients were also asked about the usefulness of features, and whether they had discussed the notes with health professionals. Open-ended questions were used to ask patients about their experiences of the benefits and challenges related to reading health professionals’ notes. Results Overall, patient evaluations of My Kanta were positive, and its usability was rated as good (System Usability Scale score approximation: mean 72.7, SD 15.9). Patients found the portal to be the most useful for managing prescriptions and viewing the results of examinations and medical notes. Viewing notes was the most frequent reason (978/3135, 31.2%) for visiting the portal. 
Benefits of reading the notes that patients mentioned included remembering and understanding what health professionals said and the instructions given during an appointment, the convenience of receiving information about their health and care, the ability to check the accuracy of the notes, and using the information to support self-management. Challenges included difficulty understanding medical terminology, incorrect or inadequate notes, missing notes, and usability problems. Conclusions Patients actively used medical notes to obtain information and follow professionals' instructions in caring for their health, and patient access to electronic health records can support self-management. For these benefits to be realized, however, improvements in the quality and availability of medical professionals' notes are necessary. Providing a standard information structure could help patients find the information they need. Linking notes to vocabularies and other information sources could improve the understandability of medical terminology; patient agency could be supported by allowing patients to add comments to their notes; and patients' trust in the system could be improved by allowing them to control the visibility of professionals' notes.
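The System Usability Scale approximations reported above derive from the standard SUS scoring rule (10 items rated 1-5; odd-numbered items contribute rating - 1, even-numbered items contribute 5 - rating, and the sum is scaled by 2.5 to a 0-100 range). A minimal sketch of the conventional scoring, which is an assumption here since the study used an approximation rather than the full instrument:

```python
def sus_score(responses):
    """Standard System Usability Scale scoring for 10 items rated 1-5.
    Odd-numbered items (positively worded) contribute (rating - 1);
    even-numbered items (negatively worded) contribute (5 - rating).
    The summed contributions are scaled by 2.5 to a 0-100 range."""
    if len(responses) != 10 or not all(1 <= r <= 5 for r in responses):
        raise ValueError("expected 10 ratings in the range 1-5")
    total = sum((r - 1) if i % 2 == 0 else (5 - r)   # i=0 is item 1 (odd)
                for i, r in enumerate(responses))
    return total * 2.5

# A respondent who strongly agrees with every positive item (5) and
# strongly disagrees with every negative item (1) scores the maximum:
print(sus_score([5, 1, 5, 1, 5, 1, 5, 1, 5, 1]))  # 100.0
```

Scores around 72, as reported for My Kanta, sit above the commonly cited "good usability" benchmark of 68.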
Affiliation(s)
- Sari Kujala
- Department of Computer Science, Aalto University, Espoo, Finland
- Iiris Hörhammer
- Department of Industrial Engineering and Management, Aalto University, Espoo, Finland
- Akseli Väyrynen
- Department of Computer Science, Aalto University, Espoo, Finland
- Mari Holmroos
- Kela, The Social Insurance Institution of Finland, Helsinki, Finland
- Maria Hägglund
- Healthcare Sciences and e-Health, Department of Women's and Children's Health, Uppsala University, Uppsala, Sweden
- Monika Alise Johansen
- Norwegian Centre for E-health Research, University Hospital of North Norway, Tromsø, Norway
14
Yilmaz Y, Jurado Nunez A, Ariaeinejad A, Lee M, Sherbino J, Chan TM. Harnessing Natural Language Processing to Support Decisions Around Workplace-Based Assessment: Machine Learning Study of Competency-Based Medical Education. JMIR Med Educ 2022; 8:e30537. [PMID: 35622398] [PMCID: PMC9187970] [DOI: 10.2196/30537]
Abstract
BACKGROUND Residents receive a numeric performance rating (eg, on a 1-7 scale) along with narrative (ie, qualitative) feedback based on their performance in each workplace-based assessment (WBA). Aggregated qualitative data from WBAs can be overwhelming to process and fairly adjudicate as part of a global decision about learner competence. Current approaches with qualitative data require a human rater to maintain attention and appropriately weigh various data inputs within the constraints of working memory before rendering a global judgment of performance. OBJECTIVE This study explores natural language processing (NLP) and machine learning (ML) applications for identifying trainees at risk using a large WBA narrative comment data set associated with numerical ratings. METHODS NLP was performed retrospectively on a complete data set of narrative comments (ie, text-based feedback to residents based on their performance on a task) derived from WBAs completed by faculty members from multiple hospitals associated with a single, large residency program at McMaster University, Canada. Narrative comments were vectorized to quantitative ratings using the bag-of-n-grams technique with 3 input types: unigrams, bigrams, and trigrams. Supervised ML models using linear regression were trained with the quantitative ratings, performed binary classification, and output a prediction of whether a resident fell into the at-risk or not-at-risk category. Sensitivity, specificity, and accuracy metrics are reported. RESULTS The database comprised 7199 unique direct observation assessments, containing both narrative comments and a rating between 3 and 7 in an imbalanced distribution (scores 3-5: 726 ratings; scores 6-7: 4871 ratings). A total of 141 unique raters from 5 different hospitals and 45 unique residents participated over the course of 5 academic years.
When comparing the 3 different input types for diagnosing if a trainee would be rated low (ie, 1-5) or high (ie, 6 or 7), our accuracy for trigrams was 87%, bigrams 86%, and unigrams 82%. We also found that all 3 input types had better prediction accuracy when using a bimodal cut (eg, lower or higher) compared with predicting performance along the full 7-point rating scale (50%-52%). CONCLUSIONS The ML models can accurately identify underperforming residents via narrative comments provided for WBAs. The words generated in WBAs can be a worthy data set to augment human decisions for educators tasked with processing large volumes of narrative assessments.
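The bag-of-n-grams featurization described above (unigram, bigram, and trigram input types feeding a binary at-risk classifier) can be sketched with the standard library; the example comment and the rating threshold below are illustrative, not drawn from the study's data:

```python
from collections import Counter

def ngrams(tokens, n):
    """Return the contiguous n-grams of a token sequence as tuples."""
    return [tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]

def bag_of_ngrams(text, max_n=3):
    """Count unigrams, bigrams, and trigrams, mirroring the three
    input types compared in the study."""
    tokens = text.lower().split()
    counts = Counter()
    for n in range(1, max_n + 1):
        counts.update(ngrams(tokens, n))
    return counts

def at_risk(rating, threshold=5):
    """Binary label for classification: ratings at or below the
    threshold mark a trainee as at risk (threshold is illustrative)."""
    return rating <= threshold

# Hypothetical narrative comment, vectorized into n-gram counts:
features = bag_of_ngrams("resident needs close supervision during resuscitation")
```

In the study these sparse counts were the inputs to supervised models; a classifier then mapped them to the at-risk/not-at-risk label sketched by `at_risk`.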
Affiliation(s)
- Yusuf Yilmaz
- McMaster Education Research, Innovation, and Theory Program, Faculty of Health Sciences, McMaster University, Hamilton, ON, Canada
- Department of Medical Education, Ege University, Izmir, Turkey
- Program for Faculty Development, Office of Continuing Professional Development, McMaster University, Hamilton, ON, Canada
- Department of Medicine, Faculty of Health Sciences, McMaster University, Hamilton, ON, Canada
- Alma Jurado Nunez
- Department of Medicine and Masters in eHealth Program, McMaster University, Hamilton, ON, Canada
- Ali Ariaeinejad
- Department of Medicine and Masters in eHealth Program, McMaster University, Hamilton, ON, Canada
- Mark Lee
- McMaster Education Research, Innovation, and Theory Program, Faculty of Health Sciences, McMaster University, Hamilton, ON, Canada
- Jonathan Sherbino
- McMaster Education Research, Innovation, and Theory Program, Faculty of Health Sciences, McMaster University, Hamilton, ON, Canada
- Division of Emergency Medicine, Department of Medicine, Faculty of Health Sciences, McMaster University, Hamilton, ON, Canada
- Division of Education and Innovation, Department of Medicine, Faculty of Health Sciences, McMaster University, Hamilton, ON, Canada
- Teresa M Chan
- McMaster Education Research, Innovation, and Theory Program, Faculty of Health Sciences, McMaster University, Hamilton, ON, Canada
- Program for Faculty Development, Office of Continuing Professional Development, McMaster University, Hamilton, ON, Canada
- Division of Emergency Medicine, Department of Medicine, Faculty of Health Sciences, McMaster University, Hamilton, ON, Canada
- Division of Education and Innovation, Department of Medicine, Faculty of Health Sciences, McMaster University, Hamilton, ON, Canada
15
van Mens HJ, Martens SS, Paiman EH, Mertens AC, Nienhuis R, de Keizer NF, Cornet R. Diagnosis clarification by generalization to patient-friendly terms and definitions: Validation study. J Biomed Inform 2022; 129:104071. [DOI: 10.1016/j.jbi.2022.104071]
16
Karim Jabali A, Waris A, Israr Khan D, Ahmed S, Hourani RJ. Electronic health records: Three decades of bibliometric research productivity analysis and some insights. Inform Med Unlocked 2022. [DOI: 10.1016/j.imu.2022.100872]
17
Lalor JP, Hu W, Tran M, Wu H, Mazor KM, Yu H. Evaluating the Effectiveness of NoteAid in a Community Hospital Setting: Randomized Trial of Electronic Health Record Note Comprehension Interventions With Patients. J Med Internet Res 2021; 23:e26354. [PMID: 33983124] [PMCID: PMC8160802] [DOI: 10.2196/26354]
Abstract
BACKGROUND Interventions to define medical jargon have been shown to improve electronic health record (EHR) note comprehension among crowdsourced participants on Amazon Mechanical Turk (AMT). However, AMT participants may not be representative of the general population or of the patients most at risk for low health literacy. OBJECTIVE In this work, we assessed the efficacy of an intervention (NoteAid) for EHR note comprehension among participants in a community hospital setting. METHODS Participants were recruited from Lowell General Hospital (LGH), a community hospital in Massachusetts, to take the ComprehENotes test, a web-based test of EHR note comprehension. Participants were randomly assigned to control (n=85) or intervention (n=89) groups to take the test without or with NoteAid, respectively. For comparison, we used a sample of 200 participants recruited from AMT to take the ComprehENotes test (100 in the control group and 100 in the intervention group). RESULTS A total of 174 participants were recruited from LGH, and 200 participants were recruited from AMT. Participants in both intervention groups (community hospital and AMT) scored significantly higher than participants in the control groups (P<.001). The average score for the community hospital participants was significantly lower than the average score for the AMT participants (P<.001), consistent with the lower education levels in the community hospital sample. Education level had a significant effect on scores for the community hospital participants (P<.001). CONCLUSIONS Use of NoteAid was associated with significantly improved EHR note comprehension in both community hospital and AMT samples. Our results demonstrate the generalizability of ComprehENotes as a test of EHR note comprehension and the effectiveness of NoteAid for improving EHR note comprehension.
Affiliation(s)
- John P Lalor
- Department of Information Technology, Analytics, and Operations, Mendoza College of Business, University of Notre Dame, Notre Dame, IN, United States
- Wen Hu
- Department of Computer Science, University of Massachusetts Lowell, Lowell, MA, United States
- Matthew Tran
- Department of Computer Science, University of Massachusetts Lowell, Lowell, MA, United States
- Hao Wu
- Department of Psychology and Human Development, Peabody College, Vanderbilt University, Nashville, TN, United States
- Kathleen M Mazor
- Meyers Primary Care Institute, University of Massachusetts Medical School/Reliant Medical Group/Fallon Health, Worcester, MA, United States
- Department of Medicine, University of Massachusetts Medical School, Worcester, MA, United States
- Hong Yu
- Department of Computer Science, University of Massachusetts Lowell, Lowell, MA, United States
- Department of Medicine, University of Massachusetts Medical School, Worcester, MA, United States
- College of Information and Computer Sciences, University of Massachusetts Amherst, Amherst, MA, United States
- Center for Healthcare Organization and Implementation Research, Bedford Veterans Affairs Medical Center, Bedford, MA, United States
18
Blok AC, Amante DJ, Hogan TP, Sadasivam RS, Shimada SL, Woods S, Nazi KM, Houston TK. Impact of Patient Access to Online VA Notes on Healthcare Utilization and Clinician Documentation: a Retrospective Cohort Study. J Gen Intern Med 2021; 36:592-599. [PMID: 33443693] [PMCID: PMC7947092] [DOI: 10.1007/s11606-020-06304-0]
Abstract
BACKGROUND In an effort to foster patient engagement, some healthcare systems provide their patients with open notes, enabling them to access their clinical notes online. In January 2013, the Veterans Health Administration (VA) implemented online access to clinical notes ("VA Notes") through the Blue Button feature of its patient portal. OBJECTIVE To measure the association of online patient access to clinical notes with changes in healthcare utilization and clinician documentation behaviors. DESIGN A retrospective cohort study. PATIENTS Patients accessing My HealtheVet (MHV), the VA's online patient portal, between July 2011 and January 2015. MAIN MEASURES Use of healthcare services (primary care clinic visits and online electronic secure messaging), and characteristics of physician clinical documentation (readability of notes). KEY RESULTS Among 882,575 unique portal users, those who accessed clinical notes (16.2%; N = 122,972) were younger, more racially homogeneous (white), and less likely to be financially vulnerable. Compared with non-users, Notes users more frequently used the secure messaging feature on the portal (mean of 2.6 messages (SD 7.0) v. 0.87 messages (SD 3.3) in January-July 2013), but their higher use of secure messaging began prior to VA Notes implementation and thus was not temporally related to it. When comparing clinic visit rates pre- and post-implementation, Notes users had a small but significant increase of 0.36 primary care clinic visits (2012 v. 2013) compared to portal users who did not view their notes (p = 0.01). At baseline, the mean reading ease of primary care clinical notes was 53.8 (SD 10.1) and did not improve after implementation of VA Notes. CONCLUSIONS VA Notes users differed from patients with portal access who did not view their notes online, and they had higher rates of healthcare service use both before and after VA Notes implementation. Opportunities exist to improve clinical note access and readability.
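The note "reading ease" reported above is a readability score; a common choice for clinical-note readability is the Flesch Reading Ease formula, sketched here with a crude vowel-group syllable heuristic. Both the heuristic and the assumption that this exact formula matches the study's metric are ours, not the paper's:

```python
def count_syllables(word):
    """Crude vowel-group heuristic for syllable counting (illustrative;
    real readability tools use dictionaries or better rules)."""
    word = word.lower().strip(".,;:!?")
    groups, prev_vowel = 0, False
    for ch in word:
        is_vowel = ch in "aeiouy"
        if is_vowel and not prev_vowel:
            groups += 1
        prev_vowel = is_vowel
    return max(groups, 1)

def flesch_reading_ease(text):
    """Flesch Reading Ease:
    206.835 - 1.015*(words/sentences) - 84.6*(syllables/words).
    Higher scores mean easier text; ~50 reads as fairly difficult."""
    sentences = max(text.count(".") + text.count("!") + text.count("?"), 1)
    words = text.split()
    syllables = sum(count_syllables(w) for w in words)
    return (206.835
            - 1.015 * (len(words) / sentences)
            - 84.6 * (syllables / len(words)))

simple = flesch_reading_ease("The cat sat on the mat.")
dense = flesch_reading_ease(
    "Comprehensive anticoagulation documentation facilitates "
    "interdisciplinary communication.")
```

Short, monosyllabic sentences score far higher than jargon-dense clinical prose, which is why a note corpus averaging 53.8 is considered hard reading for many patients.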
Affiliation(s)
- Amanda C Blok
- Veterans Affairs Center for Clinical Management Research, Veterans Affairs Ann Arbor Healthcare System, United States Department of Veterans Affairs, 2215 Fuller Road, Mail Stop 152, Ann Arbor, MI, USA
- Systems, Populations and Leadership Department, School of Nursing, University of Michigan, Ann Arbor, MI, USA
- Daniel J Amante
- Division of Health Informatics and Implementation Science, Department of Population and Quantitative Health Sciences, University of Massachusetts Medical School, Worcester, MA, USA
- Timothy P Hogan
- Veterans Affairs Center for Healthcare Organization and Implementation Research, Veterans Affairs Bedford Medical Center, United States Department of Veterans Affairs, Bedford, MA, USA
- Department of Population and Data Sciences, UT Southwestern Medical Center, Dallas, TX, USA
- Rajani S Sadasivam
- Division of Health Informatics and Implementation Science, Department of Population and Quantitative Health Sciences, University of Massachusetts Medical School, Worcester, MA, USA
- Stephanie L Shimada
- Division of Health Informatics and Implementation Science, Department of Population and Quantitative Health Sciences, University of Massachusetts Medical School, Worcester, MA, USA
- Veterans Affairs Center for Healthcare Organization and Implementation Research, Veterans Affairs Bedford Medical Center, United States Department of Veterans Affairs, Bedford, MA, USA
- Department of Health Law, Policy, and Management, Boston University School of Public Health, Boston, MA, USA
- Susan Woods
- Maine Behavioral Healthcare, South Portland, ME, USA
- Kim M Nazi
- KMN Consulting Services, Coxsackie, NY, USA
- Thomas K Houston
- Learning Health Systems, Department of Medicine, Wake Forest University, Winston-Salem, NC, USA
19
Moore N, Yoo S, Poronnik P, Brown M, Ahmadpour N. Exploring User Needs in the Development of a Virtual Reality-Based Advanced Life Support Training Platform: Exploratory Usability Study. JMIR Serious Games 2020; 8:e20797. [PMID: 32763877] [PMCID: PMC7442950] [DOI: 10.2196/20797]
Abstract
BACKGROUND Traditional methods of delivering Advanced Life Support (ALS) training and reaccreditation are resource-intensive and costly. Interactive simulations and gameplay using virtual reality (VR) technology can complement traditional training processes as a cost-effective, engaging, and flexible training tool. OBJECTIVE This exploratory study aimed to determine the specific user needs of clinicians engaging with a new interactive VR ALS simulation (ALS-SimVR) application, to inform the ongoing development of such training platforms. METHODS Semistructured interviews were conducted with experienced clinicians (n=10, median age=40.9 years) following a single playthrough of the application. All had been directly involved in delivering ALS training in both clinical and educational settings (median years of ALS experience=12.4; all had minimal or no VR experience). Interviews were supplemented with an assessment of usability (using heuristic evaluation) and presence. RESULTS The ALS-SimVR training app was well received. Thematic analysis of the interviews revealed five main areas of user needs that can inform future design efforts for creating engaging VR training apps: affordances, agency, diverse input modalities, mental models, and advanced roles. CONCLUSIONS This study was conducted to identify the needs of clinicians engaging with ALS-SimVR, but our findings revealed broader design considerations that will be crucial in guiding future work in this area. Although aligning training scenarios with accepted teaching algorithms is important, improving user experience and engagement requires careful attention to technology-specific issues such as input modalities.
Affiliation(s)
- Nathan Moore
- Research and Education Network, Western Sydney Local Health District, Westmead, Australia
- Soojeong Yoo
- Design Lab, Sydney School of Architecture, Design and Planning, The University of Sydney, Sydney, Australia
- Philip Poronnik
- School of Medical Sciences, Faculty of Medicine and Health, The University of Sydney, Sydney, Australia
- Martin Brown
- Innovative Technologies, Office of the Vice-Chancellor and Principal Westmead Operations, The University of Sydney, Sydney, Australia
- Naseem Ahmadpour
- Design Lab, Sydney School of Architecture, Design and Planning, The University of Sydney, Sydney, Australia
20
Bala S, Keniston A, Burden M. Patient Perception of Plain-Language Medical Notes Generated Using Artificial Intelligence Software: Pilot Mixed-Methods Study. JMIR Form Res 2020; 4:e16670. [PMID: 32442148] [PMCID: PMC7305564] [DOI: 10.2196/16670]
Abstract
Background Clinicians’ time with patients has become increasingly limited due to regulatory burden, documentation and billing, administrative responsibilities, and market forces. These factors limit clinicians’ time to deliver thorough explanations to patients. OpenNotes began as a research initiative exploring the ability of sharing medical notes with patients to help patients understand their health care. Providing patients access to their medical notes has been shown to have many benefits, including improved patient satisfaction and clinical outcomes. OpenNotes has since evolved into a national movement that helps clinicians share notes with patients. However, a significant barrier to the widespread adoption of OpenNotes has been clinicians’ concerns that OpenNotes may cost additional time to correct patient confusion over medical language. Recent advances in artificial intelligence (AI) technology may help resolve this concern by converting medical notes to plain language with minimal time required of clinicians. Objective This pilot study assesses patient comprehension and perceived benefits, concerns, and insights regarding an AI-simplified note through comprehension questions and guided interview. Methods Synthea, a synthetic patient generator, was used to generate a standardized medical-language patient note which was then simplified using AI software. A multiple-choice comprehension assessment questionnaire was drafted with physician input. Study participants were recruited from inpatients at the University of Colorado Hospital. Participants were randomly assigned to be tested for their comprehension of the standardized medical-language version or AI-generated plain-language version of the patient note. Following this, participants reviewed the opposite version of the note and participated in a guided interview. 
A Student t test was performed to assess differences in comprehension assessment scores between the plain-language and medical-language note groups. Multivariate modeling was performed to assess the impact of demographic variables on comprehension. Interview responses were thematically analyzed. Results Twenty patients agreed to participate. The mean number of comprehension assessment questions answered correctly was higher in the plain-language group than in the medical-language group; however, the analysis was underpowered to determine whether this difference was significant. Age, ethnicity, and health literacy were found to have a significant impact on comprehension scores by multivariate modeling. Thematic analysis of guided interviews highlighted patients' perceived benefits, concerns, and suggestions regarding such notes. Major themes of benefits were that simplified plain-language notes may (1) be more useable than unsimplified medical-language notes, (2) improve the patient-clinician relationship, and (3) empower patients through an enhanced understanding of their health care. Conclusions AI software may translate medical notes into plain-language notes that are perceived as beneficial by patients. Limitations included sample size, inpatient-only setting, and possible confounding factors. Larger studies are needed to assess comprehension. Insight from patient responses to guided interviews can guide the future study and development of this technology.
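The Student t test mentioned above compares mean comprehension scores between two independent groups; a minimal pooled-variance version, shown with toy scores rather than the study's data, looks like:

```python
import math

def student_t(sample_a, sample_b):
    """Two-sample Student t statistic with pooled variance
    (equal group variances assumed), comparing sample means."""
    na, nb = len(sample_a), len(sample_b)
    ma = sum(sample_a) / na
    mb = sum(sample_b) / nb
    ssa = sum((x - ma) ** 2 for x in sample_a)   # sum of squared deviations
    ssb = sum((x - mb) ** 2 for x in sample_b)
    pooled_var = (ssa + ssb) / (na + nb - 2)     # pooled variance estimate
    se = math.sqrt(pooled_var * (1 / na + 1 / nb))
    return (ma - mb) / se

# Toy comprehension scores: plain-language vs medical-language group
t = student_t([8, 9, 7, 8], [6, 7, 5, 6])
```

The statistic is then compared against the t distribution with na + nb - 2 degrees of freedom; with samples as small as this pilot's, even sizeable mean differences can fail to reach significance, which is the underpowering issue the abstract notes.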
Affiliation(s)
- Sandeep Bala
- College of Medicine, University of Central Florida, Orlando, FL, United States
- Angela Keniston
- Division of Hospital Medicine, University of Colorado School of Medicine, Aurora, CO, United States
- Marisha Burden
- Division of Hospital Medicine, University of Colorado School of Medicine, Aurora, CO, United States
21
Esmaeilzadeh P, Mirzaei T, Maddah M. The effects of data entry structure on patients' perceptions of information quality in Health Information Exchange (HIE). Int J Med Inform 2020; 135:104058. [PMID: 31884311] [DOI: 10.1016/j.ijmedinf.2019.104058]
22
Dias R, Torkamani A. Artificial intelligence in clinical and genomic diagnostics. Genome Med 2019; 11:70. [PMID: 31744524] [PMCID: PMC6865045] [DOI: 10.1186/s13073-019-0689-8]
Abstract
Artificial intelligence (AI) is the development of computer systems that are able to perform tasks that normally require human intelligence. Advances in AI software and hardware, especially deep learning algorithms and the graphics processing units (GPUs) that power their training, have led to a recent and rapidly increasing interest in medical AI applications. In clinical diagnostics, AI-based computer vision approaches are poised to revolutionize image-based diagnostics, while other AI subtypes have begun to show similar promise in various diagnostic modalities. In some areas, such as clinical genomics, a specific type of AI algorithm known as deep learning is used to process large and complex genomic datasets. In this review, we first summarize the main classes of problems that AI systems are well suited to solve and describe the clinical diagnostic tasks that benefit from these solutions. Next, we focus on emerging methods for specific tasks in clinical genomics, including variant calling, genome annotation and variant classification, and phenotype-to-genotype correspondence. Finally, we end with a discussion on the future potential of AI in individualized medicine applications, especially for risk prediction in common complex diseases, and the challenges, limitations, and biases that must be carefully addressed for the successful deployment of AI in medical applications, particularly those utilizing human genetics and genomics data.
Affiliation(s)
- Raquel Dias
- The Scripps Translational Science Institute, The Scripps Research Institute, 3344 North Torrey Pines Court Suite 300, La Jolla, CA, 92037, USA
- Department of Integrative Structural and Computational Biology, The Scripps Research Institute, 3344 North Torrey Pines Court Suite 300, La Jolla, CA, 92037, USA
- Ali Torkamani
- The Scripps Translational Science Institute, The Scripps Research Institute, 3344 North Torrey Pines Court Suite 300, La Jolla, CA, 92037, USA
- Department of Integrative Structural and Computational Biology, The Scripps Research Institute, 3344 North Torrey Pines Court Suite 300, La Jolla, CA, 92037, USA
23
Li F, Jin Y, Liu W, Rawat BPS, Cai P, Yu H. Fine-Tuning Bidirectional Encoder Representations From Transformers (BERT)-Based Models on Large-Scale Electronic Health Record Notes: An Empirical Study. JMIR Med Inform 2019; 7:e14830. [PMID: 31516126] [PMCID: PMC6746103] [DOI: 10.2196/14830]
Abstract
BACKGROUND The bidirectional encoder representations from transformers (BERT) model has achieved great success in many natural language processing (NLP) tasks, such as named entity recognition and question answering. However, little prior work has explored the use of this model for an important task in the biomedical and clinical domains, namely entity normalization. OBJECTIVE We aim to investigate the effectiveness of BERT-based models for biomedical or clinical entity normalization. Our second objective is to investigate whether the domain of the training data influences the performance of BERT-based models, and to what degree. METHODS Our data comprised 1.5 million unlabeled electronic health record (EHR) notes. We first fine-tuned BioBERT on this large collection of unlabeled EHR notes, generating our BERT-based model trained on 1.5 million EHR notes (EhrBERT). We then further fine-tuned EhrBERT, BioBERT, and BERT on three annotated corpora for biomedical and clinical entity normalization: the Medication, Indication, and Adverse Drug Events (MADE) 1.0 corpus, the National Center for Biotechnology Information (NCBI) disease corpus, and the Chemical-Disease Relations (CDR) corpus. We compared our models with two state-of-the-art normalization systems, namely MetaMap and disease name normalization (DNorm). RESULTS EhrBERT achieved 40.95% F1 in the MADE 1.0 corpus for mapping named entities to the Medical Dictionary for Regulatory Activities and the Systematized Nomenclature of Medicine-Clinical Terms (SNOMED-CT), which together have about 380,000 terms. In this corpus, EhrBERT outperformed MetaMap by 2.36% in F1. For the NCBI disease corpus and the CDR corpus, EhrBERT also outperformed DNorm, improving the F1 scores from 88.37% and 89.92% to 90.35% and 93.82%, respectively. Compared with BioBERT and BERT, EhrBERT outperformed both on the MADE 1.0 corpus and the CDR corpus.
CONCLUSIONS Our work shows that BERT-based models have achieved state-of-the-art performance for biomedical and clinical entity normalization, and that they can be readily fine-tuned to normalize any kind of named entity.
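The fine-tuned BERT models themselves are beyond a short sketch, but the normalization task this abstract describes — mapping a free-text entity mention to a term in a controlled vocabulary — can be illustrated with a simple string-similarity baseline. The vocabulary entries and concept codes below are invented placeholders, not real SNOMED-CT or MedDRA content, and this baseline is not the paper's BERT-based method:

```python
import difflib

# Toy controlled vocabulary standing in for SNOMED-CT / MedDRA
# (invented terms and codes, for illustration only).
VOCAB = {
    "myocardial infarction": "C0027051",
    "type 2 diabetes mellitus": "C0011860",
    "hypertensive disorder": "C0020538",
}

def normalize(mention: str, cutoff: float = 0.4):
    """Map a free-text mention to the closest vocabulary term, or None."""
    matches = difflib.get_close_matches(
        mention.lower(), VOCAB.keys(), n=1, cutoff=cutoff
    )
    if not matches:
        return None
    term = matches[0]
    return term, VOCAB[term]

print(normalize("myocardial infarct"))  # matches "myocardial infarction"
```

A BERT-based normalizer replaces the character-level similarity with a learned ranking or classification over candidate terms, which is what lets it handle synonyms with little surface overlap (e.g., "heart attack" vs. "myocardial infarction").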
Affiliation(s)
- Fei Li
- Department of Computer Science, University of Massachusetts Lowell, Lowell, MA, United States
- Center for Healthcare Organization and Implementation Research, Bedford Veterans Affairs Medical Center, Bedford, MA, United States
- Department of Medicine, University of Massachusetts Medical School, Worcester, MA, United States
- Yonghao Jin
- Department of Computer Science, University of Massachusetts Lowell, Lowell, MA, United States
- Weisong Liu
- Department of Computer Science, University of Massachusetts Lowell, Lowell, MA, United States
- Center for Healthcare Organization and Implementation Research, Bedford Veterans Affairs Medical Center, Bedford, MA, United States
- Department of Medicine, University of Massachusetts Medical School, Worcester, MA, United States
- Pengshan Cai
- School of Computer Science, University of Massachusetts, Amherst, MA, United States
- Hong Yu
- Department of Computer Science, University of Massachusetts Lowell, Lowell, MA, United States
- Center for Healthcare Organization and Implementation Research, Bedford Veterans Affairs Medical Center, Bedford, MA, United States
- Department of Medicine, University of Massachusetts Medical School, Worcester, MA, United States
- School of Computer Science, University of Massachusetts, Amherst, MA, United States
|
24
|
Grabar N, Grouin C, Section Editors for the IMIA Yearbook Section on Natural Language Processing. A Year of Papers Using Biomedical Texts: Findings from the Section on Natural Language Processing of the IMIA Yearbook. Yearb Med Inform 2019; 28:218-222. [PMID: 31419835 PMCID: PMC6697498 DOI: 10.1055/s-0039-1677937] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 01/09/2023] Open
Abstract
OBJECTIVES To analyze the content of publications within the medical Natural Language Processing (NLP) domain in 2018. METHODS Automatic and manual pre-selection of publications to be reviewed, selection of the best NLP papers of the year, and analysis of the important issues. RESULTS Two best papers were selected this year: one dedicated to the generation of multi-document summaries and the other to the generation of imaging reports. We also analyzed the content and main research trends of NLP publications in 2018. CONCLUSIONS The year 2018 was very rich in the NLP issues and topics addressed. It shows the drive of researchers toward robust and reproducible results, as well as their creativity in tackling original issues and approaches.
Affiliation(s)
- Natalia Grabar
- LIMSI, CNRS, Université Paris-Saclay, Orsay, France
- STL, CNRS, Université de Lille, Villeneuve-d'Ascq, France
- Cyril Grouin
- LIMSI, CNRS, Université Paris-Saclay, Orsay, France
|
25
|
Wang Y, Sohn S, Liu S, Shen F, Wang L, Atkinson EJ, Amin S, Liu H. A clinical text classification paradigm using weak supervision and deep representation. BMC Med Inform Decis Mak 2019; 19:1. [PMID: 30616584 PMCID: PMC6322223 DOI: 10.1186/s12911-018-0723-6] [Citation(s) in RCA: 171] [Impact Index Per Article: 28.5] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/23/2018] [Accepted: 12/10/2018] [Indexed: 01/02/2023] Open
Abstract
BACKGROUND Automatic clinical text classification is a natural language processing (NLP) technology that unlocks information embedded in clinical narratives. Machine learning approaches have been shown to be effective for clinical text classification tasks. However, a successful machine learning model usually requires extensive human effort to create labeled training data and conduct feature engineering. In this study, we propose a clinical text classification paradigm that uses weak supervision and deep representation to reduce these human efforts. METHODS We develop a rule-based NLP algorithm to automatically generate labels for the training data, and then use pre-trained word embeddings as deep representation features for training machine learning models. Because the models are trained on labels generated by the automatic NLP algorithm, this training process is called weak supervision. We evaluated the paradigm's effectiveness on two institutional case studies at Mayo Clinic: smoking status classification and proximal femur (hip) fracture classification, and one case study using a public dataset: the i2b2 2006 smoking status classification shared task. We tested four widely used machine learning models, namely, Support Vector Machine (SVM), Random Forest (RF), Multilayer Perceptron Neural Networks (MLPNN), and Convolutional Neural Networks (CNN), using this paradigm. Precision, recall, and F1 score were used as metrics to evaluate performance. RESULTS CNN achieved the best performance in both institutional tasks (F1 score: 0.92 for Mayo Clinic smoking status classification and 0.97 for fracture classification). We show that word embeddings significantly outperform tf-idf and topic modeling features in this paradigm, and that CNN captures additional patterns from the weak supervision compared with the rule-based NLP algorithms. 
We also observe two drawbacks of the proposed paradigm: CNN is more sensitive to the size of the training data, and the paradigm might not be effective for complex multiclass classification tasks. CONCLUSION The proposed clinical text classification paradigm could reduce the human effort of labeled-training-data creation and feature engineering for applying machine learning to clinical text classification by leveraging weak supervision and deep representation. Experiments on two institutional tasks and one shared clinical text classification task validated the effectiveness of the paradigm.
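The weak-supervision step this abstract describes can be sketched minimally: a rule-based labeler assigns noisy labels to unlabeled notes, and those labels then serve as training targets for a downstream model. The rules and example notes below are invented stand-ins for the paper's institution-specific NLP algorithm:

```python
import re

# Rule-based weak labeler for smoking status (invented rules and notes,
# illustrating the idea only). Rules are checked in order; the first
# match wins, and unmatched notes fall back to UNKNOWN.
RULES = [
    (r"\b(never smoked|non-?smoker)\b", "NON_SMOKER"),
    (r"\b(quit smoking|former smoker|ex-?smoker)\b", "FORMER_SMOKER"),
    (r"\b(smokes|current smoker|\d+\s*pack[- ]?years?)\b", "CURRENT_SMOKER"),
]

def weak_label(note: str) -> str:
    """Assign a noisy (weak) label to a clinical note."""
    text = note.lower()
    for pattern, label in RULES:
        if re.search(pattern, text):
            return label
    return "UNKNOWN"

notes = [
    "Patient is a former smoker, quit smoking in 2010.",
    "Denies tobacco use; never smoked.",
    "20 pack-year history, current smoker.",
]
print([weak_label(n) for n in notes])
```

In the paper's pipeline, these weak labels (rather than human annotations) are paired with word-embedding features to train the SVM, RF, MLPNN, and CNN classifiers, which is how the labeling effort is removed from the human loop.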
Affiliation(s)
- Yanshan Wang
- Division of Biomedical Statistics and Informatics, Department of Health Sciences Research, Mayo Clinic, 200 1st ST SW, Rochester, MN 55905 USA
- Sunghwan Sohn
- Division of Biomedical Statistics and Informatics, Department of Health Sciences Research, Mayo Clinic, 200 1st ST SW, Rochester, MN 55905 USA
- Sijia Liu
- Division of Biomedical Statistics and Informatics, Department of Health Sciences Research, Mayo Clinic, 200 1st ST SW, Rochester, MN 55905 USA
- Feichen Shen
- Division of Biomedical Statistics and Informatics, Department of Health Sciences Research, Mayo Clinic, 200 1st ST SW, Rochester, MN 55905 USA
- Liwei Wang
- Division of Biomedical Statistics and Informatics, Department of Health Sciences Research, Mayo Clinic, 200 1st ST SW, Rochester, MN 55905 USA
- Elizabeth J. Atkinson
- Division of Biomedical Statistics and Informatics, Department of Health Sciences Research, Mayo Clinic, 200 1st ST SW, Rochester, MN 55905 USA
- Shreyasee Amin
- Division of Rheumatology, Department of Medicine, Mayo Clinic, 200 1st ST SW, Rochester, MN 55905 USA
- Division of Epidemiology, Department of Health Sciences Research, Mayo Clinic, 200 1st ST SW, Rochester, MN 55905 USA
- Hongfang Liu
- Division of Biomedical Statistics and Informatics, Department of Health Sciences Research, Mayo Clinic, 200 1st ST SW, Rochester, MN 55905 USA
|
26
|
Metting E, Schrage AJ, Kocks JW, Sanderman R, van der Molen T. Assessing the Needs and Perspectives of Patients With Asthma and Chronic Obstructive Pulmonary Disease on Patient Web Portals: Focus Group Study. JMIR Form Res 2018; 2:e22. [PMID: 30684436 PMCID: PMC6334706 DOI: 10.2196/formative.8822] [Citation(s) in RCA: 7] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/24/2017] [Revised: 04/27/2018] [Accepted: 06/18/2018] [Indexed: 11/13/2022] Open
Abstract
BACKGROUND As accessibility to the internet has increased in society, many health care organizations have developed patient Web portals (PWPs), which can provide a range of self-management options to improve patient access. However, the available evidence suggests that they are used inefficiently and do not benefit patients with low health literacy. Asthma and chronic obstructive pulmonary disease (COPD) are common chronic diseases that require ongoing self-management. Moreover, patients with COPD are typically older and have lower health literacy. OBJECTIVE This study aimed to obtain and present an overview of patients' perspectives on PWPs to facilitate the development of a portal that better meets the needs of patients with asthma and COPD. METHODS We performed a focus group study using semistructured interviews in 3 patient groups from the north of the Netherlands who were recruited through the Dutch Lung Foundation. Each group met 3 times for 2 hours each at a 1-week interval. Data were analyzed with coding software, and patient descriptors were analyzed with nonparametric tests. The consolidated criteria for reporting qualitative research were followed when conducting the study. RESULTS We included 29 patients (16/29, 55% male; mean age 65 [SD 10] years) with COPD (n=14), asthma-COPD overlap (n=4), asthma (n=10), or other respiratory disease (n=1). There was a large variation in internet experience; some patients hardly used the internet (4/29, 14%), whereas others used the internet >3 times a week (23/29, 79%). In general, patients were positive about having access to a PWP, considering access to personal medical records the most important option, though only after discussion with their physician. A medication overview was considered a useful option. We found that communication between health care professionals could be improved if patients could use the PWP to share information with them. 
However, as participants were worried about the language and usability of portals, it was recommended that the language be adapted to the patients' level. Another concern was that disease monitoring through Web-based questionnaires would only be useful if the results were discussed with health care professionals. CONCLUSIONS Participants were positive about PWPs and considered them a logical step. Today, most patients tend to be better educated and have internet access, while also being more assertive and better informed about their disease. A PWP could support these patients. Our participants also provided practical suggestions for implementation in current and future PWP developments. The next step will be to develop a portal based on these recommendations and assess whether it meets the needs of patients and health care providers.
Affiliation(s)
- Esther Metting
- Groningen Research Institute for Asthma and COPD, Department of General Practice and Elderly Care Medicine, University Medical Center Groningen, University of Groningen, Groningen, Netherlands
- Aaltje Jantine Schrage
- Groningen Research Institute for Asthma and COPD, Department of General Practice and Elderly Care Medicine, University Medical Center Groningen, University of Groningen, Groningen, Netherlands
- Janwillem Wh Kocks
- Groningen Research Institute for Asthma and COPD, Department of General Practice and Elderly Care Medicine, University Medical Center Groningen, University of Groningen, Groningen, Netherlands
- Robbert Sanderman
- GZW-Health Psychology-GZW-General, University Medical Center Groningen, University of Groningen, Groningen, Netherlands
- Department of Psychology, Health & Technology, Faculty of Behavioural, Management and Social Sciences, University of Twente, Enschede, Netherlands
- Thys van der Molen
- Groningen Research Institute for Asthma and COPD, Department of General Practice and Elderly Care Medicine, University Medical Center Groningen, University of Groningen, Groningen, Netherlands
|