1
Dinari F, Bahaadinbeigy K, Bassiri S, Mashouf E, Bastaminejad S, Moulaei K. Benefits, barriers, and facilitators of using speech recognition technology in nursing documentation and reporting: A cross-sectional study. Health Sci Rep 2023; 6:e1330. [PMID: 37313530] [PMCID: PMC10259462] [DOI: 10.1002/hsr2.1330]
Abstract
Background and Aim Nursing reports are necessary for clinical communication and provide an accurate reflection of nursing assessments, care provided, changes in clinical status, and other patient-related information, supporting the multidisciplinary team in providing individualized care. Nurses consistently face challenges in recording and documenting nursing reports. Speech recognition systems (SRS), as one of the documentation technologies, can play a potential role in recording medical reports. Therefore, this study seeks to identify the barriers, benefits, and facilitators of utilizing speech recognition technology in nursing reports. Materials and Methods This cross-sectional study was conducted using a researcher-made questionnaire in 2022. Invitations were sent to 200 ICU nurses working in three educational hospitals (Imam Reza (AS), Qaem, and Imam Zaman) in Mashhad, Iran, 125 of whom accepted. Ultimately, 73 nurses were included in the study based on the inclusion and exclusion criteria. Data analysis was performed using SPSS 22.0. Results According to the nurses, "paperwork reduction" (3.96 ± 1.96), "performance improvement" (3.96 ± 0.93), and "cost reduction" (3.95 ± 1.07) were the most commonly cited benefits of using the SRS. "Lack of specialized, technical, and experienced staff to teach nurses how to work with speech recognition systems" (3.59 ± 1.18), "insufficient training of nurses" (3.59 ± 1.11), and "need to edit, control the quality of, and correct documents" (3.59 ± 1.03) were the most common barriers to using SRS. "Ability to fully review documentation processes" (3.62 ± 1.13), "creation of integrated data in record documentation" (3.58 ± 1.15), and "possibility of error correction for nurses" (3.51 ± 1.16) were the most common facilitators. There was no significant relationship between nurses' demographic characteristics and the reported benefits, barriers, and facilitators.
Conclusions By providing information on the benefits, barriers, and facilitators of using this technology, hospital managers, nursing managers, and information technology managers of healthcare centers can make more informed decisions in selecting and implementing SRS for nursing report documentation. This will help to avoid potential challenges that may reduce the efficiency, effectiveness, and productivity of the systems.
Affiliation(s)
- Fatemeh Dinari
- Medical Informatics Research Center, Institute for Futures Studies in Health, Kerman University of Medical Sciences, Kerman, Iran
- Kambiz Bahaadinbeigy
- Medical Informatics Research Center, Institute for Futures Studies in Health, Kerman University of Medical Sciences, Kerman, Iran
- Somayyeh Bassiri
- Artificial Intelligence Branch, Islamic Azad University Mashhad, Mashhad, Iran
- Esmat Mashouf
- Department of Health Information Technology, Varastegan Institute for Medical Sciences, Mashhad, Iran
- Saiyad Bastaminejad
- Department of Genetics, Faculty of Paramedicine, Ilam University of Medical Sciences, Ilam, Iran
- Khadijeh Moulaei
- Department of Health Information Technology, Faculty of Paramedicine, Ilam University of Medical Sciences, Ilam, Iran
2
Schuurman AR, Baarsma ME, Wiersinga WJ, Hovius JW. Digital disparities among healthcare workers in typing speed between generations, genders, and medical specialties: cross sectional study. BMJ 2022; 379:e072784. [PMID: 36535672] [PMCID: PMC9762353] [DOI: 10.1136/bmj-2022-072784]
Abstract
OBJECTIVE To investigate the typing skills of healthcare professionals. DESIGN Cross sectional study. SETTING Two large tertiary medical centres in Amsterdam, the Netherlands. PARTICIPANTS 2690 hospital employees working in patient care, research, or medical education. MAIN OUTCOME MEASURES Participants completed a custom built, web based, Santa themed typing test in 60 seconds and filled out an associated questionnaire. The primary outcome was corrected typing speed, defined as crude typing speed (words per minute) multiplied by accuracy (correct characters as a percentage of total characters in the final transcribed text). Feelings towards administrative tasks, scored on the Visual Analogue Scale to Weigh Respondents' Internalised Typing Enjoyment (VAS-WRITE), in which 0 represents the most negative and 100 the most positive feelings towards administration, were also recorded. RESULTS Between 18 and 21 May 2021, a representative cohort of 2690 study participants was recruited (1942 (72.2%) were younger than 40 years; 2065 (76.8%) were women). Respondents' mean typing speed was 60.1 corrected words per minute (standard deviation 20.8; range 8.0-136.6), with substantial differences between professions and specialties; physicians in internal medicine were the fastest among the medical professionals. Typing speed decreased significantly with every age decade (rho -0.51, P<0.001), and people who had completed a typing course were more than 20% faster than those who had not (mean difference 12.1 words (standard error 0.8), 95% confidence interval 10.6 to 13.6, P<0.001). Corrected typing speed did not differ between genders (0.5 (0.9) words, -1.4 to 2.4, P=0.61). Women were less negative towards administration than were men (mean difference in VAS-WRITE score 7.68 (standard error 1.17), 95% confidence interval 5.33 to 10.03, P<0.001). Of all professional groups, medical staff reported the most negative feelings towards administration (mean VAS-WRITE score 33.5 (standard deviation 22.9)). CONCLUSIONS Important differences in typing proficiency were found between age groups, professions, and medical specialties. Specific groups are at a disadvantage in an increasingly digitalised healthcare system, and these data could inform the implementation of training modules and alternative methods of data entry to level the playing field.
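The study's primary outcome, corrected typing speed (crude words per minute multiplied by character-level accuracy), is straightforward to compute. A minimal sketch, with function and variable names of our own choosing rather than anything from the study's materials:

```python
def corrected_typing_speed(words_typed: int, correct_chars: int,
                           total_chars: int, seconds: float = 60.0) -> float:
    """Crude typing speed (words per minute) scaled by accuracy,
    i.e. correct characters as a fraction of total characters typed."""
    crude_wpm = words_typed / (seconds / 60.0)
    accuracy = correct_chars / total_chars
    return crude_wpm * accuracy

# A hypothetical participant typing 65 words in the 60-second test,
# with 285 of 300 transcribed characters correct:
print(round(corrected_typing_speed(65, 285, 300), 2))  # 61.75
```

Note that this definition penalizes fast-but-sloppy typists: a 10% character error rate costs exactly 10% of the crude speed.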
Affiliation(s)
- Alex R Schuurman
- Amsterdam UMC, University of Amsterdam, Centre for Experimental and Molecular Medicine, Amsterdam Institute for Infection and Immunology, Amsterdam, Netherlands
- Amsterdam UMC, University of Amsterdam, Division of Infectious Diseases, Department of Internal Medicine, Amsterdam, Netherlands
- M E Baarsma
- Amsterdam UMC, University of Amsterdam, Centre for Experimental and Molecular Medicine, Amsterdam Institute for Infection and Immunology, Amsterdam, Netherlands
- Amsterdam UMC, University of Amsterdam, Division of Infectious Diseases, Department of Internal Medicine, Amsterdam, Netherlands
- W Joost Wiersinga
- Amsterdam UMC, University of Amsterdam, Centre for Experimental and Molecular Medicine, Amsterdam Institute for Infection and Immunology, Amsterdam, Netherlands
- Amsterdam UMC, University of Amsterdam, Division of Infectious Diseases, Department of Internal Medicine, Amsterdam, Netherlands
- Joppe W Hovius
- Amsterdam UMC, University of Amsterdam, Centre for Experimental and Molecular Medicine, Amsterdam Institute for Infection and Immunology, Amsterdam, Netherlands
- Amsterdam UMC, University of Amsterdam, Division of Infectious Diseases, Department of Internal Medicine, Amsterdam, Netherlands
3
Peivandi S, Ahmadian L, Farokhzadian J, Jahani Y. Evaluation and comparison of errors on nursing notes created by online and offline speech recognition technology and handwritten: an interventional study. BMC Med Inform Decis Mak 2022; 22:96. [PMID: 35395798] [PMCID: PMC8994328] [DOI: 10.1186/s12911-022-01835-4]
Abstract
BACKGROUND Despite the rapid expansion of electronic health records, reliance on the computer mouse and keyboard makes data entry into these systems challenging. Speech recognition software is one substitute for the mouse and keyboard. The objective of this study was to evaluate the effect of online and offline speech recognition software on spelling errors in nursing reports and to compare these with errors in handwritten reports. METHODS Online and offline speech recognition software packages were selected and customized based on the terms they failed to recognize. Two groups of 35 nurses created the admission notes of hospitalized patients upon their arrival using three data entry methods (the handwritten method or one of two types of speech recognition software). After at least a month, they created the same reports using the other methods. The number of spelling errors in each method was determined, and errors were compared between the paper method and the two electronic methods before and after error correction. RESULTS The lowest accuracy, 96.4%, was that of the online software. On average, the online method generated 6.76 and the offline method 4.56 more errors per report than the paper method. After the participants corrected the errors, the number of errors in the online reports decreased by 94.75% and in the offline reports by 97.20%. The highest number of reports with errors was among those created by the online software. CONCLUSION Although both software packages had relatively high accuracy, they created more errors than the paper method; this could be reduced by optimizing and upgrading the software. The results showed that error correction by users significantly reduced the documentation errors caused by the software.
Affiliation(s)
- Sahar Peivandi
- Department of Health Information Sciences, Faculty of Management and Medical Information Sciences, Kerman University of Medical Sciences, Kerman, Iran
- Leila Ahmadian
- Department of Health Information Sciences, Faculty of Management and Medical Information Sciences, Kerman University of Medical Sciences, Kerman, Iran
- Yunes Jahani
- Modeling in Health Research Center, Institute for Futures Studies in Health, Kerman University of Medical Sciences, Kerman, Iran
4
Sorantin E, Grasser MG, Hemmelmayr A, Tschauner S, Hrzic F, Weiss V, Lacekova J, Holzinger A. The augmented radiologist: artificial intelligence in the practice of radiology. Pediatr Radiol 2022; 52:2074-2086. [PMID: 34664088] [PMCID: PMC9537212] [DOI: 10.1007/s00247-021-05177-7]
Abstract
In medicine, and particularly in radiology, there are great expectations of artificial intelligence (AI), which can "see" more than human radiologists with regard to, for example, tumor size, shape, morphology, texture, and kinetics, thus enabling better care through earlier detection or more precise reports. AI can also handle large data sets in high-dimensional spaces. It should not be forgotten, however, that AI is only as good as the training samples available, which should ideally be numerous enough to cover all variants. On the other hand, the main features of human intelligence are content knowledge and the ability to find near-optimal solutions. The purpose of this paper is to review the current complexity of radiology workplaces and to describe their advantages and shortcomings. Further, we give an overview of the different types and features of AI as used so far. We also touch on the differences between AI and human intelligence in problem solving. We present a newer approach, labeled "explainable AI," which should enable a balance and cooperation between AI and human intelligence, thus bringing both worlds into compliance with legal requirements. To support (pediatric) radiologists, we propose the creation of an AI assistant that augments radiologists and keeps their minds free for generic tasks.
Affiliation(s)
- Erich Sorantin
- Division of Pediatric Radiology, Department of Radiology, Medical University Graz, Auenbruggerplatz 36, A-8036, Graz, Austria
- Michael G Grasser
- Division of Pediatric Radiology, Department of Radiology, Medical University Graz, Auenbruggerplatz 36, A-8036, Graz, Austria
- Ariane Hemmelmayr
- Division of Pediatric Radiology, Department of Radiology, Medical University Graz, Auenbruggerplatz 36, A-8036, Graz, Austria
- Sebastian Tschauner
- Division of Pediatric Radiology, Department of Radiology, Medical University Graz, Auenbruggerplatz 36, A-8036, Graz, Austria
- Franko Hrzic
- Faculty of Engineering, Department of Computer Engineering, University of Rijeka, Vukovarska 58, Rijeka, 51000, Croatia
- Veronika Weiss
- Division of Pediatric Radiology, Department of Radiology, Medical University Graz, Auenbruggerplatz 36, A-8036, Graz, Austria
- Jana Lacekova
- Division of Pediatric Radiology, Department of Radiology, Medical University Graz, Auenbruggerplatz 36, A-8036, Graz, Austria
- Andreas Holzinger
- Institute for Medical Informatics, Statistics and Documentation, Medical University Graz, Graz, Austria
5
Gondim Teixeira PA, Leplat C, Lombard C, Rauch A, Germain E, Waled AA, Jendoubi S, Bonarelli C, Padoin P, Simon L, Gillet R, Blum A. Alternative PACS interface devices are well-accepted and may reduce radiologist’s musculoskeletal discomfort as compared to keyboard-mouse-recording device. Eur Radiol 2020; 30:5200-5208. [DOI: 10.1007/s00330-020-06851-4]
6
Zech J, Forde J, Titano JJ, Kaji D, Costa A, Oermann EK. Detecting insertion, substitution, and deletion errors in radiology reports using neural sequence-to-sequence models. Ann Transl Med 2019; 7:233. [PMID: 31317003] [DOI: 10.21037/atm.2018.08.11]
Abstract
Background Errors in grammar, spelling, and usage in radiology reports are common. To automatically detect inappropriate insertions, deletions, and substitutions of words in radiology reports, we proposed using a neural sequence-to-sequence (seq2seq) model. Methods Head CT and chest radiograph reports from Mount Sinai Hospital (MSH) (n=61,722 and 818,978, respectively), Mount Sinai Queens (MSQ) (n=30,145 and 194,309, respectively), and MIMIC-III (n=32,259 and 54,685, respectively) were converted into sentences. Insertions, substitutions, and deletions of words were randomly introduced. Seq2seq models were trained using corrupted sentences as input to predict the original uncorrupted sentences. Three models were trained: one using head CTs from MSH, one using chest radiographs from MSH, and one using head CTs from all three collections. Model performance was assessed across different sites and modalities. A sample of original, uncorrupted sentences was manually reviewed for any error in syntax, usage, or spelling to estimate the real-world proofreading performance of the algorithm. Results Seq2seq detected 90.3% and 88.2% of corrupted sentences with 97.7% and 98.8% specificity in same-site, same-modality test sets for head CTs and chest radiographs, respectively. Manual review of original, uncorrupted same-site, same-modality head CT sentences demonstrated a seq2seq positive predictive value (PPV) of 0.393 (157/400; 95% CI, 0.346-0.441) and negative predictive value (NPV) of 0.986 (789/800; 95% CI, 0.976-0.992) for detecting sentences containing real-world errors, with estimated sensitivity of 0.389 (95% CI, 0.267-0.542) and specificity of 0.986 (95% CI, 0.985-0.987) over n=86,211 uncorrupted training examples. Conclusions Seq2seq models can be highly effective at detecting erroneous insertions, deletions, and substitutions of words in radiology reports. To achieve high performance, these models require site- and modality-specific training examples. Incorporating additional targeted training data could further improve performance in detecting real-world errors in reports.
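The training-data generation step described above — randomly introducing word insertions, substitutions, and deletions so the model can learn to recover the original sentence — can be sketched as below. This is an illustrative reconstruction, not the authors' code; the corruption vocabulary and function names are invented:

```python
import random

def corrupt(sentence: str, vocab: list, rng: random.Random) -> str:
    """Apply one random word-level insertion, substitution, or deletion,
    mimicking the synthetic errors used to build seq2seq training pairs
    (corrupted sentence in, original sentence out)."""
    words = sentence.split()
    op = rng.choice(["insert", "substitute", "delete"])
    i = rng.randrange(len(words))
    if op == "insert":
        words.insert(i, rng.choice(vocab))
    elif op == "substitute":
        words[i] = rng.choice(vocab)
    elif len(words) > 1:  # never delete the only remaining word
        del words[i]
    return " ".join(words)

rng = random.Random(0)
original = "no acute intracranial hemorrhage"
print(corrupt(original, ["mass", "effect", "midline"], rng))
```

In practice the corruption vocabulary would be drawn from the report corpus itself, so substitutions resemble plausible recognition errors rather than random noise.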
Affiliation(s)
- John Zech
- Department of Radiology, Icahn School of Medicine, New York, NY, USA
- Joseph J Titano
- Department of Radiology, Icahn School of Medicine, New York, NY, USA
- Deepak Kaji
- Department of Neurosurgery, Icahn School of Medicine, New York, NY, USA
- Anthony Costa
- Department of Neurosurgery, Icahn School of Medicine, New York, NY, USA
- Eric Karl Oermann
- Department of Neurosurgery, Icahn School of Medicine, New York, NY, USA
7
Blackley SV, Huynh J, Wang L, Korach Z, Zhou L. Speech recognition for clinical documentation from 1990 to 2018: a systematic review. J Am Med Inform Assoc 2019; 26:324-338. [PMID: 30753666] [PMCID: PMC7647182] [DOI: 10.1093/jamia/ocy179]
Abstract
OBJECTIVE The study sought to review recent literature on the use of speech recognition (SR) technology for clinical documentation and to understand the impact of SR on document accuracy, provider efficiency, and institutional cost, among other outcomes. MATERIALS AND METHODS We searched 10 scientific and medical literature databases to find articles about clinician use of SR for documentation published between January 1, 1990, and October 15, 2018. We annotated included articles with their research topic(s), medical domain(s), and the SR system(s) evaluated, and analyzed the results. RESULTS One hundred twenty-two articles were included. Forty-eight (39.3%) involved the radiology department exclusively, 10 (8.2%) involved emergency medicine, and 10 (8.2%) mentioned multiple departments. Forty-eight (39.3%) articles studied productivity; 20 (16.4%) studied the effect of SR on documentation time, with mixed findings. Decreased turnaround time was reported in all 19 (15.6%) studies in which it was evaluated. Twenty-nine (23.8%) studies conducted error analyses, though various evaluation metrics were used. The reported percentage of documents with errors ranged from 4.8% to 71%; reported word error rates ranged from 7.4% to 38.7%. Seven (5.7%) studies assessed documentation-associated costs; 5 reported decreases and 2 reported increases. Many studies (44.3%) used products by Nuance Communications; other vendors included IBM (9.0%) and Philips (6.6%), and 7 (5.7%) used self-developed systems. CONCLUSION Despite widespread use of SR for clinical documentation, research on this topic remains largely heterogeneous, often using different evaluation metrics and reporting mixed findings. Moreover, the fact that SR-assisted documentation has become increasingly common in clinical settings beyond radiology warrants further investigation of its use and effectiveness in these settings.
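Word error rate, one of the metrics reported across the reviewed studies, is conventionally computed as the word-level edit distance between a reference transcript and the SR output, divided by the number of reference words. A minimal sketch (a generic textbook implementation, not one taken from any of the reviewed papers):

```python
def word_error_rate(reference: str, hypothesis: str) -> float:
    """Word error rate: word-level edit distance (insertions, deletions,
    substitutions) divided by the number of reference words."""
    ref, hyp = reference.split(), hypothesis.split()
    # Classic dynamic-programming edit distance over word sequences.
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i
    for j in range(len(hyp) + 1):
        d[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,         # deletion
                          d[i][j - 1] + 1,         # insertion
                          d[i - 1][j - 1] + cost)  # substitution or match
    return d[len(ref)][len(hyp)] / len(ref)

# One substituted word out of four reference words:
print(word_error_rate("patient denies chest pain",
                      "patient denies chess pain"))  # 0.25
```

Because insertions are counted against the reference length, WER can exceed 1.0 for very noisy output, which is one reason studies also report the simpler "percentage of documents with errors".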
Affiliation(s)
- Suzanne V Blackley
- Clinical and Quality Analysis, Information Systems, Partners HealthCare, Boston, Massachusetts, USA
- Jessica Huynh
- General Medicine and Primary Care, Brigham and Women's Hospital, Boston, Massachusetts, USA
- Liqin Wang
- General Medicine and Primary Care, Brigham and Women's Hospital, Boston, Massachusetts, USA
- Department of Medicine, Harvard Medical School, Boston, Massachusetts, USA
- Zfania Korach
- General Medicine and Primary Care, Brigham and Women's Hospital, Boston, Massachusetts, USA
- Department of Medicine, Harvard Medical School, Boston, Massachusetts, USA
- Li Zhou
- General Medicine and Primary Care, Brigham and Women's Hospital, Boston, Massachusetts, USA
- Department of Medicine, Harvard Medical School, Boston, Massachusetts, USA
8
Lybarger KJ, Ostendorf M, Riskin E, Payne TH, White AA, Yetisgen M. Asynchronous Speech Recognition Affects Physician Editing of Notes. Appl Clin Inform 2018; 9:782-790. [PMID: 30332689] [PMCID: PMC6192791] [DOI: 10.1055/s-0038-1673417]
Abstract
OBJECTIVE Clinician progress notes are an important record for care and communication, but there is a perception that electronic notes take too long to write and may not accurately reflect the patient encounter, threatening quality of care. Automatic speech recognition (ASR) has the potential to improve the clinical documentation process; however, ASR inaccuracy and editing time are barriers to wider use. We hypothesized that automatic text processing technologies could decrease editing time and improve note quality. To inform the development of these technologies, we studied how physicians create clinical notes using ASR and analyzed note content that is revised or added during asynchronous editing. MATERIALS AND METHODS We analyzed a corpus of 649 dictated clinical notes from 9 physicians. Notes were dictated during rounds to portable devices, automatically transcribed, and edited later at the physician's convenience. Comparing ASR transcripts and the final edited notes, we identified the word sequences edited by physicians and categorized the edits by length and content. RESULTS We found that 40% of the words in the final notes were added by physicians while editing: 6% corresponded to short edits associated with error correction and format changes, and 34% were associated with longer edits. Short error correction edits that affect note accuracy are estimated to be less than 3% of the words in the dictated notes. Longer edits primarily involved insertion of material associated with clinical data or assessment and plans. The longer edits improve note completeness; some could be handled with verbalized commands in dictation. CONCLUSION Process interventions to reduce ASR documentation burden, whether related to technology or the dictation/editing workflow, should apply a portfolio of solutions to address all categories of required edits. Improved processes could reduce an important barrier to broader use of ASR by clinicians and improve note quality.
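The core comparison in this study — aligning an ASR transcript with the final edited note to identify inserted, deleted, and replaced word sequences — can be approximated with a word-level diff. This sketch uses Python's standard-library difflib rather than the authors' own alignment tooling, and the example sentences are invented:

```python
import difflib

def categorize_edits(asr_words, final_words):
    """Count words inserted, deleted, replaced, and left unchanged between
    an ASR transcript and the final edited note, via a word-level diff."""
    counts = {"inserted": 0, "deleted": 0, "replaced": 0, "unchanged": 0}
    sm = difflib.SequenceMatcher(a=asr_words, b=final_words, autojunk=False)
    for op, a0, a1, b0, b1 in sm.get_opcodes():
        if op == "insert":
            counts["inserted"] += b1 - b0
        elif op == "delete":
            counts["deleted"] += a1 - a0
        elif op == "replace":
            counts["replaced"] += max(a1 - a0, b1 - b0)
        else:  # "equal"
            counts["unchanged"] += a1 - a0
    return counts

asr = "patient stable overnight no acute events".split()
final = "patient remained stable overnight with no acute events noted".split()
print(categorize_edits(asr, final))
```

Grouping the resulting runs by length would then separate the short error-correction edits from the longer content insertions analyzed in the paper.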
Affiliation(s)
- Kevin J. Lybarger
- Department of Electrical Engineering, University of Washington, Seattle, Washington, United States
- Mari Ostendorf
- Department of Electrical Engineering, University of Washington, Seattle, Washington, United States
- Eve Riskin
- Department of Electrical Engineering, University of Washington, Seattle, Washington, United States
- Thomas H. Payne
- Division of General Internal Medicine, University of Washington, Seattle, Washington, United States
- Andrew A. White
- Division of General Internal Medicine, University of Washington, Seattle, Washington, United States
- Meliha Yetisgen
- Department of Biomedical & Health Informatics, University of Washington, Seattle, Washington, United States
9
Zhou L, Blackley SV, Kowalski L, Doan R, Acker WW, Landman AB, Kontrient E, Mack D, Meteer M, Bates DW, Goss FR. Analysis of Errors in Dictated Clinical Documents Assisted by Speech Recognition Software and Professional Transcriptionists. JAMA Netw Open 2018; 1:e180530. [PMID: 30370424] [PMCID: PMC6203313] [DOI: 10.1001/jamanetworkopen.2018.0530]
Abstract
IMPORTANCE Accurate clinical documentation is critical to health care quality and safety. Dictation services supported by speech recognition (SR) technology and professional medical transcriptionists are widely used by US clinicians. However, the quality of SR-assisted documentation has not been thoroughly studied. OBJECTIVE To identify and analyze errors at each stage of the SR-assisted dictation process. DESIGN, SETTING, AND PARTICIPANTS This cross-sectional study collected a stratified random sample of 217 notes (83 office notes, 75 discharge summaries, and 59 operative notes) dictated by 144 physicians between January 1 and December 31, 2016, at 2 health care organizations using Dragon Medical 360 | eScription (Nuance). Errors were annotated in the SR engine-generated document (SR), the medical transcriptionist-edited document (MT), and the physician's signed note (SN). Each document was compared with a criterion standard created from the original audio recordings and medical record review. MAIN OUTCOMES AND MEASURES Error rate; mean errors per document; error frequency by general type (eg, deletion), semantic type (eg, medication), and clinical significance; and variations by physician characteristics, note type, and institution. RESULTS Among the 217 notes, there were 144 unique dictating physicians: 44 female (30.6%) and 10 of unknown sex (6.9%). Mean (SD) physician age was 52 (12.5) years (median [range] age, 54 [28-80] years). Among the 121 physicians for whom specialty information was available (84.0%), 35 specialties were represented, including 45 surgeons (37.2%), 30 internists (24.8%), and 46 others (38.0%). The error rate in SR notes was 7.4% (ie, 7.4 errors per 100 words). It decreased to 0.4% after transcriptionist review and to 0.3% in SNs. Overall, 96.3% of SR notes, 58.1% of MT notes, and 42.4% of SNs contained errors. Deletions were most common (34.7%), followed by insertions (27.0%). Among errors at the SR, MT, and SN stages, 15.8%, 26.9%, and 25.9%, respectively, involved clinical information, and 5.7%, 8.9%, and 6.4%, respectively, were clinically significant. Discharge summaries had higher mean SR error rates than other note types (8.9% vs 6.6%; difference, 2.3%; 95% CI, 1.0%-3.6%; P < .001). Surgeons' SR notes had lower mean error rates than other physicians' (6.0% vs 8.1%; difference, 2.2%; 95% CI, 0.8%-3.5%; P = .002). One institution had a higher mean SR error rate (7.6% vs 6.6%; difference, 1.0%; 95% CI, -0.2% to 2.8%; P = .10) but lower mean MT and SN error rates (0.3% vs 0.7%; difference, -0.3%; 95% CI, -0.63% to -0.04%; P = .03; and 0.2% vs 0.6%; difference, -0.4%; 95% CI, -0.7% to -0.2%; P = .003). CONCLUSIONS AND RELEVANCE Seven in 100 words in SR-generated documents contain errors, and many of these errors involve clinical information. That most errors are corrected before notes are signed demonstrates the importance of manual review, quality assurance, and auditing.
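The study reports two distinct quantities that are easy to conflate: the error rate (errors per 100 words) and the share of notes containing at least one error. A minimal sketch of both, with illustrative numbers of our own (only the 7.4% rate itself comes from the abstract):

```python
def errors_per_100_words(n_errors: int, n_words: int) -> float:
    """Error rate as used in the study: errors per 100 words of text."""
    return 100.0 * n_errors / n_words

def share_with_errors(error_counts: list) -> float:
    """Fraction of notes containing at least one error."""
    return sum(1 for n in error_counts if n > 0) / len(error_counts)

# 37 annotated errors in a hypothetical 500-word SR draft gives the
# 7.4% SR-stage rate reported in the abstract:
print(errors_per_100_words(37, 500))  # 7.4

# Hypothetical per-note error counts after transcriptionist review:
print(share_with_errors([0, 2, 0, 1, 0]))  # 0.4
```

A very low per-word rate can therefore still coexist with a high fraction of notes containing errors, as in the 0.3% SN rate versus 42.4% of SNs with errors.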
Affiliation(s)
- Li Zhou
- Department of Medicine, Brigham and Women’s Hospital, Boston, Massachusetts
- Harvard Medical School, Boston, Massachusetts
- Suzanne V. Blackley
- Department of Information Systems, Partners HealthCare, Boston, Massachusetts
- Leigh Kowalski
- Department of Medicine, Brigham and Women’s Hospital, Boston, Massachusetts
- Raymond Doan
- Department of Medicine, Brigham and Women’s Hospital, Boston, Massachusetts
- Warren W. Acker
- Department of Medicine, Brigham and Women’s Hospital, Boston, Massachusetts
- Geisinger Commonwealth School of Medicine, Scranton, Pennsylvania
- Adam B. Landman
- Harvard Medical School, Boston, Massachusetts
- Department of Information Systems, Partners HealthCare, Boston, Massachusetts
- Department of Emergency Medicine, Brigham and Women’s Hospital, Boston, Massachusetts
- Evgeni Kontrient
- Department of Hospital Medicine, North Shore Medical Center, Salem, Massachusetts
- David Mack
- University of Colorado School of Medicine, Aurora
- Marie Meteer
- Department of Computer Science, Brandeis University, Waltham, Massachusetts
- David W. Bates
- Department of Medicine, Brigham and Women’s Hospital, Boston, Massachusetts
- Harvard Medical School, Boston, Massachusetts
- Department of Information Systems, Partners HealthCare, Boston, Massachusetts
- Foster R. Goss
- Department of Emergency Medicine, University of Colorado, Aurora
10
Vogel M, Kaisers W, Wassmuth R, Mayatepek E. Analysis of Documentation Speed Using Web-Based Medical Speech Recognition Technology: Randomized Controlled Trial. J Med Internet Res 2015; 17:e247. [PMID: 26531850] [PMCID: PMC4642384] [DOI: 10.2196/jmir.5072]
Abstract
Background Clinical documentation has undergone a change due to the usage of electronic health records. The core element is to capture clinical findings and document therapy electronically. Health care personnel spend a significant portion of their time on the computer. Alternatives to self-typing, such as speech recognition, are currently believed to increase documentation efficiency and quality, as well as the satisfaction of health professionals while accomplishing clinical documentation, but few studies in this area have been published to date. Objective This study describes the effects of using a Web-based medical speech recognition system for clinical documentation in a university hospital on (1) documentation speed, (2) document length, and (3) physician satisfaction. Methods Reports of 28 physicians were randomized to be created with (intervention) or without (control) the assistance of a Web-based system of medical automatic speech recognition (ASR) in the German language. The documentation was entered into a browser’s text area, and the time to complete the documentation including all necessary corrections, the correction effort, the number of characters, and the mood of the participant were stored in a database. The underlying time comprised text entering, text correction, and finalization of the documentation event. Participants self-assessed their moods on a scale of 1-3 (1=good, 2=moderate, 3=bad). Statistical analysis was done using permutation tests. Results The number of clinical reports eligible for further analysis stood at 1455. Of these, 718 (49.35%) were assisted by ASR and 737 (50.65%) were not. Average documentation speed without ASR was 173 (SD 101) characters per minute, while it was 217 (SD 120) characters per minute using ASR. The overall increase in documentation speed through Web-based ASR assistance was 26% (P=.04). Participants documented an average of 356 (SD 388) characters per report when not assisted by ASR and 649 (SD 561) characters per report when assisted by ASR. Participants' average mood rating was 1.3 (SD 0.6) with ASR assistance compared to 1.6 (SD 0.7) without (P<.001). Conclusions Medical documentation with the assistance of Web-based speech recognition leads to an increase in documentation speed, document length, and participant mood when compared to self-typing. Speech recognition is a meaningful and effective tool for the clinical documentation process.
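The trial's headline comparison reduces to characters per minute and a relative increase; a minimal sketch, using the group means reported above (the function names are ours, and the computed relative increase is consistent with the reported ~26%):

```python
def chars_per_minute(n_chars: int, seconds: float) -> float:
    """Documentation speed: characters in the final text per minute of
    entry, where the timed interval includes correction and finalization."""
    return n_chars * 60.0 / seconds

def percent_increase(baseline: float, value: float) -> float:
    """Relative increase of value over baseline, as a percentage."""
    return 100.0 * (value - baseline) / baseline

# Group means from the trial: 173 cpm without ASR vs 217 cpm with ASR.
print(round(percent_increase(173, 217), 1))  # 25.4
```

Note that because the timed interval includes corrections, a fast but error-prone ASR engine would not inflate this metric: time spent fixing recognition errors counts against the speed.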
Affiliation(s)
- Markus Vogel
- University Children's Hospital Düsseldorf, Department of General Pediatrics, Neonatology and Pediatric Cardiology, Heinrich-Heine-University, Düsseldorf, Germany