1. Degree of Uncertainty in Reporting Imaging Findings for Necrotizing Enterocolitis: A Secondary Analysis from a Pilot Randomized Diagnostic Trial. Healthcare (Basel) 2024; 12:511. PMID: 38470621; PMCID: PMC10931429; DOI: 10.3390/healthcare12050511.
Abstract
Diagnosis of necrotizing enterocolitis (NEC) relies heavily on imaging, but uncertainty in the language used in imaging reports can result in ambiguity, miscommunication, and potential diagnostic errors. To determine the degree of uncertainty in reporting imaging findings for NEC, we conducted a secondary analysis of the data from a previously completed pilot diagnostic randomized controlled trial (2019-2020). The study population comprised sixteen preterm infants with suspected NEC randomized to abdominal radiographs (AXRs) or AXR + bowel ultrasound (BUS). The level of uncertainty was determined using a four-point Likert scale. Overall, we reviewed radiology reports of 113 AXR and 24 BUS from sixteen preterm infants with NEC concern. The BUS reports showed less uncertainty for reporting pneumatosis, portal venous gas, and free air compared to AXR reports (pneumatosis: 1 [1-1.75] vs. 3 [2-3], p < 0.0001; portal venous gas: 1 [1-1] vs. 1 [1-1], p = 0.02; free air: 1 [1-1] vs. 2 [1-3], p < 0.0001). In conclusion, we found that BUS reports have a lower degree of uncertainty in reporting imaging findings of NEC compared to AXR reports. Whether the lower degree of uncertainty of BUS reports positively impacts clinical decision making in infants with possible NEC remains unknown.
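The Likert-scale comparison above is reported as median [interquartile range]; a minimal Python sketch of that summary statistic (the ratings below are invented for illustration, not the trial's data):

```python
from statistics import median

def median_iqr(ratings):
    """Summarize ordinal Likert ratings as (median, Q1, Q3)."""
    s = sorted(ratings)
    half = len(s) // 2
    q1 = median(s[:half])                 # lower half (median excluded for odd n)
    q3 = median(s[half + len(s) % 2:])    # upper half
    return median(s), q1, q3

# Hypothetical four-point Likert ratings (1 = certain ... 4 = very uncertain).
axr_pneumatosis = [3, 2, 3, 3, 2, 4]

m, q1, q3 = median_iqr(axr_pneumatosis)
print(f"AXR pneumatosis uncertainty: {m:g} [{q1}-{q3}]")
```

With these invented ratings the summary matches the abstract's reporting style, e.g. "3 [2-3]".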
2. Classification of Diagnostic Certainty in Radiology Reports with Deep Learning. Stud Health Technol Inform 2024; 310:569-573. PMID: 38269873; DOI: 10.3233/shti231029.
Abstract
A radiology report is prepared to communicate clinical information about observed abnormal structures and clinically important findings to referring clinicians. However, such observations and findings are often accompanied by ambiguous expressions, which can prevent clinicians from accurately interpreting the content of reports. To systematically assess the degree of diagnostic certainty for each observation and finding in a report, we defined an ordinal scale comprising five classes: definite, likely, may represent, unlikely, and denial. Furthermore, we applied a deep learning classification model to determine its applicability to in-house radiology reports. We trained and evaluated the model using 540 in-house chest computed tomography reports. The deep learning model achieved a micro F1-score of 97.61%, which indicated that our ordinal scale was suitable for measuring the diagnostic certainty of observations and findings in a report.
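As a rough illustration of the five-class ordinal scale above (the paper uses a fine-tuned deep learning classifier, not rules), a keyword-based mapper with invented trigger phrases might look like:

```python
import re

# Checked from most-negative to most-certain; the trigger phrases are
# illustrative guesses, not the study's actual lexicon.
CERTAINTY_RULES = [
    ("denial",        r"\b(no evidence of|without|absent|negative for)\b"),
    ("unlikely",      r"\b(unlikely|doubtful|less likely)\b"),
    ("may represent", r"\b(may represent|possibly|cannot exclude|suspicious for)\b"),
    ("likely",        r"\b(likely|probable|suggestive of|compatible with)\b"),
]

def classify_certainty(sentence: str) -> str:
    s = sentence.lower()
    for label, pattern in CERTAINTY_RULES:
        if re.search(pattern, s):
            return label
    return "definite"  # default when no hedging phrase is found

print(classify_certainty("Nodule may represent early metastasis."))  # may represent
print(classify_certainty("No evidence of pneumothorax."))            # denial
print(classify_certainty("Right lower lobe consolidation."))         # definite
```

A learned classifier replaces the brittle pattern list but predicts the same five labels.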
3. Advanced Sampling Technique in Radiology Free-Text Data for Efficiently Building Text Mining Models by Deep Learning in Vertebral Fracture. Diagnostics (Basel) 2024; 14:137. PMID: 38248014; PMCID: PMC10814913; DOI: 10.3390/diagnostics14020137.
Abstract
This study aims to establish advanced sampling methods for free-text data so that semantic text mining models, such as one identifying vertebral compression fracture (VCF) in radiology reports, can be built efficiently with deep learning. We enrolled a total of 27,401 free-text radiology reports of X-ray examinations of the spine. Predictive performance was compared between text mining models built with supervised long short-term memory networks, independently derived by four sampling methods: vector sum minimization, vector sum maximization, stratified, and simple random sampling, at four fixed percentages. The drawn samples formed the training set, and the remaining samples were used to validate each combination of sampling method and ratio. Predictive accuracy in identifying VCF was measured as the area under the receiver operating characteristic curve (AUROC). At sampling ratios of 1/10, 1/20, 1/30, and 1/40, the highest AUROC was achieved by vector sum minimization: 0.981 (95% CI: 0.980-0.983), 0.963 (95% CI: 0.961-0.965), 0.907 (95% CI: 0.904-0.911), and 0.895 (95% CI: 0.891-0.899), respectively. The lowest AUROC was obtained with vector sum maximization. This study proposes an advanced sampling method, vector sum minimization, for free-text data that can be applied to build text mining models efficiently by drawing a small number of critical, representative samples.
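The abstract does not spell out the algorithm, but one plausible reading of "vector sum minimization" sampling is a greedy selection that keeps the running sum of document vectors as small as possible, favoring mutually canceling (i.e., diverse) samples. A speculative sketch under that assumption:

```python
def greedy_vector_sum_min(vectors, k):
    """Greedily pick k vectors keeping the norm of the running sum minimal.

    A speculative reconstruction of 'vector sum minimization' sampling;
    the actual algorithm in the paper may differ.
    """
    remaining = list(range(len(vectors)))
    total = [0.0] * len(vectors[0])
    chosen = []
    for _ in range(k):
        def norm_if_added(i):
            # Squared norm of the running sum if vector i were added.
            return sum((t + v) ** 2 for t, v in zip(total, vectors[i]))
        best = min(remaining, key=norm_if_added)
        remaining.remove(best)
        total = [t + v for t, v in zip(total, vectors[best])]
        chosen.append(best)
    return chosen

# Toy document embeddings: near-opposite pairs cancel, so they get picked together.
docs = [[1.0, 0.0], [-1.0, 0.0], [0.9, 0.1], [5.0, 5.0]]
print(greedy_vector_sum_min(docs, 2))  # picks the near-canceling pair
```

The outlier `[5.0, 5.0]` is never selected because adding it inflates the sum's norm.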
4. Extracting Clinical Information From Japanese Radiology Reports Using a 2-Stage Deep Learning Approach: Algorithm Development and Validation. JMIR Med Inform 2023; 11:e49041. PMID: 37991979; PMCID: PMC10686535; DOI: 10.2196/49041.
Abstract
Background Radiology reports are usually written in a free-text format, which makes it challenging to reuse the reports. Objective For secondary use, we developed a 2-stage deep learning system for extracting clinical information and converting it into a structured format. Methods Our system mainly consists of 2 deep learning modules: entity extraction and relation extraction. For each module, state-of-the-art deep learning models were applied. We trained and evaluated the models using 1040 in-house Japanese computed tomography (CT) reports annotated by medical experts. We also evaluated the performance of the entire pipeline of our system. In addition, the ratio of annotated entities in the reports was measured to validate the coverage of the clinical information with our information model. Results The microaveraged F1-scores of our best-performing model for entity extraction and relation extraction were 96.1% and 97.4%, respectively. The microaveraged F1-score of the 2-stage system, which is a measure of the performance of the entire pipeline of our system, was 91.9%. Our system showed encouraging results for the conversion of free-text radiology reports into a structured format. The coverage of clinical information in the reports was 96.2% (6595/6853). Conclusions Our 2-stage deep learning system can extract clinical information from chest and abdomen CT reports accurately and comprehensively.
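Pipelines like this are typically scored with micro-averaged F1 over the extracted items; a minimal sketch of that metric (the example triples are invented, not from the paper):

```python
def micro_f1(gold_sets, pred_sets):
    """Micro-averaged F1 over per-document sets of extracted items,
    e.g. (entity, relation, entity) triples: pool TP/FP/FN across
    all documents before computing precision and recall."""
    tp = fp = fn = 0
    for gold, pred in zip(gold_sets, pred_sets):
        tp += len(gold & pred)
        fp += len(pred - gold)
        fn += len(gold - pred)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return 2 * precision * recall / (precision + recall) if precision + recall else 0.0

# Hypothetical extractions from two CT reports.
gold = [{("nodule", "located_in", "right lung")},
        {("mass", "size_of", "3cm"), ("mass", "located_in", "liver")}]
pred = [{("nodule", "located_in", "right lung")},
        {("mass", "size_of", "3cm")}]
print(round(micro_f1(gold, pred), 3))
```

Here the pipeline finds 2 of 3 gold triples with no false positives, giving precision 1.0, recall 2/3, and micro-F1 0.8.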
5. Advancements in Standardizing Radiological Reports: A Comprehensive Review. Medicina (Kaunas) 2023; 59:1679. PMID: 37763797; PMCID: PMC10535385; DOI: 10.3390/medicina59091679.
Abstract
Standardized radiological reports stimulate debate in the medical imaging field. This review paper explores the advantages and challenges of standardized reporting. Standardized reporting can offer improved clarity and efficiency of communication among radiologists and the multidisciplinary team. However, challenges include limited flexibility, initially increased time and effort, and potential user experience issues. The efforts toward standardization are examined, encompassing the establishment of reporting templates, use of common imaging lexicons, and integration of clinical decision support tools. Recent technological advancements, including multimedia-enhanced reporting and AI-driven solutions, are discussed for their potential to improve the standardization process. Organizations such as the ACR, ESUR, RSNA, and ESR have developed standardized reporting systems, templates, and platforms to promote uniformity and collaboration. However, challenges remain in terms of workflow adjustments, language and format variability, and the need for validation. The review concludes by presenting a set of ten essential rules for creating standardized radiology reports, emphasizing clarity, consistency, and adherence to structured formats.
6. Meningioma Presenting With Intratumoral Hemorrhage on Active Surveillance. Cureus 2023; 15:e41787. PMID: 37575809; PMCID: PMC10421599; DOI: 10.7759/cureus.41787.
Abstract
Meningiomas are relatively common primary adult brain tumors. They are slow-growing, highly vascular, and graded according to histological, phenotypic, and genotypic features. We present the case of a 66-year-old male with a history of tongue squamous cell carcinoma who presented with multiple risk factors for cardiovascular and thromboembolic events. A brain lesion was initially detected on a computed tomography (CT) scan and later characterized by magnetic resonance imaging (MRI). The multidisciplinary team decided to maintain surveillance due to the lack of associated symptoms. Upon expansion in size and acute intralesional hemorrhage seen on follow-up imaging, the patient underwent surgical excision. Histopathological testing determined the lesion to be an atypical meningioma. Two months later, the patient received stereotactic radiotherapy, and a post-surgical MRI showed no evidence of tumor recurrence. This case report describes a rare occurrence of intratumoral hemorrhage in a meningioma during surveillance, highlighting the importance of vigilant monitoring and consideration of potential risk factors for hemorrhagic events.
7. Weakly supervised spatial relation extraction from radiology reports. JAMIA Open 2023; 6:ooad027. PMID: 37096148; PMCID: PMC10122604; DOI: 10.1093/jamiaopen/ooad027.
Abstract
Objective Weak supervision holds significant promise to improve clinical natural language processing by leveraging domain resources and expertise instead of large manually annotated datasets alone. Here, our objective is to evaluate a weak supervision approach to extract spatial information from radiology reports. Materials and Methods Our weak supervision approach is based on data programming that uses rules (or labeling functions) relying on domain-specific dictionaries and radiology language characteristics to generate weak labels. The labels correspond to different spatial relations that are critical to understanding radiology reports. These weak labels are then used to fine-tune a pretrained Bidirectional Encoder Representations from Transformers (BERT) model. Results Our weakly supervised BERT model provided satisfactory results in extracting spatial relations without manual annotations for training (spatial trigger F1: 72.89, relation F1: 52.47). When this model is further fine-tuned on manual annotations (relation F1: 68.76), performance surpasses the fully supervised state-of-the-art. Discussion To our knowledge, this is the first work to automatically create detailed weak labels corresponding to radiological information of clinical significance. Our data programming approach is (1) adaptable, as the labeling functions can be updated with relatively little manual effort to incorporate more variations in radiology language reporting formats, and (2) generalizable, as these functions can be applied across multiple radiology subdomains in most cases. Conclusions We demonstrate that a weakly supervised model performs sufficiently well in identifying a variety of relations from radiology text without manual annotations, while exceeding state-of-the-art results when annotated data are available.
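Data programming of this kind combines many noisy labeling functions into one weak label; a toy sketch with invented rules and a simple majority vote (frameworks such as Snorkel instead learn a weighted combination of the labeling functions):

```python
import re

# Toy labeling functions for one spatial relation ("contains"); each returns
# a weak label or None (abstain). The patterns are invented for illustration.
def lf_in_keyword(sentence):
    return "contains" if re.search(r"\bin the\b", sentence) else None

def lf_within_keyword(sentence):
    return "contains" if "within" in sentence else None

def lf_no_anatomy(sentence):
    # Vote "no_relation" when no anatomy term appears; otherwise abstain.
    return None if re.search(r"\b(lobe|lung|liver)\b", sentence) else "no_relation"

LABELING_FUNCTIONS = [lf_in_keyword, lf_within_keyword, lf_no_anatomy]

def weak_label(sentence):
    """Majority vote over non-abstaining labeling functions."""
    s = sentence.lower()
    votes = [lf(s) for lf in LABELING_FUNCTIONS]
    votes = [v for v in votes if v is not None]
    return max(set(votes), key=votes.count) if votes else None

print(weak_label("Opacity in the right upper lobe."))  # contains
print(weak_label("Patient is stable."))                # no_relation
```

The resulting weak labels then serve as training targets for fine-tuning a BERT-style model, as the study describes.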
8. ChatGPT From the Perspective of an Academic Oral and Maxillofacial Radiologist. Cureus 2023; 15:e40053. PMID: 37425514; PMCID: PMC10325627; DOI: 10.7759/cureus.40053.
Abstract
Chat Generative Pre-Trained Transformer (ChatGPT) is an open artificial intelligence (AI)-powered chatbot with various clinical and academic dentistry applications, including oral and maxillofacial radiology (OMFR). These applications can be extended to generating documents such as oral radiology reports if appropriate prompts are given, although various challenges are associated with this task. Like other fields, ChatGPT can be incorporated to generate content and answer oral radiology-related multiple-choice questions; however, its performance is limited when answering image-based questions. ChatGPT can help in scientific writing but cannot be designated as an author due to the lack of validity of the content. This editorial outlines the potential applications and limitations of the current version of ChatGPT in OMFR academic settings.
9. Deep Learning Approach for Negation and Speculation Detection for Automated Important Finding Flagging and Extraction in Radiology Report: Internal Validation and Technique Comparison Study. JMIR Med Inform 2023; 11:e46348. PMID: 37097731; PMCID: PMC10170361; DOI: 10.2196/46348.
Abstract
BACKGROUND Negation and speculation unrelated to abnormal findings can lead to false-positive alarms for automatic radiology report highlighting or flagging by laboratory information systems. OBJECTIVE This internal validation study evaluated the performance of natural language processing methods (NegEx, NegBio, NegBERT, and transformers). METHODS We annotated all negative and speculative statements unrelated to abnormal findings in reports. In experiment 1, we fine-tuned several transformer models (ALBERT [A Lite Bidirectional Encoder Representations from Transformers], BERT [Bidirectional Encoder Representations from Transformers], DeBERTa [Decoding-Enhanced BERT With Disentangled Attention], DistilBERT [Distilled version of BERT], ELECTRA [Efficiently Learning an Encoder That Classifies Token Replacements Accurately], ERNIE [Enhanced Representation through Knowledge Integration], RoBERTa [Robustly Optimized BERT Pretraining Approach], SpanBERT, and XLNet) and compared their performance using precision, recall, accuracy, and F1-scores. In experiment 2, we compared the best model from experiment 1 with 3 established negation and speculation-detection algorithms (NegEx, NegBio, and NegBERT). RESULTS Our study collected 6000 radiology reports from 3 branches of the Chi Mei Hospital, covering multiple imaging modalities and body parts. A total of 15.01% (105,755/704,512) of words and 39.45% (4529/11,480) of important diagnostic keywords occurred in negative or speculative statements unrelated to abnormal findings. In experiment 1, all models achieved an accuracy of >0.98 and F1-score of >0.90 on the test data set. ALBERT exhibited the best performance (accuracy=0.991; F1-score=0.958). In experiment 2, ALBERT outperformed the optimized NegEx, NegBio, and NegBERT methods in terms of overall performance (accuracy=0.996; F1-score=0.991), in the prediction of whether diagnostic keywords occur in speculative statements unrelated to abnormal findings, and in the improvement of the performance of keyword extraction (accuracy=0.996; F1-score=0.997). CONCLUSIONS The ALBERT deep learning method showed the best performance. Our results represent a significant advancement in the clinical applications of computer-aided notification systems.
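NegEx, the rule-based baseline compared here, flags a keyword as negated or speculative when it falls within a fixed token window after a trigger term. A simplified sketch with abbreviated trigger lists (real NegEx also handles multi-word triggers, pseudo-negations, and scope terminators):

```python
import re

NEGATION = {"no", "without", "denies"}
SPECULATION = {"may", "possible", "suspected"}

def flag_keyword(sentence, keyword, window=6):
    """Return 'negated', 'speculative', or 'affirmed' for a diagnostic
    keyword, using a NegEx-style fixed window after a trigger token."""
    tokens = re.findall(r"[a-z]+", sentence.lower())
    if keyword not in tokens:
        return None
    k = tokens.index(keyword)
    for i, tok in enumerate(tokens):
        if i < k <= i + window:  # trigger precedes keyword within the window
            if tok in NEGATION:
                return "negated"
            if tok in SPECULATION:
                return "speculative"
    return "affirmed"

print(flag_keyword("No focal consolidation or effusion.", "effusion"))   # negated
print(flag_keyword("Findings may represent pneumonia.", "pneumonia"))    # speculative
print(flag_keyword("Large pleural effusion on the right.", "effusion"))  # affirmed
```

Transformer models such as ALBERT replace this window heuristic with learned sentence-level classification, which is what the comparison above evaluates.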
10. A scoping review of natural language processing of radiology reports in breast cancer. Front Oncol 2023; 13:1160167. PMID: 37124523; PMCID: PMC10130381; DOI: 10.3389/fonc.2023.1160167.
Abstract
Various natural language processing (NLP) algorithms have been applied in the literature to analyze radiology reports pertaining to the diagnosis and subsequent care of cancer patients. Applications of this technology include cohort selection for clinical trials, population of large-scale data registries, and quality improvement in radiology workflows including mammography screening. This scoping review is the first to examine such applications in the specific context of breast cancer. Of the 210 articles initially identified, 44 met our inclusion criteria for this review. Extracted data elements included both clinical and technical details of studies that developed or evaluated NLP algorithms applied to free-text radiology reports of breast cancer. Our review illustrates an emphasis on applications in diagnostic and screening processes over treatment or therapeutic applications and describes growth in deep learning and transfer learning approaches in recent years, although rule-based approaches continue to be useful. Furthermore, we observe increased efforts in code and software sharing, but not in data sharing.
11. Proposed Questions to Assess the Extent of Knowledge in Understanding the Radiology Report Language. Int J Environ Res Public Health 2022; 19:11808. PMID: 36142078; PMCID: PMC9517641; DOI: 10.3390/ijerph191811808.
Abstract
Radiotherapy and diagnostic imaging play a significant role in medical care. Patient participation and communication can be increased by helping patients understand radiology reports, yet there is insufficient information on how to measure a patient's comprehension of a written radiology report. The goal of this study is to design a tool that measures patient literacy with respect to radiology reports. A radiological literacy tool was created and evaluated as part of the project. Patients were divided into two groups, control and intervention, and each group was given a sample radiology report to read. After reading the report, the groups were quizzed to see how well they understood it, and the correlation between understanding of the radiology report and the radiology report literacy questions was calculated. The correlations for the intervention and control groups were 0.522 (p < 0.001) and 0.536 (p < 0.001), respectively. Our radiology literacy tool demonstrated a good ability to measure awareness of radiology report understanding, with an area under the receiver operating characteristic curve of 0.77 (95% CI: 0.71-0.81) in the control group and 0.79 (95% CI: 0.74-0.84) in the intervention group. We successfully designed a tool that can measure the radiology literacy of patients; it is one of the first to measure the level of patient knowledge in the field of radiology understanding.
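The discrimination reported above is measured with the area under the receiver operating characteristic curve (AUROC); it can be computed directly from the rank-sum identity, as in this sketch with invented quiz scores and labels:

```python
def auroc(labels, scores):
    """AUROC via the Mann-Whitney identity: the probability that a
    randomly chosen positive is scored above a randomly chosen negative
    (ties count as half)."""
    pos = [s for y, s in zip(labels, scores) if y == 1]
    neg = [s for y, s in zip(labels, scores) if y == 0]
    wins = 0.0
    for p in pos:
        for n in neg:
            if p > n:
                wins += 1
            elif p == n:
                wins += 0.5
    return wins / (len(pos) * len(neg))

# Hypothetical literacy-quiz scores vs. "understood the report" labels.
labels = [1, 1, 1, 0, 0, 1, 0]
scores = [0.9, 0.8, 0.6, 0.55, 0.5, 0.4, 0.3]
print(round(auroc(labels, scores), 2))
```

An AUROC of 0.5 means no discrimination; values near 0.77-0.79, as in the study, indicate good separation between groups.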
12. Optimizing the Breast Imaging Report for Today and Tomorrow. J Breast Imaging 2022; 4:343-345. PMID: 38416981; DOI: 10.1093/jbi/wbac033. (No abstract available.)
13. Structured Reporting in Radiology: what do radiologists think and does RANZCR have a role in implementation. J Med Imaging Radiat Oncol 2022; 66:193-201. PMID: 35243789; DOI: 10.1111/1754-9485.13362.
Abstract
INTRODUCTION The Royal Australian and New Zealand College of Radiologists (RANZCR) established a working group to explore how the college should engage with the future development of structured radiology reporting in our region, particularly in the context of a broader move to digital healthcare. Phase 1 of the project surveyed college members and affiliated interest groups about how they are using structured reporting currently and how they might like it to evolve. METHODS Member and interest group questionnaires were based on previously published studies and posted to the Survey Monkey platform. Responses were analysed descriptively. RESULTS There were 114 member and 58 affiliated-group responses. There is clearest support for RANZCR developing guidelines around structured report quality; for improvements in report content, particularly tailoring to clinical context and study parameters; and for improved integration of structured reporting with RIS/PACS systems. CONCLUSIONS Phase 2 of the structured reporting working group project will aim to develop guidelines for structured report quality and processes through which RANZCR can implement them.
14. The prevalence and spectrum of reported incidental adrenal abnormalities in abdominal computed tomography of cancer patients: The experience of a comprehensive cancer center. Front Endocrinol (Lausanne) 2022; 13:1023220. PMID: 36457558; PMCID: PMC9706394; DOI: 10.3389/fendo.2022.1023220.
Abstract
BACKGROUND The increasing use of computed tomography (CT) has identified many patients with incidental adrenal lesions. Further evaluation of these lesions is often dependent on the language used in the radiology report. Compared to the general population, patients with cancer have a higher risk for adrenal abnormalities, yet data on the prevalence and type of incidental adrenal lesions reported on radiologic reports in cancer patients are limited. In this study, we aimed to determine the prevalence and nature of adrenal abnormalities reported as incidental findings on radiology reports of cancer patients evaluated for reasons other than suspected adrenal pathology. METHODS Radiology reports of patients who underwent abdominal CT within 30 days of presentation to a tertiary cancer center were reviewed and analyzed. We used natural language processing to perform a multi-class text classification of the adrenal reports. Patients who had CT for a suspected adrenal mass, including adrenal protocol CT, were excluded. Three independent abstractors manually reviewed abnormal and questionable results, and we measured the interobserver agreement. RESULTS From June 1, 2006, to October 1, 2017, a total of 600,399 abdominal CT scans were performed, including 66,478 scans obtained within 30 days of the patient's first presentation. Of these, 58,512 were eligible after applying the exclusion criteria. Adrenal abnormalities were identified in 7,817 (13.4%) reports, with adrenal nodularity (3,401 [43.5%]), adenomas (1,733 [22.2%]), and metastases (1,337 [17.1%]) being the most reported categories. Only 10 cases (0.1%) were reported as primary adrenal carcinomas and 2 as pheochromocytomas. Interobserver agreement using 300 reports yielded a Fleiss kappa of 0.893, implying almost perfect agreement between the abstractors. CONCLUSIONS Incidental adrenal abnormalities are commonly reported in abdominal CT reports of cancer patients. As the terminology used by radiologists to describe these findings greatly determines the subsequent management plans, further studies are needed to correlate these findings with the actual confirmed diagnoses based on hormonal, histological, and follow-up data, and to ascertain the impact of such reported findings on patients' outcomes.
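Fleiss' kappa, used above to measure agreement among the three abstractors, can be computed from a subjects-by-categories count matrix; a sketch with invented counts (not the study's data):

```python
def fleiss_kappa(ratings):
    """Fleiss' kappa for n subjects rated by r raters into k categories.
    `ratings` is an n x k matrix of per-category counts (rows sum to r)."""
    n = len(ratings)
    r = sum(ratings[0])
    # Per-subject agreement P_i and per-category marginal proportions p_j.
    p_i = [(sum(c * c for c in row) - r) / (r * (r - 1)) for row in ratings]
    totals = [sum(row[j] for row in ratings) for j in range(len(ratings[0]))]
    p_j = [t / (n * r) for t in totals]
    p_bar = sum(p_i) / n              # observed agreement
    p_e = sum(p * p for p in p_j)     # chance agreement
    return (p_bar - p_e) / (1 - p_e)

# Hypothetical: 4 reports, 3 abstractors, categories (normal, nodule, adenoma).
counts = [
    [3, 0, 0],
    [0, 3, 0],
    [0, 2, 1],
    [3, 0, 0],
]
print(round(fleiss_kappa(counts), 3))
```

Values above roughly 0.8, such as the study's 0.893, are conventionally read as almost perfect agreement.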
15. Structured Reporting of Computed Tomography and Magnetic Resonance in the Staging of Pancreatic Adenocarcinoma: A Delphi Consensus Proposal. Diagnostics (Basel) 2021; 11:2033. PMID: 34829384; PMCID: PMC8621603; DOI: 10.3390/diagnostics11112033.
Abstract
Background: Structured reporting (SR) in radiology has been recognized recently by major scientific societies. This study aims to build structured computed tomography (CT) and magnetic resonance (MR)-based reports in pancreatic adenocarcinoma during the staging phase in order to improve communication between the radiologist and members of multidisciplinary teams. Materials and Methods: A panel of expert radiologists, members of the Italian Society of Medical and Interventional Radiology, was established. A modified Delphi process was used to develop the CT-SR and MRI-SR, assessing a level of agreement for all report sections. Cronbach's alpha (Cα) correlation coefficient was used to assess internal consistency for each section and to measure quality analysis according to the average inter-item correlation. Results: The final CT-SR version was built by including n = 16 items in the "Patient Clinical Data" section, n = 11 items in the "Clinical Evaluation" section, n = 7 items in the "Imaging Protocol" section, and n = 18 items in the "Report" section. Overall, 52 items were included in the final version of the CT-SR. The final MRI-SR version was built by including n = 16 items in the "Patient Clinical Data" section, n = 11 items in the "Clinical Evaluation" section, n = 8 items in the "Imaging Protocol" section, and n = 14 items in the "Report" section. Overall, 49 items were included in the final version of the MRI-SR. In the first round for the CT-SR, all sections received more than a good rating. The overall mean score of the experts was 4.85, and the Cα correlation coefficient was 0.85. In the second round, the overall mean score of the experts was 4.87, and the Cα correlation coefficient was 0.94. In the first round for the MRI-SR, all sections received more than a good rating. The overall mean score of the experts was 4.73, and the Cα correlation coefficient was 0.82. In the second round, the overall mean score of the experts was 4.91, and the Cα correlation coefficient was 0.93. Conclusions: The CT-SR and MRI-SR are based on a multi-round consensus-building Delphi exercise derived from the multidisciplinary agreement of expert radiologists in order to obtain more appropriate communication tools for referring physicians.
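Cronbach's alpha, used in these Delphi studies to measure internal consistency, compares the sum of per-item variances to the variance of the total scores; a sketch with invented panel ratings:

```python
def cronbach_alpha(items):
    """Cronbach's alpha from per-item score lists (items x respondents):
    alpha = k/(k-1) * (1 - sum(item variances) / variance of totals)."""
    k = len(items)
    n = len(items[0])

    def var(xs):  # population variance
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / len(xs)

    # Total score per respondent across all items.
    totals = [sum(item[j] for item in items) for j in range(n)]
    return k / (k - 1) * (1 - sum(var(i) for i in items) / var(totals))

# Hypothetical Delphi ratings: 3 report-section items scored by 5 panelists.
items = [
    [5, 4, 5, 5, 4],
    [4, 4, 5, 5, 4],
    [5, 4, 5, 4, 4],
]
print(round(cronbach_alpha(items), 2))
```

Alpha rises toward 1 as panelists rate the items consistently; the values of 0.82-0.94 reported in these consensus studies indicate high internal consistency.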
16. Computed Tomography Structured Reporting in the Staging of Lymphoma: A Delphi Consensus Proposal. J Clin Med 2021; 10:4007. PMID: 34501455; PMCID: PMC8432477; DOI: 10.3390/jcm10174007.
Abstract
Structured reporting (SR) in radiology is becoming increasingly necessary and has been recognized recently by major scientific societies. This study aims to build structured CT-based reports for lymphoma patients during the staging phase to improve communication between radiologists, members of multidisciplinary teams, and patients. A panel of expert radiologists, members of the Italian Society of Medical and Interventional Radiology (SIRM), was established. A modified Delphi process was used to develop the SR and to assess a level of agreement for all report sections. The Cronbach's alpha (Cα) correlation coefficient was used to assess internal consistency for each section and to measure quality analysis according to the average inter-item correlation. The final SR version was divided into four sections: (a) Patient Clinical Data, (b) Clinical Evaluation, (c) Imaging Protocol, and (d) Report, including n = 13 items in the "Patient Clinical Data" section, n = 8 items in the "Clinical Evaluation" section, n = 9 items in the "Imaging Protocol" section, and n = 32 items in the "Report" section. Overall, 62 items were included in the final version of the SR. A dedicated section of significant images was added as part of the report. In the first Delphi round, all sections received more than a good rating (≥3). The overall mean score of the experts and the sum of scores for the structured report were 4.4 (range 1-5) and 1524 (mean value of 101.6 and standard deviation of 11.8), respectively. The Cα correlation coefficient was 0.89 in the first round. In the second Delphi round, all sections received more than an excellent rating (≥4). The overall mean score of the experts and the sum of scores for the structured report were 4.9 (range 3-5) and 1694 (mean value of 112.9 and standard deviation of 4.0), respectively. The Cα correlation coefficient was 0.87 in this round. The higher overall mean value, higher sum of the panelists' scores, and smaller standard deviation of the evaluations in the second round reflect the increase in internal consistency and agreement among the experts compared with the first round. The accurate statement of imaging data given to referring physicians is critical for patient care; the information contained affects both the decision-making process and the subsequent treatment. The radiology report is the most important source of clinical imaging information. It conveys critical information about the patient's health and the radiologist's interpretation of medical findings. It also communicates information to the referring physicians and records this information for future clinical and research use. The present SR was generated based on a multi-round consensus-building Delphi exercise and uses standardized terminology and structures, in order to adhere to diagnostic/therapeutic recommendations and facilitate enrolment in clinical trials, to reduce any ambiguity that may arise from non-conventional language, and to enable better communication between radiologists and clinicians.
17. Structured Reporting of Lung Cancer Staging: A Consensus Proposal. Diagnostics (Basel) 2021; 11:1569. PMID: 34573911; PMCID: PMC8465460; DOI: 10.3390/diagnostics11091569.
Abstract
Background: Structured reporting (SR) in radiology is becoming necessary and has recently been recognized by major scientific societies. This study aimed to build CT-based structured reports for lung cancer during the staging phase, in order to improve communication between radiologists, members of the multidisciplinary team and patients. Materials and Methods: A panel of expert radiologists, members of the Italian Society of Medical and Interventional Radiology, was established. A modified Delphi exercise was used to build the structural report and to assess the level of agreement for all the report sections. The Cronbach’s alpha (Cα) correlation coefficient was used to assess internal consistency for each section and to perform a quality analysis according to the average inter-item correlation. Results: The final SR version was built by including 16 items in the “Patient Clinical Data” section, 4 items in the “Clinical Evaluation” section, 8 items in the “Exam Technique” section, 22 items in the “Report” section, and 5 items in the “Conclusion” section. Overall, 55 items were included in the final version of the SR. The overall mean of the scores of the experts and the sum of scores for the structured report were 4.5 (range 1–5) and 631 (mean value 67.54, STD 7.53), respectively, in the first round. The items of the structured report with higher accordance in the first round were primary lesion features, lymph nodes, metastasis and conclusions. The overall mean of the scores of the experts and the sum of scores for staging in the structured report were 4.7 (range 4–5) and 807 (mean value 70.11, STD 4.81), respectively, in the second round. The Cronbach’s alpha (Cα) correlation coefficient was 0.89 in the first round and 0.92 in the second round for staging in the structured report. 
Conclusions: The wide implementation of SR is critical for providing referring physicians and patients with the best quality of service, and for providing researchers with the best quality of data in the context of the big data exploitation of the available clinical data. Implementation is complex, requiring mature technology to successfully address pending user-friendliness, organizational and interoperability challenges.
|
18
|
Patients' perceptions of using artificial intelligence (AI)-based technology to comprehend radiology imaging data. Health Informatics J 2021; 27:14604582211011215. [PMID: 33913359 DOI: 10.1177/14604582211011215] [Citation(s) in RCA: 24] [Impact Index Per Article: 8.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/15/2022]
Abstract
Results of radiology imaging studies are not typically comprehensible to patients. With the advances in artificial intelligence (AI) technology in recent years, AI is expected to aid patients' understanding of radiology imaging data. The aim of this study was to understand patients' perceptions and acceptance of using AI technology to interpret their radiology reports. We conducted semi-structured interviews with 13 participants to elicit reflections pertaining to the use of AI technology in radiology report interpretation. A thematic analysis approach was employed to analyze the interview data. Participants had a generally positive attitude toward using AI-based systems to comprehend their radiology reports. AI was perceived to be particularly useful for seeking actionable information, confirming the doctor's opinions, and preparing for the consultation. However, we also found various concerns related to the use of AI in this context, such as cyber-security, accuracy, and lack of empathy. Our results highlight the necessity of providing AI explanations to promote people's trust and acceptance of AI. Designers of patient-centered AI systems should employ user-centered design approaches to address patients' concerns. Such systems should also be designed to promote trust and to deliver concerning health results in an empathetic manner, in order to optimize the user experience.
|
19
|
Structured Reporting of Computed Tomography in the Staging of Neuroendocrine Neoplasms: A Delphi Consensus Proposal. Front Endocrinol (Lausanne) 2021; 12:748944. [PMID: 34917023 PMCID: PMC8670531 DOI: 10.3389/fendo.2021.748944] [Citation(s) in RCA: 9] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Download PDF] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 07/28/2021] [Accepted: 11/12/2021] [Indexed: 12/29/2022] Open
Abstract
BACKGROUND Structured reporting (SR) in radiology is becoming increasingly necessary and has recently been recognized by major scientific societies. This study aims to build structured CT-based reports for neuroendocrine neoplasms during the staging phase in order to improve communication between the radiologist and members of multidisciplinary teams. MATERIALS AND METHODS A panel of expert radiologists, members of the Italian Society of Medical and Interventional Radiology, was established. A modified Delphi process was used to develop the SR and to assess the level of agreement for all report sections. Cronbach's alpha (Cα) correlation coefficient was used to assess internal consistency for each section and to perform a quality analysis according to the average inter-item correlation. RESULTS The final SR version was built by including n = 16 items in the "Patient Clinical Data" section, n = 13 items in the "Clinical Evaluation" section, n = 8 items in the "Imaging Protocol" section, and n = 17 items in the "Report" section. Overall, 54 items were included in the final version of the SR. In both the first and second rounds, all sections received more than a good rating: a mean value of 4.7 (range 4.2-5.0) in the first round and a mean value of 4.9 (range 4.9-5.0) in the second round. In the first round, the Cα correlation coefficient was a poor 0.57: the overall mean score of the experts and the sum of scores for the structured report were 4.7 (range 1-5) and 728 (mean value 52.00, standard deviation 2.83), respectively. In the second round, the Cα correlation coefficient was a good 0.82: the overall mean score of the experts and the sum of scores for the structured report were 4.9 (range 4-5) and 760 (mean value 54.29, standard deviation 1.64), respectively.
CONCLUSIONS The present SR was developed through a multi-round consensus-building Delphi exercise, following in-depth discussion among expert radiologists in gastro-enteric and oncological imaging, and derives from multidisciplinary agreement among a radiologist, a medical oncologist, and a surgeon, in order to provide the most appropriate communication tool for referring physicians.
|
20
|
Full Radiology Report through Patient Web Portal: A Literature Review. INTERNATIONAL JOURNAL OF ENVIRONMENTAL RESEARCH AND PUBLIC HEALTH 2020; 17:ijerph17103673. [PMID: 32456099 PMCID: PMC7277373 DOI: 10.3390/ijerph17103673] [Citation(s) in RCA: 8] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 04/24/2020] [Revised: 05/20/2020] [Accepted: 05/21/2020] [Indexed: 12/23/2022]
Abstract
This study discusses the gap between the patient web portal and provision of the full radiology report. A literature review was conducted to examine radiologists', physicians', and patients' opinions and preferences regarding providing patients with online access to radiology reports. The databases searched were PubMed and Google Scholar, and the initial search yielded 927 studies. After review, 47 studies were included. We identified several themes, including patients' understanding of radiology reports and radiological images, as well as the need to decrease the turnaround time for report availability. Existing radiology reports, written for physicians, are not suited to patients. Further studies are needed to guide and inform the design of patient-friendly radiology reports. Social media sites are one means of filling the gap between patients and radiology reports.
|
21
|
Use of an Online Crowdsourcing Platform to Assess Patient Comprehension of Radiology Reports and Colloquialisms. AJR Am J Roentgenol 2020; 214:1316-1320. [PMID: 32208006 DOI: 10.2214/ajr.19.22202] [Citation(s) in RCA: 7] [Impact Index Per Article: 1.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/25/2022]
Abstract
OBJECTIVE. The purpose of this study was to use an online crowdsourcing platform to assess patient comprehension of five radiology reporting templates and radiology colloquialisms. MATERIALS AND METHODS. In this cross-sectional study, participants were surveyed as patient surrogates using a crowdsourcing platform. Two tasks were completed within two 48-hour time periods. For the first crowdsourcing task, each participant was randomly assigned a set of radiology reports in a constructed reporting template and subsequently tested for comprehension. For the second crowdsourcing task, each participant was randomly assigned a radiology colloquialism and asked to indicate whether the phrase indicated a normal, abnormal, or ambivalent finding. RESULTS. A total of 203 participants enrolled for the first task and 1166 for the second within 48 hours of task publication. The payment totaled $31.96. Of 812 radiology reports read, 384 (47%) were correctly interpreted by the patient surrogates. Patient surrogates had higher rates of comprehension of reports written in the patient summary (57%, p < 0.001) and traditional unstructured in combination with patient summary (51%, p = 0.004) formats than in the traditional unstructured format (40%). Most of the patient surrogates (114/203 [56%]) expressed a preference for receiving a full radiology report via an electronic patient portal. Several radiology colloquialisms with modifiers such as "low," "underdistended," and "decompressed" had low rates of comprehension. CONCLUSION. Use of the crowdsourcing platform is an expeditious, cost-effective, and customizable tool for surveying laypeople in sentiment- or task-based research. Patient summaries can help increase patient comprehension of radiology reports. Radiology colloquialisms are likely to be misunderstood by patients.
|
22
|
Difficulties and possibilities in communication between referring clinicians and radiologists: perspective of clinicians. J Multidiscip Healthc 2019; 12:555-564. [PMID: 31410014 PMCID: PMC6650448 DOI: 10.2147/jmdh.s207649] [Citation(s) in RCA: 14] [Impact Index Per Article: 2.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/05/2019] [Accepted: 05/22/2019] [Indexed: 11/23/2022] Open
Abstract
Purpose To investigate modes and quality of interprofessional communication between clinicians and radiologists, and to identify difficulties and possibilities in this context, as experienced by referring clinicians. Patients and methods Focus-group interviews with 22 clinicians from different specialties were carried out. The leading question was: "How do you experience communication, verbal and nonverbal, between referring clinicians and radiologists?" Content analysis was used for interpretation of the data. Results Overall, referring clinicians expressed satisfaction with their interprofessional communication with radiologists, and digital access to image data was highly appreciated. However, increased reliance on digital communication has reduced face-to-face contact between clinicians and radiologists. This seems to constitute a potential threat to bilateral feedback, joint educational opportunities, and interprofessional development. Cumbersome medical information software systems, time constraints, staff shortages, reliance on teleradiology, and the lack of a uniform format for radiology reports were mentioned as problematic. Further implementation of structured reporting was considered beneficial. Conclusion Closer face-to-face contact between clinicians and radiologists was considered a prerequisite for mutual understanding, deepened competence, and mutual trust, a key factor in interprofessional communication. Clinicians and radiologists should come together in order to secure bilateral feedback and gain deeper knowledge of the specific needs of subspecialized clinicians.
|
23
|
Use of Machine Learning to Identify Follow-Up Recommendations in Radiology Reports. J Am Coll Radiol 2019; 16:336-343. [PMID: 30600162 PMCID: PMC7534384 DOI: 10.1016/j.jacr.2018.10.020] [Citation(s) in RCA: 42] [Impact Index Per Article: 8.4] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/25/2018] [Revised: 10/22/2018] [Accepted: 10/25/2018] [Indexed: 12/17/2022]
Abstract
PURPOSE The aims of this study were to assess follow-up recommendations in radiology reports, develop and assess traditional machine learning (TML) and deep learning (DL) models in identifying follow-up, and benchmark them against a natural language processing (NLP) system. METHODS This HIPAA-compliant, institutional review board-approved study was performed at an academic medical center generating >500,000 radiology reports annually. One thousand randomly selected ultrasound, radiography, CT, and MRI reports generated in 2016 were manually reviewed and annotated for follow-up recommendations. TML (support vector machines, random forest, logistic regression) and DL (recurrent neural nets) algorithms were constructed and trained on 850 reports (training data), with subsequent optimization of model architectures and parameters. Precision, recall, and F1 score were calculated on the remaining 150 reports (test data). A previously developed and validated NLP system (iSCOUT) was also applied to the test data, with equivalent metrics calculated. RESULTS Follow-up recommendations were present in 12.7% of reports. The TML algorithms achieved F1 scores of 0.75 (random forest), 0.83 (logistic regression), and 0.85 (support vector machine) on the test data. DL recurrent neural nets had an F1 score of 0.71; iSCOUT also had an F1 score of 0.71. Performance of both TML and DL methods by F1 scores appeared to plateau after 500 to 700 samples while training. CONCLUSIONS TML and DL are feasible methods to identify follow-up recommendations. These methods have great potential for near real-time monitoring of follow-up recommendations in radiology reports.
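As a point of reference for the F1 scores reported above, a minimal sketch of how precision, recall, and F1 are computed for binary follow-up labels (plain Python; the labels below are illustrative, not the study's data):

```python
def precision_recall_f1(y_true, y_pred):
    """Precision, recall, and F1 for binary labels (1 = follow-up recommended)."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return precision, recall, f1

# Illustrative annotations and predictions, not data from the study.
y_true = [1, 1, 0, 0, 1, 0, 1, 0]
y_pred = [1, 0, 0, 1, 1, 0, 1, 0]
p, r, f1 = precision_recall_f1(y_true, y_pred)
```

F1 is the harmonic mean of precision and recall, which is why it is the usual single-number benchmark when, as here, follow-up recommendations are a small minority of reports.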
|
24
|
Incomplete Thyroid Ultrasound Reports for Patients With Thyroid Nodules: Implications Regarding Risk Assessment and Management. AJR Am J Roentgenol 2018; 211:1348-1353. [PMID: 30332287 DOI: 10.2214/ajr.18.20056] [Citation(s) in RCA: 13] [Impact Index Per Article: 2.2] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/13/2022]
Abstract
OBJECTIVE The purpose of this study was to determine the completeness of thyroid ultrasound (US) reports, assess for differences in report interpretation by clinicians, and evaluate for implications in patient care. MATERIALS AND METHODS We retrospectively reviewed thyroid US examinations performed between January and June 2013 in Nova Scotia, Canada. Baseline examinations that identified a nodule were evaluated for 10 reporting elements. Reports that lacked a comment regarding malignancy risk or a recommendation for biopsy were considered unclassified and were graded by three clinical specialists in accordance with the 2015 American Thyroid Association management guidelines. Interrater agreement was assessed using the Cohen kappa statistic. A radiologist reviewed the images of unclassified nodules, and on the basis of radiologic grading, biopsy rates and pathologic findings were compared between nodules that did and did not warrant biopsy. RESULTS Of 971 first-time thyroid US studies, 478 detected a nodule. The number of reports lacking a comment on the 10 elements ranged from 154 to 433 (32-91%). A total of 222 nodules (46%) were unclassified, and agreement in assigned grading by the clinical specialists was very poor (κ = 0.07; p < 0.05). According to radiologist grading, only 57 of 127 biopsies were performed on nodules that warranted biopsy, and 16 of 95 biopsies were performed unnecessarily. On the basis of the three clinical specialists' interpretation, 10, 31, and 33 reports were considered too incomplete to assign a grade; 40, 10, and four biopsies would have been unnecessarily ordered; and zero, three, and four cancers would have been missed. CONCLUSION There is widespread underreporting of established elements in thyroid US reports, and this causes confusion and discrepancy among clinical specialists regarding the risk of malignancy and the need for biopsy.
|
25
|
Automatic Annotation Tool to Support Supervised Machine Learning for Scaphoid Fracture Detection. Stud Health Technol Inform 2018; 255:210-214. [PMID: 30306938] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 06/08/2023]
Abstract
The aim of this work is to develop and validate an automatic annotation tool for the detection and bone localization of scaphoid fractures in radiology reports. To achieve this goal, a rule-based method using a Natural Language Processing (NLP) tool was applied. Finite state automata were constructed to detect, classify, and annotate reports. An evaluation of the method on a manually annotated dataset showed a total match rate of 96.8%.
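The rule-based approach described above can be illustrated with a simplified sketch; the patterns and negation rule here are hypothetical stand-ins (plain regular expressions, not the authors' finite state automata):

```python
import re

# Hypothetical rules: flag reports asserting a scaphoid fracture,
# with a crude per-sentence negation check. Illustrative only.
FRACTURE = re.compile(
    r"\bfracture\b.*\bscaphoid\b|\bscaphoid\b.*\bfracture\b", re.IGNORECASE
)
NEGATION = re.compile(
    r"\b(no|without|negative for)\b[^.]*\bfracture\b", re.IGNORECASE
)

def detect_scaphoid_fracture(report):
    """Return True if any sentence asserts (and does not negate) a scaphoid fracture."""
    for sentence in report.split("."):
        if FRACTURE.search(sentence) and not NEGATION.search(sentence):
            return True
    return False
```

A production system would need richer negation and uncertainty handling, which is the kind of behavior the automata in the paper encode.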
|
26
|
Specialized second-opinion radiology review of PET/CT examinations for patients with diffuse large B-cell lymphoma impacts patient care and management. Medicine (Baltimore) 2017; 96:e9411. [PMID: 29390562 PMCID: PMC5758264 DOI: 10.1097/md.0000000000009411] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.4] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Indexed: 11/26/2022] Open
Abstract
To identify discrepancies in fludeoxyglucose positron emission tomography/computed tomography (FDG PET/CT) reports generated by general radiologists and subspecialized oncological radiologists for patients with diffuse large B-cell lymphoma (DLBCL), and to assess whether such discrepancies impact patient management. Two radiologists retrospectively reviewed 72 PET/CT scans of patients with DLBCL referred to our institutions between 2009 and 2011, and recorded the discrepancies between the outside and second-opinion reports regarding multiple preset criteria, including the disease stage, using the kappa statistic (Κ). A multidisciplinary staging that considered all patient clinical data, pathology, and follow-up radiological scans was considered the standard of reference. A hemato-oncologist, blinded to the reports' origin, subjectively graded the quality and structure of these reports for each patient to determine whether clinical stage and disease activity could be derived accurately from them. Agreement between the reports was absent to slight for both the binary and multilevel criteria (Κ < 0 to 0.2 and weighted Κ = 0.082, respectively). Second-opinion reviews of PET/CT scans were concordant with the multidisciplinary staging in 78% of cases, with almost perfect agreement (Κ = 0.860). A change in staging was demonstrated in 36% of cases. In addition, 68% of second-opinion reports were assigned the highest grades on quality (grades 4 and 5) by the hemato-oncologist, compared with 15% of outside reports, with no noted agreement (weighted Κ = -0.007). Second-opinion review of PET/CT scans by subspecialized oncological radiologists increases the accuracy of initial staging and posttreatment evaluation, as well as the clinical relevance of the radiology reports.
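Cohen's kappa, used throughout the study above, corrects raw agreement for the agreement expected by chance. A minimal two-rater implementation (the ratings below are illustrative, not the study's data):

```python
def cohens_kappa(r1, r2):
    """Cohen's kappa for two raters' categorical labels over the same items."""
    n = len(r1)
    categories = set(r1) | set(r2)
    po = sum(1 for a, b in zip(r1, r2) if a == b) / n  # observed agreement
    # Chance agreement: product of each rater's marginal label frequencies.
    pe = sum((r1.count(c) / n) * (r2.count(c) / n) for c in categories)
    return (po - pe) / (1 - pe)

# Illustrative ratings for six scans, not data from the study.
rater1 = ["pos", "pos", "neg", "neg", "pos", "neg"]
rater2 = ["pos", "neg", "neg", "neg", "pos", "neg"]
kappa = cohens_kappa(rater1, rater2)
```

Values near 0 (as between the outside and second-opinion reports above) mean agreement is barely better than chance, even when raw percent agreement looks respectable.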
|
27
|
Abstract
OBJECTIVE The purposes of this article are to explore the issue of diagnostic uncertainty in radiology and how the radiology report has often fallen short in this regard and to suggest approaches that can be helpful in addressing this challenge. CONCLUSION The practice of medicine involves a great deal of uncertainty, which is an uncomfortable reality for most physicians. Radiologists are more often than not faced with considerable diagnostic uncertainty and in their written reports are challenged to effectively communicate that uncertainty to referring physicians and others.
|
28
|
Addition of the Fleischner Society Guidelines to Chest CT Examination Interpretive Reports Improves Adherence to Recommended Follow-up Care for Incidental Pulmonary Nodules. Acad Radiol 2017; 24:337-344. [PMID: 27793580 DOI: 10.1016/j.acra.2016.08.026] [Citation(s) in RCA: 32] [Impact Index Per Article: 4.6] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/14/2016] [Revised: 08/25/2016] [Accepted: 08/29/2016] [Indexed: 12/21/2022]
Abstract
RATIONALE AND OBJECTIVES The study aimed to determine whether the addition of the Fleischner Society guidelines to chest computed tomography (CT) reports identifying incidental pulmonary nodules affects follow-up care. PATIENTS AND METHODS Beginning in 2008, a template containing the Fleischner Society guidelines was added, at the interpreting radiologist's discretion, to chest CT reports describing incidental solid pulmonary nodules at our institution. The records of all medical centers in Olmsted County were used to capture the complete medical history of local patients >35 years old diagnosed with a pulmonary nodule from April 1, 2008 to October 1, 2011. Patients with a history of cancer or a previously diagnosed nodule, or who died before follow-up, were excluded. Patients were categorized according to whether they did ("template group") or did not ("control group") have the template added. Nodule size and smoking history were used to determine recommended follow-up care. Differences in follow-up were compared between groups using Pearson's chi-square test. RESULTS A total of 510 patients (276 in the template group, 234 in the control group) were included in the study. Only 198 patients (39%) received their recommended follow-up care. Template-group patients were significantly more likely to receive recommended follow-up care than control-group patients (45% vs 31%, P = .0014). Most patients whose management did not adhere to the Fleischner Society guidelines did not receive a recommended follow-up chest CT (210 of 312, 67%). CONCLUSIONS The addition of the Fleischner Society guidelines to chest CT reports significantly increases the likelihood of receiving recommended follow-up care for patients with incidental pulmonary nodules. Additional education is needed to improve appropriate guideline utilization by radiologists and adherence by ordering providers.
|
29
|
Abstract
Background The availability of clinical information and a pertinent clinical question can improve the diagnostic accuracy of the imaging process. Purpose To examine whether an electronic request form forcing referring clinicians to provide separate input of both clinical information and a clinical question can improve the quality of the request. Material and Methods A total of 607 request forms in the clinical worklists for a computed tomography (CT) scan of the thorax, the abdomen, or their combination were examined. Using software of our own making, we examined the presence of clinical information and a clinical question before and after the introduction of a new, more compelling order method. We scored and compared the quality of the clinical information and the clinical question between the two systems, and we examined the effect on productivity. Results Both clinical information and a clinical question were present in 76.7% of cases under the old system and in 95.3% under the new system (P < 0.001). With the exception of incompleteness, however, individual characteristics of the clinical information and the clinical question showed little improvement under the new system. There was also no significant difference between the two systems in the number of requests requiring further search. Conclusion The introduction of electronic radiology request forms compelling referring clinicians to provide separate input of clinical information and a clinical question provides only limited benefit to the quality of the request. Raising awareness among clinicians of the importance of a well-written request remains essential.
|
30
|
Syntactic and semantic errors in radiology reports associated with speech recognition software. Health Informatics J 2016; 23:3-13. [PMID: 26635322 DOI: 10.1177/1460458215613614] [Citation(s) in RCA: 17] [Impact Index Per Article: 2.1] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/17/2022]
Abstract
Speech recognition software can increase the frequency of errors in radiology reports, which may affect patient care. We retrieved 213,977 speech recognition software-generated reports from 147 different radiologists and proofread them for errors. Errors were classified as "material" if they were believed to alter interpretation of the report. "Immaterial" errors were subclassified as intrusion/omission or spelling errors. The proportion of errors and error type were compared among individual radiologists, imaging subspecialties, and time periods. In all, 20,759 reports (9.7%) contained errors, of which 3992 (1.9%) were material errors. Among immaterial errors, spelling errors were more common than intrusion/omission errors (p < .001). The proportion of errors and the fraction of material errors varied significantly among radiologists and between imaging subspecialties (p < .001). Errors were more common in cross-sectional reports, reports reinterpreting results of outside examinations, and procedural studies (all p < .001). The error rate decreased over time (p < .001), which suggests that a quality control program with regular feedback may reduce errors.
|
31
|
Reporting of central airway obstruction on radiology reports and impact on bronchoscopic airway interventions and patient outcomes. Ther Adv Respir Dis 2015; 10:105-12. [PMID: 26644260 DOI: 10.1177/1753465815620111] [Citation(s) in RCA: 8] [Impact Index Per Article: 0.9] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/16/2022] Open
Abstract
BACKGROUND Central airway obstruction (CAO) is a serious condition that affects patients with both benign and malignant diseases. Timely recognition of CAO is crucial for prompt intervention aimed at improving the symptoms and quality of life of these patients. The aim of this study is to evaluate the formal radiology reporting of CAO and its impact on patient outcomes. METHODS The medical records of patients who underwent advanced therapeutic bronchoscopy for CAO from August 2013 to September 2014 were retrospectively reviewed. Three researchers each reviewed 14 of the 42 formal radiology reports, which were performed at 16 different medical and radiology centers. Patient characteristics were reported as means, medians, and standard deviations for continuous variables, and as frequencies and relative frequencies for categorical variables. RESULTS Of the 42 patients who underwent advanced bronchoscopy for a planned therapeutic intervention, only 30 had radiology and pulmonology concordance on the airway findings of CAO. This is an agreement rate of 71.4% [95% confidence interval (CI): 56.7-83.3%], or a disagreement rate of 28.6% (95% CI: 16.7-43.3%). The radiology reports did not mention 31% of the CAOs visible on CT scans. The median time from CT imaging to bronchoscopy was significantly longer in patients whose CAO was not reported by the radiologists (21 versus 10 days; p = 0.011). Most patients improved postoperatively, with no significant difference between the two groups. CONCLUSIONS Findings of CAO were not described in a significant proportion of radiology reports, resulting in significant delays in bronchoscopic airway management.
|
32
|
Natural Language Processing Techniques for Extracting and Categorizing Finding Measurements in Narrative Radiology Reports. Appl Clin Inform 2015; 6:600-110. [PMID: 26448801 DOI: 10.4338/aci-2014-11-ra-0110] [Citation(s) in RCA: 30] [Impact Index Per Article: 3.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/01/2014] [Accepted: 07/31/2015] [Indexed: 11/23/2022] Open
Abstract
BACKGROUND Accumulating quantitative outcome parameters may contribute to constructing a healthcare organization in which the outcomes of clinical procedures are reproducible and predictable. In imaging studies, measurements are the principal category of quantitative parameters. OBJECTIVES The purpose of this work is to develop and evaluate two natural language processing engines that extract finding and organ measurements from narrative radiology reports and to categorize the extracted measurements by their "temporality". METHODS The measurement extraction engine is developed as a set of regular expressions and was evaluated against a manually created ground truth. Automated categorization of measurement temporality is defined as a machine learning problem. A ground truth was manually developed based on a corpus of radiology reports. A maximum entropy model was created using features that characterize the measurement itself and its narrative context. The model was evaluated in a ten-fold cross-validation protocol. RESULTS The measurement extraction engine has a precision of 0.994 and a recall of 0.991. The accuracy of the measurement classification engine is 0.960. CONCLUSIONS The work contributes to machine understanding of radiology reports and may find application in software that processes medical data.
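A minimal sketch of the regular-expression approach the abstract describes, extracting size measurements such as "3.2 x 1.4 cm" from narrative text (the pattern and sentence are illustrative assumptions, not the authors' engine):

```python
import re

# Matches one- or multi-dimensional measurements with a unit,
# e.g. "8 mm" or "3.2 x 1.4 cm". Illustrative pattern only.
MEASUREMENT = re.compile(
    r"(\d+(?:\.\d+)?(?:\s*[x×]\s*\d+(?:\.\d+)?)*)\s*(mm|cm)",
    re.IGNORECASE,
)

def extract_measurements(text):
    """Return (value-string, unit) pairs found in a narrative report sentence."""
    return [(m.group(1), m.group(2).lower()) for m in MEASUREMENT.finditer(text)]

sentence = "There is a 3.2 x 1.4 cm nodule, previously measuring 8 mm."
found = extract_measurements(sentence)
```

Classifying each hit as current versus prior ("temporality") is the part the authors handle with a maximum entropy model over the measurement's narrative context.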
|
33
|
Abstract
BACKGROUND Examination requests and imaging reports are the most important communication instruments between clinicians and radiologists. An accurate and clear report helps referring physicians make care decisions for their patients. PURPOSE To evaluate the contents of initial and re-reported chest reports, assess the inter-observer agreement, and evaluate the clarity of the report contents from the viewpoint of the referring physicians. MATERIAL AND METHODS The content and agreement of the reports were analyzed by comparing the initial reports with re-reports prepared by a chest radiologist. The referring physicians evaluated the contents of 50 reports regarding their medical facts, clarity, and intelligibility. The results were analyzed using cross-over tables, Pearson's chi-square test, and kappa statistics. RESULTS Radiologists mostly addressed the questions posed by the referring physicians. General radiologists included separate conclusions in their reports more frequently (22%) than the chest radiologist did in her re-reports. Reports prepared by the chest radiologist contained nearly 50% more findings than the general radiologists' reports. Inter-observer agreement between the initial and specialist re-reported reports was 66%, but the kappa value was 0.31. The referring physicians considered 68% of the initial reports by the general radiologists and 94% of the re-reported studies by the chest radiologist to be clear and intelligible. CONCLUSION Radiology report quality was rather high, although report contents varied depending on the radiologist. Inter-observer agreement on the chest radiographs was low because the non-structured reports contained different quantities of information, complicating the comparison. Referring physicians considered both short and long radiology reports to be clear.
|
34
|
Abstract
OBJECTIVE To identify and describe general practitioners' (GPs') views on radiology reports, using plain radiography for back pain as the case. DESIGN Qualitative study with three focus-group interviews analysed using Giorgi's method as modified by Malterud. SETTING Southern Norway. SUBJECTS Five female and eight male GPs aged 32-57 years who had practised for 3-15 years and were from 11 different practices. MAIN OUTCOME MEASURES Descriptions of GPs' views. RESULTS GPs wanted radiology reports to indicate more clearly the meaning of radiological terminology, the likelihood of disease, the clinical relevance of the findings, and/or the need for further investigations. GPs stated that good referral information leads to better reports. CONCLUSION These results can help to improve communication between radiologists and GPs. The issues identified in this study could be further investigated in studies that can quantify GPs' satisfaction with radiology reports in relation to characteristics of the GP, the radiologist, and the referral information.
|