1.
Determining the applicability of the RSNA radiology lexicon (RadLex) in high-grade glioma MRI reporting - a preliminary study on 20 consecutive cases with newly diagnosed glioblastoma. BMC Med Imaging 2022; 22:53. [PMID: 35331160] [PMCID: PMC8944106] [DOI: 10.1186/s12880-022-00776-8]
Abstract
Background: The implementation of a collective terminology in radiological reporting, such as the RSNA radiological lexicon (RadLex), yields many benefits, including unambiguous communication of findings, improved education, and support for data mining in research. While some fields of general radiology have already been evaluated, this is the first exploratory approach to assessing the applicability of the RadLex terminology to glioblastoma (GBM) MRI reporting.
Methods: Preoperative brain MRI reports of 20 consecutive patients with newly diagnosed GBM (mean age 68.4 ± 10.8 years; 12 males) between January and October 2010 were retrospectively identified. All tumor-related terms and their frequencies of mention were extracted from the MRI reports by two independent neuroradiologists. Each item was subsequently analyzed for an equivalent RadLex representation and classified into one of four groups: (1) verbatim RadLex entity, (2) synonymous/multiple equivalent(s), (3) combination of RadLex concepts, or (4) no RadLex equivalent. Additionally, verbatim entities were categorized using the hierarchical RadLex Tree Browser.

Results: A total of 160 radiological terms were gathered. 123/160 (76.9%) items had literal RadLex equivalents, 9/160 (5.6%) had synonymous (non-verbatim) or multiple counterparts, 21/160 (13.1%) were represented by a combination of concepts, and 7/160 (4.4%) could not be adequately transferred into the RadLex ontology.

Conclusions: Our results suggest sufficient term coverage of the RadLex terminology for GBM MRI reporting. If applied extensively, it may improve communication of radiological findings and facilitate data mining for large-scale research purposes.

Supplementary Information: The online version contains supplementary material available at 10.1186/s12880-022-00776-8.
2.
Loveymi S, Dezfoulian MH, Mansoorizadeh M. Generate Structured Radiology Report from CT Images Using Image Annotation Techniques: Preliminary Results with Liver CT. J Digit Imaging 2019; 33:375-390. [PMID: 31728804] [DOI: 10.1007/s10278-019-00298-w]
Abstract
A medical annotation system for radiology images extracts clinically useful information from the images, allowing machines to infer useful abstract semantics and become capable of automatic reasoning and diagnostic decision-making. It also supplies a human-interpretable explanation for the images. We have implemented a computerized framework that, given a liver CT image, predicts radiological annotations with high accuracy in order to generate a structured report, including very specific high-level semantic content. Each report of a liver CT image relates to different inhomogeneous parts, such as the liver, lesion, and vessel. We argue that gathering all kinds of features is not suitable for filling all parts of the report; rather, for each group of annotations one should find and extract the feature that yields the best answers for that specific annotation. To this end, the main challenge is discovering the relationships between these specific semantic concepts and their association with the low-level image features. Our framework was implemented by combining a set of state-of-the-art low-level imaging features. In addition, we propose a novel feature, DLBP (deep local binary pattern), based on LBP, which incorporates multi-slice analysis in CT images and further improves performance. To model our annotation system, two methods were used: a multi-class support vector machine (SVM) and random subspace (RS), an ensemble learning method. Applying this representation leads to a high prediction accuracy of 93.1% despite its relatively low dimensionality compared with existing work.
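The DLBP descriptor above builds on the classic local binary pattern (LBP). As a rough sketch of the base technique only (not the authors' multi-slice DLBP or their feature pipeline), an 8-neighbour LBP histogram over a plain 2-D intensity array could look like:

```python
def lbp_code(img, r, c):
    """8-neighbour LBP code for pixel (r, c): each neighbour whose intensity
    is >= the centre pixel sets one bit of an 8-bit code."""
    centre = img[r][c]
    # Neighbours in clockwise order, starting at the top-left.
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    code = 0
    for bit, (dr, dc) in enumerate(offsets):
        if img[r + dr][c + dc] >= centre:
            code |= 1 << bit
    return code

def lbp_histogram(img):
    """256-bin histogram of LBP codes over all interior pixels; the histogram
    is the texture feature vector."""
    hist = [0] * 256
    for r in range(1, len(img) - 1):
        for c in range(1, len(img[0]) - 1):
            hist[lbp_code(img, r, c)] += 1
    return hist
```

The paper's DLBP additionally aggregates codes across adjacent CT slices; this sketch covers a single 2-D slice only.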
Affiliation(s)
- Samira Loveymi
- Computer Engineering Department, Bu-Ali Sina University, Shahid Fahmideh blvd., Hamedan, Iran
- Mir Hossein Dezfoulian
- Computer Engineering Department, Bu-Ali Sina University, Shahid Fahmideh blvd., Hamedan, Iran
- Muharram Mansoorizadeh
- Computer Engineering Department, Bu-Ali Sina University, Shahid Fahmideh blvd., Hamedan, Iran.
3.
Tsuji S, Yagahara A, Fukuda A, Nishimoto N, Tanikawa T, Kawamata M, Uchida K, Ogasawara K. [Toward Launching Electronic Terminology Services in Radiological Technology - The History and Transition of Activities for Building Standard Vocabularies in JSRT]. Nihon Hoshasen Gijutsu Gakkai Zasshi 2019; 75:854-860. [PMID: 31434859] [DOI: 10.6009/jjrt.2019_jsrt_75.8.854]
Affiliation(s)
- Ayako Yagahara
- Faculty of Health Sciences, Hokkaido University; Faculty of Health Sciences, Hokkaido University of Science
- Naoki Nishimoto
- Clinical Research and Medical Innovation Center, Hokkaido University Hospital
- Takumi Tanikawa
- Faculty of Health Sciences, Hokkaido University; Faculty of Health Sciences, Hokkaido University of Science
- Minoru Kawamata
- Department of Radiology, Osaka International Cancer Institute
4.
Lindman K, Rose JF, Lindvall M, Lundström C, Treanor D. Annotations, Ontologies, and Whole Slide Images - Development of an Annotated Ontology-Driven Whole Slide Image Library of Normal and Abnormal Human Tissue. J Pathol Inform 2019; 10:22. [PMID: 31523480] [PMCID: PMC6669998] [DOI: 10.4103/jpi.jpi_81_18]
Abstract
Objective: Digital pathology is today a widely used technology, and the digitization of microscopic slides into whole slide images (WSIs) allows the use of machine learning algorithms as a tool in the diagnostic process. In recent years, “deep learning” algorithms for image analysis have been applied to digital pathology with great success. Training these algorithms requires a large volume of high-quality images and image annotations. Such large image collections are a potent source of information, and to use and share that information, standardizing the content through a consistent terminology is essential. The aim of this project was to develop a pilot dataset of exhaustively annotated WSIs of normal and abnormal human tissue and to link the annotations to appropriate ontological information.

Materials and Methods: Several biomedical ontologies and controlled vocabularies were investigated to select the most suitable ontology for this project. The selection criteria required an ontology covering anatomical locations, histological subcompartments, histopathologic diagnoses, histopathologic terms, and generic terms such as normal, abnormal, and artifact. WSIs of normal and abnormal tissue from 50 colon resections and 69 skin excisions, diagnosed 2015-2016 at the Department of Clinical Pathology in Linköping, were randomly collected. These images were manually and exhaustively annotated at the level of major subcompartments, including normal or abnormal findings and artifacts.

Results: Systematized Nomenclature of Medicine Clinical Terms (SNOMED CT) was chosen, and the annotations were linked to its codes and terms. Two hundred WSIs were collected and annotated, resulting in 17,497 annotations covering a total area of 302.19 cm², equivalent to 107.7 gigapixels. Ninety-five unique SNOMED CT codes were used. The time taken to annotate a WSI varied from 45 s to over 360 min, for a total time of approximately 360 h.

Conclusion: This work resulted in a dataset of 200 exhaustively annotated WSIs of normal and abnormal tissue from the colon and skin, and it has informed plans to build a comprehensive library of annotated WSIs. SNOMED CT was found to be the best ontology for annotation labeling. This project also demonstrates the need for future development of annotation tools to make the annotation process more efficient.
Affiliation(s)
- Karin Lindman
- Department of Clinical Pathology, Region Östergötland, Linköping, Sweden
- Jerónimo F Rose
- Center for Medical Image Science and Visualization, Linköping University, Linköping, Sweden
- Martin Lindvall
- Center for Medical Image Science and Visualization (CMIV), Linköping University, Linköping and Sectra AB, Sweden
- Claes Lundström
- Center for Medical Image Science and Visualization, Linköping University, Linköping and Sectra AB, Linköping, Sweden
- Darren Treanor
- Department of Clinical Pathology, Region Östergötland, Linköping, Sweden; Department of Clinical Pathology and Department of Clinical and Experimental Medicine (IKE), Linköping University, Linköping, Sweden; Department of Cellular Pathology, St. James University Hospital, Leeds, UK
5.
Campos L, Pedro V, Couto F. Impact of translation on named-entity recognition in radiology texts. Database: The Journal of Biological Databases and Curation 2018; 2017:4097790. [PMID: 29220455] [PMCID: PMC5737072] [DOI: 10.1093/database/bax064]
Abstract
Radiology reports describe the results of radiography procedures and have the potential to be a useful source of information that can benefit health care systems around the world. One way to automatically extract information from the reports is with text mining tools. The problem is that these tools are mostly developed for English, while reports are usually written in the radiologist's native language, which is not necessarily English. This creates an obstacle to sharing radiology information between different communities. This work explores the solution of translating the reports to English before applying the text mining tools, probing the question of which translation approach should be used. We created MRRAD (Multilingual Radiology Research Articles Dataset), a parallel corpus of Portuguese research articles related to radiology and a number of alternative translations (human, automatic, and semi-automatic) to English. This is a novel corpus which can be used to advance research on this topic. Using MRRAD, we studied which kind of automatic or semi-automatic translation approach is more effective on the named-entity recognition task of finding RadLex terms in the English version of the articles. Considering the terms extracted from human translations as our gold standard, we calculated how similar the terms extracted from other translations were to this standard. We found that a completely automatic translation approach using Google leads to F-scores (between 0.861 and 0.868, depending on the extraction approach) similar to those obtained through a more expensive semi-automatic translation approach using Unbabel (between 0.862 and 0.870). To better understand the results, we also performed a qualitative analysis of the types of errors found in the automatic and semi-automatic translations. Database URL: https://github.com/lasigeBioTM/MRRAD
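The evaluation above treats the RadLex terms extracted from the human translation as a gold standard and scores the terms from other translations against it. A minimal sketch of that kind of set-based F-score, using made-up term sets rather than the study's actual extractions:

```python
def prf(gold, predicted):
    """Set-based precision, recall, and F1 between a gold-standard term
    collection and a predicted one."""
    gold, predicted = set(gold), set(predicted)
    tp = len(gold & predicted)  # terms found in both sets
    precision = tp / len(predicted) if predicted else 0.0
    recall = tp / len(gold) if gold else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    return precision, recall, f1

# Hypothetical RadLex terms from a human vs. a machine translation.
gold_terms = {"pulmonary nodule", "pleural effusion", "atelectasis"}
auto_terms = {"pulmonary nodule", "pleural effusion", "fracture"}
```

Here two of three terms agree, so precision, recall, and F1 all come out to 2/3; the paper's F-scores aggregate this over whole articles.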
Affiliation(s)
- Luís Campos
- LASIGE, Departamento de Informática, Faculdade de Ciências, Universidade de Lisboa, 1749-016 Lisboa, Portugal
- Vasco Pedro
- Unbabel Lda, Rua Visconde de Santarém, 67-B, 1000-286 Lisboa, Portugal
- Francisco Couto
- LASIGE, Departamento de Informática, Faculdade de Ciências, Universidade de Lisboa, 1749-016 Lisboa, Portugal
6.
A fully automatic end-to-end method for content-based image retrieval of CT scans with similar liver lesion annotations. Int J Comput Assist Radiol Surg 2017; 13:165-174. [PMID: 29147954] [DOI: 10.1007/s11548-017-1687-1]
Abstract
Purpose: The goal of medical content-based image retrieval (M-CBIR) is to assist radiologists in the decision-making process by retrieving medical cases similar to a given image. One of the key interests of radiologists is lesions and their annotations, since patient treatment depends on the lesion diagnosis. Therefore, a key feature of M-CBIR systems is the retrieval of scans with the most similar lesion annotations. To be of value, M-CBIR systems should be fully automatic so they can handle large case databases.

Methods: We present a fully automatic end-to-end method for the retrieval of CT scans with similar liver lesion annotations. The input is a database of abdominal CT scans labeled with liver lesions, a query CT scan, and optionally one radiologist-specified lesion annotation of interest. The output is an ordered list of the database CT scans with the most similar liver lesion annotations. The method starts by automatically segmenting the liver in the scan. It then extracts a histogram-based feature vector from the segmented region, learns the features' relative importance, and ranks the database scans according to the relative importance measure. The main advantages of our method are that it fully automates the end-to-end querying process, uses simple and efficient techniques that scale to large datasets, and produces quality retrieval results from an unannotated CT scan.

Results: Our experimental results on 9 CT queries against a dataset of 41 volumetric CT scans from the 2014 ImageCLEF Liver Annotation Task yield an average retrieval accuracy (normalized discounted cumulative gain, NDCG) of 0.77 without and 0.84 with an annotation, respectively.

Conclusions: Fully automatic end-to-end retrieval of similar cases based on image information alone, rather than on disease diagnosis, may help radiologists better diagnose liver lesions.
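The retrieval accuracy above is reported as normalized discounted cumulative gain (NDCG). One common formulation of the metric, sketched with hypothetical graded relevance scores rather than the paper's data:

```python
import math

def dcg(relevances):
    """Discounted cumulative gain: each result's relevance is discounted by
    the log2 of its rank (ranks are 0-based here, hence rank + 2)."""
    return sum(rel / math.log2(rank + 2)
               for rank, rel in enumerate(relevances))

def ndcg(relevances):
    """DCG of the returned ranking divided by the DCG of the ideal
    (best-first) ordering of the same relevance scores."""
    ideal = dcg(sorted(relevances, reverse=True))
    return dcg(relevances) / ideal if ideal else 0.0
```

A ranking that already places the most relevant scans first, e.g. relevances `[3, 2, 1]`, scores 1.0; any worse ordering of the same scores is strictly below 1.0. Note that other NDCG variants use an exponential gain (2^rel - 1) in the numerator.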
7.
Faria AV, Liang Z, Miller MI, Mori S. Brain MRI Pattern Recognition Translated to Clinical Scenarios. Front Neurosci 2017; 11:578. [PMID: 29104527] [PMCID: PMC5655969] [DOI: 10.3389/fnins.2017.00578]
Abstract
We explored the performance of structure-based computational analysis in four neurodegenerative conditions [Ataxia (AT, n = 16), Huntington's Disease (HD, n = 52), Alzheimer's Disease (AD, n = 66), and Primary Progressive Aphasia (PPA, n = 50)], all characterized by brain atrophy. The independent variables were the volumes of 283 anatomical areas, derived from automated segmentation of T1 high-resolution brain MRIs. The segmentation-based volumetric quantification reduces image dimensionality from the voxel level [on the order of O(10⁶)] to anatomical structures [O(10²)] for subsequent statistical analysis. We evaluated the effectiveness of this approach at extracting anatomical features, already described by human experience and a priori biological knowledge, in specific scenarios: (1) when pathologies were relatively homogeneous, with evident image alterations (e.g., AT); (2) when the time course was highly correlated with the anatomical changes (e.g., HD), an analogy for prediction; (3) when the pathology embraced heterogeneous phenotypes (e.g., AD), so the classification was less efficient but, in compensation, anatomical and clinical information were less redundant; and (4) when the entity was composed of multiple subgroups that had some degree of anatomical representation (e.g., PPA), showing the potential of this method for clustering more homogeneous phenotypes of clinical importance. Using the structure-based quantification and simple linear classifiers (partial least squares), we achieved accuracies of 87.5% and 73% in differentiating AT and pre-symptomatic HD patients from controls, respectively. More importantly, the anatomical features automatically revealed by the classifiers agreed with the patterns previously described for these pathologies. The accuracy was lower (68%) in differentiating AD from controls, as AD does not display a clear anatomical phenotype. On the other hand, the method identified PPA clinical phenotypes and their respective anatomical signatures. Although most of the data are presented here as proof of concept in simulated clinical scenarios, structure-based analysis was potentially effective in characterizing phenotypes, retrieving relevant anatomical features, predicting prognosis, and aiding diagnosis, with the advantage of being easily translatable to clinics and biologically understandable.
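The pipeline above reduces each scan to a vector of regional volumes and feeds it to a linear classifier (partial least squares in the paper). As a deliberately simplified stand-in for illustration only (nearest-centroid, not the authors' PLS, over two hypothetical regional volumes):

```python
def centroid(vectors):
    """Component-wise mean of a list of equal-length feature vectors."""
    n = len(vectors)
    return [sum(v[i] for v in vectors) / n for i in range(len(vectors[0]))]

def nearest_centroid(features, centroids):
    """Assign a regional-volume vector to the class whose centroid is
    closest in squared Euclidean distance."""
    def dist2(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(centroids, key=lambda label: dist2(features, centroids[label]))

# Made-up training data: [hippocampus_ml, caudate_ml] per subject.
centroids = {
    "atrophy": centroid([[2.9, 3.1], [3.1, 3.3]]),
    "control": centroid([[4.0, 4.2], [4.2, 4.4]]),
}
```

In the study each subject contributes 283 such regional volumes rather than two, and the linear PLS classifier additionally reveals which regions drive the separation.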
Affiliation(s)
- Andreia V Faria
- Department of Radiology, Johns Hopkins University, Baltimore, MD, United States
- Zifei Liang
- Department of Radiology, New York University, New York, NY, United States
- Michael I Miller
- Department of Biomedical Engineering, Johns Hopkins University, Baltimore, MD, United States
- Susumu Mori
- Department of Radiology, Johns Hopkins University, Baltimore, MD, United States