1. Song Y, Wang J, Ge Y, Li L, Guo J, Dong Q, Liao Z. Medical image classification: Knowledge transfer via residual U-Net and vision transformer-based teacher-student model with knowledge distillation. Journal of Visual Communication and Image Representation 2024; 102:104212. [DOI: 10.1016/j.jvcir.2024.104212]

2. Guo L, Zhou C, Xu J, Huang C, Yu Y, Lu G. Deep Learning for Chest X-ray Diagnosis: Competition Between Radiologists with or Without Artificial Intelligence Assistance. Journal of Imaging Informatics in Medicine 2024; 37:922-934. [PMID: 38332402] [PMCID: PMC11169143] [DOI: 10.1007/s10278-024-00990-6]
Abstract
This study aimed to assess the performance of a deep learning algorithm in helping radiologists achieve improved efficiency and accuracy in chest radiograph diagnosis. We adopted a deep learning algorithm to concurrently detect the presence of normal findings and 13 different abnormalities in chest radiographs and evaluated its performance in assisting radiologists. Each competing radiologist had to determine the presence or absence of these signs based on the label provided by the AI. The 100 radiographs were randomly divided into two sets for evaluation: one without AI assistance (control group) and one with AI assistance (test group). The accuracy, false-positive rate, false-negative rate, and analysis time of 111 radiologists (29 senior, 32 intermediate, and 50 junior) were evaluated. A radiologist was given an initial score of 14 points for each image read, with 1 point deducted for each incorrect answer and no deduction for a correct answer; the final score for each reader was calculated automatically by the backend. We calculated the mean score of each radiologist in the control and test groups to evaluate performance with and without AI assistance. The average score of the 111 radiologists was 597 (587-605) in the control group and 619 (612-626) in the test group (P < 0.001). The time spent by the 111 radiologists on the control and test groups was 3279 (2972-3941) and 1926 (1710-2432) s, respectively (P < 0.001). Performance in the two groups was also evaluated by the area under the receiver operating characteristic curve (AUC). With AI assistance, the radiologists showed better performance in recognizing normal findings, pulmonary fibrosis, heart shadow enlargement, mass, pleural effusion, and pulmonary consolidation, with AUCs of 1.0, 0.950, 0.991, 1.0, 0.993, and 0.982, respectively. The radiologists alone showed better performance in recognizing aortic calcification (0.993), calcification (0.933), cavity (0.963), nodule (0.923), pleural thickening (0.957), and rib fracture (0.987). This competition verified the positive effects of deep learning methods in assisting radiologists in interpreting chest X-rays. AI assistance can help to improve both the efficacy and efficiency of radiologists.
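For concreteness, the scoring rule described above can be sketched as follows; the per-image answers are hypothetical, and only the 14-sign, deduct-one-point-per-error rule is taken from the study.

```python
# Sketch of the per-reader scoring rule described above (illustrative only).
# Each radiograph is judged on 14 signs (normal + 13 abnormalities); a reader
# starts at 14 points per image and loses 1 point per incorrect call.

def image_score(predictions: list[bool], truth: list[bool]) -> int:
    """Score one radiograph: 14 minus the number of incorrect sign judgements."""
    assert len(predictions) == len(truth) == 14
    errors = sum(p != t for p, t in zip(predictions, truth))
    return 14 - errors

def reader_score(all_predictions, all_truth) -> int:
    """Total score over a set of radiographs (50 per group in the study)."""
    return sum(image_score(p, t) for p, t in zip(all_predictions, all_truth))

# Example: a reader who makes 2 errors on one image and none on another.
print(reader_score(
    [[True] * 14, [True] * 13 + [False]],
    [[True] * 12 + [False, False], [True] * 13 + [False]],
))  # -> 12 + 14 = 26
```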

Affiliation(s)
- Lili Guo, Department of Radiology, The Affiliated Huaian No. 1 People's Hospital of Nanjing Medical University, Huai'an, 223300, China
- Changsheng Zhou, Department of Medical Imaging, Jinling Hospital, Medical School of Nanjing University, Nanjing, 210002, China
- Jingxu Xu, Deepwise AI Lab, Beijing Deepwise & League of PHD Technology Co., Ltd, Beijing, 100080, China
- Chencui Huang, Deepwise AI Lab, Beijing Deepwise & League of PHD Technology Co., Ltd, Beijing, 100080, China
- Yizhou Yu, Deepwise AI Lab, Beijing Deepwise & League of PHD Technology Co., Ltd, Beijing, 100080, China
- Guangming Lu, Department of Medical Imaging, Jinling Hospital, Medical School of Nanjing University, Nanjing, 210002, China

3. Loughrey C, Fitzpatrick P, Orr N, Jurek-Loughrey A. The topology of data: Opportunities for cancer research. Bioinformatics 2021; 37:3091-3098. [PMID: 34320632] [PMCID: PMC8504620] [DOI: 10.1093/bioinformatics/btab553]
Abstract
Motivation: Topological methods have recently emerged as a reliable and interpretable framework for extracting information from high-dimensional data, leading to the creation of a branch of applied mathematics called Topological Data Analysis (TDA). Since then, TDA has been progressively adopted in biomedical research. Biological data collection can result in enormous datasets, comprising thousands of features and spanning diverse datatypes. This presents a barrier to initial data analysis as the fundamental structure of the dataset becomes hidden, obstructing the discovery of important features and patterns. TDA provides a solution to obtain the underlying shape of datasets over continuous resolutions, corresponding to key topological features independent of noise. TDA has the potential to support future developments in healthcare as biomedical datasets rise in complexity and dimensionality. Previous applications extend across the fields of neuroscience, oncology, immunology and medical image analysis. TDA has been used to reveal hidden subgroups of cancer patients, construct organizational maps of brain activity and classify abnormal patterns in medical images. The utility of TDA is broad, and to understand where current achievements lie, we have evaluated the present state of TDA in cancer data analysis.
Results: This article aims to provide an overview of TDA in cancer research. A brief introduction to the main concepts of TDA is provided to ensure that the article is accessible to readers who are not familiar with this field. Following this, a focussed literature review of the field is presented, discussing how TDA has been applied across heterogeneous datatypes for cancer research.
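As a concrete illustration of "the underlying shape of datasets over continuous resolutions", the toy sketch below computes 0-dimensional persistent homology (the birth and death scales of connected components) of a small point cloud using a union-find over a Vietoris-Rips filtration. It is not taken from the article; real analyses typically rely on dedicated TDA libraries.

```python
# Toy 0-dimensional persistence (connected components) over a Vietoris-Rips
# filtration, illustrating the "shape across resolutions" idea behind TDA.
import math
from itertools import combinations

def h0_persistence(points):
    """Return (birth, death) pairs for connected components of a point cloud."""
    n = len(points)
    parent = list(range(n))

    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]
            i = parent[i]
        return i

    # Edges of the Rips filtration enter in order of increasing pairwise distance.
    edges = sorted(
        (math.dist(points[i], points[j]), i, j) for i, j in combinations(range(n), 2)
    )
    pairs = []
    for d, i, j in edges:
        ri, rj = find(i), find(j)
        if ri != rj:                 # two components merge: one of them "dies" at scale d
            parent[ri] = rj
            pairs.append((0.0, d))   # every component is born at scale 0
    pairs.append((0.0, math.inf))    # the final component never dies
    return pairs

# Two well-separated clusters: one long-lived bar reveals the 2-cluster structure.
cloud = [(0, 0), (0.1, 0.2), (0.2, 0.1), (5, 5), (5.1, 5.2), (5.2, 4.9)]
for birth, death in h0_persistence(cloud):
    print(f"component: born {birth}, dies {death:.2f}")
```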

Affiliation(s)
- Ciara Loughrey, School of Electronics, Electrical Engineering and Computer Science, Queen's University Belfast, BT9 5BN, United Kingdom
- Padraig Fitzpatrick, School of Electronics, Electrical Engineering and Computer Science, Queen's University Belfast, BT9 5BN, United Kingdom
- Nick Orr, Patrick G Johnston Centre for Cancer Research, Queen's University Belfast, BT9 7AE, United Kingdom
- Anna Jurek-Loughrey, School of Electronics, Electrical Engineering and Computer Science, Queen's University Belfast, BT9 5BN, United Kingdom

4. A multi-level similarity measure for the retrieval of the common CT imaging signs of lung diseases. Med Biol Eng Comput 2020; 58:1015-1029. [PMID: 32124223] [DOI: 10.1007/s11517-020-02146-4]
Abstract
The common CT imaging signs of lung diseases (CISLs), which frequently appear in lung CT images, are widely used in the diagnosis of lung diseases. Computer-aided diagnosis (CAD) based on the CISLs can improve radiologists' performance in the diagnosis of lung diseases. Since similarity measurement is important for CAD, we propose a multi-level method to measure the similarity between CISLs. The CISLs are characterized at the low-level visual scale, mid-level attribute scale, and high-level semantic scale for a rich representation. The similarity at each level is calculated, and the results are combined in a weighted sum as the final similarity. The proposed multi-level similarity method is capable of computing both level-specific similarity and optimal cross-level complementary similarity. The effectiveness of the proposed similarity measure is evaluated on a dataset of 511 lung CT images from clinical patients for CISL retrieval. It achieves about 80% precision and takes only 3.6 ms per retrieval. Extensive comparative evaluations on the same dataset validate the advantages in retrieval performance of our multi-level similarity measure over single-level and two-level similarity methods. The proposed method can have wide applications in radiology and decision support.
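A minimal sketch of the weighted-sum fusion of level-specific similarities described above. The cosine similarity, random feature vectors, and fixed weights are placeholders, not the paper's actual CISL descriptors or tuned weights.

```python
# Sketch of a multi-level similarity as a weighted sum of level-specific
# similarities (visual, attribute, semantic). Features and weights are placeholders.
import numpy as np

def cosine(a, b):
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

def multi_level_similarity(query, candidate, weights=(0.3, 0.3, 0.4)):
    """query/candidate: dicts with 'visual', 'attribute', 'semantic' feature vectors."""
    levels = ("visual", "attribute", "semantic")
    sims = [cosine(query[lvl], candidate[lvl]) for lvl in levels]
    return sum(w * s for w, s in zip(weights, sims))

# Rank a toy database of CISL regions by similarity to a query region.
rng = np.random.default_rng(0)
make = lambda: {"visual": rng.random(64), "attribute": rng.random(16), "semantic": rng.random(8)}
query, database = make(), [make() for _ in range(5)]
ranking = sorted(range(len(database)),
                 key=lambda i: multi_level_similarity(query, database[i]), reverse=True)
print("retrieval order:", ranking)
```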

5. Chaki J, Dey N. Data Tagging in Medical Images: A Survey of the State-of-Art. Curr Med Imaging 2020; 16:1214-1228. [PMID: 32108002] [DOI: 10.2174/1573405616666200218130043]
Abstract
A huge amount of medical data is generated every second, and a significant percentage of these data are images that need to be analyzed and processed. One of the key challenges in this regard is the retrieval of medical image data. Medical image retrieval should be performed automatically by computers, by identifying object concepts in the images and assigning homologous tags to them. Discovering the hidden concepts in medical images requires mapping low-level characteristics to high-level concepts, which is a challenging task and, in any specific case, requires human involvement to determine the significance of the image. To allow machine-based reasoning on the medical evidence collected, the data must be accompanied by additional interpretive semantics: a shift from a purely data-intensive methodology to a model of evidence rich in semantics. In this state-of-the-art survey, data tagging methods related to medical images are reviewed, an important aspect of the recognition of huge numbers of medical images. Different types of tags related to medical images, prerequisites of medical data tagging, techniques to develop medical image tags, medical image tagging algorithms, and tools used to create the tags are discussed in this paper. The aim of this paper is to produce a summary and a set of guidelines for using tags for the identification of medical images, and to identify the challenges and future research directions of tagging medical images.

Affiliation(s)
- Jyotismita Chaki, School of Information Technology and Engineering, Vellore Institute of Technology, Vellore, India
- Nilanjan Dey, Department of Information Technology, Techno India College of Technology, West Bengal, India

6. Zhang Z, Sejdić E. Radiological images and machine learning: Trends, perspectives, and prospects. Comput Biol Med 2019; 108:354-370. [PMID: 31054502] [PMCID: PMC6531364] [DOI: 10.1016/j.compbiomed.2019.02.017]
Abstract
The application of machine learning to radiological images is an increasingly active research area that is expected to grow in the next five to ten years. Recent advances in machine learning have the potential to recognize and classify complex patterns from different radiological imaging modalities such as x-rays, computed tomography, magnetic resonance imaging and positron emission tomography imaging. In many applications, machine learning-based systems have shown performance comparable to human decision-making. The applications of machine learning are the key ingredients of future clinical decision-making and monitoring systems. This review covers the fundamental concepts behind various machine learning techniques and their applications in several radiological imaging areas, such as medical image segmentation, brain function studies and neurological disease diagnosis, as well as computer-aided systems, image registration, and content-based image retrieval systems. We also briefly discuss current challenges and future directions regarding the application of machine learning in radiological imaging. By giving insight into how to take advantage of machine learning-powered applications, we expect that clinicians can prevent and diagnose diseases more accurately and efficiently.

Affiliation(s)
- Zhenwei Zhang, Department of Electrical and Computer Engineering, Swanson School of Engineering, University of Pittsburgh, Pittsburgh, PA, 15261, USA
- Ervin Sejdić, Department of Electrical and Computer Engineering, Swanson School of Engineering, University of Pittsburgh, Pittsburgh, PA, 15261, USA

7. Banerjee I, Kurtz C, Devorah AE, Do B, Rubin DL, Beaulieu CF. Relevance feedback for enhancing content based image retrieval and automatic prediction of semantic image features: Application to bone tumor radiographs. J Biomed Inform 2018; 84:123-135. [DOI: 10.1016/j.jbi.2018.07.002]

8. Colucci S, Donini FM, Di Sciascio E. Logical comparison over RDF resources in bio-informatics. J Biomed Inform 2017; 76:87-101. [PMID: 29127041] [DOI: 10.1016/j.jbi.2017.11.004]
Abstract
Comparison of resources is a frequent task in different bio-informatics applications, including drug-target interaction, drug repositioning and mechanism of action understanding, among others. This paper proposes a general method for the logical comparison of resources modeled in the Resource Description Framework and shows its distinguishing features with reference to the comparison of drugs. In particular, the method returns a description of the commonalities between resources, rather than a numerical value estimating their similarity and/or relatedness. The approach is domain-independent and may be flexibly adapted to heterogeneous use cases, according to a fully explicit process for setting parameters. The paper also presents an experiment using the BioPortal dataset as a knowledge source; the experiment is fully reproducible thanks to the elicitation of criteria and values for parameter customization.
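A drastically simplified sketch of returning a description of commonalities rather than a similarity score: here the "commonality" is just the set of (predicate, object) pairs shared by two resources represented as plain triples. The toy drug data and the set-intersection shortcut are illustrative assumptions; the paper's method computes logical commonalities over full RDF descriptions.

```python
# Simplified sketch: describe the commonality between two RDF resources as the
# set of (predicate, object) pairs they share, instead of a numeric similarity.

def describe(triples, subject):
    """Collect the (predicate, object) pairs attached to a subject."""
    return {(p, o) for s, p, o in triples if s == subject}

def commonality(triples, res_a, res_b):
    return describe(triples, res_a) & describe(triples, res_b)

# Toy drug descriptions (strings stand in for URIs).
triples = [
    ("drug:aspirin",   "rdf:type",      "class:NSAID"),
    ("drug:aspirin",   "ex:targets",    "protein:COX1"),
    ("drug:aspirin",   "ex:indication", "disease:pain"),
    ("drug:ibuprofen", "rdf:type",      "class:NSAID"),
    ("drug:ibuprofen", "ex:targets",    "protein:COX1"),
    ("drug:ibuprofen", "ex:indication", "disease:fever"),
]
for p, o in sorted(commonality(triples, "drug:aspirin", "drug:ibuprofen")):
    print(f"both resources have {p} -> {o}")
```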

Affiliation(s)
- S Colucci, Politecnico di Bari, Via Orabona 4, 70125 Bari, Italy
- F M Donini, Università della Tuscia, Via S. Maria in Gradi 4, 01100 Viterbo, Italy
- E Di Sciascio, Politecnico di Bari, Via Orabona 4, 70125 Bari, Italy

9. Spanier AB, Cohen D, Joskowicz L. A new method for the automatic retrieval of medical cases based on the RadLex ontology. Int J Comput Assist Radiol Surg 2016; 12:471-484. [PMID: 27804009] [DOI: 10.1007/s11548-016-1496-y]
Abstract
PURPOSE: The goal of medical case-based image retrieval (M-CBIR) is to assist radiologists in the clinical decision-making process by finding medical cases in large archives that most resemble a given case. Cases are described by radiology reports comprised of radiological images and textual information on the anatomy and pathology findings. The textual information, when available in standardized terminology, e.g., the RadLex ontology, and used in conjunction with the radiological images, provides a substantial advantage for M-CBIR systems.
METHODS: We present a new method for incorporating textual radiological findings from medical case reports in M-CBIR. The input is a database of medical cases, a query case, and the number of desired relevant cases. The output is an ordered list of the most relevant cases in the database. The method is based on a new case formulation, the Augmented RadLex Graph and an Anatomy-Pathology List. It uses a new case relatedness metric [Formula: see text] that prioritizes more specific medical terms in the RadLex tree over less specific ones and that incorporates the length of the query case.
RESULTS: An experimental study on 8 CT queries from the 2015 VISCERAL 3D Case Retrieval Challenge database consisting of 1497 volumetric CT scans shows that our method has accuracy rates of 82 and 70% on the first 10 and 30 most relevant cases, respectively, thereby outperforming six other methods.
CONCLUSIONS: The increasing amount of medical imaging data acquired in clinical practice constitutes a vast database of untapped diagnostically relevant information. This paper presents a new hybrid approach to retrieving the most relevant medical cases based on textual and image information.
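A toy illustration of a relatedness score that favours more specific (deeper) RadLex-style terms and accounts for the query length, in the spirit of the metric described above. The hand-made hierarchy, depth weighting, and normalization are assumptions for illustration, not the paper's exact formula (which is elided above as [Formula: see text]).

```python
# Toy relatedness between two cases described by RadLex-style terms: matched
# terms contribute weight proportional to their depth in the ontology tree
# (more specific terms count more), normalized by the number of query terms.

parent = {                      # tiny hand-made is-a hierarchy (child -> parent)
    "mass": "finding",
    "nodule": "mass",
    "ground-glass nodule": "nodule",
    "effusion": "finding",
    "pleural effusion": "effusion",
}

def depth(term):
    d = 1
    while term in parent:
        term, d = parent[term], d + 1
    return d

def relatedness(query_terms, case_terms):
    matched = set(query_terms) & set(case_terms)
    if not query_terms:
        return 0.0
    return sum(depth(t) for t in matched) / len(query_terms)

query = ["ground-glass nodule", "pleural effusion"]
print(relatedness(query, ["ground-glass nodule", "effusion"]))   # specific match scores higher
print(relatedness(query, ["mass", "effusion"]))                  # no exact matches -> 0.0
```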

Affiliation(s)
- A B Spanier, The Rachel and Selim Benin School of Computer Science and Engineering, The Hebrew University of Jerusalem, Givat Ram Campus, 91904, Jerusalem, Israel; Alexander Grass Center for Bioengineering, The Hebrew University of Jerusalem, Jerusalem, Israel
- D Cohen, The Rachel and Selim Benin School of Computer Science and Engineering, The Hebrew University of Jerusalem, Givat Ram Campus, 91904, Jerusalem, Israel
- L Joskowicz, The Rachel and Selim Benin School of Computer Science and Engineering, The Hebrew University of Jerusalem, Givat Ram Campus, 91904, Jerusalem, Israel

10. Qian Z, Zhong P, Chen J. Integrating global and local visual features with semantic hierarchies for two-level image annotation. Neurocomputing 2016. [DOI: 10.1016/j.neucom.2015.07.094]

11.
Abstract
Content-based medical image retrieval (CBMIR) is an active research area for disease diagnosis and treatment but it can be problematic given the small visual variations between anatomical structures. We propose a retrieval method based on a bag-of-visual-words (BoVW) to identify discriminative characteristics between different medical images with Pruned Dictionary based on Latent Semantic Topic description. We refer to this as the PD-LST retrieval. Our method has two main components. First, we calculate a topic-word significance value for each visual word given a certain latent topic to evaluate how the word is connected to this latent topic. The latent topics are learnt, based on the relationship between the images and words, and are employed to bridge the gap between low-level visual features and high-level semantics. These latent topics describe the images and words semantically and can thus facilitate more meaningful comparisons between the words. Second, we compute an overall-word significance value to evaluate the significance of a visual word within the entire dictionary. We designed an iterative ranking method to measure overall-word significance by considering the relationship between all latent topics and words. The words with higher values are considered meaningful with more significant discriminative power in differentiating medical images. We evaluated our method on two public medical imaging datasets and it showed improved retrieval accuracy and efficiency.
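A rough sketch of the dictionary-pruning idea above: given a topic-word probability matrix from a topic model (e.g., LDA), keep only the visual words with the highest significance across latent topics. The matrix, the max-based aggregation, and the keep fraction are illustrative assumptions; the paper computes topic-word and overall-word significance with an iterative ranking scheme.

```python
# Rough sketch of pruning a visual-word dictionary by word significance derived
# from a topic model. `topic_word` is a (topics x words) probability matrix;
# here significance is simply each word's maximum per-topic probability.
import numpy as np

def prune_dictionary(topic_word: np.ndarray, keep_fraction: float = 0.5) -> np.ndarray:
    """Return indices of the most significant visual words."""
    topic_word = topic_word / topic_word.sum(axis=1, keepdims=True)  # normalize per topic
    overall_significance = topic_word.max(axis=0)   # how strongly any topic relies on each word
    n_keep = max(1, int(keep_fraction * topic_word.shape[1]))
    return np.argsort(overall_significance)[::-1][:n_keep]

# Toy example: 3 latent topics over a 10-word visual dictionary.
rng = np.random.default_rng(1)
topic_word = rng.random((3, 10))
kept = prune_dictionary(topic_word, keep_fraction=0.4)
print("kept visual words:", kept)   # BoVW histograms would then be restricted to these words
```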

12. Kurtz C, Depeursinge A, Napel S, Beaulieu CF, Rubin DL. On combining image-based and ontological semantic dissimilarities for medical image retrieval applications. Med Image Anal 2014; 18:1082-1100. [PMID: 25036769] [PMCID: PMC4173098] [DOI: 10.1016/j.media.2014.06.009]
Abstract
Computer-assisted image retrieval applications can assist radiologists by identifying similar images in archives as a means of providing decision support. In the classical case, images are described using low-level features extracted from their contents, and an appropriate distance is used to find the best matches in the feature space. However, using low-level image features to fully capture the visual appearance of diseases is challenging, and the semantic gap between these features and the high-level visual concepts in radiology may impair system performance. To deal with this issue, the use of semantic terms to provide high-level descriptions of radiological image contents has recently been advocated. Nevertheless, most existing semantic image retrieval strategies are limited by two factors: they require manual annotation of the images using semantic terms, and they ignore the intrinsic visual and semantic relationships between these annotations during the comparison of the images. Based on these considerations, we propose an image retrieval framework based on semantic features that relies on two main strategies: (1) automatic "soft" prediction of ontological terms that describe the image contents from multi-scale Riesz wavelets and (2) retrieval of similar images by evaluating the similarity between their annotations using a new term dissimilarity measure, which takes into account both image-based and ontological term relations. The combination of these strategies provides a means of accurately retrieving similar images in databases based on image annotations and can be considered a potential solution to the semantic gap problem. We validated this approach in the context of the retrieval of liver lesions from computed tomographic (CT) images annotated with semantic terms of the RadLex ontology. The relevance of the retrieval results was assessed using two protocols: evaluation relative to a dissimilarity reference standard defined for pairs of images on a 25-image dataset, and evaluation relative to the diagnoses of the retrieved images on a 72-image dataset. A normalized discounted cumulative gain (NDCG) score of more than 0.92 was obtained with the first protocol, while AUC scores of more than 0.77 were obtained with the second protocol. This automatic approach could provide real-time decision support to radiologists by showing them similar images with associated diagnoses and, where available, responses to therapies.
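A minimal sketch of fusing an image-based term dissimilarity with an ontological one into a single measure, in the spirit of the term dissimilarity described above. Both component matrices, the convex combination, and the best-match case aggregation are illustrative assumptions rather than the paper's formulation.

```python
# Sketch of a combined term dissimilarity: a convex combination of an
# image-based dissimilarity and an ontological (hierarchy-based) dissimilarity.
# Both component matrices below are illustrative placeholders.
import numpy as np

def combined_dissimilarity(d_image: np.ndarray, d_onto: np.ndarray, alpha: float = 0.5) -> np.ndarray:
    """Element-wise fusion of two term-by-term dissimilarity matrices."""
    return alpha * d_image + (1.0 - alpha) * d_onto

def case_distance(query_terms, case_terms, d_terms):
    """Average best-match dissimilarity between two sets of (indexed) annotation terms."""
    return float(np.mean([min(d_terms[q, c] for c in case_terms) for q in query_terms]))

rng = np.random.default_rng(2)
d_image = rng.random((6, 6))
d_onto = rng.random((6, 6))
d_terms = combined_dissimilarity(d_image, d_onto, alpha=0.6)
print(case_distance([0, 2], [1, 2, 5], d_terms))   # smaller values = more similar annotations
```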

Affiliation(s)
- Camille Kurtz, Department of Radiology, School of Medicine, Stanford University, USA; LIPADE Laboratory (EA 2517), University Paris Descartes, France
- Sandy Napel, Department of Radiology, School of Medicine, Stanford University, USA
- Daniel L Rubin, Department of Radiology, School of Medicine, Stanford University, USA