1
Silva W, Gonçalves T, Härmä K, Schröder E, Obmann VC, Barroso MC, Poellinger A, Reyes M, Cardoso JS. Computer-aided diagnosis through medical image retrieval in radiology. Sci Rep 2022; 12:20732. [PMID: 36456605] [PMCID: PMC9715673] [DOI: 10.1038/s41598-022-25027-2]
Abstract
Currently, radiologists face an excessive workload, which leads to high levels of fatigue and, consequently, to undesired diagnostic mistakes. Decision support systems can be used to prioritize cases and help radiologists make quicker decisions. In this sense, medical content-based image retrieval systems can be of extreme utility by providing well-curated similar examples. Nonetheless, most medical content-based image retrieval systems work by finding the most visually similar image, which is not equivalent to finding the most similar image in terms of disease and its severity. Here, we propose an interpretability-driven and an attention-driven medical image retrieval system. We conducted experiments on a large and publicly available dataset of chest radiographs with structured labels derived from free-text radiology reports (MIMIC-CXR-JPG). We evaluated the methods on two common conditions: pleural effusion and (potential) pneumonia. As ground truth for the evaluation, query/test and catalogue images were classified and ordered by an experienced board-certified radiologist. For a thorough and complete evaluation, additional radiologists also provided their rankings, which allowed us to infer inter-rater variability and to establish qualitative performance levels. Based on our ground-truth ranking, we also quantitatively evaluated the proposed approaches by computing the normalized Discounted Cumulative Gain (nDCG). We found that the interpretability-guided approach outperforms the other state-of-the-art approaches and shows the best agreement with the most experienced radiologist. Furthermore, its performance lies within the observed inter-rater variability.
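The nDCG metric used for this kind of ranking evaluation can be sketched in a few lines; this is a minimal illustration with invented relevance grades, not the authors' evaluation code:

```python
import math

def dcg(relevances):
    """Discounted cumulative gain: graded relevance discounted by log2 of rank."""
    return sum(rel / math.log2(rank + 2) for rank, rel in enumerate(relevances))

def ndcg(system_relevances):
    """Normalize by the DCG of the ideal (descending) ordering of the same grades."""
    ideal = dcg(sorted(system_relevances, reverse=True))
    return dcg(system_relevances) / ideal if ideal > 0 else 0.0

# Hypothetical severity grades of the retrieved images, in retrieval order:
print(ndcg([3, 2, 3, 0, 1]))  # < 1.0: one severe case is ranked too low
print(ndcg([3, 3, 2, 1, 0]))  # 1.0: matches the ideal ordering
```

A ranking agreeing with the radiologist's ordering scores 1.0; misplaced severe cases lower the score.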
Affiliation(s)
- Wilson Silva
- INESC TEC, Porto, Portugal; Faculty of Engineering, University of Porto, Porto, Portugal
- Tiago Gonçalves
- INESC TEC, Porto, Portugal; Faculty of Engineering, University of Porto, Porto, Portugal
- Kirsi Härmä
- Department of Diagnostic, Interventional and Pediatric Radiology, Inselspital, Bern University Hospital, University of Bern, Bern, Switzerland
- Erich Schröder
- Department of Diagnostic, Interventional and Pediatric Radiology, Inselspital, Bern University Hospital, University of Bern, Bern, Switzerland
- Verena Carola Obmann
- Department of Diagnostic, Interventional and Pediatric Radiology, Inselspital, Bern University Hospital, University of Bern, Bern, Switzerland
- María Cecilia Barroso
- Department of Diagnostic, Interventional and Pediatric Radiology, Inselspital, Bern University Hospital, University of Bern, Bern, Switzerland
- Alexander Poellinger
- Department of Diagnostic, Interventional and Pediatric Radiology, Inselspital, Bern University Hospital, University of Bern, Bern, Switzerland
- Mauricio Reyes
- ARTORG Center for Biomedical Engineering Research, University of Bern, Bern, Switzerland
- Jaime S. Cardoso
- INESC TEC, Porto, Portugal; Faculty of Engineering, University of Porto, Porto, Portugal
2
Praveena HD, Guptha NS, Kazemzadeh A, Parameshachari BD, Hemalatha KL. Effective CBMIR System Using Hybrid Features-Based Independent Condensed Nearest Neighbor Model. J Healthc Eng 2022; 2022:3297316. [PMID: 35378946] [PMCID: PMC8976656] [DOI: 10.1155/2022/3297316]
Abstract
In recent times, a large number of medical images are generated due to the evolution of digital imaging modalities and computer vision applications. Owing to variation in the shape and size of the images, the retrieval task becomes more tedious in large medical databases, so designing an effective automated system for medical image retrieval is essential. In this study, the input medical images are acquired from the new Pap smear dataset, and the visible quality of the acquired images is improved by applying an image normalization technique. Hybrid feature extraction is then accomplished using the histogram of oriented gradients and a modified local binary pattern to extract color and texture feature vectors, which significantly reduces the semantic gap between the feature vectors. The obtained feature vectors are fed to an independent condensed nearest neighbor classifier to classify the seven classes of cell images. Finally, relevant medical images are retrieved using the chi-square distance measure. Simulation results confirmed that the proposed model achieved effective retrieval performance in terms of specificity, recall, precision, accuracy, and F-score. The proposed model achieved a retrieval accuracy of 98.88%, which is better than that of other deep learning models such as the long short-term memory network, deep neural network, and convolutional neural network.
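The final retrieval step by chi-square distance can be sketched as follows; this is a minimal illustration over toy histogram feature vectors, not the authors' implementation:

```python
import numpy as np

def chi_square_distance(h1, h2, eps=1e-10):
    """Chi-square distance between two non-negative histogram feature vectors."""
    h1, h2 = np.asarray(h1, dtype=float), np.asarray(h2, dtype=float)
    return 0.5 * np.sum((h1 - h2) ** 2 / (h1 + h2 + eps))

def retrieve(query, database, k=3):
    """Indices of the k database feature vectors closest to the query."""
    dists = [chi_square_distance(query, feat) for feat in database]
    return sorted(range(len(database)), key=dists.__getitem__)[:k]

# Toy database of 4-bin histograms; in the paper these would be
# hybrid HOG + modified-LBP feature vectors.
db = [[4, 0, 0, 0], [2, 2, 0, 0], [0, 0, 4, 0], [0, 0, 2, 2]]
print(retrieve([3, 1, 0, 0], db, k=2))  # the two nearest histograms
```

The chi-square form down-weights differences in well-populated bins, which suits histogram features.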
Affiliation(s)
- Hirald Dwaraka Praveena
- Department of Electronics and Communication Engineering, Sree Vidyanikethan Engineering College, Tirupati 517102, Andhra Pradesh, India
- Nirmala S. Guptha
- Department of CSE-Artificial Intelligence, Sri Venkateshwara College of Engineering, Bengaluru 562157, India
- B. D. Parameshachari
- Department of Telecommunication Engineering, GSSS Institute of Engineering and Technology for Women, Mysuru 570016, India
- K. L. Hemalatha
- Department of ISE, Sri Krishna Institute of Technology, Bengaluru 560090, India
3
Qi S, Xu C, Li C, Tian B, Xia S, Ren J, Yang L, Wang H, Yu H. DR-MIL: deep represented multiple instance learning distinguishes COVID-19 from community-acquired pneumonia in CT images. Comput Methods Programs Biomed 2021; 211:106406. [PMID: 34536634] [PMCID: PMC8426140] [DOI: 10.1016/j.cmpb.2021.106406]
Abstract
BACKGROUND AND OBJECTIVE: Given that the novel coronavirus disease 2019 (COVID-19) has become a pandemic, a method to accurately distinguish COVID-19 from community-acquired pneumonia (CAP) is urgently needed. However, the spatial uncertainty and morphological diversity of COVID-19 lesions in the lungs, and their subtle differences with respect to CAP, make differential diagnosis non-trivial. METHODS: We propose a deep represented multiple instance learning (DR-MIL) method to fulfill this task. A 3D volumetric CT scan of one patient is treated as one bag, and ten CT slices are selected as the initial instances. For each instance, deep features are extracted from a pre-trained ResNet-50 with fine-tuning and represented as one deep represented instance score (DRIS). Each bag, with a DRIS for each initial instance, is then input into a citation k-nearest neighbor search to generate the final prediction. A total of 141 COVID-19 and 100 CAP CT scans were used. The performance of DR-MIL was compared with other potential strategies and state-of-the-art models. RESULTS: DR-MIL displayed an accuracy of 95% and an area under the curve of 0.943, which were superior to those observed for comparable methods. COVID-19 and CAP exhibited significant differences in both the DRIS and the spatial pattern of lesions (p < 0.001). As a means of content-based image retrieval, DR-MIL can identify the images used as key instances, references, and citers for visual interpretation. CONCLUSIONS: DR-MIL can effectively represent the deep characteristics of COVID-19 lesions in CT images and accurately distinguish COVID-19 from CAP in a weakly supervised manner. The resulting DRIS is a useful supplement to visual interpretation of the spatial pattern of lesions when screening for COVID-19.
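The citation k-nearest-neighbor step can be illustrated with a toy sketch; here each bag is reduced to a fixed-length score vector, and the values of r, c, and the data are invented for illustration, not taken from the paper:

```python
import numpy as np

def citation_knn(query, bags, labels, r=3, c=2):
    """Simplified citation-kNN: vote over the query's r nearest bags
    ("references") plus every bag that would count the query among its
    own c nearest neighbors ("citers")."""
    d = np.linalg.norm(bags - query, axis=1)
    references = np.argsort(d)[:r]
    citers = []
    for i in range(len(bags)):
        d_others = np.linalg.norm(bags - bags[i], axis=1)
        d_others[i] = np.inf                 # a bag cannot cite itself
        if d[i] < np.sort(d_others)[c - 1]:  # query beats the bag's c-th neighbor
            citers.append(i)
    votes = [labels[j] for j in references] + [labels[j] for j in citers]
    return max(set(votes), key=votes.count)

# Toy 1-D "DRIS vectors": class 0 bags near 0, class 1 bags near 10.
bags = np.array([[0.0], [0.5], [1.0], [10.0], [10.5], [11.0]])
labels = [0, 0, 0, 1, 1, 1]
print(citation_knn(np.array([0.2]), bags, labels))  # → 0
```

Counting citers as well as references makes the vote more robust than plain kNN when classes are imbalanced around the query.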
Affiliation(s)
- Shouliang Qi
- College of Medicine and Biological Information Engineering, Northeastern University, Shenyang, China; Key Laboratory of Intelligent Computing in Medical Image, Ministry of Education, Northeastern University, Shenyang, China
- Caiwen Xu
- College of Medicine and Biological Information Engineering, Northeastern University, Shenyang, China
- Chen Li
- College of Medicine and Biological Information Engineering, Northeastern University, Shenyang, China
- Bin Tian
- Department of Radiology, The Second People's Hospital of Guiyang, Guiyang, China
- Shuyue Xia
- Department of Respiratory Medicine, Central Hospital Affiliated to Shenyang Medical College, Shenyang, China
- Jigang Ren
- Department of Radiology, The Affiliated Hospital of Guizhou Medical University, Guiyang, China
- Liming Yang
- Department of Radiology, The Affiliated Hospital of Guizhou Medical University, Guiyang, China
- Hanlin Wang
- Department of Radiology, General Hospital of the Yangtze River Shipping, Wuhan, China
- Hui Yu
- Department of Radiology, The Affiliated Hospital of Guizhou Medical University, Guiyang, China
4
Abstract
COVID-19, an infectious coronavirus disease, caused a pandemic with countless deaths. From the outset, clinical institutes have explored computed tomography as an effective and complementary screening tool alongside the reverse transcriptase-polymerase chain reaction. Deep learning techniques have shown promising results in similar medical tasks and, hence, may provide solutions to COVID-19 based on medical images of patients. We aim to contribute to the research in this field by: (i) comparing different architectures on a public and extended reference dataset to find the most suitable; (ii) proposing a patient-oriented investigation of the best performing networks; and (iii) evaluating their robustness in a real-world scenario, represented by cross-dataset experiments. We evaluated ten well-known convolutional neural networks on two public datasets. The results show that, on the reference dataset, the most suitable architecture is VGG19, which (i) achieved 98.87% accuracy in the network comparison; (ii) obtained 95.91% accuracy on patient status classification, even though it misclassifies some patients that other networks classify correctly; and (iii) reached only 70.15% accuracy in the cross-dataset experiments, exposing the limitations of deep learning approaches in a real-world scenario and the need for further work on robustness. Thus, the VGG19 architecture showed promising performance in the classification of COVID-19 cases. Nonetheless, this architecture leaves room for extensive improvements through modification or an additional preprocessing step. Finally, the cross-dataset experiments exposed the critical weakness of classifying images from heterogeneous data sources, a setting compatible with a real-world scenario.
5
6
Rajasenbagam T, Jeyanthi S, Pandian JA. Detection of pneumonia infection in lungs from chest X-ray images using deep convolutional neural network and content-based image retrieval techniques. J Ambient Intell Humaniz Comput 2021:1-8. [PMID: 33777251] [PMCID: PMC7985744] [DOI: 10.1007/s12652-021-03075-2]
Abstract
In this research, a deep convolutional neural network (CNN) was proposed to detect pneumonia infection in the lung using chest X-ray images. The proposed deep CNN models were trained on a pneumonia chest X-ray dataset containing 12,000 infected and non-infected chest X-ray images. The dataset was preprocessed and developed from the ChestX-ray8 dataset, and a content-based image retrieval technique was used to annotate the images using metadata and further contents. Data augmentation techniques were used to increase the number of images in each class; basic manipulation techniques and a deep convolutional generative adversarial network (DCGAN) were used to create the augmented images. The VGG19 network was used to develop the proposed deep CNN model, which reached a classification accuracy of 99.34 percent on unseen chest X-ray images. The performance of the proposed deep CNN was compared with state-of-the-art transfer learning techniques such as AlexNet, VGG16Net, and InceptionNet; the comparison shows that the classification performance of the proposed deep CNN model exceeded that of the other techniques.
Affiliation(s)
- T. Rajasenbagam
- Department of CSE, Government College of Technology, Coimbatore, India
- S. Jeyanthi
- Department of CSE, PSNA College of Engineering and Technology, Dindigul, India
- J. Arun Pandian
- Department of CSE, Vel Tech Rangarajan Dr. Sagunthala R&D Institute of Science and Technology, Avadi, India
7
Abd Elaziz M, Nabil N, Moghdani R, Ewees AA, Cuevas E, Lu S. Multilevel thresholding image segmentation based on improved volleyball premier league algorithm using whale optimization algorithm. Multimed Tools Appl 2021; 80:12435-12468. [PMID: 33456315] [PMCID: PMC7797715] [DOI: 10.1007/s11042-020-10313-w]
Abstract
Multilevel thresholding image segmentation has received considerable attention in several image processing applications. However, the process of determining the optimal threshold values (as the preprocessing step) is time-consuming when traditional methods are used. Although these limitations can be addressed by applying metaheuristic methods, such approaches may become trapped in a local optimum. This study proposes an alternative multilevel thresholding image segmentation method called VPLWOA, an improved version of the volleyball premier league (VPL) algorithm that uses the whale optimization algorithm (WOA). In VPLWOA, the WOA serves as a local search system to improve the learning phase of the VPL algorithm. A series of experiments is performed using two different image datasets to assess the performance of VPLWOA in determining near-optimal threshold values, and the performance of the algorithm is compared with other approaches. Experimental results show that the proposed VPLWOA outperforms the other approaches in terms of several performance measures, such as signal-to-noise ratio and structural similarity index.
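The objective such metaheuristics typically optimize is Otsu's between-class variance generalized to several thresholds; the sketch below uses exhaustive search on a tiny 8-level histogram in place of the VPLWOA search, purely for illustration:

```python
import itertools
import numpy as np

def between_class_variance(hist, thresholds):
    """Otsu-style objective: probability-weighted spread of the class means
    around the global mean, for the classes induced by the thresholds."""
    levels = np.arange(len(hist))
    p = hist / hist.sum()
    mu_total = float((levels * p).sum())
    bounds = [0, *sorted(thresholds), len(hist)]
    sigma = 0.0
    for lo, hi in zip(bounds[:-1], bounds[1:]):
        w = p[lo:hi].sum()
        if w > 0:
            mu = (levels[lo:hi] * p[lo:hi]).sum() / w
            sigma += w * (mu - mu_total) ** 2
    return float(sigma)

# Three well-separated gray-level modes; exhaustive search over threshold
# pairs stands in for the metaheuristic search.
hist = np.array([10, 10, 0, 10, 10, 0, 10, 10], dtype=float)
best = max(itertools.combinations(range(1, 8), 2),
           key=lambda t: between_class_variance(hist, t))
print(best, between_class_variance(hist, best))
```

Exhaustive search is feasible only for tiny histograms; with 256 gray levels and several thresholds the search space explodes, which is exactly why metaheuristics such as VPLWOA are used.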
Affiliation(s)
- Mohamed Abd Elaziz
- Department of Mathematics, Faculty of Science, Zagazig University, Zagazig, Egypt
- Neggaz Nabil
- Faculté des mathématiques et informatique, Département d'Informatique, Laboratoire SIMPA, Université des Sciences et de la Technologie d'Oran Mohammed Boudiaf, USTO-MB, BP 1505, El M'naouer, 31000 Oran, Algeria
- Reza Moghdani
- Industrial Management Department, Persian Gulf University, Boushehr, Iran
- Ahmed A. Ewees
- Department of Computer, Damietta University, Damietta, Egypt
- Erik Cuevas
- Departamento de Electrónica, Universidad de Guadalajara, CUCEI, Av. Revolución 1500, 44430 Guadalajara, Mexico
- Songfeng Lu
- School of Cyber Science and Engineering, Huazhong University of Science and Technology, Wuhan 430074, China
8
An Efficient Content-Based Image Retrieval System for the Diagnosis of Lung Diseases. J Digit Imaging 2020; 33:971-987. [PMID: 32399717] [DOI: 10.1007/s10278-020-00338-w]
Abstract
The main problem in content-based image retrieval (CBIR) systems is the semantic gap, which needs to be reduced for efficient retrieval. The common imaging signs (CISs) which appear in a patient's lung CT scan play a significant role in the identification of cancerous lung nodules and many other lung diseases. In this paper, we propose a new combination of descriptors for the effective retrieval of these imaging signs. First, we construct a feature database by combining the local ternary pattern (LTP), local phase quantization (LPQ), and the discrete wavelet transform. Next, joint mutual information (JMI)-based feature selection is deployed to reduce redundancy and to select an optimal feature set for CIS retrieval. Then, similarity measurement is performed by combining visual and semantic information in equal proportion to construct a balanced graph, and the shortest path is computed for learning contextual similarity to obtain the final similarity between each query and database image. The proposed system is evaluated on a publicly available database of lung CT imaging signs (LISS), and results are retrieved based on both visual feature similarity comparison and graph-based similarity comparison. The proposed system achieves a mean average precision (MAP) of 60% and an area under the precision-recall (P-R) curve of 0.48 using visual feature similarity alone. These results improve further with the graph-based similarity measure, to a MAP of 70% and an AUC of 0.58, which shows the superiority of our proposed scheme.
9
France K, Jaya A. Classification and retrieval of thoracic diseases using patch-based visual words: a study on chest x-rays. Biomed Phys Eng Express 2020; 6:025015. [PMID: 33438641] [DOI: 10.1088/2057-1976/ab5c7c]
Abstract
This research work explores a content-based medical image retrieval (CBMIR) system for the categorization and retrieval of common thoracic diseases such as atelectasis, cardiomegaly, effusion, and infiltration, based on the local patch representation of the 'bag of visual words' approach. When performing patch-based image representation, the selected patch size has a significant impact on the image categorization and retrieval process, and selecting the appropriate patch size for a given experimental dataset is challenging. The ChestX-ray8 medical image database is used to analyze the impact of different patch sizes on the categorization and retrieval of eight common thorax diseases. A total of 1000 frontal-view X-ray images was obtained from the database (100 images from each category and 200 images with a combination of more than one disease). Different image patch sizes (16 × 16 and 32 × 32) and different codebook sizes (500, 1000, 1500, 2000) were evaluated to identify the best precision and recall values. From the experimental results, a 32 × 32 patch size and a 1500-word codebook gave the best precision and recall using a radial basis function SVM kernel.
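The patch-based bag-of-visual-words representation can be sketched as follows; this is a minimal illustration with a hand-made two-word codebook, whereas a real codebook would be learned (e.g. by k-means over training patches):

```python
import numpy as np

def extract_patches(image, size=32):
    """Tile the image into non-overlapping size x size patches, flattened."""
    h, w = image.shape
    return np.array([image[r:r + size, c:c + size].ravel()
                     for r in range(0, h - size + 1, size)
                     for c in range(0, w - size + 1, size)])

def bovw_histogram(image, codebook, size=32):
    """Assign each patch to its nearest codeword; the normalized codeword
    histogram is the image descriptor fed to the SVM."""
    patches = extract_patches(image, size)
    dists = np.linalg.norm(patches[:, None, :] - codebook[None, :, :], axis=2)
    hist = np.bincount(dists.argmin(axis=1), minlength=len(codebook)).astype(float)
    return hist / hist.sum()

# Toy 64x64 "radiograph": dark top half, bright bottom half,
# so two patches map to each codeword.
image = np.vstack([np.zeros((32, 64)), np.full((32, 64), 255.0)])
codebook = np.vstack([np.zeros(32 * 32), np.full(32 * 32, 255.0)])
print(bovw_histogram(image, codebook))  # equal weight on both words
```

Changing the patch size changes which structures a codeword can capture, which is why the paper sweeps 16 × 16 versus 32 × 32.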
Affiliation(s)
- K France
- Research Scholar, Department of Computer Applications, B S Abdur Rahman Crescent Institute of Science and Technology, Chennai 600048, India
- A Jaya
- Professor, Department of Computer Applications, B S Abdur Rahman Crescent Institute of Science and Technology, Chennai 600048, India
10
A multi-level similarity measure for the retrieval of the common CT imaging signs of lung diseases. Med Biol Eng Comput 2020; 58:1015-1029. [PMID: 32124223] [DOI: 10.1007/s11517-020-02146-4]
Abstract
The common CT imaging signs of lung diseases (CISLs), which frequently appear in lung CT images, are widely used in the diagnosis of lung diseases. Computer-aided diagnosis (CAD) based on the CISLs can improve radiologists' performance in the diagnosis of lung diseases. Since similarity measures are important for CAD, we propose a multi-level method to measure the similarity between CISLs. The CISLs are characterized at the low-level visual scale, mid-level attribute scale, and high-level semantic scale for a rich representation. The similarity at multiple levels is calculated and combined in a weighted-sum form as the final similarity. The proposed multi-level similarity method is capable of computing both level-specific similarity and optimal cross-level complementary similarity. The effectiveness of the proposed similarity measure is evaluated on a dataset of 511 lung CT images from clinical patients for CISL retrieval. It achieves about 80% precision and takes only 3.6 ms for the retrieval process. Extensive comparative evaluations on the same dataset validate the advantages in retrieval performance of our multi-level similarity measure over single-level measures and two-level similarity methods. The proposed method can have wide applications in radiology and decision support.
11
Babaie M, Kashani H, Kumar MD, Tizhoosh HR. A New Local Radon Descriptor for Content-Based Image Search. Artif Intell Med 2020. [DOI: 10.1007/978-3-030-59137-3_41]
12
Wei G, Qiu M, Zhang K, Li M, Wei D, Li Y, Liu P, Cao H, Xing M, Yang F. A multi-feature image retrieval scheme for pulmonary nodule diagnosis. Medicine (Baltimore) 2020; 99:e18724. [PMID: 31977863] [PMCID: PMC7004710] [DOI: 10.1097/md.0000000000018724]
Abstract
Deep analysis of radiographic images can quantify the extent of intra-tumoral heterogeneity for personalized medicine. In this paper, we propose a novel content-based multi-feature image retrieval (CBMFIR) scheme to discriminate between benign and malignant pulmonary nodules. Two types of features are applied to represent the pulmonary nodules. For each type of feature, a single-feature distance metric model is proposed to measure the similarity of pulmonary nodules. The multiple single-feature distance metric models learned from the different types of features are then combined into a multi-feature distance metric model. Finally, the learned multi-feature distance metric is used to construct a content-based image retrieval (CBIR) scheme to assist doctors in the diagnosis of pulmonary nodules. Classification accuracy and retrieval accuracy are used to evaluate the performance of the scheme. The classification accuracy is 0.955 ± 0.010, and the retrieval accuracies outperform those of the comparison methods. The proposed CBMFIR scheme is effective in the diagnosis of pulmonary nodules, and our method can better integrate multiple types of features from pulmonary nodules.
Affiliation(s)
- Guohui Wei
- School of Science and Engineering, Shandong University of Traditional Chinese Medicine; Shandong Provincial Key Laboratory for Distributed Computer Software Novel Technology, Jinan, China
- Min Qiu
- Affiliated Hospital of Jining Medical University
- Kuixing Zhang
- School of Science and Engineering, Shandong University of Traditional Chinese Medicine
- Ming Li
- School of Science and Engineering, Shandong University of Traditional Chinese Medicine
- Dejian Wei
- School of Science and Engineering, Shandong University of Traditional Chinese Medicine
- Yanjun Li
- School of Science and Engineering, Shandong University of Traditional Chinese Medicine
- Peiyu Liu
- Shandong Provincial Key Laboratory for Distributed Computer Software Novel Technology, Jinan, China
- Hui Cao
- School of Science and Engineering, Shandong University of Traditional Chinese Medicine
- Mengmeng Xing
- School of Science and Engineering, Shandong University of Traditional Chinese Medicine
- Feng Yang
- School of Science and Engineering, Shandong University of Traditional Chinese Medicine
13
14
Content-Based Color Image Retrieval Using Block Truncation Coding Based on Binary Ant Colony Optimization. Symmetry (Basel) 2018. [DOI: 10.3390/sym11010021]
Abstract
In this paper, we propose a content-based image retrieval (CBIR) approach using color and texture features extracted from block truncation coding based on binary ant colony optimization (BACOBTC). First, we present a near-optimized common bitmap scheme for BTC. Then, we convert the image into two color quantizers and a bitmap image using BACOBTC. Subsequently, the color and texture features, i.e., the color histogram feature (CHF) and the bit pattern histogram feature (BHF), are extracted to measure the similarity between a query image and the target images in the database and to retrieve the desired image. The performance of the proposed approach was compared with several earlier image-retrieval schemes. The results were evaluated in terms of precision-recall and average retrieval rate, and they show that our approach outperformed the referenced approaches.
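Classic block truncation coding, on which such CHF/BHF features are built, can be sketched per block as follows (a plain-BTC illustration; the paper's BACO-optimized common bitmap is not reproduced here):

```python
import numpy as np

def btc_encode(block):
    """Threshold the block at its mean to get a bitmap, then pick low/high
    quantizers as the means of the pixels below/above the threshold."""
    bitmap = block >= block.mean()
    lo = block[~bitmap].mean() if (~bitmap).any() else float(block.mean())
    hi = block[bitmap].mean() if bitmap.any() else float(block.mean())
    return bitmap, float(lo), float(hi)

def btc_decode(bitmap, lo, hi):
    """Reconstruct the block from the bitmap and the two quantizers."""
    return np.where(bitmap, hi, lo)

block = np.array([[10.0, 20.0], [200.0, 210.0]])
bitmap, lo, hi = btc_encode(block)
print(bitmap.astype(int), lo, hi)  # bit pattern plus the two quantizers
```

Histograms of the quantizer values across blocks give a color feature, and histograms of the bitmap patterns give a texture feature, mirroring the CHF/BHF idea.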
15
An Efficient Middle Layer Platform for Medical Imaging Archives. J Healthc Eng 2018; 2018:3984061. [PMID: 30034674] [PMCID: PMC6033252] [DOI: 10.1155/2018/3984061]
Abstract
Digital medical images are commonly used in health services and clinics. These data are of vital importance for diagnosis and treatment; therefore, their preservation, protection, and archiving are a challenge. Rapidly growing file sizes, differentiated data formats, and an increasing number of files constitute big data, which traditional systems do not have the capability to process and store. This study investigates an efficient middle-layer platform based on a Hadoop and MongoDB architecture using state-of-the-art technologies from the literature. We developed this system to extend the medical image compression method we developed previously into a middle-layer platform that performs data compression and archiving operations. A scalable platform using the MapReduce programming model on Hadoop has been developed, and MongoDB, a NoSQL database, has been used to satisfy the performance requirements of the platform. A four-node Hadoop cluster was built to evaluate the developed platform and execute distributed MapReduce algorithms, and actual patient medical images were used to validate its performance. Processing the test images takes 15,599 seconds on a single node, but only 8,153 seconds on the developed platform. Moreover, due to the medical imaging processing package used in the proposed method, the compression ratio values produced for the non-ROI image are between 92.12% and 97.84%. In conclusion, the proposed platform provides a cloud-based integrated solution to the medical image archiving problem.
16
Content-based image retrieval for Lung Nodule Classification Using Texture Features and Learned Distance Metric. J Med Syst 2017; 42:13. [PMID: 29185058] [DOI: 10.1007/s10916-017-0874-5]
Abstract
Similarity measurement of lung nodules is a critical component in content-based image retrieval (CBIR), which can be useful in differentiating between benign and malignant lung nodules on computed tomography (CT). This paper proposes a new two-step CBIR scheme (TSCBIR) for computer-aided diagnosis of lung nodules. Two similarity metrics, semantic relevance and visual similarity, are introduced to measure the similarity of different nodules. The first step is to search for the K most similar reference ROIs for each queried ROI with the semantic relevance metric. The second step is to weight each retrieved ROI based on its visual similarity to the queried ROI. A probability is then computed to predict the likelihood that the queried ROI depicts a malignant lesion. To verify the feasibility of the proposed algorithm, a lung nodule dataset including 366 nodule regions of interest (ROIs) was assembled from LIDC-IDRI lung images on CT scans. Three groups of texture features were implemented to represent a nodule ROI. Our experimental results on the assembled lung nodule dataset show a good performance improvement over existing popular classifiers.