1. Yang Y, Fu H, Aviles-Rivero AI, Xing Z, Zhu L. DiffMIC-v2: Medical Image Classification via Improved Diffusion Network. IEEE Transactions on Medical Imaging 2025;44:2244-2255. PMID: 40031019. DOI: 10.1109/TMI.2025.3530399.
Abstract
Recently, denoising diffusion models have achieved outstanding success in generative image modeling and attracted significant attention in the computer vision community. Although a substantial amount of diffusion-based research has focused on generative tasks, few studies apply diffusion models to medical diagnosis. In this paper, we propose a diffusion-based network (named DiffMIC-v2) to address general medical image classification by eliminating unexpected noise and perturbations in image representations. To achieve this goal, we first devise an improved dual-conditional guidance strategy that conditions each diffusion step at multiple granularities to enhance step-wise regional attention. Furthermore, we design a novel heterologous diffusion process that achieves efficient visual representation learning in the latent space. We evaluate the effectiveness of DiffMIC-v2 on four medical classification tasks with different image modalities: thoracic disease classification on chest X-rays, placental maturity grading on ultrasound images, skin lesion classification on dermatoscopic images, and diabetic retinopathy grading on fundus images. Experimental results demonstrate that DiffMIC-v2 outperforms state-of-the-art methods by a significant margin, indicating the universality and effectiveness of the proposed model on multi-class and multi-label classification tasks. DiffMIC-v2 requires fewer iterations than our previous DiffMIC to obtain accurate estimates, and achieves greater runtime efficiency with superior results. The code will be publicly available at https://github.com/scott-yjyang/DiffMICv2.
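The abstract gives no equations, so the following is a minimal sketch of the generic DDPM forward (noising) process that diffusion classifiers of this kind invert at inference; it is not DiffMIC-v2's specific dual-conditional or heterologous variant, and the schedule constants are conventional defaults, not values from the paper:

```python
import math
import random

def forward_diffusion(x0, t, alpha_bar):
    """Sample x_t ~ q(x_t | x_0) for a DDPM-style forward process:
    x_t = sqrt(alpha_bar_t) * x_0 + sqrt(1 - alpha_bar_t) * eps, eps ~ N(0, I)."""
    a = alpha_bar[t]
    return [math.sqrt(a) * xi + math.sqrt(1.0 - a) * random.gauss(0.0, 1.0)
            for xi in x0]

# Conventional linear beta schedule and its cumulative product alpha_bar
T = 1000
betas = [1e-4 + (0.02 - 1e-4) * i / (T - 1) for i in range(T)]
alpha_bar = []
prod = 1.0
for b in betas:
    prod *= (1.0 - b)
    alpha_bar.append(prod)

x0 = [0.5, -0.2, 0.1]                          # toy "representation" vector
xT = forward_diffusion(x0, T - 1, alpha_bar)   # near-pure noise at the last step
```

Because alpha_bar shrinks toward zero, early steps barely perturb the representation while the final step is almost pure noise; the learned reverse process removes this noise step by step.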
2. Cao Y, Guan H, Qiu W, Shen L, Liu H, Tian L, Hou D, Zhang G. Quantitative detection of hepatocyte mixture based on terahertz time-domain spectroscopy using spectral image analysis methods. Spectrochimica Acta Part A: Molecular and Biomolecular Spectroscopy 2025;326:125235. PMID: 39368181. DOI: 10.1016/j.saa.2024.125235.
Abstract
In recent years, terahertz (THz) technology has received widespread attention and has been leveraged to make breakthroughs in the field of bio-detection. However, studies on its application to mixtures have not yet been extensively conducted. Traditional one-dimensional (1D) spectral feature extraction methods are inefficient in terms of sensitivity and overall performance owing to the spectral overlap and distortion present in mixtures. Thus, we adopted the Gramian angular field (GAF) method to map THz 1D spectra to two-dimensional (2D) images using correlation information between sequences. Image features of hepatocyte mixtures with different ratios were extracted using histograms of oriented gradients (HOG) and gray-level histograms (GLH). A support vector regression (SVR) model was established for quantitative analysis. The method was more stable and accurate than the principal component analysis (PCA) method, with RMSE and R² values of 0.072 and 0.932, respectively. This study enriches the algorithms of THz detection by combining the advantages of data upscaling and image processing, which is of great significance for the application of THz technology toward mixed-system detection.
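The GAF mapping described above has a simple closed form: the spectrum is rescaled to [-1, 1], each value is treated as an angle via arccos, and pairwise angular sums are encoded as an image. A minimal pure-Python sketch of the summation variant (GASF) follows; whether the paper uses the summation or difference field is not stated in the abstract, so the summation form is assumed:

```python
import math

def gramian_angular_field(series):
    """Map a 1D spectrum to a 2D Gramian Angular Summation Field.
    Values are rescaled to [-1, 1], encoded as angles phi = arccos(x), and
    GASF[i][j] = cos(phi_i + phi_j) = x_i*x_j - sqrt(1-x_i^2)*sqrt(1-x_j^2)."""
    lo, hi = min(series), max(series)
    x = [2.0 * (v - lo) / (hi - lo) - 1.0 for v in series]
    n = len(x)
    return [[x[i] * x[j]
             - math.sqrt(max(0.0, 1 - x[i] ** 2)) * math.sqrt(max(0.0, 1 - x[j] ** 2))
             for j in range(n)] for i in range(n)]

# Toy 4-point "spectrum"; real THz spectra would have hundreds of points
gasf = gramian_angular_field([0.1, 0.4, 0.9, 0.3])
```

The resulting n-by-n image preserves temporal/spectral correlations along its diagonals, which is what makes 2D texture descriptors such as HOG applicable to 1D spectra.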
Affiliation(s)
- Yuqi Cao
- State Key Laboratory of Industrial Control Technology, College of Control Science and Engineering, Zhejiang University, Hangzhou 310000, China
- Hanxiao Guan
- State Key Laboratory of Industrial Control Technology, College of Control Science and Engineering, Zhejiang University, Hangzhou 310000, China
- Weihang Qiu
- College of Biomedical Engineering and Instrument Science, Zhejiang University, Hangzhou 310000, China
- Liran Shen
- State Key Laboratory of Industrial Control Technology, College of Control Science and Engineering, Zhejiang University, Hangzhou 310000, China
- Heng Liu
- State Key Laboratory of Industrial Control Technology, College of Control Science and Engineering, Zhejiang University, Hangzhou 310000, China
- Liangfei Tian
- College of Biomedical Engineering and Instrument Science, Zhejiang University, Hangzhou 310000, China
- Dibo Hou
- State Key Laboratory of Industrial Control Technology, College of Control Science and Engineering, Zhejiang University, Hangzhou 310000, China
- Guangxin Zhang
- State Key Laboratory of Industrial Control Technology, College of Control Science and Engineering, Zhejiang University, Hangzhou 310000, China
3. Jian M, Yu H. Towards reliable object representation via sparse directional patches and spatial center cues. Fundamental Research 2025;5:354-359. PMID: 40166110. PMCID: PMC11955033. DOI: 10.1016/j.fmre.2023.08.001.
Abstract
In the process of image understanding, the human visual system (HVS) performs multiscale analysis on various objects. HVS primarily focuses on marginally conspicuous image patches located within or around distinct objects rather than scanning the image pixels point by point. Inspired by the HVS mechanism, in this paper, we aimed to describe and exploit multiscale decomposition-based patch detection models for automatic visual feature representation and object localization in images. Our investigation into mimicking and modeling the HVS to capture conspicuous sparse patches and their spatial distribution clues makes a profound contribution to the automatic comprehension and characterization of images by machines. This study demonstrates that the sparse patch-based visual representation with spatial center cues is intrinsically tolerant to object positioning and understanding beyond object variations in spatial position, multiresolution, and chrominance, which has significant implications for many vision-based automatic object grabbing and perception applications, such as robotics, human‒machine interaction, and unmanned aerial vehicles (UAVs).
Affiliation(s)
- Muwei Jian
- School of Computer Science and Technology, Shandong University of Finance and Economics, Jinan 250014, China
- Hui Yu
- School of Creative Technologies, University of Portsmouth, Portsmouth 200021, UK
4. Roy R, Mazumdar S, Chowdhury AS. ADGAN: Attribute-Driven Generative Adversarial Network for Synthesis and Multiclass Classification of Pulmonary Nodules. IEEE Transactions on Neural Networks and Learning Systems 2024;35:2484-2495. PMID: 35853058. DOI: 10.1109/TNNLS.2022.3190331.
Abstract
Lung cancer is the leading cause of cancer-related deaths worldwide. According to the American Cancer Society, early diagnosis of pulmonary nodules in computed tomography (CT) scans can improve the five-year survival rate up to 70% with proper treatment planning. In this article, we propose an attribute-driven Generative Adversarial Network (ADGAN) for the synthesis and multiclass classification of pulmonary nodules. A self-attention U-Net (SaUN) architecture is proposed to improve the generation mechanism of the network. The generator is designed with two modules, namely, a self-attention attribute module (SaAM) and a self-attention spatial module (SaSM). SaAM generates a nodule image based on given attributes, whereas SaSM specifies the nodule region of the input image to be altered. A reconstruction loss along with an attention localization loss (AL) is used to produce an attention map prioritizing the nodule regions. To avoid resemblance between a generated image and a real image, we further introduce an adversarial loss containing a regularization term based on KL divergence. The discriminator part of the proposed model is designed to achieve the multiclass nodule classification task. Our proposed approach is validated on two challenging publicly available datasets, namely LIDC-IDRI and LUNGX. Exhaustive experimentation on these two datasets clearly indicates promising classification accuracy compared with other state-of-the-art methods.
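The abstract mentions a KL-divergence-based regularization term in the adversarial loss but does not give its exact form; as a reference point, the discrete KL divergence that such a term builds on can be sketched as:

```python
import math

def kl_divergence(p, q, eps=1e-12):
    """Discrete KL divergence D_KL(P || Q) = sum_i p_i * log(p_i / q_i).
    eps guards against log(0) when a bin of p or q is empty."""
    return sum(pi * math.log((pi + eps) / (qi + eps)) for pi, qi in zip(p, q))

p = [0.7, 0.2, 0.1]          # hypothetical distribution over nodule attributes
q = [0.5, 0.3, 0.2]
same = kl_divergence(p, p)   # ~0 for identical distributions
diff = kl_divergence(p, q)   # > 0, grows as the distributions diverge
```

KL divergence is zero only when the two distributions match, which is what makes it usable as a regularizer that pushes generated statistics away from (or toward) a reference distribution.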
5. Alshamrani K, Alshamrani HA, Alqahtani FF, Alshehri AH, Althaiban SH. Generative and Discriminative Learning for Lung X-Ray Analysis Based on Probabilistic Component Analysis. Journal of Multidisciplinary Healthcare 2023;16:4039-4051. PMID: 38116305. PMCID: PMC10728308. DOI: 10.2147/JMDH.S437445.
Abstract
Introduction: The paper presents a hybrid generative/discriminative classification method aimed at identifying abnormalities, such as cancer, in lung X-ray images. Methods: The proposed method involves a generative model that performs generative embedding in Probabilistic Component Analysis (PrCA). The primary goal of PrCA is to model co-existing information within a probabilistic framework, with the intent to locate the feature vector space for X-ray data based on a defined kernel structure. A kernel-based classifier, grounded in information-theoretic principles, was employed in this study. Results: The performance of the proposed method is evaluated against nearest neighbour (NN) classifiers and support vector machine (SVM) classifiers, which use a diagonal covariance matrix and incorporate normal linear and non-linear kernels, respectively. Discussion: The method is found to achieve superior accuracy, offering a viable solution to the class of problems presented. Accuracy rates achieved by the kernels in the NN and SVM models were 95.02% and 92.45%, respectively, suggesting the method's competitiveness with state-of-the-art approaches.
Affiliation(s)
- Khalaf Alshamrani
- Radiological Science Department, Najran University, Najran, Saudi Arabia
- Oncology and Metabolism Department, Medical School, University of Sheffield, Sheffield, United Kingdom
- F F Alqahtani
- Radiological Science Department, Najran University, Najran, Saudi Arabia
- Ali H Alshehri
- Radiological Science Department, Najran University, Najran, Saudi Arabia
6. Elezabi O, Guesney-Bodet S, Thomas JB. Impact of Exposure and Illumination on Texture Classification Based on Raw Spectral Filter Array Images. Sensors (Basel) 2023;23:5443. PMID: 37420610. DOI: 10.3390/s23125443.
Abstract
Spectral Filter Array (SFA) cameras provide a fast and portable solution for spectral imaging. Texture classification from images captured with such a camera usually happens after a demosaicing process, which makes classification performance depend on the quality of the demosaicing. This work investigates texture classification methods applied directly to the raw image. We trained a Convolutional Neural Network and compared its classification performance to the Local Binary Pattern method. The experiment is based on real SFA images of the objects of the HyTexiLa database, rather than on simulated data, as is often the case. We also investigate the role of integration time and illumination on the performance of the classification methods. The Convolutional Neural Network outperforms other texture classification methods even with a small amount of training data. Additionally, we demonstrate the model's ability to adapt and scale to different environmental conditions, such as illumination and exposure, compared with other methods. To explain these results, we analyze the extracted features of our method and show the model's ability to recognize different shapes, patterns, and marks in different textures.
Affiliation(s)
- Omar Elezabi
- Colourlab, Department of Computer Science, Norwegian University of Science and Technology (NTNU), 2815 Gjøvik, Norway
- Sebastien Guesney-Bodet
- Colourlab, Department of Computer Science, Norwegian University of Science and Technology (NTNU), 2815 Gjøvik, Norway
- Jean-Baptiste Thomas
- Colourlab, Department of Computer Science, Norwegian University of Science and Technology (NTNU), 2815 Gjøvik, Norway
7. Yuan H, Wu Y, Dai M. Multi-Modal Feature Fusion-Based Multi-Branch Classification Network for Pulmonary Nodule Malignancy Suspiciousness Diagnosis. Journal of Digital Imaging 2023;36:617-626. PMID: 36478311. PMCID: PMC10039149. DOI: 10.1007/s10278-022-00747-z.
Abstract
Detecting and identifying malignant nodules on chest computed tomography (CT) plays an important role in the early diagnosis and timely treatment of lung cancer, which can greatly reduce the number of deaths worldwide. Existing methods for pulmonary nodule diagnosis, however, tend to ignore structured clinical and radiological data (e.g., laboratory examinations), which are important for accurately judging a patient's condition. Hence, a multi-modal fusion multi-branch classification network is constructed to detect and classify pulmonary nodules in this work: (1) Radiological data of pulmonary nodules are used to construct structured features of length 9. (2) A multi-branch fusion-based effective attention mechanism network is designed for unstructured 3D CT patch data, which uses 3D ECA-ResNet to dynamically adjust the extracted features. In addition, feature maps with different receptive fields from multiple layers are fully fused to obtain representative multi-scale unstructured features. (3) Multi-modal feature fusion of structured and unstructured data is performed to distinguish benign and malignant nodules. Numerous experimental results show that this network can effectively classify benign and malignant pulmonary nodules for clinical diagnosis, achieving the highest accuracy (94.89%), sensitivity (94.91%), and F1-score (94.65%) and the lowest false positive rate (5.55%).
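The abstract states that a length-9 structured vector is fused with multi-scale CNN features but does not specify the fusion operator; a common baseline is simple concatenation ("late fusion"), sketched below with hypothetical feature values:

```python
def fuse_features(structured, unstructured):
    """Late fusion by concatenation: the structured clinical vector (length 9
    in this work) is appended to a flattened image-feature vector, yielding one
    joint representation for the final benign/malignant classifier."""
    assert len(structured) == 9, "this work uses 9 structured radiological features"
    return list(structured) + list(unstructured)

clinical = [63, 1, 0, 1, 12.4, 0, 1, 3, 2]   # hypothetical structured values
cnn_feat = [0.12, -0.53, 0.88, 0.05]         # hypothetical pooled CNN features
joint = fuse_features(clinical, cnn_feat)
```

Concatenation keeps both modalities intact and lets the downstream classifier learn their relative weighting; more elaborate schemes (attention-weighted fusion, gating) refine this same joint vector.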
Affiliation(s)
- Haiying Yuan
- Beijing University of Technology, Beijing, China
- Yanrui Wu
- Beijing University of Technology, Beijing, China
- Mengfan Dai
- Beijing University of Technology, Beijing, China
8. An efficient lightweight convolutional neural network for industrial surface defect detection. Artificial Intelligence Review 2023. DOI: 10.1007/s10462-023-10438-y.
9. Deep multi-scale resemblance network for the sub-class differentiation of adrenal masses on computed tomography images. Artificial Intelligence in Medicine 2022;132:102374. DOI: 10.1016/j.artmed.2022.102374.
10. Yang Y, Hu Y, Zhang X, Wang S. Two-Stage Selective Ensemble of CNN via Deep Tree Training for Medical Image Classification. IEEE Transactions on Cybernetics 2022;52:9194-9207. PMID: 33705343. DOI: 10.1109/TCYB.2021.3061147.
Abstract
Medical image classification is an important task in computer-aided diagnosis systems. Its performance is critically determined by the descriptiveness and discriminative power of the features extracted from images. With the rapid development of deep learning, deep convolutional neural networks (CNNs) have been widely used to learn optimal high-level features from the raw pixels of images for a given classification task. However, due to the limited amount of labeled medical images with certain quality distortions, such techniques suffer from training difficulties, including overfitting, local optima, and vanishing gradients. To solve these problems, in this article, we propose a two-stage selective ensemble of CNN branches via a novel training strategy called deep tree training (DTT). In our approach, DTT jointly trains a series of networks constructed from the hidden layers of a CNN in a hierarchical manner. This mitigates vanishing gradients by supplementing gradients for the hidden layers, and intrinsically yields base classifiers on middle-level features with minimal computational burden for an ensemble solution. Moreover, the CNN branches serving as base learners are combined into the optimal classifier via the proposed two-stage selective ensemble approach based on both accuracy and diversity criteria. Extensive experiments on the CIFAR-10 benchmark and two specific medical image datasets illustrate that our approach achieves better performance in terms of accuracy, sensitivity, specificity, and F1-score.
11. Fahmy D, Kandil H, Khelifi A, Yaghi M, Ghazal M, Sharafeldeen A, Mahmoud A, El-Baz A. How AI Can Help in the Diagnostic Dilemma of Pulmonary Nodules. Cancers (Basel) 2022;14(7):1840. PMID: 35406614. PMCID: PMC8997734. DOI: 10.3390/cancers14071840.
Abstract
Simple Summary: Pulmonary nodules are considered a sign of bronchogenic carcinoma; detecting them early can reduce disease progression and save lives. Lung cancer is the second most common type of cancer in both men and women. This manuscript discusses the current applications of artificial intelligence (AI) in lung segmentation as well as pulmonary nodule segmentation and classification using computed tomography (CT) scans, published in the last two decades, in addition to the limitations and future prospects in the field of AI. Abstract: Pulmonary nodules are the precursors of bronchogenic carcinoma; their early detection facilitates early treatment, which saves many lives. Unfortunately, pulmonary nodule detection and classification are liable to subjective variation, with a high rate of missed small cancerous lesions, which opens the way for the implementation of artificial intelligence (AI) and computer-aided diagnosis (CAD) systems. The field of deep learning and neural networks is expanding every day, with new models designed to overcome diagnostic problems and provide more applicable, simpler models. In this review, we aim to briefly discuss the current applications of AI in lung segmentation and in pulmonary nodule detection and classification.
Affiliation(s)
- Dalia Fahmy
- Diagnostic Radiology Department, Mansoura University Hospital, Mansoura 35516, Egypt
- Heba Kandil
- Bioengineering Department, University of Louisville, Louisville, KY 40292, USA
- Information Technology Department, Faculty of Computers and Informatics, Mansoura University, Mansoura 35516, Egypt
- Adel Khelifi
- Computer Science and Information Technology Department, Abu Dhabi University, Abu Dhabi 59911, United Arab Emirates
- Maha Yaghi
- Electrical, Computer, and Biomedical Engineering Department, Abu Dhabi University, Abu Dhabi 59911, United Arab Emirates
- Mohammed Ghazal
- Electrical, Computer, and Biomedical Engineering Department, Abu Dhabi University, Abu Dhabi 59911, United Arab Emirates
- Ahmed Sharafeldeen
- Bioengineering Department, University of Louisville, Louisville, KY 40292, USA
- Ali Mahmoud
- Bioengineering Department, University of Louisville, Louisville, KY 40292, USA
- Ayman El-Baz
- Bioengineering Department, University of Louisville, Louisville, KY 40292, USA
12. Natalia F, Young JC, Afriliana N, Meidia H, Yunus RE, Sudirman S. Automated selection of mid-height intervertebral disc slice in traverse lumbar spine MRI using a combination of deep learning feature and machine learning classifier. PLoS One 2022;17:e0261659. PMID: 35025904. PMCID: PMC8758114. DOI: 10.1371/journal.pone.0261659.
Abstract
Abnormalities and defects that can cause lumbar spinal stenosis often occur in the Intervertebral Disc (IVD) of the patient's lumbar spine. Their automatic detection and classification require the application of an image analysis algorithm on suitable images, such as mid-sagittal images or traverse mid-height intervertebral disc slices, as inputs. Hence the process of selecting and separating these images from other medical images in the patient's set of scans is necessary. However, technological progress in automating this process still lags behind other areas of medical image classification research. In this paper, we report the result of our investigation into the suitability and performance of different machine learning approaches for automatically selecting the traverse plane that cuts closest to the half-height of an IVD from a database of lumbar spine MRI images. This study considers image features extracted using eleven different pre-trained Deep Convolutional Neural Network (DCNN) models. We investigate the effectiveness of three dimensionality-reduction techniques and three feature-selection techniques on the classification performance. We also investigate the performance of five different Machine Learning (ML) algorithms and three Fully Connected (FC) neural network learning optimizers, which are used to train an image classifier with hyperparameter optimization over a wide range of hyperparameter options and values. The different combinations of methods are tested on a publicly available lumbar spine MRI dataset consisting of MRI studies of 515 patients with symptomatic back pain. Our experiment shows that applying the Support Vector Machine algorithm with a short Gaussian kernel on full-length image features extracted using a pre-trained DenseNet201 model is the best approach, giving a minimum per-class classification performance of around 0.88 when measured using the precision and recall metrics. The median performance measured using the precision metric ranges from 0.95 to 0.99, whereas that using the recall metric ranges from 0.93 to 1.0. When only considering the L3/L4, L4/L5, and L5/S1 classes, the minimum F1-scores range from 0.93 to 0.95, whereas the median F1-scores range from 0.97 to 0.99.
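The "short Gaussian kernel" above refers to an RBF kernel with a small bandwidth, under which similarity between feature vectors decays quickly with distance. A minimal sketch of the kernel function such an SVM would use (the sigma value and feature vectors are hypothetical, not taken from the paper):

```python
import math

def rbf_kernel(u, v, sigma=0.5):
    """Gaussian (RBF) kernel k(u, v) = exp(-||u - v||^2 / (2 * sigma^2)).
    A small sigma (a "short" kernel) makes similarity decay quickly with
    distance, so each support vector influences only a narrow neighborhood."""
    sq_dist = sum((a - b) ** 2 for a, b in zip(u, v))
    return math.exp(-sq_dist / (2.0 * sigma ** 2))

f1 = [0.2, 0.9, 0.4]          # hypothetical DCNN feature vectors
f2 = [0.2, 0.9, 0.4]
f3 = [0.9, 0.1, 0.7]
k_same = rbf_kernel(f1, f2)   # identical vectors give similarity 1.0
k_diff = rbf_kernel(f1, f3)   # distant vectors give near-zero similarity
```

In a kernel SVM, the decision function is a weighted sum of these similarities to the support vectors, so the bandwidth directly controls how locally the classifier generalizes.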
Affiliation(s)
- Friska Natalia
- Faculty of Engineering and Informatics, Universitas Multimedia Nusantara, Serpong, Indonesia
- Julio Christian Young
- Faculty of Engineering and Informatics, Universitas Multimedia Nusantara, Serpong, Indonesia
- Nunik Afriliana
- Faculty of Engineering and Informatics, Universitas Multimedia Nusantara, Serpong, Indonesia
- Hira Meidia
- Faculty of Engineering and Informatics, Universitas Multimedia Nusantara, Serpong, Indonesia
- Sud Sudirman
- School of Computer Science and Mathematics, Liverpool John Moores University, Liverpool, United Kingdom
13. Two-Stage Hybrid Approach of Deep Learning Networks for Interstitial Lung Disease Classification. BioMed Research International 2022;2022:7340902. PMID: 35155680. PMCID: PMC8826206. DOI: 10.1155/2022/7340902.
Abstract
High-resolution computed tomography (HRCT) imaging in interstitial lung disease (ILD) screening can help improve healthcare quality. However, most earlier ILD classification work involves time-consuming manual identification of the region of interest (ROI) in the lung HRCT image before applying a deep learning classification algorithm. This paper develops a two-stage hybrid approach of deep learning networks for ILD classification. In the first stage, a conditional generative adversarial network (c-GAN) segments the lung from the HRCT image; the c-GAN with a multiscale feature extraction module provides accurate lung segmentation even for HRCT images with lung abnormalities. In the second stage, a pretrained ResNet50 extracts features from the segmented lung image, which are classified into six ILD classes using a support vector machine classifier. The proposed two-stage algorithm takes a whole HRCT scan as input, eliminating the need to extract an ROI, and classifies the given HRCT image into an ILD class. The performance of the proposed two-stage deep learning network-based ILD classifier improves considerably owing to the stage-wise improvement of the deep learning components.
14. Kumar A, Dhara AK, Thakur SB, Sadhu A, Nandi D. Special Convolutional Neural Network for Identification and Positioning of Interstitial Lung Disease Patterns in Computed Tomography Images. Pattern Recognition and Image Analysis 2021. PMCID: PMC8711684. DOI: 10.1134/S1054661821040027.
Abstract
In this paper, automated detection of interstitial lung disease patterns in high-resolution computed tomography images is achieved by developing a faster region-based convolutional network detector with GoogLeNet as its backbone. GoogLeNet is simplified by removing a few inception modules before being used as the backbone of the detector network. The proposed framework detects several interstitial lung disease patterns without requiring lung field segmentation. It is able to detect the five most prevalent interstitial lung disease patterns, fibrosis, emphysema, consolidation, micronodules, and ground-glass opacity, as well as normal tissue. Five-fold cross-validation has been used to avoid bias and reduce over-fitting. The framework's performance is measured in terms of F-score on the publicly available MedGIFT database, where it outperforms state-of-the-art techniques. Detection is performed at the slice level and could be used for screening and differential diagnosis of interstitial lung disease patterns using high-resolution computed tomography images.
Affiliation(s)
- Abhishek Kumar
- School of Computer and Information Sciences, University of Hyderabad, 500046 Hyderabad, India
- Ashis Kumar Dhara
- Electrical Engineering, National Institute of Technology, 713209 Durgapur, India
- Sumitra Basu Thakur
- Department of Chest and Respiratory Care Medicine, Medical College, 700073 Kolkata, India
- Anup Sadhu
- EKO Diagnostic, Medical College, 700073 Kolkata, India
- Debashis Nandi
- Computer Science and Engineering, National Institute of Technology, 713209 Durgapur, India
15. Li P, Kong X, Li J, Zhu G, Lu X, Shen P, Shah SAA, Bennamoun M, Hua T. A Dataset of Pulmonary Lesions With Multiple-Level Attributes and Fine Contours. Frontiers in Digital Health 2021;2:609349. PMID: 34713070. PMCID: PMC8521952. DOI: 10.3389/fdgth.2020.609349.
Abstract
Lung cancer is a life-threatening disease, and its diagnosis is of great significance. Data scarcity and the unavailability of datasets are a major bottleneck in lung cancer research. In this paper, we introduce a dataset of pulmonary lesions for designing computer-aided diagnosis (CAD) systems. The dataset has fine contour annotations and nine attribute annotations. We define the structure of the dataset in detail, then discuss the relationship between the attributes and pathology, and the correlation among the nine attributes using the chi-square test. To demonstrate the contribution of our dataset to computer-aided system design, we define four tasks that can be developed using our dataset. We then use our dataset to model multi-attribute classification tasks and discuss the performance of the classification model with 2D, 2.5D, and 3D input modes. To improve performance, we introduce two attention mechanisms and verify their principles through visualization. Experimental results show the relationship between different models and different levels of attributes.
Affiliation(s)
- Ping Li
- Shanghai BNC, Shanghai, China
- Xiangwen Kong
- Embedded Technology & Vision Processing Research Center, Xidian University, Xi'an, China
- Johann Li
- Embedded Technology & Vision Processing Research Center, Xidian University, Xi'an, China
- Guangming Zhu
- Embedded Technology & Vision Processing Research Center, Xidian University, Xi'an, China
- Syed Afaq Ali Shah
- College of Science, Health, Engineering and Education, Murdoch University, Perth, WA, Australia
- Mohammed Bennamoun
- School of Computer Science and Software Engineering, The University of Western Australia, Perth, WA, Australia
- Tao Hua
- PET Center, Huashan Hospital, Fudan University, Shanghai, China
16. Pawar SP, Talbar SN. LungSeg-Net: Lung field segmentation using generative adversarial network. Biomedical Signal Processing and Control 2021. DOI: 10.1016/j.bspc.2020.102296.
17. Tang H, Mao L, Zeng S, Deng S, Ai Z. Discriminative dictionary learning algorithm with pairwise local constraints for histopathological image classification. Medical & Biological Engineering & Computing 2021;59:153-164. PMID: 33386592. DOI: 10.1007/s11517-020-02281-y.
Abstract
Histopathological images contain rich pathological information that is valuable for the aided diagnosis of many diseases such as cancer. An important issue in histopathological image classification is how to learn a high-quality discriminative dictionary, given the diverse tissue patterns, varied textures, and different morphological structures. In this paper, we propose a discriminative dictionary learning algorithm with pairwise local constraints (PLCDDL) for histopathological image classification. Inspired by the one-to-one mapping between dictionary atoms and profiles, we learn a pair of discriminative graph Laplacian matrices that are less sensitive to noise or outliers, capturing the locality and discriminating information of the data manifold by utilizing the local geometry of the category-specific dictionaries rather than the input data. Furthermore, graph-based pairwise local constraints are designed and incorporated into the original dictionary learning model to effectively encode locality consistency for intra-class samples and locality inconsistency for inter-class samples. Specifically, we learn discriminative localities for the representations by jointly optimizing both the intra-class and inter-class locality, which significantly improves the discriminability and robustness of the dictionary. Extensive experiments on challenging datasets verify that the proposed PLCDDL algorithm achieves better classification accuracy and stronger robustness than state-of-the-art dictionary learning methods. Graphical abstract: The proposed PLCDDL algorithm. 1) A pair of graph Laplacian matrices are first learned based on the class-specific dictionaries. 2) Graph-based pairwise local constraints are designed to transfer the locality to the coding coefficients. 3) The class-specific dictionaries can then be further updated.
Affiliation(s)
- Hongzhong Tang
- Hunan Provincial Key Laboratory of Intelligent Information Processing and Application, Hengyang, People's Republic of China
- College of Automation and Electronic Information, Xiangtan University, Xiangtan, Hunan, People's Republic of China
- Key Laboratory of Intelligent Computing & Information Processing of Ministry of Education, Xiangtan University, Xiangtan, Hunan, People's Republic of China
- Lizhen Mao
- Hunan Provincial Key Laboratory of Intelligent Information Processing and Application, Hengyang, People's Republic of China
- Shuying Zeng
- Hunan Provincial Key Laboratory of Intelligent Information Processing and Application, Hengyang, People's Republic of China
- Shijun Deng
- Hunan Provincial Key Laboratory of Intelligent Information Processing and Application, Hengyang, People's Republic of China
- College of Automation and Electronic Information, Xiangtan University, Xiangtan, Hunan, People's Republic of China
- Zhaoyang Ai
- Institute of Biophysics Linguistics, College of Foreign Languages, Hunan University, Changsha, Hunan, People's Republic of China
18
Abstract
Interest in artificial intelligence (AI) has ballooned within radiology in the past few years, primarily due to the notable successes of deep learning. With the advances brought by deep learning, AI has the potential to recognize and localize complex patterns across radiological imaging modalities, and in many recent applications it even achieves performance comparable to human decision-making. In this chapter, we review several AI applications in radiology for different anatomies (chest, abdomen, and pelvis), as well as general lesion detection and identification that is not limited to a specific anatomy. For each anatomic site, we focus on the tasks of detection, segmentation, and classification, with an emphasis on describing the technology development pathway, aiming to give the reader an understanding of what AI can already do in radiology and what still needs to be done for AI to fit better into radiological practice. Combining this with our own research experience of AI in medicine, we elaborate on how AI can enrich knowledge discovery, understanding, and decision-making in radiology, rather than replace the radiologist.
19
Tan W, Huang P, Li X, Ren G, Chen Y, Yang J. Analysis of segmentation of lung parenchyma based on deep learning methods. Journal of X-Ray Science and Technology 2021; 29:945-959. [PMID: 34487013 DOI: 10.3233/xst-210956] [Citation(s) in RCA: 6] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/13/2023]
Abstract
Precise segmentation of the lung parenchyma is essential for effective analysis of the lung. Owing to its obvious contrast with, and larger regional area than, other tissues in the chest, lung tissue is comparatively easy to segment, although its fine details still require special attention. To improve the quality and speed of lung parenchyma segmentation on computed tomography (CT) and computed tomography angiography (CTA) images, the 4th International Symposium on Image Computing and Digital Medicine (ISICDM 2020) collected interesting and valuable research ideas and approaches. For the lung parenchyma segmentation task, 9 of the 12 participating teams used the U-Net network or modified forms of it; the methods used to improve segmentation accuracy included attention mechanisms and multi-scale feature fusion. Among them, the U-Net achieved the best results, with a final Dice coefficient of 0.991 for CT segmentation and 0.984 for CTA segmentation. The attention U-Net and nnU-Net networks also performed well. In this paper, the methods chosen by the 12 teams from different research groups are evaluated and their segmentation results analyzed, as a study reference for those working in the field.
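The Dice coefficient used to score these segmentations can be sketched for binary masks as follows (a minimal illustration on toy arrays, not the ISICDM evaluation code; the epsilon term guarding empty masks is an assumption):

```python
import numpy as np

def dice(pred, gt, eps=1e-7):
    """Dice similarity coefficient 2|A n B| / (|A| + |B|) for boolean masks."""
    pred = np.asarray(pred, dtype=bool)
    gt = np.asarray(gt, dtype=bool)
    inter = np.logical_and(pred, gt).sum()
    # eps avoids division by zero when both masks are empty
    return 2.0 * inter / (pred.sum() + gt.sum() + eps)

pred = np.array([[1, 1, 0], [0, 1, 0]])  # predicted lung mask (toy)
gt   = np.array([[1, 0, 0], [0, 1, 1]])  # ground-truth mask (toy)
print(round(dice(pred, gt), 3))  # 2*2/(3+3) -> 0.667
```

A Dice of 0.991, as reported for CT above, means predicted and reference parenchyma masks overlap almost perfectly.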
Affiliation(s)
- Wenjun Tan
- Key Laboratory of Intelligent Computing in Medical Image, Ministry of Education, Shenyang, China
- College of Computer Science and Engineering, Northeastern University, Shenyang, China
- Peifang Huang
- Key Laboratory of Intelligent Computing in Medical Image, Ministry of Education, Shenyang, China
- College of Computer Science and Engineering, Northeastern University, Shenyang, China
- Xiaoshuo Li
- Key Laboratory of Intelligent Computing in Medical Image, Ministry of Education, Shenyang, China
- College of Computer Science and Engineering, Northeastern University, Shenyang, China
- Genqiang Ren
- College of Electronics and Information Engineering, Tongji University, Shanghai, China
- Yufei Chen
- College of Electronics and Information Engineering, Tongji University, Shanghai, China
- Jinzhu Yang
- Key Laboratory of Intelligent Computing in Medical Image, Ministry of Education, Shenyang, China
- College of Computer Science and Engineering, Northeastern University, Shenyang, China
20
Chassagnon G, Vakalopoulou M, Régent A, Zacharaki EI, Aviram G, Martin C, Marini R, Bus N, Jerjir N, Mekinian A, Hua-Huy T, Monnier-Cholley L, Benmostefa N, Mouthon L, Dinh-Xuan AT, Paragios N, Revel MP. Deep Learning-based Approach for Automated Assessment of Interstitial Lung Disease in Systemic Sclerosis on CT Images. Radiol Artif Intell 2020; 2:e190006. [PMID: 33937829 DOI: 10.1148/ryai.2020190006] [Citation(s) in RCA: 20] [Impact Index Per Article: 4.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/31/2019] [Revised: 03/19/2020] [Accepted: 03/31/2020] [Indexed: 12/23/2022]
Abstract
Purpose To develop a deep learning algorithm for the automatic assessment of the extent of systemic sclerosis (SSc)-related interstitial lung disease (ILD) on chest CT images. Materials and Methods This retrospective study included 208 patients with SSc (median age, 57 years; 167 women) evaluated between January 2009 and October 2017. A multicomponent deep neural network (AtlasNet) was trained on 6888 fully annotated CT images (80% for training and 20% for validation) from 17 patients with no, mild, or severe lung disease. The model was tested on a dataset of 400 images from another 20 patients, independently partially annotated by three radiologist readers. The ILD contours from the three readers and the deep learning neural network were compared by using the Dice similarity coefficient (DSC). The correlation between disease extent obtained from the deep learning algorithm and that obtained by using pulmonary function tests (PFTs) was then evaluated in the remaining 171 patients and in an external validation dataset of 31 patients based on the analysis of all slices of the chest CT scan. The Spearman rank correlation coefficient (ρ) was calculated to evaluate the correlation between disease extent and PFT results. Results The median DSCs between the readers and the deep learning ILD contours ranged from 0.74 to 0.75, whereas the median DSCs between contours from radiologists ranged from 0.68 to 0.71. The disease extent obtained from the algorithm, by analyzing the whole CT scan, correlated with the diffusing capacity of the lung for carbon monoxide, total lung capacity, and forced vital capacity (ρ = -0.76, -0.70, and -0.62, respectively; P < .001 for all). In the external validation dataset, the corresponding correlations were ρ = -0.65, -0.70, and -0.57, respectively (P < .001 for all). Conclusion The developed algorithm performed similarly to radiologists for disease-extent contouring, which correlated with pulmonary function to assess CT images from patients with SSc-related ILD. Supplemental material is available for this article. © RSNA, 2020.
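The Spearman rank correlation used in this study to relate CT disease extent to pulmonary function can be sketched as the Pearson correlation of rank-transformed data. The extent and DLCO values below are hypothetical illustrations, not the study's measurements, and the simple argsort-based ranking assumes no tied values:

```python
import numpy as np

def spearman_rho(x, y):
    """Spearman's rho: Pearson correlation of the rank-transformed data.

    The double argsort assigns ranks 0..n-1; this simple form is only
    valid when there are no ties (tied values need averaged ranks).
    """
    rx = np.argsort(np.argsort(x)).astype(float)
    ry = np.argsort(np.argsort(y)).astype(float)
    return np.corrcoef(rx, ry)[0, 1]

extent = np.array([5.0, 12.0, 20.0, 35.0, 50.0])   # % lung involved (hypothetical)
dlco   = np.array([85.0, 70.0, 66.0, 50.0, 41.0])  # % predicted DLCO (hypothetical)
print(round(spearman_rho(extent, dlco), 2))  # -1.0: perfectly inverse ranking
```

A strongly negative rho, as reported above (ρ = -0.76 for DLCO), means larger disease extent consistently ranks with worse pulmonary function.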
Affiliation(s)
- Guillaume Chassagnon
- Maria Vakalopoulou
- Alexis Régent
- Evangelia I Zacharaki
- Galit Aviram
- Charlotte Martin
- Rafael Marini
- Norbert Bus
- Naïm Jerjir
- Arsène Mekinian
- Thông Hua-Huy
- Laurence Monnier-Cholley
- Nouria Benmostefa
- Luc Mouthon
- Anh-Tuan Dinh-Xuan
- Nikos Paragios
- Marie-Pierre Revel
- Departments of Radiology (G.C., N.J., M.P.R.) and Physiology (T.H.H., A.T.D.X.), Hôpital Cochin, and Reference Center for Rare Systemic Autoimmune Diseases of Ile de France, Hôpital Cochin (A.R., N. Benmostefa, L.M.), Assistance Publique-Hôpitaux de Paris, Université de Paris, 27 Rue du Faubourg Saint-Jacques, 75014 Paris, France; Center for Visual Computing, Ecole CentraleSupelec, Gif-sur-Yvette, France (G.C., M.V., E.I.Z., C.M., N.P.); Department of Radiology, Tel Aviv Sourasky Medical Center, Sackler Faculty of Medicine, Tel Aviv University, Tel Aviv, Israel (G.A.); TheraPanacea, Paris, France (R.M., N. Bus, N.P.); and Departments of Internal Medicine and Inflammatory Disorders (A.M.) and Radiology (L.M.C.), Hôpital Saint-Antoine, Assistance Publique-Hôpitaux de Paris, Sorbonne Université, Paris, France
21
Jiang H, Gao F, Xu X, Huang F, Zhu S. Attentive and ensemble 3D dual path networks for pulmonary nodules classification. Neurocomputing 2020. [DOI: 10.1016/j.neucom.2019.03.103] [Citation(s) in RCA: 9] [Impact Index Per Article: 1.8] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/26/2022]
22
Classification of Lung Nodules Based on Deep Residual Networks and Migration Learning. Computational Intelligence and Neuroscience 2020; 2020:8975078. [PMID: 32318102 PMCID: PMC7149413 DOI: 10.1155/2020/8975078] [Citation(s) in RCA: 14] [Impact Index Per Article: 2.8] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 10/04/2019] [Revised: 01/30/2020] [Accepted: 02/12/2020] [Indexed: 01/22/2023]
Abstract
The classification process for lung nodule detection in a traditional computer-aided detection (CAD) system is complex, and the classification result depends heavily on the performance of each step in the pipeline, causing low classification accuracy and a high false positive rate. To alleviate these issues, a lung nodule classification method based on a deep residual network is proposed. Abandoning traditional image processing methods and taking the 50-layer ResNet structure as the initial model, the deep residual network is constructed by combining residual learning and migration (transfer) learning. The proposed approach is verified through experiments on lung computed tomography (CT) images from the publicly available LIDC-IDRI database. An average accuracy of 98.23% and a false positive rate of 1.65% are obtained with ten-fold cross-validation. Compared with a conventional support vector machine (SVM)-based CAD system, the accuracy of our method improved by 9.96% and the false positive rate decreased by 6.95%; compared with the VGG19 and InceptionV3 convolutional neural networks, the accuracy improved by 1.75% and 2.42% and the false positive rate decreased by 2.07% and 2.22%, respectively. The experimental results demonstrate the effectiveness of the proposed method for lung nodule classification on CT images.
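The ten-fold cross-validation protocol behind the reported average accuracy can be sketched as follows. This is a generic fold-assignment sketch under an assumed dataset size of 1000 samples, not the paper's code; the model training step is only indicated by a comment:

```python
import numpy as np

def kfold_indices(n, k=10, seed=0):
    """Shuffle n sample indices and split them into k disjoint folds."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(n)
    return np.array_split(idx, k)

folds = kfold_indices(n=1000, k=10)
accuracies = []
for i, test_fold in enumerate(folds):
    train_idx = np.concatenate([f for j, f in enumerate(folds) if j != i])
    # A classifier (e.g. a ResNet-50 fine-tuned from pretrained weights)
    # would be trained on train_idx and scored on test_fold here; the
    # reported figure is the mean of the k per-fold accuracies.
```

Every sample appears in exactly one test fold, so the averaged accuracy uses each image for evaluation exactly once.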
23
Abstract
Current research on computer-aided diagnosis (CAD) of liver cancer is based on traditional feature engineering methods, which have several drawbacks, including redundant features and high computational cost. Recent deep learning models overcome these problems by implicitly capturing intricate structures from large-scale medical image data; however, they are still affected by network hyperparameters and topology. Hence, the state of the art in this area can be further optimized by integrating bio-inspired concepts into deep learning models. This work proposes a novel bio-inspired deep learning approach for optimizing the predictive results of liver cancer, contributing to the literature in two ways. First, a novel hybrid segmentation algorithm, SegNet-UNet-ABC, is proposed to extract liver lesions from computed tomography (CT) images using the SegNet network, the UNet network, and artificial bee colony (ABC) optimization. This algorithm uses SegNet to separate the liver from the abdominal CT scan, and then UNet to extract lesions from the liver; in parallel, the ABC algorithm is hybridized with each network to tune its hyperparameters, as they strongly affect segmentation performance. Second, a hybrid of the LeNet-5 model and the ABC algorithm, LeNet-5/ABC, is proposed as the feature extractor and classifier of liver lesions. LeNet-5/ABC uses ABC to select the optimal topology for constructing the LeNet-5 network, as the network structure affects learning time and classification accuracy. To assess the performance of the two proposed algorithms, comparisons were made with state-of-the-art algorithms for liver lesion segmentation and classification. The results reveal that SegNet-UNet-ABC is superior to the compared algorithms in terms of Jaccard index, Dice index, correlation coefficient, and convergence time, while LeNet-5/ABC outperforms the other algorithms in terms of specificity, F1-score, accuracy, and computational time.
24
Ebner L, Christodoulidis S, Stathopoulou T, Geiser T, Stalder O, Limacher A, Heverhagen JT, Mougiakakou SG, Christe A. Meta-analysis of the radiological and clinical features of Usual Interstitial Pneumonia (UIP) and Nonspecific Interstitial Pneumonia (NSIP). PLoS One 2020; 15:e0226084. [PMID: 31929532 PMCID: PMC6957301 DOI: 10.1371/journal.pone.0226084] [Citation(s) in RCA: 26] [Impact Index Per Article: 5.2] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/03/2019] [Accepted: 11/18/2019] [Indexed: 02/02/2023] Open
Abstract
PURPOSE To conduct a meta-analysis to determine specific computed tomography (CT) patterns and clinical features that discriminate between nonspecific interstitial pneumonia (NSIP) and usual interstitial pneumonia (UIP). MATERIALS AND METHODS The PubMed/Medline and Embase databases were searched for studies describing the radiological patterns of UIP and NSIP in chest CT images. Only studies involving histologically confirmed diagnoses and a consensus diagnosis by an interstitial lung disease (ILD) board were included in this analysis. The radiological patterns and patient demographics were extracted from suitable articles. We used DerSimonian-Laird random-effects meta-analysis and calculated pooled odds ratios for binary data and pooled mean differences for continuous data. RESULTS Of the 794 search results, 33 articles describing 2,318 patients met the inclusion criteria. Twelve of these studies included both NSIP (338 patients) and UIP (447 patients). NSIP patients were significantly younger (NSIP: median age 54.8 years, UIP: 59.7 years; mean difference (MD) -4.4; p = 0.001; 95% CI: -6.97 to -1.77), less often male (NSIP: median 52.8%, UIP: 73.6%; pooled odds ratio (OR) 0.32; p<0.001; 95% CI: 0.17 to 0.60), and less often smokers (NSIP: median 55.1%, UIP: 73.9%; OR 0.42; p = 0.005; 95% CI: 0.23 to 0.77) than patients with UIP. The CT findings from patients with NSIP revealed significantly lower levels of the honeycombing pattern (NSIP: median 28.9%, UIP: 73.4%; OR 0.07; p<0.001; 95% CI: 0.02 to 0.30) with less peripheral predominance (NSIP: median 41.8%, UIP: 83.3%; OR 0.21; p<0.001; 95% CI: 0.11 to 0.38) and more subpleural sparing (NSIP: median 40.7%, UIP: 4.3%; OR 16.3; p = 0.005; 95% CI: 2.28 to 117). CONCLUSION Honeycombing with a peripheral predominance was significantly associated with a diagnosis of UIP. The NSIP pattern showed more subpleural sparing. The UIP pattern was predominantly observed in elderly males with a history of smoking, whereas NSIP occurred in a younger patient population.
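The DerSimonian-Laird random-effects pooling named in this abstract can be sketched compactly: compute Cochran's Q under fixed-effect weights, estimate the between-study variance tau^2 from it, then reweight. The per-study log odds ratios and variances below are hypothetical, not the meta-analysis data:

```python
import numpy as np

def dersimonian_laird(y, v):
    """Pool per-study effects y with within-study variances v.

    Returns the pooled effect, its standard error, and the
    DerSimonian-Laird between-study variance estimate tau^2.
    """
    y, v = np.asarray(y, float), np.asarray(v, float)
    w = 1.0 / v                                  # fixed-effect weights
    y_fixed = np.sum(w * y) / np.sum(w)
    q = np.sum(w * (y - y_fixed) ** 2)           # Cochran's Q
    c = np.sum(w) - np.sum(w ** 2) / np.sum(w)
    tau2 = max(0.0, (q - (len(y) - 1)) / c)      # truncated at zero
    w_star = 1.0 / (v + tau2)                    # random-effects weights
    pooled = np.sum(w_star * y) / np.sum(w_star)
    se = np.sqrt(1.0 / np.sum(w_star))
    return pooled, se, tau2

# Three hypothetical studies: log odds ratios and their variances
pooled, se, tau2 = dersimonian_laird([-1.8, -0.2, -1.6], [0.05, 0.05, 0.05])
print(f"pooled log-OR {pooled:.3f} (SE {se:.3f}), tau^2 = {tau2:.3f}")
```

Exponentiating the pooled log odds ratio gives pooled ORs such as the 0.07 reported above for honeycombing in NSIP versus UIP.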
Affiliation(s)
- Lukas Ebner
- Department of Diagnostic, Interventional and Pediatric Radiology, Inselspital, Bern University Hospital, University of Bern, Switzerland
- Thomai Stathopoulou
- ARTORG Center for Biomedical Engineering Research, University of Bern, Switzerland
- Thomas Geiser
- Department for Pulmonary Medicine, Inselspital, Bern University Hospital, University of Bern, Switzerland
- Odile Stalder
- CTU Bern and Institute of Social and Preventive Medicine (ISPM), University of Bern, Switzerland
- Andreas Limacher
- CTU Bern and Institute of Social and Preventive Medicine (ISPM), University of Bern, Switzerland
- Johannes T. Heverhagen
- Department of Diagnostic, Interventional and Pediatric Radiology, Inselspital, Bern University Hospital, University of Bern, Switzerland
- Stavroula G. Mougiakakou
- Department of Diagnostic, Interventional and Pediatric Radiology, Inselspital, Bern University Hospital, University of Bern, Switzerland
- ARTORG Center for Biomedical Engineering Research, University of Bern, Switzerland
- Andreas Christe
- Department of Diagnostic, Interventional and Pediatric Radiology, Inselspital, Bern University Hospital, University of Bern, Switzerland
25
Choudhary P, Hazra A. Chest disease radiography in twofold: using convolutional neural networks and transfer learning. EVOLVING SYSTEMS 2019. [DOI: 10.1007/s12530-019-09316-2] [Citation(s) in RCA: 12] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/25/2022]
26
Xu R, Cong Z, Ye X, Hirano Y, Kido S, Gyobu T, Kawata Y, Honda O, Tomiyama N. Pulmonary Textures Classification via a Multi-Scale Attention Network. IEEE J Biomed Health Inform 2019; 24:2041-2052. [PMID: 31689221 DOI: 10.1109/jbhi.2019.2950006] [Citation(s) in RCA: 7] [Impact Index Per Article: 1.2] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/05/2022]
Abstract
Precise classification of pulmonary textures is crucial for developing a computer-aided diagnosis (CAD) system for diffuse lung diseases (DLDs). Although deep learning techniques have been applied to this task, the classification performance does not yet satisfy clinical requirements, since commonly used deep networks built by stacking convolutional blocks are unable to learn the discriminative feature representations needed to distinguish complex pulmonary textures. To address this problem, we design a multi-scale attention network (MSAN) architecture comprising several stacked residual attention modules followed by a multi-scale fusion module. Our deep network can not only exploit powerful information at different scales but also automatically select optimal features for a more discriminative feature representation. In addition, we develop visualization techniques to make the proposed deep model transparent to humans. The proposed method is evaluated on a large dataset. Experimental results show that our method achieves an average classification accuracy of 94.78% and an average F-value of 0.9475 in the classification of 7 categories of pulmonary textures. Moreover, the visualization results intuitively explain the working behavior of the deep network. The proposed method achieves state-of-the-art performance in classifying pulmonary textures on high-resolution CT images.
27
Cui H, Wang X, Bian Y, Song S, Feng DD. Ischemic stroke clinical outcome prediction based on image signature selection from multimodality data. Annu Int Conf IEEE Eng Med Biol Soc 2018:722-725. [PMID: 30440498 DOI: 10.1109/embc.2018.8512291] [Citation(s) in RCA: 6] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 11/09/2022]
Abstract
Quantitative models are essential in precision medicine, where they can be used to predict health status and prevent disease and disability. Current radiomics models for clinical outcome prediction often depend on a huge number of image features, may include redundant information, and ignore the importance of individual features. In this work, we propose a prognostic discrimination ranking strategy to select the most relevant image features for image-assisted clinical outcome prediction. First, a redundancy and prognostic discrimination evaluation method is proposed to evaluate and rank a large number of features extracted from images. Second, forward sequential feature selection is performed to select the top-ranked relevant features in each discriminative quantization. Finally, representative vectors are generated by fusing pivotal clinical parameters with the selected image signatures and are fed into a classification model. The proposed model was trained and tested on 70 patient studies with six MR sequences and four clinical parameters from the ISLES challenges. Evaluations using ROC curves demonstrated improved performance over five other feature selection models, with the proposed model achieving AUCs of 0.821, 0.968, 0.983, 0.896 and 1 when predicting five clinical outcome scores, respectively.
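Forward sequential feature selection, the second step above, is a generic greedy procedure: repeatedly add the candidate feature that most improves the subset score, stopping when nothing helps. A minimal sketch follows; the toy scoring function stands in for the paper's prognostic discrimination score, which is an assumption for illustration only.

```python
def forward_select(features, score_fn, k):
    """Greedy forward sequential feature selection: at each step, add the
    feature that most improves score_fn of the selected subset; stop when
    no candidate improves the score or k features are selected."""
    selected = []
    remaining = list(features)
    while remaining and len(selected) < k:
        best = max(remaining, key=lambda f: score_fn(selected + [f]))
        if score_fn(selected + [best]) <= score_fn(selected):
            break  # no remaining candidate improves the score
        selected.append(best)
        remaining.remove(best)
    return selected

# toy score: prefer subsets whose feature ids sum close to 10
score = lambda s: -abs(sum(s) - 10)
print(forward_select([2, 3, 5, 7, 11], score, 3))  # → [11]
```

In the toy run, 11 alone scores best, and adding any second feature makes the score worse, so selection stops after one feature.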
28
Zhang S, Han F, Liang Z, Tan J, Cao W, Gao Y, Pomeroy M, Ng K, Hou W. An investigation of CNN models for differentiating malignant from benign lesions using small pathologically proven datasets. Comput Med Imaging Graph 2019; 77:101645. [PMID: 31454710 DOI: 10.1016/j.compmedimag.2019.101645] [Citation(s) in RCA: 15] [Impact Index Per Article: 2.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/08/2019] [Revised: 07/02/2019] [Accepted: 08/01/2019] [Indexed: 12/14/2022]
Abstract
Cancer has been one of the most threatening diseases to human health, and many efforts have been devoted to advancing radiology and transformative tools (e.g., non-invasive computed tomographic, or CT, imaging) to detect cancer in its early stages. One of the major goals is to distinguish malignant from benign lesions. In recent years, deep learning (DL), e.g., the convolutional neural network (CNN), has shown encouraging classification performance on medical images. However, DL algorithms always need large datasets with ground truth, yet in the medical imaging field, especially for cancer imaging, it is difficult to collect such a large volume of images with pathological information. Therefore, strategies are needed to learn effectively from small datasets via CNN models. Toward that goal, this paper explores two CNN models, focusing extensively on the expansion of training samples from two small pathologically proven datasets (a colorectal polyp dataset and a lung nodule dataset) and then differentiating malignant from benign lesions. Experimental outcomes indicate that even in very small datasets of fewer than 70 subjects, malignant lesions can be successfully differentiated from benign ones with the proposed CNN models; the average AUCs (area under the receiver operating characteristic curve) for differentiating colorectal polyps and pulmonary nodules are 0.86 and 0.71, respectively. Our experiments further demonstrate that for these two small datasets, feeding additional image features, such as the local binary pattern of the lesions, into the CNN models, instead of studying only the original raw CT images, can significantly improve classification performance. In addition, we find that our voxel-level CNN model performs better on small and unbalanced datasets.
Affiliation(s)
- Shu Zhang
- Department of Radiology, Stony Brook University, Stony Brook, NY, 11794 USA
- Fangfang Han
- Northeastern University, Shenyang, Liaoning, 110819 PR China
- Zhengrong Liang
- Department of Radiology, Stony Brook University, Stony Brook, NY, 11794 USA; Department of Biomedical Engineering, Stony Brook University, Stony Brook, NY, 11794 USA; Department of Electrical & Computer Engineering, Stony Brook University, Stony Brook, NY, 11794 USA
- Jiaxing Tan
- Department of Computer Science, City University of New York, the Graduate Center, NY, 10016 USA
- Weiguo Cao
- Department of Radiology, Stony Brook University, Stony Brook, NY, 11794 USA
- Yongfeng Gao
- Department of Radiology, Stony Brook University, Stony Brook, NY, 11794 USA
- Marc Pomeroy
- Department of Biomedical Engineering, Stony Brook University, Stony Brook, NY, 11794 USA
- Kenneth Ng
- Department of Electrical & Computer Engineering, Stony Brook University, Stony Brook, NY, 11794 USA
- Wei Hou
- Department of Preventive Medicine, Stony Brook University, Stony Brook, NY, 11794 USA
29
Joyseeree R, Otálora S, Müller H, Depeursinge A. Fusing learned representations from Riesz Filters and Deep CNN for lung tissue classification. Med Image Anal 2019; 56:172-183. [DOI: 10.1016/j.media.2019.06.006] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/04/2018] [Revised: 12/23/2018] [Accepted: 06/11/2019] [Indexed: 10/26/2022]
30
Medical image classification using synergic deep learning. Med Image Anal 2019; 54:10-19. [DOI: 10.1016/j.media.2019.02.010] [Citation(s) in RCA: 152] [Impact Index Per Article: 25.3] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/24/2018] [Revised: 01/21/2019] [Accepted: 02/15/2019] [Indexed: 02/07/2023]
31
Yan K, Wang X, Kim J, Khadra M, Fulham M, Feng D. A propagation-DNN: Deep combination learning of multi-level features for MR prostate segmentation. COMPUTER METHODS AND PROGRAMS IN BIOMEDICINE 2019; 170:11-21. [PMID: 30712600 DOI: 10.1016/j.cmpb.2018.12.031] [Citation(s) in RCA: 18] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 07/31/2018] [Revised: 12/13/2018] [Accepted: 12/28/2018] [Indexed: 06/09/2023]
Abstract
BACKGROUND AND OBJECTIVE Prostate segmentation on magnetic resonance (MR) imaging is problematic because disease changes the shape and boundaries of the gland, and it can be difficult to separate the prostate from surrounding tissues. We propose an automated model that extracts and combines multi-level features in a deep neural network to segment the prostate on MR images. METHODS Our proposed model, the Propagation Deep Neural Network (P-DNN), incorporates the optimal combination of multi-level feature extraction in a single model. High-level features from the convolved data are extracted with the DNN for prostate localization and shape recognition, while label propagation, driven by low-level cues, is embedded into a deep layer to delineate the prostate boundary. RESULTS A well-recognized benchmark dataset (50 training and 30 testing patient datasets) was used to evaluate the P-DNN. When compared to existing DNN methods, the P-DNN statistically outperformed the baseline DNN models with an average improvement in DSC of 3.19%. When compared to state-of-the-art non-DNN prostate segmentation methods, P-DNN was competitive, achieving 89.9 ± 2.8% DSC and 6.84 ± 2.5 mm HD on the training sets and 84.13 ± 5.18% DSC and 9.74 ± 4.21 mm HD on the testing sets. CONCLUSION Our results show that P-DNN maximizes multi-level feature extraction for prostate segmentation of MR images.
Affiliation(s)
- Ke Yan
- Biomedical and Multimedia Information Technology Research Group, School of Computer Science, University of Sydney, Sydney, Australia
- Xiuying Wang
- Biomedical and Multimedia Information Technology Research Group, School of Computer Science, University of Sydney, Sydney, Australia
- Jinman Kim
- Biomedical and Multimedia Information Technology Research Group, School of Computer Science, University of Sydney, Sydney, Australia
- Mohamed Khadra
- Department of Urology, Nepean Hospital, Kingswood, Australia
- Michael Fulham
- Department of Molecular Imaging, Royal Prince Alfred Hospital, Sydney, Australia
- Dagan Feng
- Biomedical and Multimedia Information Technology Research Group, School of Computer Science, University of Sydney, Sydney, Australia
32
Xu M, Qi S, Yue Y, Teng Y, Xu L, Yao Y, Qian W. Segmentation of lung parenchyma in CT images using CNN trained with the clustering algorithm generated dataset. Biomed Eng Online 2019; 18:2. [PMID: 30602393 PMCID: PMC6317251 DOI: 10.1186/s12938-018-0619-9] [Citation(s) in RCA: 39] [Impact Index Per Article: 6.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/23/2018] [Accepted: 12/19/2018] [Indexed: 11/24/2022] Open
Abstract
Background Lung segmentation is a critical procedure for any clinical decision support system aimed at improving the early diagnosis and treatment of lung diseases. Abnormal lungs mainly comprise lung parenchyma, which shares commonalities on CT images across subjects, diseases and CT scanners, and lung lesions, which present various appearances. Segmentation of lung parenchyma can help locate and analyze neighboring lesions, but it is not well studied in the machine learning framework. Methods We propose to segment lung parenchyma using a convolutional neural network (CNN) model. To reduce the workload of manually preparing the dataset for training the CNN, a clustering-based method is first proposed. Specifically, after splitting CT slices into image patches, k-means clustering with two categories is performed twice, using the mean and the minimum intensity of each image patch, respectively. A cross-shaped verification, a volume intersection, a connected component analysis and a patch expansion follow to generate the final dataset. Second, we design a CNN architecture consisting of only one convolutional layer with six kernels, followed by one maximum pooling layer and two fully connected layers. Using the generated dataset, a variety of CNN models are trained and optimized, and their performance is evaluated by eightfold cross-validation. A separate validation experiment is further conducted using a dataset of 201 subjects (4.62 billion patches) with lung cancer or chronic obstructive pulmonary disease, scanned by CT or PET/CT. The segmentation results of our method are compared with those yielded by manual segmentation and some available methods. Results A total of 121,728 patches were generated to train and validate the CNN models. After parameter optimization, our CNN model achieves an average F-score of 0.9917 and an area under the curve of up to 0.9991 for the classification of lung parenchyma versus non-lung-parenchyma. The obtained model can segment the lung parenchyma accurately for the 201 subjects with heterogeneous lung diseases and CT scanners; the overlap ratio between the manual segmentation and ours reaches 0.96. Conclusions The results demonstrate that the proposed clustering-based method can generate the training dataset for CNN models, and the obtained CNN model can segment lung parenchyma with very satisfactory performance and has the potential to locate and analyze lung lesions.
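The core of the dataset-generation step, two-category k-means run twice on per-patch statistics, can be sketched as follows. The intensity values are synthetic stand-ins for CT patches, and the simple intersection of the two passes is a simplification of the full pipeline (which also applies cross-shaped verification, volume intersection, connected-component analysis and patch expansion).

```python
import random
from statistics import mean

def kmeans2(values, iters=20):
    """1-D k-means with two clusters; returns a 0/1 label per value,
    with cluster 0 initialized at the lower end of the range."""
    c = [min(values), max(values)]
    labels = [0] * len(values)
    for _ in range(iters):
        labels = [0 if abs(v - c[0]) <= abs(v - c[1]) else 1 for v in values]
        for k in (0, 1):
            members = [v for v, l in zip(values, labels) if l == k]
            if members:
                c[k] = mean(members)
    return labels

random.seed(0)
# synthetic patches: 60 lung-like (low intensity) and 40 soft-tissue-like (high)
patches = [[random.gauss(-800, 50) for _ in range(64)] for _ in range(60)] + \
          [[random.gauss(40, 30) for _ in range(64)] for _ in range(40)]
by_mean = kmeans2([mean(p) for p in patches])  # pass 1: mean intensity
by_min = kmeans2([min(p) for p in patches])    # pass 2: minimum intensity
# patches falling in the low cluster under both statistics are parenchyma candidates
candidate = [m == 0 and n == 0 for m, n in zip(by_mean, by_min)]
```

Requiring agreement between the mean-intensity and minimum-intensity passes mirrors the paper's idea of cross-verifying the two clusterings before accepting a patch as training data.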
Affiliation(s)
- Mingjie Xu
- Sino-Dutch Biomedical and Information Engineering School, Northeastern University, No. 195 Chuangxin Avenue, Hunnan District, Shenyang, 110169, China
- Shouliang Qi
- Sino-Dutch Biomedical and Information Engineering School, Northeastern University, No. 195 Chuangxin Avenue, Hunnan District, Shenyang, 110169, China; Key Laboratory of Medical Image Computing of Northeastern University (Ministry of Education), Shenyang, China
- Yong Yue
- Department of Radiology, Shengjing Hospital of China Medical University, No. 36 Sanhao Street, Shenyang, 110004, China
- Yueyang Teng
- Sino-Dutch Biomedical and Information Engineering School, Northeastern University, No. 195 Chuangxin Avenue, Hunnan District, Shenyang, 110169, China
- Lisheng Xu
- Sino-Dutch Biomedical and Information Engineering School, Northeastern University, No. 195 Chuangxin Avenue, Hunnan District, Shenyang, 110169, China
- Yudong Yao
- Sino-Dutch Biomedical and Information Engineering School, Northeastern University, No. 195 Chuangxin Avenue, Hunnan District, Shenyang, 110169, China; Department of Electrical and Computer Engineering, Stevens Institute of Technology, Hoboken, NJ, 07030, USA
- Wei Qian
- Sino-Dutch Biomedical and Information Engineering School, Northeastern University, No. 195 Chuangxin Avenue, Hunnan District, Shenyang, 110169, China; College of Engineering, University of Texas at El Paso, 500 W University, El Paso, TX, 79902, USA
33
Dai Y, Yan S, Zheng B, Song C. Incorporating automatically learned pulmonary nodule attributes into a convolutional neural network to improve accuracy of benign-malignant nodule classification. Phys Med Biol 2018; 63:245004. [PMID: 30524071 DOI: 10.1088/1361-6560/aaf09f] [Citation(s) in RCA: 11] [Impact Index Per Article: 1.6] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/17/2022]
Abstract
Existing deep-learning-based pulmonary nodule classification models usually use only images and benign-malignant labels as inputs for training. Image attributes of the nodules, as human-nameable high-level semantic labels, are rarely used to build a convolutional neural network (CNN). In this paper, a new method is proposed to combine the advantages of two classification tasks, pulmonary nodule benign-malignant classification and pulmonary nodule image attribute classification, in a deep learning network to improve the accuracy of pulmonary nodule classification. For this purpose, a unique 3D CNN is built to learn image attribute and benign-malignant classification simultaneously, and a novel loss function is designed to balance the influence of the two different kinds of classification. The CNN is trained on the publicly available Lung Image Database Consortium (LIDC) dataset and tested by cross-validation to predict the risk of a pulmonary nodule being malignant. The proposed method achieves an accuracy of 91.47%, which is better than many existing models. Experimental findings show that if the CNN is built properly, the nodule attribute classification and the benign-malignant classification can benefit from each other; by using nodule attribute learning as a control factor in a deep learning scheme, the accuracy of pulmonary nodule classification can be significantly improved.
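The loss-balancing idea, combining the benign-malignant loss with the auxiliary attribute losses through a weighting term, can be sketched as below. The weight `lam` and the averaging over attributes are illustrative assumptions, not the paper's exact loss formulation.

```python
def combined_loss(malignancy_loss, attribute_losses, lam=0.5):
    """Multi-task objective: the benign-malignant classification loss plus
    a weighted average of the per-attribute classification losses.
    lam is a hypothetical balancing weight, not a value from the paper."""
    return malignancy_loss + lam * sum(attribute_losses) / len(attribute_losses)

# e.g. cross-entropy values from one training batch (made-up numbers)
total = combined_loss(0.40, [0.30, 0.50, 0.20])
```

Tuning `lam` controls how strongly the attribute heads regularize the shared 3D features used by the malignancy head.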
Affiliation(s)
- Yaojun Dai
- School of Medical Instrument and Food Engineering, University of Shanghai for Science and Technology, Shanghai, People's Republic of China
34
Tiwari S. An Analysis in Tissue Classification for Colorectal Cancer Histology Using Convolution Neural Network and Colour Models. INTERNATIONAL JOURNAL OF INFORMATION SYSTEM MODELING AND DESIGN 2018. [DOI: 10.4018/ijismd.2018100101] [Citation(s) in RCA: 8] [Impact Index Per Article: 1.1] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 12/15/2022]
Abstract
Computer vision-based identification of different tissue categories in histological images is a critical application of computer-assisted diagnosis (CAD). Computer-assisted diagnosis systems help to reduce the cost and increase the efficiency of this process. Traditional image classification approaches depend on feature extraction methods designed for a specific problem based on domain knowledge. With the advance of machine learning technologies, deep learning approaches are becoming important alternatives that overcome the numerous difficulties of the feature-based approaches. This article proposes a method for classifying histological images of human colorectal cancer containing seven different tissue types using a convolutional neural network (CNN). The method is evaluated using four different colour models, in both the absence and the presence of Gaussian noise. The highest classification accuracies are achieved with the HVI colour model: 95.8% without noise and 78.5% with noise, respectively.
35
Abstract
Lung cancer mortality is currently the highest among all fatal cancers. With the help of computer-aided detection systems, timely detection of a malignant pulmonary nodule at an early stage could efficiently improve the patient survival rate. However, pulmonary nodules vary widely in size, and small-diameter nodules are harder to detect. Traditional convolutional neural networks use pooling layers to reduce the resolution progressively, which hampers the network's ability to capture the tiny but vital features of pulmonary nodules. To tackle this problem, we propose a novel 3D spatial pyramid dilated convolution network to classify the malignancy of pulmonary nodules. Instead of using pooling layers, we use 3D dilated convolutions to learn the detailed characteristics of the nodules. Furthermore, we show that fusing multiple receptive fields from different dilated convolutions can further improve the classification performance of the model. Extensive experimental results demonstrate that our model achieves a better result, with an accuracy of 88.6%, which outperforms other state-of-the-art methods.
36
Han G, Liu X, Zheng G, Wang M, Huang S. Automatic recognition of 3D GGO CT imaging signs through the fusion of hybrid resampling and layer-wise fine-tuning CNNs. Med Biol Eng Comput 2018; 56:2201-2212. [DOI: 10.1007/s11517-018-1850-z] [Citation(s) in RCA: 10] [Impact Index Per Article: 1.4] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/04/2017] [Accepted: 05/18/2018] [Indexed: 10/14/2022]
37
Joyseeree R, Müller H, Depeursinge A. Rotation-covariant tissue analysis for interstitial lung diseases using learned steerable filters: Performance evaluation and relevance for diagnostic aid. Comput Med Imaging Graph 2018; 64:1-11. [DOI: 10.1016/j.compmedimag.2018.01.005] [Citation(s) in RCA: 5] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/07/2017] [Revised: 12/19/2017] [Accepted: 01/09/2018] [Indexed: 11/30/2022]
38
Peng T, Wang Y, Xu TC, Shi L, Jiang J, Zhu S. Detection of Lung Contour with Closed Principal Curve and Machine Learning. J Digit Imaging 2018; 31:520-533. [PMID: 29450843 DOI: 10.1007/s10278-018-0058-y] [Citation(s) in RCA: 21] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/27/2022] Open
Abstract
Radiation therapy plays an essential role in the treatment of cancer: the ideal radiation doses are delivered to the observed tumor without affecting neighboring normal tissues. In three-dimensional computed tomography (3D-CT) scans, the contours of tumors and organs-at-risk (OARs) are often manually delineated by radiologists. The task is complicated and time-consuming, and the manually delineated results vary between radiologists. We propose a semi-supervised contour detection algorithm, which first uses a few points of the region of interest (ROI) as an approximate initialization. Data sequences are then obtained by the closed polygonal line (CPL) algorithm, where each sequence consists of the ordered projection indexes and the corresponding initial points. Finally, a smooth lung contour is obtained when the data sequences are trained by a backpropagation neural network model (BNNM). We use a private clinical dataset and the public Lung Image Database Consortium and Image Database Resource Initiative (LIDC-IDRI) dataset, respectively, to measure the accuracy of the presented method. On the private dataset, experiments using initial points as few as 15% of the manually delineated points show that the Dice coefficient reaches 0.95 and the global error is as low as 1.47 × 10^-2; the proposed algorithm also outperforms the cubic spline interpolation (CSI) algorithm. On the public LIDC-IDRI dataset, our method achieves superior segmentation performance with an average Dice of 0.83.
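The Dice coefficient used for evaluation above is straightforward to compute from two binary masks: twice the overlap divided by the total foreground of both. A minimal sketch:

```python
def dice(mask_a, mask_b):
    """Dice similarity coefficient between two binary masks
    (flat sequences of truthy/falsy values)."""
    inter = sum(1 for a, b in zip(mask_a, mask_b) if a and b)
    size = sum(1 for a in mask_a if a) + sum(1 for b in mask_b if b)
    return 2.0 * inter / size if size else 1.0  # two empty masks agree perfectly

# two toy 1-D masks sharing 3 of their 4 foreground pixels each
print(dice([1, 1, 1, 1, 0, 0], [0, 1, 1, 1, 1, 0]))  # → 0.75
```

A Dice of 0.95 thus means the automatic and manual contours enclose almost identical pixel sets.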
Affiliation(s)
- Tao Peng
- School of Computer Science & Technology, Soochow University, No.1 Shizi Road, Suzhou, Jiangsu, 215006, China
- Yihuai Wang
- School of Computer Science & Technology, Soochow University, No.1 Shizi Road, Suzhou, Jiangsu, 215006, China
- Thomas Canhao Xu
- School of Computer Science & Technology, Soochow University, No.1 Shizi Road, Suzhou, Jiangsu, 215006, China
- Lianmin Shi
- School of Computer Science & Technology, Soochow University, No.1 Shizi Road, Suzhou, Jiangsu, 215006, China
- Jianwu Jiang
- School of Computer Science & Technology, Soochow University, No.1 Shizi Road, Suzhou, Jiangsu, 215006, China
- Shilang Zhu
- School of Computer Science & Technology, Soochow University, No.1 Shizi Road, Suzhou, Jiangsu, 215006, China
39
Shah MI, Mishra S, Yadav VK, Chauhan A, Sarkar M, Sharma SK, Rout C. Ziehl-Neelsen sputum smear microscopy image database: a resource to facilitate automated bacilli detection for tuberculosis diagnosis. J Med Imaging (Bellingham) 2017; 4:027503. [PMID: 28680911 DOI: 10.1117/1.jmi.4.2.027503] [Citation(s) in RCA: 14] [Impact Index Per Article: 1.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/02/2017] [Accepted: 06/14/2017] [Indexed: 11/14/2022] Open
Abstract
Ziehl-Neelsen stained microscopy is a crucial bacteriological test for tuberculosis detection, but its sensitivity is poor. According to the World Health Organization (WHO) recommendation, 300 viewfields should be analyzed to augment sensitivity, but in practice only a few viewfields are examined due to patient load. Therefore, tuberculosis diagnosis through automated capture of the focused image (autofocusing), stitching of viewfields to form mosaics (autostitching), and automatic bacilli segmentation (grading) can significantly improve sensitivity. However, the lack of unified datasets impedes the development of robust algorithms in these three domains. The Ziehl-Neelsen sputum smear microscopy image database (ZNSM iDB) has therefore been developed and is freely available. This database contains seven categories of diverse datasets acquired from three different bright-field microscopes. The datasets related to autofocusing, autostitching, and manually segmented bacilli can be used for developing algorithms, whereas the other four datasets are provided to benchmark sensitivity and specificity. All three categories of datasets were validated using different automated algorithms. As the images in this database have distinctive presentations with high noise and artifacts, this resource can also be used for the validation of robust detection algorithms. The ZNSM-iDB also assists in the development of methods for automated microscopy.
Affiliation(s)
- Mohammad Imran Shah
- Jaypee University of Information Technology, Department of Biotechnology and Bioinformatics, Waknaghat, Himachal Pradesh, India
- Smriti Mishra
- Jaypee University of Information Technology, Department of Biotechnology and Bioinformatics, Waknaghat, Himachal Pradesh, India
- Vinod Kumar Yadav
- Jaypee University of Information Technology, Department of Biotechnology and Bioinformatics, Waknaghat, Himachal Pradesh, India
- Arun Chauhan
- Jaypee University of Information Technology, Department of Biotechnology and Bioinformatics, Waknaghat, Himachal Pradesh, India
- Malay Sarkar
- Indira Gandhi Medical College, Department of Pulmonary Medicine, Shimla, India
- Chittaranjan Rout
- Jaypee University of Information Technology, Department of Biotechnology and Bioinformatics, Waknaghat, Himachal Pradesh, India
40
Wang Q, Zheng Y, Yang G, Jin W, Chen X, Yin Y. Multiscale Rotation-Invariant Convolutional Neural Networks for Lung Texture Classification. IEEE J Biomed Health Inform 2017; 22:184-195. [PMID: 28333649 DOI: 10.1109/jbhi.2017.2685586] [Citation(s) in RCA: 29] [Impact Index Per Article: 3.6] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/07/2022]
Abstract
We propose a new multiscale rotation-invariant convolutional neural network (MRCNN) model for classifying various lung tissue types on high-resolution computed tomography. MRCNN employs the Gabor local binary pattern, which introduces a desirable property for image analysis: invariance to image scale and rotation. In addition, we offer an approach to the problems caused by the imbalanced number of samples between classes in most existing works, accomplished by changing the overlap between adjacent patches. Experimental results on a public interstitial lung disease database show the superior performance of the proposed method compared with the state of the art.
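The local binary pattern underlying the Gabor-LBP descriptor thresholds each pixel's eight neighbours against the centre value and packs the results into a byte. A minimal sketch for a single 3×3 patch follows; the clockwise neighbour ordering is an arbitrary convention chosen here for illustration.

```python
def lbp_code(patch):
    """LBP code of the centre pixel of a 3x3 patch, given as a row-major
    list of 9 intensities: each neighbour >= centre contributes one bit."""
    centre = patch[4]
    # neighbour indices taken clockwise from the top-left corner
    order = (0, 1, 2, 5, 8, 7, 6, 3)
    return sum(1 << b for b, i in enumerate(order) if patch[i] >= centre)

print(lbp_code([5, 9, 2,
                1, 4, 8,
                3, 7, 6]))  # → 59
```

Sliding this over an image yields a texture map whose histogram is (by construction) invariant to monotonic intensity changes; combining it with Gabor filtering, as in MRCNN, adds the scale and rotation robustness described above.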
41
Song Y, Li Q, Zhang F, Huang H, Feng D, Wang Y, Chen M, Cai W. Dual discriminative local coding for tissue aging analysis. Med Image Anal 2017; 38:65-76. [PMID: 28282641 DOI: 10.1016/j.media.2016.10.001] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/07/2016] [Revised: 07/12/2016] [Accepted: 10/05/2016] [Indexed: 11/26/2022]
Abstract
In aging research, the morphological age of tissue helps to characterize the effects of aging on different individuals. While manual evaluation is currently used to estimate morphological age under microscopy, this is difficult and subjective due to the complex visual characteristics of tissue images. In this paper, we propose an automated method to quantify the morphological age of tissue from microscopy images. We design a new sparse representation method, namely dual discriminative local coding (DDLC), that classifies tissue images into different chronological ages. DDLC incorporates discriminative distance learning and dual-level local coding into the basic model of locality-constrained linear coding, thus achieving higher discriminative capability. The morphological age is then computed based on the classification scores. We conducted our study using the publicly available terminal bulb aging database, which has been commonly used in existing microscopy imaging research. To represent these images, we also design a highly descriptive descriptor that combines several complementary texture features extracted at two scales. Experimental results show that our method achieves significant improvement in age classification compared with existing approaches and other popular classifiers. We also present promising results in the quantification of morphological age.
Affiliation(s)
- Yang Song
- School of Information Technologies, University of Sydney, Australia
- Qing Li
- School of Information Technologies, University of Sydney, Australia
- Fan Zhang
- School of Information Technologies, University of Sydney, Australia
- Heng Huang
- Department of Computer Science and Engineering, University of Texas at Arlington, USA
- Dagan Feng
- School of Information Technologies, University of Sydney, Australia; Med-X Research Institute, Shanghai Jiaotong University, China
- Yue Wang
- Bradley Department of Electrical and Computer Engineering, Virginia Polytechnic Institute and State University, USA
- Mei Chen
- Computer Engineering Department, University at Albany, State University of New York, USA; Robotics Institute, Carnegie Mellon University, USA
- Weidong Cai
- School of Information Technologies, University of Sydney, Australia
42
Pang S, Yu Z, Orgun MA. A novel end-to-end classifier using domain transferred deep convolutional neural networks for biomedical images. COMPUTER METHODS AND PROGRAMS IN BIOMEDICINE 2017; 140:283-293. [PMID: 28254085 DOI: 10.1016/j.cmpb.2016.12.019] [Citation(s) in RCA: 40] [Impact Index Per Article: 5.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 11/22/2016] [Accepted: 12/31/2016] [Indexed: 05/10/2023]
Abstract
BACKGROUND AND OBJECTIVES Highly accurate classification of biomedical images is an essential task in the clinical diagnosis of the numerous diseases identified from those images. Traditional image classification methods that combine hand-crafted feature descriptors with various classifiers cannot effectively improve accuracy or meet the high requirements of biomedical image classification. The same holds true for artificial neural network models trained directly on limited biomedical images, or used as a black box to extract deep features learned on another, distant dataset. In this study, we propose a highly reliable and accurate end-to-end classifier for all kinds of biomedical images via deep learning and transfer learning. METHODS We first apply a domain-transferred deep convolutional neural network to build a deep model, and then develop an overall deep learning architecture trained with supervision on the raw pixels of the original biomedical images. Our model requires no manual design of the feature space, no search for an effective feature-vector classifier, and no segmentation of specific detection objects or image patches, which are the main technical difficulties in traditional image classification methods. Moreover, it does not depend on large training sets of annotated biomedical images, affordable parallel computing resources featuring GPUs, or long training times for a perfect deep model, which are the main obstacles to training deep neural networks for biomedical image classification observed in recent works. RESULTS With a simple data augmentation method and fast convergence, our algorithm achieves the best accuracy rate and outstanding classification ability for biomedical images. We have evaluated our classifier on several well-known public biomedical datasets and compared it with several state-of-the-art approaches. CONCLUSIONS We propose a robust automated end-to-end classifier for biomedical images based on a domain-transferred deep convolutional neural network model, whose highly reliable and accurate performance has been confirmed on several public biomedical image datasets.
Affiliation(s)
- Shuchao Pang
- College of Computer Science and Technology, Jilin University, Qianjin Street: 2699, Jilin Province, China; Department of Computing, Macquarie University, Sydney, NSW 2109, Australia.
- Zhezhou Yu
- College of Computer Science and Technology, Jilin University, Qianjin Street: 2699, Jilin Province, China.
- Mehmet A Orgun
- Department of Computing, Macquarie University, Sydney, NSW 2109, Australia; Faculty of Information Technology, Macau University of Science and Technology, Taipa, Macau.
43
Štajduhar I, Mamula M, Miletić D, Ünal G. Semi-automated detection of anterior cruciate ligament injury from MRI. COMPUTER METHODS AND PROGRAMS IN BIOMEDICINE 2017; 140:151-164. [PMID: 28254071] [DOI: 10.1016/j.cmpb.2016.12.006] [Citation(s) in RCA: 46] [Impact Index Per Article: 5.8]
Abstract
BACKGROUND AND OBJECTIVES A radiologist's work in detecting injuries or pathologies from radiological scans can be tiresome, time consuming and prone to error. The field of computer-aided diagnosis aims to reduce these factors by introducing a level of automation into the process. In this paper, we deal with the problem of detecting the presence of anterior cruciate ligament (ACL) injury in the human knee. We examine the possibility of aiding the diagnostic process by building a decision-support model that detects milder ACL injuries (not requiring operative treatment) and complete ACL ruptures (requiring operative treatment) from sagittal-plane magnetic resonance (MR) volumes of human knees. METHODS Histogram of oriented gradients (HOG) descriptors and gist descriptors are extracted from manually selected rectangular regions of interest enveloping the wider cruciate ligament area. The performance of two machine-learning models, a support vector machine (SVM) and a random forest, is explored with both feature extraction methods. Model generalisation properties were determined by performing multiple iterations of stratified 10-fold cross-validation whilst observing the area under the curve (AUC). RESULTS Sagittal-plane knee-joint MR data were retrospectively gathered at the Clinical Hospital Centre Rijeka, Croatia, from 2007 until 2014. The type of ACL injury was established in a double-blind fashion by comparing the retrospectively set diagnosis against the prospective opinion of another radiologist. After clean-up, the resulting dataset consisted of 917 usable labelled exam sequences of left or right knees. Experimental results suggest that a linear-kernel SVM learned from HOG descriptors has the best generalisation properties among the models compared, with an area under the curve of 0.894 for the injury-detection problem and 0.943 for the complete-rupture-detection problem. CONCLUSIONS Although performing semi-automated ACL-injury diagnosis from knee-joint MR volumes alone is a difficult problem, experimental results suggest potential clinical application of computer-aided decision making, both for detecting milder injuries and for detecting complete ruptures.
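The evaluation protocol described here (linear-kernel SVM, stratified 10-fold cross-validation, AUC) can be sketched as follows; synthetic vectors stand in for the HOG/gist descriptors, and scikit-learn is an assumed tooling choice rather than what the authors used.

```python
import numpy as np
from sklearn.model_selection import StratifiedKFold
from sklearn.svm import SVC
from sklearn.metrics import roc_auc_score

def cv_auc(features, labels, n_splits=10, seed=0):
    """Mean ROC-AUC of a linear-kernel SVM over stratified k-fold CV,
    mirroring the evaluation protocol described in the abstract."""
    skf = StratifiedKFold(n_splits=n_splits, shuffle=True, random_state=seed)
    aucs = []
    for train_idx, test_idx in skf.split(features, labels):
        clf = SVC(kernel="linear")
        clf.fit(features[train_idx], labels[train_idx])
        # decision_function gives a continuous score suitable for AUC
        scores = clf.decision_function(features[test_idx])
        aucs.append(roc_auc_score(labels[test_idx], scores))
    return float(np.mean(aucs))

# Synthetic stand-in for descriptors of healthy vs. injured knees
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0.0, 1.0, (100, 64)),
               rng.normal(0.8, 1.0, (100, 64))])
y = np.array([0] * 100 + [1] * 100)
```

Running several such cross-validations with different seeds and averaging, as the paper does, gives a more stable estimate of generalisation than a single split.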
Affiliation(s)
- Ivan Štajduhar
- Faculty of Engineering, University of Rijeka, Vukovarska 58, Rijeka, Croatia; Faculty of Engineering and Natural Sciences, Sabanci University, Üniversite Cd. No:27, Tuzla, Istanbul, Turkey.
- Mihaela Mamula
- Clinical Hospital Centre Rijeka, University of Rijeka, Krešimirova 42, Rijeka, Croatia
- Damir Miletić
- Clinical Hospital Centre Rijeka, University of Rijeka, Krešimirova 42, Rijeka, Croatia
- Gözde Ünal
- Istanbul Technical University, Department of Computer Engineering, Maslak, Sarıyer, Istanbul, Turkey
44
Three Aspects on Using Convolutional Neural Networks for Computer-Aided Detection in Medical Imaging. DEEP LEARNING AND CONVOLUTIONAL NEURAL NETWORKS FOR MEDICAL IMAGE COMPUTING 2017. [DOI: 10.1007/978-3-319-42999-1_8] [Citation(s) in RCA: 6] [Impact Index Per Article: 0.8]
45
Ma L, Liu X, Fei B. Learning with distribution of optimized features for recognizing common CT imaging signs of lung diseases. Phys Med Biol 2016; 62:612-632. [PMID: 28033116] [DOI: 10.1088/1361-6560/62/2/612] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.2]
Abstract
Common CT imaging signs of lung diseases (CISLs) are the imaging signs that frequently appear in lung CT images of patients, and they play important roles in the diagnosis of lung diseases. This paper proposes a novel learning method, namely learning with a distribution of optimized features (DOF), to effectively recognize the characteristics of CISLs. We improve classification performance by learning optimized features under different distributions. Specifically, we adopt the minimum spanning tree algorithm to capture the relationships between features and the discriminant ability of features, in order to select the most important ones. To overcome the problem of multiple distributions within one CISL, we propose a hierarchical learning method. First, we use an unsupervised learning method to cluster samples into groups based on their distributions. Second, within each group, we use a supervised learning method to train a model based on the CISL categories. Finally, we obtain classification decisions from the multiple trained models and use majority voting to reach the final decision. The proposed approach has been evaluated on a set of 511 samples extracted from human lung CT images and achieves a classification accuracy of 91.96%. The proposed DOF method is effective and can provide a useful tool for computer-aided diagnosis of lung diseases on CT images.
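The hierarchical cluster-then-classify-then-vote scheme can be sketched as follows. The abstract does not name its unsupervised and supervised components, so k-means and a linear SVM are stand-ins here, and the two-feature synthetic data (one class-bearing feature, one feature carrying three distribution modes) is purely illustrative.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.svm import LinearSVC

class _Constant:
    """Fallback predictor for a group that contains a single category."""
    def __init__(self, label):
        self.label = label
    def predict(self, X):
        return np.full(len(X), self.label)

def fit_groups(X, y, n_groups=3, seed=0):
    """Cluster samples by distribution, then train one supervised model
    per group."""
    km = KMeans(n_clusters=n_groups, n_init=10, random_state=seed).fit(X)
    models = []
    for g in range(n_groups):
        Xg, yg = X[km.labels_ == g], y[km.labels_ == g]
        models.append(LinearSVC().fit(Xg, yg) if len(np.unique(yg)) > 1
                      else _Constant(yg[0]))
    return models

def predict_vote(models, X):
    """Majority vote over the decisions of all per-group models."""
    votes = np.stack([m.predict(X) for m in models]).astype(int)
    return np.array([np.bincount(col).argmax() for col in votes.T])

# Synthetic demo: two categories whose appearance (feature 0) is consistent,
# while samples fall into three distribution modes (feature 1).
rng = np.random.default_rng(0)
n = 120
mode = rng.integers(0, 3, n)
y = rng.integers(0, 2, n)
X = np.column_stack([y * 2.0 + rng.normal(0, 0.3, n),
                     mode * 5.0 + rng.normal(0, 0.3, n)])
models = fit_groups(X, y)
```

Training one model per distribution mode lets each classifier fit a locally simpler decision boundary, while the vote aggregates their decisions into one.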
Affiliation(s)
- Ling Ma
- Department of Radiology and Imaging Sciences, Emory University School of Medicine, Atlanta, GA, USA. School of Computer Science, Beijing Institute of Technology, Beijing, People's Republic of China
46
Pulmonary Nodule Classification with Deep Convolutional Neural Networks on Computed Tomography Images. COMPUTATIONAL AND MATHEMATICAL METHODS IN MEDICINE 2016; 2016:6215085. [PMID: 28070212] [PMCID: PMC5192289] [DOI: 10.1155/2016/6215085] [Citation(s) in RCA: 62] [Impact Index Per Article: 6.9]
Abstract
Computer-aided detection (CAD) systems can assist radiologists by offering a second opinion in the early diagnosis of lung cancer. Classification and feature representation play critical roles in false-positive reduction (FPR) in lung nodule CAD. We design a deep convolutional neural network method for nodule classification, which has the advantages of automatically learned representations and strong generalization ability. A network structure specified for nodule images is proposed to recognize three types of nodules: solid, semisolid, and ground-glass opacity (GGO). The deep convolutional neural networks are trained on 62,492 region-of-interest (ROI) samples, comprising 40,772 nodules and 21,720 non-nodules, from the Lung Image Database Consortium (LIDC) database. Experimental results demonstrate the effectiveness of the proposed method in terms of sensitivity and overall accuracy, and show that it consistently outperforms the competing methods.
47
Gao M, Bagci U, Lu L, Wu A, Buty M, Shin HC, Roth H, Papadakis GZ, Depeursinge A, Summers RM, Xu Z, Mollura DJ. Holistic classification of CT attenuation patterns for interstitial lung diseases via deep convolutional neural networks. COMPUTER METHODS IN BIOMECHANICS AND BIOMEDICAL ENGINEERING-IMAGING AND VISUALIZATION 2016; 6:1-6. [PMID: 29623248] [DOI: 10.1080/21681163.2015.1124249] [Citation(s) in RCA: 84] [Impact Index Per Article: 9.3]
Abstract
Interstitial lung diseases (ILDs) involve several abnormal imaging patterns observed in computed tomography (CT) images. Accurate classification of these patterns plays a significant role in precise clinical decision making about the extent and nature of the diseases, and is therefore important for developing automated pulmonary computer-aided detection systems. Conventionally, this task relies on experts' manual identification of regions of interest (ROIs) as a prerequisite to diagnosing potential diseases. This protocol is time consuming and inhibits fully automatic assessment. In this paper, we present a new method to classify ILD imaging patterns on CT images. The main difference is that the proposed algorithm uses the entire image as a holistic input. By circumventing the prerequisite of manually input ROIs, our problem set-up is significantly more difficult than previous work but better matches the clinical workflow. Qualitative and quantitative results on a publicly available ILD database demonstrate state-of-the-art classification accuracy under the patch-based classification setting and show the potential of predicting the ILD type from holistic images.
Affiliation(s)
- Mingchen Gao
- Radiology and Imaging Sciences, Clinical Center, National Institutes of Health (NIH), Bethesda, MD, USA
- Ulas Bagci
- Center for Research in Computer Vision, University of Central Florida (UCF), Orlando, FL, USA
- Le Lu
- Radiology and Imaging Sciences, Clinical Center, National Institutes of Health (NIH), Bethesda, MD, USA
- Aaron Wu
- Radiology and Imaging Sciences, Clinical Center, National Institutes of Health (NIH), Bethesda, MD, USA
- Mario Buty
- Radiology and Imaging Sciences, Clinical Center, National Institutes of Health (NIH), Bethesda, MD, USA
- Hoo-Chang Shin
- Radiology and Imaging Sciences, Clinical Center, National Institutes of Health (NIH), Bethesda, MD, USA
- Holger Roth
- Radiology and Imaging Sciences, Clinical Center, National Institutes of Health (NIH), Bethesda, MD, USA
- Georgios Z Papadakis
- Radiology and Imaging Sciences, Clinical Center, National Institutes of Health (NIH), Bethesda, MD, USA
- Adrien Depeursinge
- Institute of Information Systems, University of Applied Sciences Western Switzerland (HES-SO), Sierre, Switzerland
- Ronald M Summers
- Radiology and Imaging Sciences, Clinical Center, National Institutes of Health (NIH), Bethesda, MD, USA
- Ziyue Xu
- Radiology and Imaging Sciences, Clinical Center, National Institutes of Health (NIH), Bethesda, MD, USA
- Daniel J Mollura
- Radiology and Imaging Sciences, Clinical Center, National Institutes of Health (NIH), Bethesda, MD, USA
48
Anthimopoulos M, Christodoulidis S, Ebner L, Christe A, Mougiakakou S. Lung Pattern Classification for Interstitial Lung Diseases Using a Deep Convolutional Neural Network. IEEE TRANSACTIONS ON MEDICAL IMAGING 2016; 35:1207-1216. [PMID: 26955021] [DOI: 10.1109/tmi.2016.2535865] [Citation(s) in RCA: 479] [Impact Index Per Article: 53.2]
Abstract
Automated tissue characterization is one of the most crucial components of a computer-aided diagnosis (CAD) system for interstitial lung diseases (ILDs). Although much research has been conducted in this field, the problem remains challenging. Deep learning techniques have recently achieved impressive results in a variety of computer vision problems, raising expectations that they can be applied in other domains, such as medical image analysis. In this paper, we propose and evaluate a convolutional neural network (CNN) designed for the classification of ILD patterns. The proposed network consists of 5 convolutional layers with 2 × 2 kernels and LeakyReLU activations, followed by average pooling with a size equal to that of the final feature maps, and three dense layers. The last dense layer has 7 outputs, equal to the number of classes considered: healthy, ground glass opacity (GGO), micronodules, consolidation, reticulation, honeycombing and a combination of GGO/reticulation. To train and evaluate the CNN, we used a dataset of 14696 image patches derived from 120 CT scans from different scanners and hospitals. To the best of our knowledge, this is the first deep CNN designed for this specific problem. A comparative analysis proved the effectiveness of the proposed CNN against previous methods on a challenging dataset. The classification performance (~85.5%) demonstrates the potential of CNNs in analyzing lung patterns. Future work includes extending the CNN to three-dimensional data provided by CT volume scans and integrating the proposed method into a CAD system that aims to provide differential diagnoses for ILDs as a supportive tool for radiologists.
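The described topology (five 2 × 2 convolutions with LeakyReLU, average pooling over the final feature maps, three dense layers ending in 7 classes) translates almost directly into code. A minimal PyTorch sketch follows; the channel widths, dense-layer sizes, and 32 × 32 grayscale input are illustrative assumptions, not values taken from the paper.

```python
import torch
import torch.nn as nn

class ILDPatchCNN(nn.Module):
    """Sketch of the described architecture: five 2x2 convolutions with
    LeakyReLU activations, global average pooling over the final feature
    maps, and three dense layers ending in the 7 ILD pattern classes."""
    def __init__(self, n_classes=7):
        super().__init__()
        chans = [1, 16, 32, 64, 128, 256]   # assumed channel widths
        convs = []
        for c_in, c_out in zip(chans[:-1], chans[1:]):
            convs += [nn.Conv2d(c_in, c_out, kernel_size=2),
                      nn.LeakyReLU(0.01)]
        self.features = nn.Sequential(*convs)
        # pooling window equal to the final feature-map size == global avg pool
        self.pool = nn.AdaptiveAvgPool2d(1)
        self.classifier = nn.Sequential(
            nn.Linear(256, 128), nn.LeakyReLU(0.01),
            nn.Linear(128, 32), nn.LeakyReLU(0.01),
            nn.Linear(32, n_classes),        # 7 outputs, one per class
        )

    def forward(self, x):
        x = self.pool(self.features(x)).flatten(1)
        return self.classifier(x)
```

Because each unpadded 2 × 2 convolution shrinks the map by one pixel and the pooling adapts to whatever remains, the same module accepts any patch size large enough to survive the five convolutions.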
49
van Tulder G, de Bruijne M. Combining Generative and Discriminative Representation Learning for Lung CT Analysis With Convolutional Restricted Boltzmann Machines. IEEE TRANSACTIONS ON MEDICAL IMAGING 2016; 35:1262-1272. [PMID: 26886968] [DOI: 10.1109/tmi.2016.2526687] [Citation(s) in RCA: 47] [Impact Index Per Article: 5.2]
Abstract
The choice of features greatly influences the performance of a tissue classification system. Despite this, many systems are built with standard, predefined filter banks that are not optimized for that particular application. Representation learning methods such as restricted Boltzmann machines may outperform these standard filter banks because they learn a feature description directly from the training data. Like many other representation learning methods, restricted Boltzmann machines are unsupervised and are trained with a generative learning objective; this allows them to learn representations from unlabeled data, but does not necessarily produce features that are optimal for classification. In this paper we propose the convolutional classification restricted Boltzmann machine, which combines a generative and a discriminative learning objective. This allows it to learn filters that are good both for describing the training data and for classification. We present experiments with feature learning for lung texture classification and airway detection in CT images. In both applications, a combination of learning objectives outperformed purely discriminative or generative learning, increasing, for instance, the lung tissue classification accuracy by 1 to 8 percentage points. This shows that discriminative learning can help an otherwise unsupervised feature learner to learn filters that are optimized for classification.
50
Shin HC, Roth HR, Gao M, Lu L, Xu Z, Nogues I, Yao J, Mollura D, Summers RM. Deep Convolutional Neural Networks for Computer-Aided Detection: CNN Architectures, Dataset Characteristics and Transfer Learning. IEEE TRANSACTIONS ON MEDICAL IMAGING 2016; 35:1285-98. [PMID: 26886976] [PMCID: PMC4890616] [DOI: 10.1109/tmi.2016.2528162] [Citation(s) in RCA: 1894] [Impact Index Per Article: 210.4]
Abstract
Remarkable progress has been made in image recognition, primarily due to the availability of large-scale annotated datasets and deep convolutional neural networks (CNNs). CNNs enable learning data-driven, highly representative, hierarchical image features from sufficient training data. However, obtaining datasets as comprehensively annotated as ImageNet remains a challenge in the medical imaging domain. There are currently three major techniques that successfully apply CNNs to medical image classification: training the CNN from scratch, using off-the-shelf pre-trained CNN features, and conducting unsupervised CNN pre-training with supervised fine-tuning. Another effective method is transfer learning, i.e., fine-tuning CNN models pre-trained on natural image datasets for medical image tasks. In this paper, we exploit three important but previously understudied factors in applying deep convolutional neural networks to computer-aided detection problems. We first explore and evaluate different CNN architectures. The studied models contain 5 thousand to 160 million parameters and vary in their numbers of layers. We then evaluate the influence of dataset scale and spatial image context on performance. Finally, we examine when and why transfer learning from pre-trained ImageNet models (via fine-tuning) can be useful. We study two specific computer-aided detection (CADe) problems, namely thoraco-abdominal lymph node (LN) detection and interstitial lung disease (ILD) classification. We achieve state-of-the-art performance on mediastinal LN detection, and report the first five-fold cross-validation classification results on predicting axial CT slices with ILD categories. Our extensive empirical evaluation, CNN model analysis and valuable insights can be extended to the design of high-performance CAD systems for other medical imaging tasks.
Affiliation(s)
- Hoo-Chang Shin
- Imaging Biomarkers and Computer-Aided Diagnosis Laboratory
- Holger R. Roth
- Imaging Biomarkers and Computer-Aided Diagnosis Laboratory
- Le Lu
- Imaging Biomarkers and Computer-Aided Diagnosis Laboratory
- National Institutes of Health Clinical Center, Clinical Image Processing Service, Radiology and Imaging Sciences Department, Bethesda, MD 20892-1182, USA
- Ziyue Xu
- Center for Infectious Disease Imaging
- Jianhua Yao
- Imaging Biomarkers and Computer-Aided Diagnosis Laboratory
- National Institutes of Health Clinical Center, Clinical Image Processing Service, Radiology and Imaging Sciences Department, Bethesda, MD 20892-1182, USA
- Ronald M. Summers
- Imaging Biomarkers and Computer-Aided Diagnosis Laboratory
- National Institutes of Health Clinical Center, Clinical Image Processing Service, Radiology and Imaging Sciences Department, Bethesda, MD 20892-1182, USA