151
Zhang H, Zhang H. LungSeek: 3D Selective Kernel residual network for pulmonary nodule diagnosis. Vis Comput 2022; 39:679-692. [PMID: 35103029 PMCID: PMC8792456 DOI: 10.1007/s00371-021-02366-1]
Abstract
Early detection and diagnosis of pulmonary nodules is the most promising way to improve the survival chances of lung cancer patients. This paper proposes an automatic pulmonary cancer diagnosis system, LungSeek. LungSeek is divided into two modules: (1) nodule detection, which finds all suspicious nodules in a computed tomography (CT) scan, and (2) nodule classification, which classifies nodules as benign or malignant. Specifically, a 3D Selective Kernel residual network (SK-ResNet), based on the Selective Kernel Network and the 3D residual network, is introduced. A deep 3D region proposal network with SK-ResNet is designed for the detection of pulmonary nodules, while a multi-scale feature fusion network is designed for nodule classification. Both networks use the SK-Net module to obtain information from different receptive fields, thereby effectively learning nodule features and improving diagnostic performance. The method was verified on the LUNA16 dataset, reaching sensitivities of 89.06%, 94.53% and 97.72% at an average of 1, 2 and 4 false positives per scan, respectively. Its performance exceeds that of state-of-the-art methods, comparable networks and experienced doctors. The method can adaptively adjust its receptive field according to the multiple scales of the input and thus better detect nodules of various sizes. In summary, the LungSeek framework, built on the 3D SK-ResNet, is proposed for nodule detection and classification from chest CT, and the experimental results demonstrate its effectiveness in the diagnosis of pulmonary nodules.
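To make the selective-kernel idea concrete, here is a minimal PyTorch sketch of a 3D selective-kernel residual block: two branches with different receptive fields whose outputs are fused by learned soft attention. The two-branch layout, channel sizes and dilation choice are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch (assumption, not the authors' code) of a 3D selective-kernel
# residual block: two branches with different receptive fields, fused by
# learned soft attention over branches, as in Selective Kernel Networks.
import torch
import torch.nn as nn

class SKBlock3D(nn.Module):
    def __init__(self, channels: int, reduction: int = 4):
        super().__init__()
        # Branch 1: plain 3x3x3; Branch 2: dilated 3x3x3 (larger receptive field).
        self.branch1 = nn.Sequential(
            nn.Conv3d(channels, channels, 3, padding=1, bias=False),
            nn.BatchNorm3d(channels), nn.ReLU(inplace=True))
        self.branch2 = nn.Sequential(
            nn.Conv3d(channels, channels, 3, padding=2, dilation=2, bias=False),
            nn.BatchNorm3d(channels), nn.ReLU(inplace=True))
        hidden = max(channels // reduction, 8)
        self.fc = nn.Sequential(nn.Linear(channels, hidden), nn.ReLU(inplace=True))
        self.attn = nn.Linear(hidden, channels * 2)  # one logit per branch/channel

    def forward(self, x):
        u1, u2 = self.branch1(x), self.branch2(x)
        s = (u1 + u2).mean(dim=(2, 3, 4))              # global context, (N, C)
        z = self.fc(s)
        a = self.attn(z).view(-1, 2, u1.size(1))       # (N, 2, C)
        w = torch.softmax(a, dim=1)                    # soft selection of branch
        out = w[:, 0, :, None, None, None] * u1 + w[:, 1, :, None, None, None] * u2
        return out + x                                 # residual connection

x = torch.randn(1, 16, 32, 32, 32)   # e.g., a 32^3 CT patch with 16 channels
print(SKBlock3D(16)(x).shape)        # torch.Size([1, 16, 32, 32, 32])
```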
Affiliation(s)
- Haowan Zhang
- College of Computer Science and Technology, Wuhan University of Science and Technology, Wuhan 430081, China
- Hubei Province Key Laboratory of Intelligent Information Processing and Real-Time Industrial System, Wuhan, China
- Hong Zhang
- College of Computer Science and Technology, Wuhan University of Science and Technology, Wuhan 430081, China
- Hubei Province Key Laboratory of Intelligent Information Processing and Real-Time Industrial System, Wuhan, China
152
Li J, Wu J, Zhao Z, Zhang Q, Shao J, Wang C, Qiu Z, Li W. Artificial intelligence-assisted decision making for prognosis and drug efficacy prediction in lung cancer patients: a narrative review. J Thorac Dis 2022; 13:7021-7033. [PMID: 35070384 PMCID: PMC8743400 DOI: 10.21037/jtd-21-864]
Abstract
Objective In this review, we present frontier studies on artificial intelligence (AI)-assisted decision-making in patients with lung cancer and summarize the latest advances, challenges and future trends in this field. Background Despite increasing cancer survival rates over the last decades, lung cancer remains one of the leading causes of death worldwide. Early diagnosis, accurate evaluation and individualized treatment are vital for improving the survival of patients with lung cancer, and decision-making based on these approaches requires an accuracy and efficiency beyond manpower. Recent advances in AI and precision medicine have provided a fertile environment for the development of AI-based models. These models have the potential to assist radiologists and oncologists in detecting lung cancer, predicting prognosis and developing personalized treatment plans for better patient outcomes. Methods We searched the literature from 2000 through July 31, 2021 in Medline/PubMed, the Web of Science, the Cochrane Library, ACM Digital Library, INSPEC and EMBASE. Keywords such as “artificial intelligence”, “AI”, “deep learning”, “lung cancer”, “NSCLC” and “SCLC” were combined to identify related articles, which were then selected by two independent authors; articles chosen by only one author were examined by another author to determine whether they were relevant and valuable. The selected articles were read by all authors and discussed to draw reliable conclusions. Conclusions AI, especially when based on deep learning and radiomics, is capable of assisting clinical decision-making in many respects, owing to its quantitative interpretation of patient information and its potential to deal with the dynamics, individual differences and heterogeneity of lung cancer. Remaining problems, such as insufficient data and poor interpretability, will hopefully be solved so that AI-based models can be put into clinical practice.
Affiliation(s)
- Jingwei Li
- Department of Respiratory and Critical Care Medicine, West China Medical School/West China Hospital, Sichuan University, Chengdu, China
- West China Medical School/West China Hospital, Sichuan University, Chengdu, China
- Jiayang Wu
- West China School of Public Health/West China Fourth Hospital, Sichuan University, Chengdu, China
- Zhehao Zhao
- West China Medical School/West China Hospital, Sichuan University, Chengdu, China
- Qiran Zhang
- West China Medical School/West China Hospital, Sichuan University, Chengdu, China
- Jun Shao
- Department of Respiratory and Critical Care Medicine, West China Medical School/West China Hospital, Sichuan University, Chengdu, China
- Chengdi Wang
- Department of Respiratory and Critical Care Medicine, West China Medical School/West China Hospital, Sichuan University, Chengdu, China
- Zhixin Qiu
- Department of Respiratory and Critical Care Medicine, West China Medical School/West China Hospital, Sichuan University, Chengdu, China
- Weimin Li
- Department of Respiratory and Critical Care Medicine, West China Medical School/West China Hospital, Sichuan University, Chengdu, China
153
Yousef R, Gupta G, Yousef N, Khari M. A holistic overview of deep learning approach in medical imaging. Multimed Syst 2022; 28:881-914. [PMID: 35079207 PMCID: PMC8776556 DOI: 10.1007/s00530-021-00884-5]
Abstract
Medical images are a rich source of invaluable information for clinicians. Recent technologies have introduced many advances for exploiting this information and using it to generate better analyses. Deep learning (DL) techniques have been widely applied to medical image analysis in computer-assisted imaging contexts, offering many solutions and improvements for radiologists and other specialists analyzing these images. In this paper, we present a survey of DL techniques used for a variety of tasks across the different medical imaging modalities, providing a critical review of recent developments in this direction. The paper is organized to convey the significant traits and concepts of deep learning, which is in turn helpful for non-experts in the medical community. We then present several deep learning applications commonly used for clinical purposes (e.g., segmentation, classification, detection) for different anatomical sites, and describe the main attributes of DL systems, such as basic architectures, data augmentation, transfer learning and feature selection methods. Medical images as inputs to deep learning architectures will be the mainstream in the coming years, and novel DL techniques are predicted to become the core of medical image analysis. We conclude by addressing research challenges and the solutions suggested for them in the literature, as well as future promises and directions for further development.
Affiliation(s)
- Rammah Yousef
- Yogananda School of AI Computer and Data Sciences, Shoolini University, Solan 173229, Himachal Pradesh, India
- Gaurav Gupta
- Yogananda School of AI Computer and Data Sciences, Shoolini University, Solan 173229, Himachal Pradesh, India
- Nabhan Yousef
- Electronics and Communication Engineering, Marwadi University, Rajkot, Gujarat, India
- Manju Khari
- Jawaharlal Nehru University, New Delhi, India
154
AFA: adversarial frequency alignment for domain generalized lung nodule detection. Neural Comput Appl 2022. [DOI: 10.1007/s00521-022-06928-9]
155
Terasaki Y, Yokota H, Tashiro K, Maejima T, Takeuchi T, Kurosawa R, Yamauchi S, Takada A, Mukai H, Ohira K, Ota J, Horikoshi T, Mori Y, Uno T, Suyari H. Multidimensional Deep Learning Reduces False-Positives in the Automated Detection of Cerebral Aneurysms on Time-Of-Flight Magnetic Resonance Angiography: A Multi-Center Study. Front Neurol 2022; 12:742126. [PMID: 35115991 PMCID: PMC8805516 DOI: 10.3389/fneur.2021.742126]
Abstract
Current deep learning-based cerebral aneurysm detection demonstrates high sensitivity but produces numerous false-positives (FPs), which hampers the clinical application of automated detection systems for time-of-flight magnetic resonance angiography. To reduce FPs while maintaining high sensitivity, we developed a multidimensional convolutional neural network (MD-CNN) designed to unite planar and stereoscopic information about aneurysms. This retrospective study enrolled time-of-flight magnetic resonance angiography images of cerebral aneurysms from three institutions from June 2006 to April 2019. In the internal test, 80% of the entire data set was used for model training and 20% for testing, while for the external tests, data from different pairs of the three institutions were used for training and the remaining institution for testing. Images containing aneurysms > 15 mm and images without aneurysms were excluded. Three deep learning models [planar information only (2D-CNN), stereoscopic information only (3D-CNN), and multidimensional information (MD-CNN)] were trained to classify whether voxels contained aneurysms, and each was evaluated on each test. The performance of each model was assessed using free-response operating characteristic curves. In total, 732 aneurysms (5.9 ± 2.5 mm) from 559 cases (327, 120, and 112 from institutes A, B, and C; 469 and 263 at 1.5T and 3.0T MRI) were included in this study. In the internal test, the highest sensitivities were 80.4%, 87.4%, and 82.5%, with 6.1, 7.1, and 5.0 FPs/case at a fixed sensitivity of 80% for the 2D-CNN, 3D-CNN, and MD-CNN, respectively. In the external test, the highest sensitivities were 82.1%, 86.5%, and 89.1%, with 5.9, 7.4, and 4.2 FPs/case, respectively. The MD-CNN is a new approach for maintaining sensitivity while simultaneously reducing FPs.
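The abstract does not spell out how the planar and stereoscopic streams are combined; a plausible minimal sketch is a two-branch network whose 2D branch sees the central slice and whose 3D branch sees the whole voxel patch, with concatenated features feeding the classifier. Everything below (branch depths, feature sizes) is an assumption for illustration, not the authors' MD-CNN.

```python
# Sketch (assumption, not the authors' MD-CNN) of fusing planar (2D) and
# stereoscopic (3D) information: a 2D branch encodes the central slice, a 3D
# branch encodes the full voxel patch, and the fused features drive the
# aneurysm-vs-background classification.
import torch
import torch.nn as nn

class MDFusionNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.enc2d = nn.Sequential(nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
                                   nn.AdaptiveAvgPool2d(1))
        self.enc3d = nn.Sequential(nn.Conv3d(1, 16, 3, padding=1), nn.ReLU(),
                                   nn.AdaptiveAvgPool3d(1))
        self.head = nn.Linear(32, 2)   # aneurysm vs. background

    def forward(self, vol):            # vol: (N, 1, D, H, W)
        mid = vol[:, :, vol.size(2) // 2]      # central slice, (N, 1, H, W)
        f2d = self.enc2d(mid).flatten(1)       # planar features
        f3d = self.enc3d(vol).flatten(1)       # stereoscopic features
        return self.head(torch.cat([f2d, f3d], dim=1))

print(MDFusionNet()(torch.randn(2, 1, 16, 32, 32)).shape)  # torch.Size([2, 2])
```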
Affiliation(s)
- Yuki Terasaki
- Graduate School of Science and Engineering, Chiba University, Chiba, Japan
- Department of EC Platform, ZOZO Technologies, Inc., Tokyo, Japan
- Hajime Yokota (correspondence)
- Department of Diagnostic Radiology and Radiation Oncology, Graduate School of Medicine, Chiba University, Chiba, Japan
- Kohei Tashiro
- Graduate School of Science and Engineering, Chiba University, Chiba, Japan
- Takuma Maejima
- Department of Radiology, Chiba University Hospital, Chiba, Japan
- Takashi Takeuchi
- Department of Radiology, Chiba University Hospital, Chiba, Japan
- Ryuna Kurosawa
- Department of Radiology, Chiba University Hospital, Chiba, Japan
- Shoma Yamauchi
- Department of Radiology, Chiba University Hospital, Chiba, Japan
- Akiyo Takada
- Department of Radiology, Chiba University Hospital, Chiba, Japan
- Hiroki Mukai
- Department of Radiology, Chiba University Hospital, Chiba, Japan
- Kenji Ohira
- Department of Radiology, Chiba University Hospital, Chiba, Japan
- Joji Ota
- Department of Radiology, Chiba University Hospital, Chiba, Japan
- Takuro Horikoshi
- Department of Radiology, Chiba University Hospital, Chiba, Japan
- Yasukuni Mori
- Graduate School of Engineering, Chiba University, Chiba, Japan
- Takashi Uno
- Department of Diagnostic Radiology and Radiation Oncology, Graduate School of Medicine, Chiba University, Chiba, Japan
- Hiroki Suyari
- Graduate School of Engineering, Chiba University, Chiba, Japan
156
Lin FY, Chang YC, Huang HY, Li CC, Chen YC, Chen CM. A radiomics approach for lung nodule detection in thoracic CT images based on the dynamic patterns of morphological variation. Eur Radiol 2022; 32:3767-3777. [PMID: 35020016 DOI: 10.1007/s00330-021-08456-x]
Abstract
OBJECTIVES To propose and evaluate a set of radiomic features, called morphological dynamics features, for pulmonary nodule detection, which are rooted in the dynamic patterns of morphological variation and require no precise lesion segmentation. MATERIALS AND METHODS Two datasets were involved, the university hospital (UH) and LIDC datasets, comprising 72 CT scans (360 nodules) and 888 CT scans (2230 nodules), respectively. Each nodule was annotated by multiple radiologists; the category of nodules identified by at least k radiologists is denoted ALk. A nodule detection algorithm, called the CAD-MD algorithm, was proposed based on the morphological dynamics radiomic features, characterizing a lesion by ten sets of the same features whose values are extracted from ten different thresholding results. Each nodule candidate was classified by a two-level classifier, comprising ten decision trees and a random forest, respectively. The CAD-MD algorithm was compared with a deep learning approach, the N-Net, on the UH dataset. RESULTS On the AL1 and AL2 subsets of the UH dataset, the AUCs of the AFROC curves were 0.777 and 0.851 for the CAD-MD algorithm and 0.478 and 0.472 for the N-Net, respectively. The CAD-MD algorithm achieved sensitivities of 84.4% and 91.4% with 2.98 and 3.69 FPs/scan, and the N-Net 74.4% and 80.7% with 3.90 and 4.49 FPs/scan, respectively. On the LIDC dataset, the CAD-MD algorithm attained sensitivities of 87.6%, 89.2%, 92.2%, and 95.0% at 4 FPs/scan for AL1-AL4, respectively. CONCLUSION The morphological dynamics radiomic features may serve as an effective set of radiomic features for lung nodule detection. KEY POINTS • Texture features vary with CT system settings such as reconstruction kernels, CT scanner models, and parameter settings. • Shape and first-order statistics have been shown to be the most robust features against variation in CT imaging parameters. • The morphological dynamics radiomic features, which mainly characterize the dynamic patterns of morphological variation, were shown to be effective for lung nodule detection.
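A minimal sketch of the two-level idea follows: the same shape features are extracted at ten intensity thresholds, one decision tree is trained per threshold, and a random forest combines the trees' outputs. The specific thresholds and shape descriptors below are assumptions, not the paper's feature set.

```python
# Sketch of a two-level "morphological dynamics" classifier (feature choices
# and HU thresholds are assumptions): shape features at ten thresholds feed
# ten decision trees, whose probabilities feed a random forest.
import numpy as np
from skimage import measure
from sklearn.tree import DecisionTreeClassifier
from sklearn.ensemble import RandomForestClassifier

def morph_features(patch: np.ndarray, thr: float) -> list:
    mask = patch >= thr
    if not mask.any():
        return [0.0, 0.0, 0.0]
    lbl = measure.label(mask)
    r = max(measure.regionprops(lbl), key=lambda p: p.area)
    return [r.area, r.eccentricity, r.solidity]   # simple shape descriptors

thresholds = np.linspace(-600, 200, 10)           # ten HU cut-offs (assumed)

def candidate_vector(patch):
    return [morph_features(patch, t) for t in thresholds]

# Toy training data: 100 random 2D patches with random labels.
rng = np.random.default_rng(0)
patches = rng.normal(-400, 300, size=(100, 32, 32))
y = rng.integers(0, 2, size=100)
X = np.array([candidate_vector(p) for p in patches])  # (100, 10, 3)

level1 = [DecisionTreeClassifier(max_depth=3).fit(X[:, i], y) for i in range(10)]
meta = np.column_stack([t.predict_proba(X[:, i])[:, 1]
                        for i, t in enumerate(level1)])
level2 = RandomForestClassifier(n_estimators=50).fit(meta, y)
print(level2.predict(meta[:5]))
```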
Affiliation(s)
- Fan-Ya Lin
- Department of Biomedical Engineering, College of Medicine and College of Engineering, National Taiwan University, No. 1, Sec. 1, Jen-Ai Road, Taipei 100, Taiwan
- Yeun-Chung Chang
- Department of Medical Imaging, National Taiwan University Hospital and National Taiwan University College of Medicine, Taipei, Taiwan
- Chia-Chen Li
- Department of Biomedical Engineering, College of Medicine and College of Engineering, National Taiwan University, No. 1, Sec. 1, Jen-Ai Road, Taipei 100, Taiwan
- Yi-Chang Chen
- Department of Biomedical Engineering, College of Medicine and College of Engineering, National Taiwan University, No. 1, Sec. 1, Jen-Ai Road, Taipei 100, Taiwan
- Department of Medical Imaging, Cardinal Tien Hospital, New Taipei City, Taiwan
- Chung-Ming Chen
- Department of Biomedical Engineering, College of Medicine and College of Engineering, National Taiwan University, No. 1, Sec. 1, Jen-Ai Road, Taipei 100, Taiwan
157
Cloud-Based Lung Tumor Detection and Stage Classification Using Deep Learning Techniques. Biomed Res Int 2022; 2022:4185835. [PMID: 35047635 PMCID: PMC8763490 DOI: 10.1155/2022/4185835]
Abstract
Artificial intelligence (AI), the Internet of Things (IoT), and cloud computing have recently become widely used in the healthcare sector and aid radiologists in better decision-making. Positron emission tomography (PET) is one of the most reliable approaches for a radiologist to diagnose many cancers, including lung tumors. In this work, we address stage classification of lung tumors, a particularly challenging task in computer-aided diagnosis; such a system can reduce heavy workloads and provide a second opinion to radiologists. We present a strategy for classifying and validating different stages of lung tumor progression, together with a deep neural model and cloud-based data collection, for categorizing the phases of pulmonary illness. The proposed system, the Cloud-based Lung Tumor Detector and Stage Classifier (Cloud-LTDSC), is a hybrid technique for PET/CT images. Cloud-LTDSC first applies an active contour model for lung tumor segmentation; a multilayer convolutional neural network (M-CNN) for classifying different stages of lung cancer is then modelled and validated on standard benchmark images. The performance of the presented technique was evaluated on 50 low-dose scans from the benchmark LIDC-IDRI dataset, as well as on lung CT DICOM images. Compared with existing techniques in the literature, the proposed method achieved good results for the evaluated performance metrics of accuracy, recall, and precision, producing superior outcomes on all of the applied dataset images under numerous aspects. Furthermore, the experiments achieve a lung tumor stage classification accuracy of 97%-99.1% (average 98.6%), which is significantly higher than existing techniques.
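The paper's exact active contour model is not specified in the abstract; as a stand-in, the sketch below segments a synthetic lesion with scikit-image's morphological Chan-Vese active contour, which needs no training and runs on a CPU.

```python
# Sketch of active-contour lung tumor segmentation (stand-in, assuming a
# morphological Chan-Vese contour; not necessarily the paper's model).
import numpy as np
from skimage.segmentation import morphological_chan_vese

rng = np.random.default_rng(1)
slice_2d = rng.normal(0.2, 0.05, (64, 64))   # synthetic CT-like background
slice_2d[24:40, 24:40] += 0.6                # synthetic bright "lesion"

# 35 iterations from a circular initial level set.
mask = morphological_chan_vese(slice_2d, 35, init_level_set="circle")
print(mask.sum(), "pixels inside the contour")
```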
158
Lin C, Zheng Y, Xiao X, Lin J. CXR-RefineDet: Single-Shot Refinement Neural Network for Chest X-Ray Radiograph Based on Multiple Lesions Detection. J Healthc Eng 2022; 2022:4182191. [PMID: 35035832 PMCID: PMC8759881 DOI: 10.1155/2022/4182191]
Abstract
The workload of radiologists has dramatically increased in the context of the COVID-19 pandemic, causing misdiagnoses and missed diagnoses. Artificial intelligence technology can assist doctors in locating and identifying lesions in medical images. To improve the accuracy of disease diagnosis in medical imaging, we propose a lung disease detection neural network that is superior to the current mainstream object detection models. By combining the advantages of the RepVGG block and the Resblock in information fusion and information extraction, we design a backbone, RRNet, with few parameters and strong feature extraction capabilities. We then propose a structure called Information Reuse, which addresses the low utilization of the original network's output features by connecting the normalized features back into the network. Combining the RRNet backbone with the improved RefineDet, we propose the overall network, called CXR-RefineDet. In extensive experiments on VinDr-CXR, the largest public lung chest radiograph detection dataset, CXR-RefineDet reaches a detection accuracy of 0.1686 mAP and an inference speed of 6.8 fps, which is better than two-stage object detection algorithms using strong backbones such as ResNet-50 and ResNet-101. In addition, the fast inference speed of CXR-RefineDet makes the practical deployment of computer-aided diagnosis systems feasible.
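For readers unfamiliar with the RepVGG block named above, here is a minimal PyTorch sketch of its training-time form: parallel 3x3, 1x1, and identity branches that can later be re-parameterized into a single 3x3 convolution for fast inference. This is a generic RepVGG-style block assumed to resemble the RRNet building block, not the paper's code.

```python
# Sketch of a RepVGG-style block (assumption: resembles the RRNet building
# block): 3x3 + 1x1 + identity branches at training time; after training,
# the three branches can be folded into one 3x3 conv for deployment.
import torch
import torch.nn as nn

class RepVGGBlock(nn.Module):
    def __init__(self, c: int):
        super().__init__()
        self.conv3 = nn.Sequential(nn.Conv2d(c, c, 3, padding=1, bias=False),
                                   nn.BatchNorm2d(c))
        self.conv1 = nn.Sequential(nn.Conv2d(c, c, 1, bias=False),
                                   nn.BatchNorm2d(c))
        self.bn_id = nn.BatchNorm2d(c)     # identity branch
        self.act = nn.ReLU(inplace=True)

    def forward(self, x):
        return self.act(self.conv3(x) + self.conv1(x) + self.bn_id(x))

x = torch.randn(1, 32, 64, 64)
print(RepVGGBlock(32)(x).shape)   # torch.Size([1, 32, 64, 64])
```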
Affiliation(s)
- Cong Lin
- College of Electronics and Information Engineering, Guangdong Ocean University, Zhanjiang 524025, China
- Yongbin Zheng
- College of Electronics and Information Engineering, Guangdong Ocean University, Zhanjiang 524025, China
- Xiuchun Xiao
- College of Electronics and Information Engineering, Guangdong Ocean University, Zhanjiang 524025, China
- Jialun Lin
- College of Biomedical Information and Engineering, Hainan Medical University, Haikou 571199, China
159
Jiang W, Zeng G, Wang S, Wu X, Xu C. Application of Deep Learning in Lung Cancer Imaging Diagnosis. J Healthc Eng 2022; 2022:6107940. [PMID: 35028122 PMCID: PMC8749371 DOI: 10.1155/2022/6107940]
Abstract
Lung cancer is among the malignant tumors with the highest fatality rates and poses a great threat to human health; it occurs mainly in smokers. With the acceleration of industrialization, environmental pollution, and population aging, the burden of lung cancer is increasing day by day. In lung cancer diagnosis, computed tomography (CT) images are a common visualization tool: CT visualizes tissues based on their absorption of X-rays. The diseased parts of the lung are collectively referred to as pulmonary nodules; nodule shapes vary, and the risk of cancer varies with the shape. Computer-aided diagnosis (CAD) is well suited to this problem because a computer vision model can quickly scan every part of a CT image with consistent quality and is not affected by fatigue or emotion. The latest advances in deep learning enable computer vision models to help doctors diagnose various diseases, and in some cases models have proven more competitive than doctors. Given this opportunity of technological development, the application of computer vision to medical imaging diagnosis has important research significance and value. In this paper, we apply a deep learning-based model to CT images of lung cancer and verify its effectiveness in the timely and accurate prediction of lung disease. The proposed model has three parts: (i) detection of lung nodules, (ii) false-positive reduction of the detected nodules to filter out "false nodules," and (iii) classification of benign and malignant lung nodules. Different network structures and loss functions were designed and realized at each stage. Additionally, to fine-tune the proposed deep learning-based model and improve its accuracy in lung nodule detection, we propose Nodule-Net, a detection network structure that combines U-Net and RPN. Experimental observations verify that the proposed scheme markedly improves the expected accuracy and precision for the underlying disease.
Affiliation(s)
- Wenfa Jiang
- Thoracic Surgery Department, Ganzhou People's Hospital, Ganzhou 341000, China
- Ganhua Zeng
- Thoracic Surgery Department, Ganzhou People's Hospital, Ganzhou 341000, China
- Shuo Wang
- Ward 1, Ganzhou Cancer Hospital, Ganzhou, Jiangxi 341500, China
- Xiaofeng Wu
- The Three Departments of Medicine, Dayu County People's Hospital, Ganzhou, Jiangxi 341500, China
- Chenyang Xu
- Thoracic Surgery Department, Ganzhou People's Hospital, Ganzhou 341000, China
160
Cheng X, Wen H, You H, Hua L, Xiaohua W, Qiuting C, Jiabao L. Recognition of Peripheral Lung Cancer and Focal Pneumonia on Chest Computed Tomography Images Based on Convolutional Neural Network. Technol Cancer Res Treat 2022; 21:15330338221085375. [PMID: 35293240 PMCID: PMC8935416 DOI: 10.1177/15330338221085375]
Abstract
Introduction: Chest computed tomography (CT) is important for the early screening of lung diseases and for clinical diagnosis, particularly during the COVID-19 pandemic. We propose a method for classifying peripheral lung cancer and focal pneumonia on chest CT images and evaluate five window settings to study their effect on the artificial intelligence processing results. Methods: We retrospectively collected CT images from 357 patients with peripheral lung cancer presenting as a solitary solid nodule or with focal pneumonia presenting as a solitary consolidation. We segmented and aligned the lung parenchyma using morphological methods and cropped the parenchymal region with its minimum 3D bounding box. Using these 3D cropped volumes of all cases, we designed a 3D neural network to classify them into the two categories. We also compared the classification results of three physicians with different experience levels on the same dataset. Results: We conducted experiments using five window settings. After cropping and alignment based on an automatic preprocessing procedure, our neural network achieved an average classification accuracy of 91.596% under 5-fold cross-validation in the full window, with an area under the curve (AUC) of 0.946. The classification accuracy and AUC were 90.48% and 0.957 for the junior physician, 94.96% and 0.989 for the intermediate physician, and 96.92% and 0.980 for the senior physician, respectively. After removing the prediction errors, the accuracy improved significantly, reaching 98.79% in the self-defined window2. Conclusion: In separating peripheral lung cancer from focal pneumonia in chest CT data, the proposed neural network achieved an accuracy competitive with that of a junior physician. In a data ablation study, the proposed 3D CNN achieved slightly higher accuracy than senior physicians on the same subset. The self-defined window2 was best for data training and evaluation.
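CT window settings map raw Hounsfield units onto a display range, so the window choice changes what the network sees. Below is the standard windowing computation; the center/width values shown are common defaults (lung and mediastinal windows), not necessarily the paper's five settings.

```python
# Standard CT windowing: clip Hounsfield units to [center - width/2,
# center + width/2] and rescale to [0, 1]. Window values are common
# defaults, used here for illustration.
import numpy as np

def apply_window(hu: np.ndarray, center: float, width: float) -> np.ndarray:
    lo, hi = center - width / 2, center + width / 2
    return (np.clip(hu, lo, hi) - lo) / (hi - lo)

hu = np.array([-1000.0, -500.0, 40.0, 400.0])      # air, lung, soft tissue, bone-ish
print(apply_window(hu, center=-600, width=1500))   # typical lung window
print(apply_window(hu, center=40, width=400))      # typical mediastinal window
```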
Affiliation(s)
- Xiaoyue Cheng
- Department of Radiology, Beijing Friendship Hospital, Capital Medical University, Beijing, China
- He Wen
- Department of Radiology, Beijing Friendship Hospital, Capital Medical University, Beijing, China
- Hao You
- Key Laboratory of Intelligent Information Processing, Institute of Computing Technology, Chinese Academy of Sciences, Beijing, China
- University of Chinese Academy of Sciences, Beijing, China
- Li Hua
- Key Laboratory of Intelligent Information Processing, Institute of Computing Technology, Chinese Academy of Sciences, Beijing, China
- Wu Xiaohua
- Department of Radiology, Beijing Friendship Hospital, Capital Medical University, Beijing, China
- Cao Qiuting
- Department of Radiology, Beijing Friendship Hospital, Capital Medical University, Beijing, China
- Liu Jiabao
- Department of Radiology, Beijing Friendship Hospital, Capital Medical University, Beijing, China
161
Hossain MB, Iqbal SMHS, Islam MM, Akhtar MN, Sarker IH. Transfer learning with fine-tuned deep CNN ResNet50 model for classifying COVID-19 from chest X-ray images. Inform Med Unlocked 2022; 30:100916. [PMID: 35342787 PMCID: PMC8933872 DOI: 10.1016/j.imu.2022.100916]
Abstract
COVID-19 cases are putting pressure on healthcare systems all around the world. Owing to the lack of available testing kits, it is impractical to screen every patient with a respiratory ailment using traditional methods (RT-PCR); in addition, the tests have a high turnaround time and low sensitivity. Detecting suspected COVID-19 infections from chest X-rays can help isolate high-risk people before the RT-PCR test. Most healthcare systems already have X-ray equipment, and because most current X-ray systems are already computerized, there is no need to transport samples. Using chest X-rays to prioritize the selection of patients for subsequent RT-PCR testing is the motivation of this work. Transfer learning (TL) with fine-tuning of the deep convolutional neural network ResNet50 is proposed to classify COVID-19 patients from the COVID-19 Radiography Database. Ten distinct sets of pre-trained weights, trained on a variety of large-scale datasets using approaches such as supervised and self-supervised learning, were utilized. Our proposed iNat2021_Mini_SwAV_1k model, pre-trained on the iNat2021 Mini dataset using the SwAV algorithm, outperforms the other ResNet50 TL models. For COVID instances in the two-class (COVID and Normal) classification, our work achieved 99.17% validation accuracy, 99.95% training accuracy, 99.31% precision, 99.03% sensitivity, and a 99.17% F1-score. Some domain-adapted (ImageNet_ChestX-ray14) and in-domain (CheXpert, ChestX-ray14) models also looked promising for medical image classification, scoring significantly higher than the other models.
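The fine-tuning recipe described above is straightforward to sketch with torchvision: load a pre-trained ResNet50, replace the classification head with a two-class layer, and train with a smaller learning rate on the backbone. The local weights path is a placeholder assumption; torchvision's ImageNet weights stand in for the SwAV/iNat2021 checkpoint.

```python
# Sketch of ResNet50 transfer learning with fine-tuning (two classes:
# COVID vs. Normal). The SwAV/iNat2021 checkpoint path is hypothetical;
# torchvision's ImageNet weights are used as a fallback illustration.
import torch
import torch.nn as nn
from torchvision import models

model = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V2)
# state = torch.load("inat2021_mini_swav_1k.pth")   # hypothetical local weights
# model.load_state_dict(state, strict=False)

model.fc = nn.Linear(model.fc.in_features, 2)       # new two-class head

# Fine-tuning: small learning rate for the backbone, larger for the new head.
optimizer = torch.optim.Adam([
    {"params": [p for n, p in model.named_parameters() if not n.startswith("fc")],
     "lr": 1e-5},
    {"params": model.fc.parameters(), "lr": 1e-3},
])
criterion = nn.CrossEntropyLoss()

logits = model(torch.randn(4, 3, 224, 224))          # toy batch
loss = criterion(logits, torch.tensor([0, 1, 0, 1]))
loss.backward()
optimizer.step()
print(float(loss))
```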
Affiliation(s)
- Md Belal Hossain
- Department of Computer Science and Engineering, Pabna University of Science and Technology, Pabna 6600, Bangladesh
- S M Hasan Sazzad Iqbal
- Department of Computer Science and Engineering, Pabna University of Science and Technology, Pabna 6600, Bangladesh
- Md Monirul Islam
- Department of Textile Engineering, Uttara University, Dhaka 1230, Bangladesh
- Md Nasim Akhtar
- Department of Computer Science and Engineering, Dhaka University of Engineering & Technology, Gazipur 1707, Bangladesh
- Iqbal H Sarker
- Department of Computer Science and Engineering, Chittagong University of Engineering & Technology, Chittagong 4349, Bangladesh
162
Bhatt SD, Soni HB. Improving Classification Accuracy of Pulmonary Nodules using Simplified Deep Neural Network. Open Biomed Eng J 2021. [DOI: 10.2174/1874120702115010180]
Abstract
Background:
Lung cancer is among the major causes of death in the world, and its early detection is a major challenge. This has encouraged the development of Computer-Aided Detection (CAD) systems.
Objectives:
We designed a CAD system to improve performance in detecting and classifying pulmonary nodules. Though the system will not replace radiologists, it can help them diagnose lung cancer more accurately.
Methods:
The architecture comprises two steps: in the first, CT scans are pre-processed and candidates are extracted using the positive and negative annotations provided with the LUNA16 dataset; the second consists of three different neural networks that independently classify the pulmonary nodules obtained from the first step, namely a 2D Convolutional Neural Network (2D-CNN), Visual Geometry Group-16 (VGG-16), and a simplified VGG-16.
Results:
The classification accuracies achieved for the 2D-CNN, VGG-16, and simplified VGG-16 were 99.12%, 98.17%, and 99.60%, respectively.
Conclusion:
The integration of deep learning techniques with machine learning and image processing can serve as a good means of extracting pulmonary nodules and classifying them with improved accuracy. Based on these results, it can be concluded that transfer learning improves system performance. In addition, performance improves when the CAD system is designed with the size of the dataset and the available computing power in mind.
163
Tagi M, Tajiri M, Hamada Y, Wakata Y, Shan X, Ozaki K, Kubota M, Amano S, Sakaue H, Suzuki Y, Hirose J. Accuracy of an artificial intelligence-based model for estimating leftover liquid food in hospitals: validation study. JMIR Form Res 2021; 6:e35991. [PMID: 35536638 PMCID: PMC9131145 DOI: 10.2196/35991]
Abstract
Background An accurate evaluation of the nutritional status of malnourished hospitalized patients, who are at higher risk of complications such as frailty or disability, is crucial. Visual estimation of food intake is a popular way to evaluate nutritional status in clinical environments, but from the perspective of accurate measurement such methods are unreliable. Objective The accuracy of estimating leftover liquid food in hospitals using an artificial intelligence (AI)-based model was compared to that of visual estimation. Methods The accuracy of the AI-based model (AI estimation) was compared to that of the visual estimation method for thin rice gruel as a staple food and fermented milk and peach juice as side dishes. A total of 576 images of liquid food (432 of thin rice gruel, 72 of fermented milk, and 72 of peach juice) were used. The mean absolute error, root mean squared error, and coefficient of determination (R2) were used as accuracy metrics, and the Welch t test and confusion matrices were used to examine the difference in mean absolute error between AI and visual estimation. Results The mean absolute errors obtained through AI estimation were 0.63 for fermented milk, 0.25 for peach juice, and 0.85 for the total; these were significantly smaller than those obtained by visual estimation, which were 1.40 (P<.001) for fermented milk, 0.90 (P<.001) for peach juice, and 1.03 (P=.009) for the total. By contrast, the mean absolute error for thin rice gruel obtained using AI estimation (0.99) did not differ significantly from that of visual estimation (0.99). The confusion matrix for thin rice gruel showed variation in the distribution of errors, indicating that the AI estimation errors were biased toward cases with many leftovers. The root mean squared error for all liquid foods tended to be smaller for AI estimation than for visual estimation. Additionally, R2 for fermented milk and peach juice tended to be larger for AI estimation than for visual estimation, and R2 for the total was equal in accuracy between the two approaches. Conclusions The AI estimation approach achieved a smaller mean absolute error and root mean squared error and a larger R2 than visual estimation for the side dishes, and achieved a smaller mean absolute error and root mean squared error with a similar R2 for the total. AI estimation measures liquid food intake in hospitals more precisely than visual estimation, but its accuracy in estimating staple food leftovers requires improvement.
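The three reported metrics are standard and easy to reproduce; a toy computation with scikit-learn (numbers invented for illustration):

```python
# MAE, RMSE, and R2 as used in the study above, on toy leftover scores.
import numpy as np
from sklearn.metrics import mean_absolute_error, mean_squared_error, r2_score

true_leftover = np.array([0, 2, 5, 7, 10, 3])   # e.g., a 0-10 leftover scale
ai_estimate   = np.array([0, 3, 5, 6, 9, 3])

mae = mean_absolute_error(true_leftover, ai_estimate)
rmse = np.sqrt(mean_squared_error(true_leftover, ai_estimate))
r2 = r2_score(true_leftover, ai_estimate)
print(f"MAE={mae:.2f} RMSE={rmse:.2f} R2={r2:.2f}")
```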
Affiliation(s)
- Masato Tagi
- Department of Medical Informatics, Institute of Biomedical Sciences, Tokushima University Graduate School, Tokushima, Japan
- Mari Tajiri
- Division of Nutrition, Tokushima University Hospital, Tokushima, Japan
- Yasuhiro Hamada
- Department of Therapeutic Nutrition, Institute of Biomedical Sciences, Tokushima University Graduate School, Tokushima, Japan
- Yoshifumi Wakata
- Health Information Management Center, National Hospital Organization Kyushu Medical Center, Fukuoka, Japan
- Medical Information Technology Center, Tokushima University Hospital, Tokushima, Japan
- Xiao Shan
- Medical Information Technology Center, Tokushima University Hospital, Tokushima, Japan
- Kazumi Ozaki
- Department of Oral Health Care Promotion, Institute of Biomedical Sciences, Tokushima University Graduate School, Tokushima, Japan
- Hiroshi Sakaue
- Division of Nutrition, Tokushima University Hospital, Tokushima, Japan
- Department of Nutrition and Metabolism, Institute of Biomedical Sciences, Tokushima University Graduate School, Tokushima, Japan
- Yoshiko Suzuki
- Division of Nutrition, Tokushima University Hospital, Tokushima, Japan
- Jun Hirose
- Department of Medical Informatics, Institute of Biomedical Sciences, Tokushima University Graduate School, Tokushima, Japan
164
Cui X, Zheng S, Heuvelmans MA, Du Y, Sidorenkov G, Fan S, Li Y, Xie Y, Zhu Z, Dorrius MD, Zhao Y, Veldhuis RNJ, de Bock GH, Oudkerk M, van Ooijen PMA, Vliegenthart R, Ye Z. Performance of a deep learning-based lung nodule detection system as an alternative reader in a Chinese lung cancer screening program. Eur J Radiol 2021; 146:110068. [PMID: 34871936 DOI: 10.1016/j.ejrad.2021.110068]
Abstract
OBJECTIVE To evaluate the performance of a deep learning-based computer-aided detection (DL-CAD) system in a Chinese low-dose CT (LDCT) lung cancer screening program. MATERIALS AND METHODS One hundred and eighty individuals with a lung nodule on their baseline LDCT lung cancer screening scan were randomly mixed with screenees without nodules in a 1:1 ratio (total: 360 individuals). All scans were assessed by double reading and subsequently processed by an academic DL-CAD system. The findings of double reading and the DL-CAD system were then evaluated by two senior radiologists to derive the reference standard. Detection performance was evaluated by the free-response operating characteristic curve, sensitivity and false-positive (FP) rate. The senior radiologists categorized nodules according to nodule diameter, type (solid, part-solid, non-solid) and Lung-RADS. RESULTS The reference standard consisted of 262 nodules ≥ 4 mm in 196 individuals; 359 findings were considered false positives. The DL-CAD system achieved a sensitivity of 90.1% with 1.0 FP/scan for the detection of lung nodules regardless of size or type, whereas double reading had a sensitivity of 76.0% with 0.04 FP/scan (P = 0.001). The sensitivity for nodules ≥ 4 to ≤ 6 mm was significantly higher with DL-CAD than with double reading (86.3% vs. 58.9%, respectively; P = 0.001). Sixty-three nodules were identified only by the DL-CAD system, and 27 nodules only by double reading. The DL-CAD system reached performance similar to double reading for Lung-RADS 3 (94.3% vs. 90.0%, P = 0.549) and Lung-RADS 4 nodules (100.0% vs. 97.0%, P = 1.000), but showed higher sensitivity for Lung-RADS 2 (86.2% vs. 65.4%, P < 0.001). CONCLUSIONS The DL-CAD system can accurately detect pulmonary nodules on LDCT with an acceptable false-positive rate of 1 nodule per scan, and has higher detection performance than double reading. This DL-CAD system may assist radiologists in nodule detection in LDCT lung cancer screening.
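The operating point reported above ("sensitivity at 1.0 FP/scan") comes from sweeping a detection-confidence threshold along the FROC curve. A minimal sketch of that computation, assuming detections have already been matched to ground-truth nodules:

```python
# Sketch of a FROC-style operating point: sensitivity at a fixed number of
# false positives per scan, given scored detections already matched to
# ground truth (matching itself is assumed done upstream).
import numpy as np

def sensitivity_at_fp_rate(scores, is_tp, n_nodules, n_scans, fp_per_scan=1.0):
    order = np.argsort(-np.asarray(scores))        # descending confidence
    tp = np.cumsum(np.asarray(is_tp)[order])
    fp = np.cumsum(~np.asarray(is_tp)[order])
    ok = fp <= fp_per_scan * n_scans               # points within the FP budget
    return tp[ok].max() / n_nodules if ok.any() else 0.0

# Toy example: 6 detections over 2 scans containing 3 true nodules.
scores = [0.95, 0.90, 0.80, 0.70, 0.60, 0.40]
is_tp  = [True, False, True, False, True, False]
print(sensitivity_at_fp_rate(scores, is_tp, n_nodules=3, n_scans=2))  # 1.0
```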
Affiliation(s)
- Xiaonan Cui
- Tianjin Medical University Cancer Institute and Hospital, National Clinical Research Centre of Cancer, Key Laboratory of Cancer Prevention and Therapy, Department of Radiology, Tianjin, People's Republic of China; University of Groningen, University Medical Center Groningen, Department of Radiology, Groningen, the Netherlands
- Sunyi Zheng
- Westlake University, Artificial Intelligence and Biomedical Image Analysis Lab, School of Engineering, Hangzhou, People's Republic of China; Institute of Advanced Technology, Westlake Institute for Advanced Study, Hangzhou, People's Republic of China; University of Groningen, University Medical Center Groningen, Department of Radiation Oncology, Groningen, the Netherlands
- Marjolein A Heuvelmans
- University of Groningen, University Medical Center Groningen, Department of Epidemiology, Groningen, the Netherlands
- Yihui Du
- University of Groningen, University Medical Center Groningen, Department of Epidemiology, Groningen, the Netherlands
- Grigory Sidorenkov
- University of Groningen, University Medical Center Groningen, Department of Epidemiology, Groningen, the Netherlands
- Shuxuan Fan
- Tianjin Medical University Cancer Institute and Hospital, National Clinical Research Centre of Cancer, Key Laboratory of Cancer Prevention and Therapy, Department of Radiology, Tianjin, People's Republic of China
- Yanju Li
- Tianjin Medical University Cancer Institute and Hospital, National Clinical Research Centre of Cancer, Key Laboratory of Cancer Prevention and Therapy, Department of Radiology, Tianjin, People's Republic of China
- Yongsheng Xie
- Tianjin Medical University Cancer Institute and Hospital, National Clinical Research Centre of Cancer, Key Laboratory of Cancer Prevention and Therapy, Department of Radiology, Tianjin, People's Republic of China
- Zhongyuan Zhu
- Tianjin Medical University Cancer Institute and Hospital, National Clinical Research Centre of Cancer, Key Laboratory of Cancer Prevention and Therapy, Department of Radiology, Tianjin, People's Republic of China
- Monique D Dorrius
- University of Groningen, University Medical Center Groningen, Department of Radiology, Groningen, the Netherlands
- Yingru Zhao
- Tianjin Medical University Cancer Institute and Hospital, National Clinical Research Centre of Cancer, Key Laboratory of Cancer Prevention and Therapy, Department of Radiology, Tianjin, People's Republic of China
- Raymond N J Veldhuis
- University of Twente, Faculty of Electrical Engineering Mathematics and Computer Science, the Netherlands
- Geertruida H de Bock
- University of Groningen, University Medical Center Groningen, Department of Epidemiology, Groningen, the Netherlands
- Matthijs Oudkerk
- University of Groningen, Faculty of Medical Sciences, the Netherlands
- Peter M A van Ooijen
- University of Groningen, University Medical Center Groningen, Department of Radiation Oncology, Groningen, the Netherlands; University of Groningen, University Medical Center Groningen, Machine Learning Lab, Data Science Center in Health, Groningen, the Netherlands
- Rozemarijn Vliegenthart
- University of Groningen, University Medical Center Groningen, Department of Radiology, Groningen, the Netherlands
- Zhaoxiang Ye
- Tianjin Medical University Cancer Institute and Hospital, National Clinical Research Centre of Cancer, Key Laboratory of Cancer Prevention and Therapy, Department of Radiology, Tianjin, People's Republic of China
165
Application of Multislice Spiral CT Imaging Technology in the Diagnosis of Patients with Chest Sarcoidosis. J Healthc Eng 2021; 2021:2887639. [PMID: 34858562 PMCID: PMC8632395 DOI: 10.1155/2021/2887639]
Abstract
Objective To study the qualitative value of multislice spiral CT (MSCT) dynamic enhancement scanning for solitary pulmonary nodules (SPN) of the chest. Methods Forty cases of chest nodules (25 malignant, 8 inflammatory, and 7 benign) were first scanned to determine the extent of each nodule. At injection rates of 5 ml/s and 3 ml/s, dynamic enhancement CT scans were performed at the center of the nodule, and the CT values before and after SPN enhancement, the peak enhancement (PH) and the peak time (PT) were recorded. Results Malignant nodules were mainly moderately enhanced, with 80% (20/25) showing a net enhancement between 20 and 60 Hu and 20% (5/25) >60 Hu or <20 Hu; their peak enhancement and peak time were (31.31 ± 10.62) Hu and 45 s, respectively, and the time-density curve (T-DC) showed a slowly rising pattern. Inflammatory nodules were mainly markedly enhanced, with a net increase >40 Hu, a peak enhancement of (49.25 ± 12.44) Hu, and peak times of 80 s and 140 s; their curves characteristically rose, fell, and then rose again. Conclusion MSCT dynamic enhancement scanning reflects the dynamic blood-flow characteristics of chest nodules and can be used to noninvasively evaluate and diagnose SPN.
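Peak enhancement and peak time fall directly out of the time-density curve; a toy computation (invented sample values chosen to echo the malignant-nodule figures above):

```python
# Peak enhancement (PH) and peak time (PT) from a time-density curve,
# as in the dynamic enhancement protocol above (toy values).
import numpy as np

times = np.array([0, 15, 30, 45, 60, 80, 100, 140])   # s after injection
hu    = np.array([35, 42, 55, 66, 64, 60, 58, 57])    # nodule CT value, Hu

baseline = hu[0]
peak_idx = np.argmax(hu)
ph = hu[peak_idx] - baseline        # peak enhancement above baseline, Hu
pt = times[peak_idx]                # peak time, s
print(f"PH = {ph} Hu at PT = {pt} s")   # PH = 31 Hu at PT = 45 s
```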
166
Kavithaa G, Balakrishnan P, Yuvaraj SA. Lung Cancer Detection and Improving Accuracy Using Linear Subspace Image Classification Algorithm. Interdiscip Sci 2021; 13:779-786. [PMID: 34351570 DOI: 10.1007/s12539-021-00468-x]
Abstract
The ability to identify lung cancer at an early stage is critical because it can help patients live longer. However, predicting the affected area while diagnosing cancer is a huge challenge. An intelligent computer-aided diagnostic system can detect and diagnose lung cancer by locating the damaged region. The proposed Linear Subspace Image Classification Algorithm (LSICA) classifies images in a linear subspace. The methodology accurately identifies the damaged region through three steps: image enhancement, segmentation, and classification. A spatial image clustering technique is used to quickly segment and identify the affected area in the image, and LSICA then determines the accuracy value of the affected region for classification purposes. The result is a lung cancer detection system with classification-dependent image processing for lung cancer CT imaging, proposed here to overcome the deficiencies of existing detection processes. All programs were implemented in MATLAB. The proposed system is designed to easily identify the affected region and, with the help of the classification technique, to produce enhanced and more accurate results.
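The abstract does not give LSICA's exact formulation; one common realization of linear-subspace classification, shown below as a hedged sketch, fits a PCA subspace per class and assigns a test image to the class whose subspace reconstructs it with the smallest error.

```python
# Sketch in the spirit of linear-subspace classification (an assumption,
# not necessarily LSICA's formulation): one PCA subspace per class,
# classification by smallest reconstruction error.
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(2)
X0 = rng.normal(0.0, 1.0, (50, 64))      # flattened "benign" patches
X1 = rng.normal(0.8, 1.0, (50, 64))      # flattened "cancer" patches

subspaces = [PCA(n_components=5).fit(X) for X in (X0, X1)]

def classify(x: np.ndarray) -> int:
    errs = []
    for pca in subspaces:
        recon = pca.inverse_transform(pca.transform(x[None]))[0]
        errs.append(np.linalg.norm(x - recon))   # distance to class subspace
    return int(np.argmin(errs))

print(classify(X1[0]), classify(X0[0]))   # typically prints: 1 0
```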
Affiliation(s)
- G Kavithaa
- Department of Electronics and Communication Engineering, Government College of Engineering, Salem, Tamil Nadu, India
- P Balakrishnan
- Malla Reddy Engineering College for Women (Autonomous), Hyderabad 500100, India
- S A Yuvaraj
- Department of ECE, GRT Institute of Engineering and Technology, Tiruttani, Tamil Nadu, India
167
Lei Y, Zhang J, Shan H. Strided Self-Supervised Low-Dose CT Denoising for Lung Nodule Classification. Phenomics 2021; 1:257-268. [PMID: 36939784 PMCID: PMC9590543 DOI: 10.1007/s43657-021-00025-y]
Abstract
Lung nodule classification based on low-dose computed tomography (LDCT) images has attracted major attention thanks to the reduced radiation dose and its potential for the early diagnosis of lung cancer in LDCT-based screening. However, LDCT images suffer from severe noise, which largely influences the performance of lung nodule classification. Current methods combining the denoising and classification tasks typically require corresponding normal-dose CT (NDCT) images as supervision for the denoising task, which is impractical in the context of clinical diagnosis using LDCT. To jointly train these two tasks in a unified framework without NDCT images, this paper introduces a novel self-supervised method, termed strided Noise2Neighbors (SN2N), for blind medical image denoising and lung nodule classification, where the supervision is generated from the noisy input images themselves. More specifically, SN2N constructs the supervision information for LDCT denoising from each pixel's neighbors, eliminating the need for NDCT images. The method enables joint training of LDCT denoising and lung nodule classification by using a self-supervised loss for denoising and a cross-entropy loss for classification. Extensive experimental results on the Mayo LDCT dataset demonstrate that SN2N achieves performance competitive with supervised learning methods that use paired NDCT images as supervision. Moreover, results on the LIDC-IDRI dataset show that the joint training of LDCT denoising and lung nodule classification significantly improves the performance of LDCT-based lung nodule classification.
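The core trick of neighbor-supervised denoising can be sketched in a few lines: split the noisy image into two strided sub-images of interleaved pixels and train the denoiser to map one onto the other, so the target is itself noisy but the clean signal is shared. This is a simplified illustration of the idea, not the authors' exact SN2N.

```python
# Simplified sketch of strided neighbor supervision (assumption: not the
# authors' exact SN2N): denoise sub-image A so it matches raw sub-image B;
# no normal-dose CT is involved anywhere.
import torch
import torch.nn as nn

denoiser = nn.Sequential(nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
                         nn.Conv2d(16, 1, 3, padding=1))

noisy = torch.randn(4, 1, 64, 64)        # LDCT-like noisy patches
sub_a = noisy[:, :, 0::2, 0::2]          # strided neighbor grid A
sub_b = noisy[:, :, 1::2, 1::2]          # strided neighbor grid B

loss = nn.functional.mse_loss(denoiser(sub_a), sub_b)  # self-supervised loss
loss.backward()
print(float(loss))
```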
Affiliation(s)
- Yiming Lei
- Shanghai Key Laboratory of Intelligent Information Processing, School of Computer Science, Fudan University, Shanghai 200433, China
- Junping Zhang
- Shanghai Key Laboratory of Intelligent Information Processing, School of Computer Science, Fudan University, Shanghai 200433, China
- Hongming Shan
- Institute of Science and Technology for Brain-Inspired Intelligence and MOE Frontiers Center for Brain Science, Fudan University, Shanghai 200433, China
- Shanghai Center for Brain Science and Brain-Inspired Technology, Shanghai 201210, China
- Key Laboratory of Computational Neuroscience and Brain-Inspired Intelligence (Fudan University), Ministry of Education, Shanghai 201210, China
168
Naik A, Edla DR, Dharavath R. Prediction of Malignancy in Lung Nodules Using Combination of Deep, Fractal, and Gray-Level Co-Occurrence Matrix Features. Big Data 2021; 9:480-498. [PMID: 34191590 DOI: 10.1089/big.2020.0190]
Abstract
Accurate detection of malignant tumors on lung computed tomography scans is crucial for the early diagnosis of lung cancer and hence the faster recovery of patients. Several deep learning methodologies have been proposed for lung tumor detection, especially the convolutional neural network (CNN). However, as a CNN may lose some of the spatial relationships between features, we combine texture features, such as fractal features and gray-level co-occurrence matrix (GLCM) features, with the CNN features to improve the accuracy of tumor detection. Our framework has two advantages. First, it fuses CNN features with hand-crafted fractal and GLCM features to capture spatial information. Second, we reduce the overfitting effect by replacing the softmax layer with a support vector machine classifier. Experiments show that fractal and GLCM texture features, when concatenated with deep features extracted from a DenseNet architecture, yield an accuracy of 95.42%, a sensitivity of 97.49%, a specificity of 93.97%, and a positive predictive value of 95.96%, with an area-under-curve score of 0.95.
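The hand-crafted half of such a pipeline is easy to sketch: GLCM texture descriptors from scikit-image, concatenated with (placeholder) deep features and classified by an SVM in place of a softmax layer. The random stand-ins for DenseNet features and labels below are assumptions for illustration only.

```python
# Sketch of GLCM texture features + (placeholder) deep features + SVM.
import numpy as np
from skimage.feature import graycomatrix, graycoprops
from sklearn.svm import SVC

def glcm_features(patch_u8: np.ndarray) -> np.ndarray:
    g = graycomatrix(patch_u8, distances=[1], angles=[0, np.pi / 2],
                     levels=256, symmetric=True, normed=True)
    props = ["contrast", "homogeneity", "energy", "correlation"]
    return np.hstack([graycoprops(g, p).ravel() for p in props])

rng = np.random.default_rng(3)
patches = rng.integers(0, 256, size=(40, 32, 32), dtype=np.uint8)
deep_feats = rng.normal(size=(40, 8))       # stand-in for DenseNet features
X = np.hstack([np.array([glcm_features(p) for p in patches]), deep_feats])
y = rng.integers(0, 2, size=40)             # benign/malignant labels (toy)

clf = SVC(kernel="rbf").fit(X, y)           # SVM replaces the softmax layer
print(clf.score(X, y))
```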
Affiliation(s)
- Amrita Naik
- Department of Computer Science and Engineering, National Institute of Technology, Ponda, Goa, India
- Damodar Reddy Edla
- Department of Computer Science and Engineering, National Institute of Technology, Ponda, Goa, India
- Ramesh Dharavath
- Department of Computer Science and Engineering, Indian Institute of Technology Dhanbad, Dhanbad, Jharkhand, India
169
Guo Z, Zhao L, Yuan J, Yu H. MSANet: Multi-Scale Aggregation Network Integrating Spatial and Channel Information for Lung Nodule Detection. IEEE J Biomed Health Inform 2021; 26:2547-2558. [PMID: 34847048 DOI: 10.1109/jbhi.2021.3131671]
Abstract
Improving the detection accuracy of pulmonary nodules plays an important role in the diagnosis and early treatment of lung cancer. In this paper, a multiscale aggregation network (MSANet), which integrates spatial and channel information, is proposed for 3D pulmonary nodule detection. MSANet is designed to improve the network's ability to extract information and to realize multiscale information fusion. First, multiscale aggregation interaction strategies are used to extract multilevel features and avoid the feature fusion interference caused by large resolution differences; these strategies effectively integrate the contextual information of adjacent resolutions and help detect nodules of different sizes. Second, the feature extraction module is designed around efficient channel attention and self-calibrated convolutions (ECA-SC) to enhance inter-channel and local spatial information. ECA-SC also recalibrates the features during feature extraction, enabling adaptive learning of feature weights and enhancing the information extraction ability. Third, the distribution ranking (DR) loss is introduced as the classification loss function to address the imbalance between positive and negative samples. The proposed MSANet is comprehensively compared with other pulmonary nodule detection networks on the LUNA16 dataset, obtaining a CPM score of 0.920. The results show improved sensitivity for detecting pulmonary nodules and an effectively reduced average number of false positives. The proposed method has advantages in pulmonary nodule detection and can effectively assist radiologists.
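As a reference for the channel-attention half of ECA-SC, here is a minimal sketch of an efficient channel attention (ECA) unit: global average pooling followed by a 1D convolution across channels, yielding per-channel gates without dimensionality reduction. The 3D variant and kernel size are assumptions for CT volumes, not the paper's exact module.

```python
# Sketch of an ECA unit (3D variant assumed for CT volumes; not the
# paper's exact ECA-SC module): per-channel gates from a 1D conv over
# the globally pooled channel descriptor.
import torch
import torch.nn as nn

class ECA3D(nn.Module):
    def __init__(self, k: int = 3):
        super().__init__()
        self.conv = nn.Conv1d(1, 1, kernel_size=k, padding=k // 2, bias=False)

    def forward(self, x):                       # x: (N, C, D, H, W)
        s = x.mean(dim=(2, 3, 4))               # global context, (N, C)
        w = torch.sigmoid(self.conv(s.unsqueeze(1))).squeeze(1)  # gates, (N, C)
        return x * w[:, :, None, None, None]    # recalibrated features

x = torch.randn(2, 32, 8, 16, 16)
print(ECA3D()(x).shape)   # torch.Size([2, 32, 8, 16, 16])
```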
170
Takata T, Sasaki H, Yamano H, Honma M, Shikano M. Study on Horizon Scanning with a Focus on the Development of AI-Based Medical Products: Citation Network Analysis. Ther Innov Regul Sci 2021; 56:263-275. [PMID: 34811711 PMCID: PMC8854249 DOI: 10.1007/s43441-021-00355-z]
Abstract
Horizon scanning for innovative technologies that might be applied to medical products requires new assessment approaches to prepare regulators, allowing earlier patient access to products and an improved benefit/risk ratio. The purpose of this study is to confirm that citation network analysis and text mining of bibliographic information can be used for horizon scanning of the rapidly developing field of AI-based medical technologies and can extract the latest research trends from the field. We classified 119,553 publications obtained from the Science Citation Index (SCI) using the keywords "conventional," "machine-learning," or "deep-learning" and grouped them into 36 clusters, which revealed the academic landscape of AI applications. We also confirmed that one or two closely related clusters included the key articles on AI-based medical image analysis, suggesting that clusters specific to the technology were appropriately formed. Significant research progress could be detected as a quick increase in constituent papers and in the number of citations of a cluster's hub papers. We then tracked recent research trends by re-analyzing "young" clusters based on the average publication year of each cluster's constituent papers. The latest topics in AI-based medical technologies include electrocardiograms and electroencephalograms (ECG/EEG), human activity recognition, natural language processing of clinical records, and drug discovery. We could detect a rapid increase in research activity on AI-based ECG/EEG a few years before the US FDA issued its draft guidance. Our study shows that citation network analysis and text mining of scientific papers can be a useful, objective tool for horizon scanning of rapidly developing AI-based medical technologies.
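The clustering-plus-average-year workflow can be sketched on a toy citation graph with networkx; the graph, years, and community-detection algorithm below are illustrative assumptions, not the study's SCI data or exact method.

```python
# Sketch of the citation-network step on a toy graph (assumption: greedy
# modularity communities as the clustering method): cluster the graph,
# then rank clusters by average publication year to surface "young" topics.
import networkx as nx
from networkx.algorithms.community import greedy_modularity_communities

G = nx.DiGraph()
G.add_edges_from([("p1", "p2"), ("p3", "p2"), ("p4", "p3"),
                  ("p5", "p6"), ("p7", "p6"), ("p7", "p5")])
year = {"p1": 2018, "p2": 2015, "p3": 2019, "p4": 2020,
        "p5": 2021, "p6": 2020, "p7": 2021}

clusters = greedy_modularity_communities(G.to_undirected())
for i, c in enumerate(clusters):
    avg = sum(year[p] for p in c) / len(c)      # cluster "age"
    print(f"cluster {i}: {sorted(c)} avg_year={avg:.1f}")
```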
Collapse
Affiliation(s)
- Takuya Takata
- Faculty of Pharmaceutical Sciences, Tokyo University of Science, Tokyo, Japan
| | - Hajime Sasaki
- Institute for Future Initiatives, The University of Tokyo, Tokyo, Japan
| | - Hiroko Yamano
- Institute for Future Initiatives, The University of Tokyo, Tokyo, Japan
| | - Masashi Honma
- Department of Pharmacy, The University of Tokyo Hospital, Tokyo, Japan
| | - Mayumi Shikano
- Faculty of Pharmaceutical Sciences, Tokyo University of Science, Tokyo, Japan.
| |
Collapse
|
171
|
An ensemble-based convolutional neural network model powered by a genetic algorithm for melanoma diagnosis. Neural Comput Appl 2021. [DOI: 10.1007/s00521-021-06655-7] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/26/2022]
Abstract
Melanoma is one of the main causes of cancer-related deaths. The development of new computational methods as a tool for assisting doctors can lead to early diagnosis and effectively reduce mortality. In this work, we propose a convolutional neural network architecture for melanoma diagnosis inspired by ensemble learning and genetic algorithms. The architecture is designed by a genetic algorithm that finds the optimal members of the ensemble. Additionally, the abstract features of all models are merged and, as a result, additional prediction capabilities are obtained. The diagnosis is achieved by combining all individual predictions. In this manner, the training process is implicitly regularized, showing better convergence, mitigating overfitting, and improving generalization performance. The aim is to find the models that best contribute to the ensemble. The proposed approach also leverages data augmentation, transfer learning, and a segmentation algorithm. The segmentation can be performed without training and on a central processing unit, thus saving a significant amount of computational power while maintaining competitive performance. To evaluate the proposal, an extensive experimental study was conducted on sixteen skin image datasets, where state-of-the-art models were significantly outperformed. This study corroborated that genetic algorithms can be employed to effectively find suitable architectures for the diagnosis of melanoma, achieving overall 11% and 13% better prediction performance than the closest model on dermoscopic and non-dermoscopic images, respectively. Finally, the proposal was implemented in a web application in order to assist dermatologists and can be consulted at http://skinensemble.com.
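To make the ensemble-selection idea concrete, here is a minimal genetic-algorithm sketch that selects ensemble members by the validation accuracy of a majority vote; it is mutation-only for brevity, and all names and hyperparameters are assumptions rather than the paper's design.

```python
import numpy as np

rng = np.random.default_rng(0)

def ensemble_accuracy(mask, preds, y):
    """Majority vote of the selected members; preds has shape (n_models, n_samples)."""
    if mask.sum() == 0:
        return 0.0
    vote = (preds[mask.astype(bool)].mean(axis=0) > 0.5).astype(int)
    return (vote == y).mean()

def genetic_selection(preds, y, pop=20, gens=50, p_mut=0.1):
    n_models = preds.shape[0]
    population = rng.integers(0, 2, size=(pop, n_models))     # binary membership masks
    for _ in range(gens):
        fitness = np.array([ensemble_accuracy(m, preds, y) for m in population])
        parents = population[np.argsort(fitness)[-pop // 2:]]  # keep the fittest half
        children = parents[rng.integers(0, len(parents), pop // 2)].copy()
        flip = rng.random(children.shape) < p_mut               # bit-flip mutation
        children[flip] ^= 1
        population = np.vstack([parents, children])
    return max(population, key=lambda m: ensemble_accuracy(m, preds, y))
```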
Collapse
|
172
|
Ali S, Li J, Pei Y, Khurram R, Rehman KU, Rasool AB. State-of-the-Art Challenges and Perspectives in Multi-Organ Cancer Diagnosis via Deep Learning-Based Methods. Cancers (Basel) 2021; 13:5546. [PMID: 34771708 PMCID: PMC8583666 DOI: 10.3390/cancers13215546] [Citation(s) in RCA: 10] [Impact Index Per Article: 2.5] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/26/2021] [Revised: 10/28/2021] [Accepted: 10/29/2021] [Indexed: 11/16/2022] Open
Abstract
Cancer remains one of the most common causes of death in the world. It consists of abnormally expanding cell populations that threaten survival. Hence, the timely detection of cancer is important for improving patient survival rates. In this survey, we analyze state-of-the-art approaches for multi-organ cancer detection, segmentation, and classification. This article reviews present-day work in the breast, brain, lung, and skin cancer domains. Afterwards, we analytically compare the existing approaches to provide insight into ongoing trends and future challenges. The review also provides an objective description of widely employed imaging techniques, imaging modalities, gold-standard databases, and the related literature on each cancer from 2016 to 2021. The main goal is to systematically examine cancer diagnosis systems for the multiple organs of the human body mentioned above. Our critical survey analysis reveals that more than 70% of deep learning researchers attain promising results with CNN-based approaches for the early diagnosis of multi-organ cancer. The survey includes an extensive discussion along with current research challenges, possible solutions, and prospects. This research will provide novice researchers with valuable information to deepen their knowledge and room to develop new, robust computer-aided diagnosis systems that assist health professionals in bridging the gap between rapid diagnosis and treatment planning for cancer patients.
Collapse
Affiliation(s)
- Saqib Ali
- Faculty of Information Technology, Beijing University of Technology, Beijing 100124, China; (S.A.); (J.L.); (K.u.R.)
| | - Jianqiang Li
- Faculty of Information Technology, Beijing University of Technology, Beijing 100124, China; (S.A.); (J.L.); (K.u.R.)
| | - Yan Pei
- Computer Science Division, University of Aizu, Aizuwakamatsu 965-8580, Japan
| | - Rooha Khurram
- Beijing Key Laboratory for Green Catalysis and Separation, Department of Chemistry and Chemical Engineering, Beijing University of Technology, Beijing 100124, China;
| | - Khalil ur Rehman
- Faculty of Information Technology, Beijing University of Technology, Beijing 100124, China; (S.A.); (J.L.); (K.u.R.)
| | - Abdul Basit Rasool
- Research Institute for Microwave and Millimeter-Wave (RIMMS), National University of Sciences and Technology (NUST), Islamabad 44000, Pakistan;
| |
Collapse
|
173
|
Zhang Z, Yu S, Qin W, Liang X, Xie Y, Cao G. Self-supervised CT super-resolution with hybrid model. Comput Biol Med 2021; 138:104775. [PMID: 34666243 DOI: 10.1016/j.compbiomed.2021.104775] [Citation(s) in RCA: 4] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/28/2021] [Revised: 08/14/2021] [Accepted: 08/17/2021] [Indexed: 12/19/2022]
Abstract
Software-based methods can improve CT spatial resolution without changing the scanner hardware or increasing the radiation dose to the object. In this work, we aim to develop a deep learning (DL)-based CT super-resolution (SR) method that can reconstruct low-resolution (LR) sinograms into high-resolution (HR) CT images. We mathematically analyzed the imaging processes in the CT SR problem and synergistically integrated the SR model in the sinogram domain and the deblurring model in the image domain into a hybrid model (SADIR). SADIR incorporates CT domain knowledge and is unrolled into a DL network (SADIR-Net). SADIR-Net is a self-supervised network that can be trained and tested with a single sinogram. SADIR-Net was evaluated through SR CT imaging of a Catphan700 physical phantom and a real porcine phantom, and its performance was compared to other state-of-the-art (SotA) DL-based CT SR methods. On both phantoms, SADIR-Net obtains the highest information fidelity criterion (IFC) and structural similarity index (SSIM) and the lowest root-mean-square error (RMSE). As to the modulation transfer function (MTF), SADIR-Net also obtains the best result, improving MTF50% by 69.2% and MTF10% by 69.5% compared with FBP. Moreover, the spatial resolutions at MTF50% and MTF10% from SADIR-Net reach 91.3% and 89.3% of the counterparts reconstructed from the HR sinogram with FBP. The results show that SADIR-Net provides performance comparable to the other SotA methods for CT SR reconstruction, especially in the case of extremely limited training data or even no data at all. Thus, the SADIR method could find use in improving CT resolution without changing the scanner hardware or increasing the radiation dose to the object.
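For reference, the SSIM and RMSE figures reported above can be computed as in this brief sketch; the file names and [0, 1] normalization are assumptions.

```python
import numpy as np
from skimage.metrics import structural_similarity

def rmse(ref: np.ndarray, img: np.ndarray) -> float:
    return float(np.sqrt(np.mean((ref - img) ** 2)))

# Hypothetical reconstructions, normalized to [0, 1].
ref = np.load("hr_fbp_reference.npy")   # HR-sinogram FBP reference (assumed file)
sr = np.load("sadir_output.npy")        # super-resolved result (assumed file)

ssim = structural_similarity(ref, sr, data_range=1.0)
print(f"SSIM = {ssim:.4f}, RMSE = {rmse(ref, sr):.4f}")
```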
Collapse
Affiliation(s)
- Zhicheng Zhang
- Department of Radiation Oncology, Stanford University, Stanford, 94305-5847, CA, USA; Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen, Guangdong 518055, China
| | - Shaode Yu
- College of Information and Communication Engineering, Communication University of China, Beijing 100024, China
| | - Wenjian Qin
- Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen, Guangdong 518055, China
| | - Xiaokun Liang
- Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen, Guangdong 518055, China
| | - Yaoqin Xie
- Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen, Guangdong 518055, China.
| | - Guohua Cao
- Virginia Polytechnic Institute & State University, Blacksburg, VA 24061, USA.
| |
Collapse
|
174
|
Hai J, Qiao K, Chen J, Liang N, Zhang L, Yan B. Multi-view features integrated 2D\3D Net for glomerulopathy histologic types classification using ultrasound images. COMPUTER METHODS AND PROGRAMS IN BIOMEDICINE 2021; 212:106439. [PMID: 34695734 DOI: 10.1016/j.cmpb.2021.106439] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 07/22/2019] [Accepted: 09/18/2021] [Indexed: 06/13/2023]
Abstract
BACKGROUND AND OBJECTIVE Early diagnosis and rational therapeutics of glomerulopathy can control progression and improve prognosis. The gold standard for the diagnosis of glomerulopathy is pathology by renal biopsy, which is invasive and has many contraindications. We aim to use renal ultrasonography for the histologic classification of glomerulopathy. METHODS Ultrasonography can present multi-view sections of the kidney; thus, we propose a multi-view and cross-domain integration strategy (CD-ConcatNet) to obtain more effective features and improve diagnostic accuracy. We apply 2D group convolutions and 3D convolutions to process multiple 2D ultrasound images and extract multi-view features from renal ultrasound images. Cross-domain concatenation at each spatial resolution of the feature maps is applied for more informative feature learning. RESULTS A total of 76 adult patients were collected and divided into a training dataset (56 cases with 515 images) and a validation dataset (20 cases with 180 images). We obtained a best mean accuracy of 0.83 and an AUC of 0.8667 on the validation dataset. CONCLUSION Comparison experiments demonstrate that the designed CD-ConcatNet achieves the best classification performance and is markedly superior for histologic type diagnosis. The results also prove that the integration of multi-view ultrasound images is beneficial for histologic classification and that ultrasound images can indeed provide discriminating information for histologic diagnosis.
Collapse
Affiliation(s)
- Jinjin Hai
- Henan Key Laboratory of Imaging and Intelligent Processing, PLA Strategy Support Force Information Engineering University, China
| | - Kai Qiao
- Henan Key Laboratory of Imaging and Intelligent Processing, PLA Strategy Support Force Information Engineering University, China
| | - Jian Chen
- Henan Key Laboratory of Imaging and Intelligent Processing, PLA Strategy Support Force Information Engineering University, China
| | - Ningning Liang
- Henan Key Laboratory of Imaging and Intelligent Processing, PLA Strategy Support Force Information Engineering University, China
| | - Lijie Zhang
- Department of Nephrology in First Affiliated Hospital of Zhengzhou University, China
| | - Bin Yan
- Henan Key Laboratory of Imaging and Intelligent Processing, PLA Strategy Support Force Information Engineering University, China.
| |
Collapse
|
175
|
He T, Yao J, Tian W, Yi Z, Tang W, Guo J. Cephalometric landmark detection by considering translational invariance in the two-stage framework. Neurocomputing 2021. [DOI: 10.1016/j.neucom.2021.08.042] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/20/2022]
|
176
|
Ding Y, Zheng W, Geng J, Qin Z, Choo KKR, Qin Z, Hou X. MVFusFra: A Multi-View Dynamic Fusion Framework for Multimodal Brain Tumor Segmentation. IEEE J Biomed Health Inform 2021; 26:1570-1581. [PMID: 34699375 DOI: 10.1109/jbhi.2021.3122328] [Citation(s) in RCA: 6] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/09/2022]
Abstract
Medical practitioners generally rely on multimodal brain images, for example the information from the axial, coronal, and sagittal views, to inform brain tumor diagnosis. Hence, to further utilize the 3D information embedded in such datasets, this paper proposes a multi-view dynamic fusion framework (hereafter referred to as MVFusFra) to improve the performance of brain tumor segmentation. The proposed framework consists of three key building blocks. First, a multi-view deep neural network architecture, which comprises multiple learning networks for segmenting the brain tumor from different views, with each deep neural network corresponding to multi-modal brain images from one single view. Second, a dynamic decision fusion method, which fuses the segmentation results from multiple views into an integrated result; two fusion methods (voting and weighted averaging) are used to evaluate the fusion process. Third, a multi-view fusion loss (comprising segmentation loss, transition loss, and decision loss) is proposed to facilitate the training of the multi-view learning networks, so as to ensure consistency in appearance and space for both the fused segmentation results and the training of the learning networks. We evaluate the performance of MVFusFra on the BRATS 2015 and BRATS 2018 datasets. Findings from the evaluations suggest that fusion results from multiple views achieve better performance than segmentation results from a single view, implying the effectiveness of the proposed multi-view fusion loss. A comparative summary also shows that MVFusFra achieves better segmentation performance and efficiency than other competing approaches.
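To make the decision-fusion step concrete, a minimal sketch of voting and weighted averaging over per-view probability maps follows; the array shapes and view weights are assumptions, not the paper's values.

```python
import numpy as np

# Hypothetical per-view foreground probability maps, shape (n_views, D, H, W).
view_probs = np.random.rand(3, 64, 128, 128)

# Majority voting: binarize each view, then take the per-voxel majority.
votes = (view_probs > 0.5).astype(int)
fused_vote = (votes.sum(axis=0) >= 2).astype(int)

# Weighted averaging: weight each view (e.g., by validation Dice), then threshold.
w = np.array([0.4, 0.35, 0.25])                        # assumed per-view weights, sum to 1
fused_avg = (np.tensordot(w, view_probs, axes=1) > 0.5).astype(int)
```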
Collapse
|
177
|
Dang VN, Galati F, Cortese R, Di Giacomo G, Marconetto V, Mathur P, Lekadir K, Lorenzi M, Prados F, Zuluaga MA. Vessel-CAPTCHA: An efficient learning framework for vessel annotation and segmentation. Med Image Anal 2021; 75:102263. [PMID: 34731770 DOI: 10.1016/j.media.2021.102263] [Citation(s) in RCA: 9] [Impact Index Per Article: 2.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/29/2021] [Revised: 10/04/2021] [Accepted: 10/06/2021] [Indexed: 12/22/2022]
Abstract
Deep learning techniques for 3D brain vessel image segmentation have not been as successful as in the segmentation of other organs and tissues. This can be explained by two factors. First, deep learning techniques tend to show poor performance in segmenting objects that are small relative to the full image. Second, due to the complexity of vascular trees and the small size of vessels, it is challenging to obtain the amount of annotated training data typically needed by deep learning methods. To address these problems, we propose a novel annotation-efficient deep learning vessel segmentation framework. The framework avoids pixel-wise annotations, requiring only weak patch-level labels to discriminate between vessel and non-vessel 2D patches in the training set, in a setup similar to the CAPTCHAs used to differentiate humans from bots in web applications. The user-provided weak annotations are used for two tasks: (1) to synthesize pixel-wise pseudo-labels for vessels and background in each patch, which are used to train a segmentation network, and (2) to train a classifier network. The classifier network makes it possible to generate additional weak patch labels, further reducing the annotation burden, and acts as a second opinion for poor-quality images. We use this framework for the segmentation of the cerebrovascular tree in Time-of-Flight angiography (TOF) and Susceptibility-Weighted Images (SWI). The results show that the framework achieves state-of-the-art accuracy while reducing annotation time by ∼77% with respect to learning-based segmentation methods that use pixel-wise labels for training.
Collapse
Affiliation(s)
- Vien Ngoc Dang
- Data Science Department, EURECOM, Sophia Antipolis, France; Artificial Intelligence in Medicine Lab, Facultat de Matemátiques I Informática, Universitat de Barcelona, Spain
| | | | - Rosa Cortese
- Queen Square MS Centre, Department of Neuroinflammation, UCL Queen Square Institute of Neurology, Faculty of Brain Sciences, University College London, UK; Department of Medicine, Surgery and Neuroscience, University of Siena, Italy
| | - Giuseppe Di Giacomo
- Data Science Department, EURECOM, Sophia Antipolis, France; Politecnico di Torino, Turin, Italy
| | - Viola Marconetto
- Data Science Department, EURECOM, Sophia Antipolis, France; Politecnico di Torino, Turin, Italy
| | - Prateek Mathur
- Data Science Department, EURECOM, Sophia Antipolis, France
| | - Karim Lekadir
- Artificial Intelligence in Medicine Lab, Facultat de Matemátiques I Informática, Universitat de Barcelona, Spain
| | - Marco Lorenzi
- Université Côte d'Azur, Inria Sophia Antipolis, Epione Research Group, Valbonne, France
| | - Ferran Prados
- Centre for Medical Image Computing, Department of Medical Physics and Bioengineering, University College London, UK; Queen Square MS Centre, Department of Neuroinflammation, UCL Queen Square Institute of Neurology, Faculty of Brain Sciences, University College London, UK; National Institute for Health Research, University College London Hospitals, Biomedical Research Centre, London, UK; e-health Center, Universitat Oberta de Catalunya, Barcelona, Spain
| | | |
Collapse
|
178
|
Hallinan JTPD, Feng M, Ng D, Sia SY, Tiong VTY, Jagmohan P, Makmur A, Thian YL. Detection of Pneumothorax with Deep Learning Models: Learning From Radiologist Labels vs Natural Language Processing Model Generated Labels. Acad Radiol 2021; 29:1350-1358. [PMID: 34649780 DOI: 10.1016/j.acra.2021.09.013] [Citation(s) in RCA: 4] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/29/2021] [Revised: 08/25/2021] [Accepted: 09/05/2021] [Indexed: 11/19/2022]
Abstract
RATIONALE AND OBJECTIVES To compare the performance of pneumothorax deep learning detection models trained with radiologist versus natural language processing (NLP) labels on the NIH ChestX-ray14 dataset. MATERIALS AND METHODS The ChestX-ray14 dataset consisted of 112,120 frontal chest radiographs with 5,302 positive and 106,818 negative labels for pneumothorax derived using NLP (dataset A). All 112,120 radiographs were also inspected by 4 radiologists, leaving a visually confirmed set of 5,138 positive and 104,751 negative for pneumothorax (dataset B). Datasets A and B were used independently to train 3 convolutional neural network (CNN) architectures (ResNet-50, DenseNet-121 and EfficientNetB3). Each model's area under the receiver operating characteristic curve (AUC) was evaluated with the official NIH test set and an external test set of 525 chest radiographs from our emergency department. RESULTS There were significantly higher AUCs on the NIH internal test set for CNN models trained with radiologist vs NLP labels across all architectures. AUCs for the NLP/radiologist-label models were 0.838 (95%CI:0.830, 0.846)/0.881 (95%CI:0.873,0.887) for ResNet-50 (p = 0.034), 0.839 (95%CI:0.831,0.847)/0.880 (95%CI:0.873,0.887) for DenseNet-121, and 0.869 (95%CI: 0.863,0.876)/0.943 (95%CI: 0.939,0.946) for EfficientNetB3 (p ≤0.001). Evaluation with the external test set also showed higher AUCs (p <0.001) for the CNN models trained with radiologist versus NLP labels across all architectures. The AUCs for the NLP/radiologist-label models were 0.686 (95%CI:0.632,0.740)/0.806 (95%CI:0.758,0.854) for ResNet-50, 0.736 (95%CI:0.686, 0.787)/0.871 (95%CI:0.830,0.912) for DenseNet-121, and 0.822 (95%CI: 0.775,0.868)/0.915 (95%CI: 0.882,0.948) for EfficientNetB3. CONCLUSION We demonstrated improved performance and generalizability of pneumothorax detection deep learning models trained with radiologist labels compared to models trained with NLP labels.
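The head-to-head comparison above reduces to computing AUCs for the two models against the same test labels; a minimal sketch with assumed input files follows.

```python
import numpy as np
from sklearn.metrics import roc_auc_score

# Hypothetical test-set arrays: ground truth plus scores from the two models.
y_true = np.load("test_labels.npy")                # assumed file, 0/1 pneumothorax labels
p_nlp = np.load("scores_nlp_labels.npy")           # model trained on NLP-derived labels
p_rad = np.load("scores_radiologist_labels.npy")   # model trained on radiologist labels

print("AUC (NLP labels):        ", roc_auc_score(y_true, p_nlp))
print("AUC (radiologist labels):", roc_auc_score(y_true, p_rad))
```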
Collapse
Affiliation(s)
| | - Mengling Feng
- Saw Swee Hock School of Public Health, Institute of Data Science, Yong Loo Lin School of Medicine, National University Health System, National University of Singapore, Singapore
| | - Dianwen Ng
- Department of Diagnostic Imaging, National University Hospital, Singapore; Saw Swee Hock School of Public Health, Institute of Data Science, Yong Loo Lin School of Medicine, National University Health System, National University of Singapore, Singapore
| | - Soon Yiew Sia
- Department of Diagnostic Imaging, National University Hospital, Singapore
| | | | - Pooja Jagmohan
- Department of Diagnostic Imaging, National University Hospital, Singapore
| | - Andrew Makmur
- Department of Diagnostic Imaging, National University Hospital, Singapore
| | - Yee Liang Thian
- Department of Diagnostic Imaging, National University Hospital, Singapore
| |
Collapse
|
179
|
Zheng S, Shen Z, Pei C, Ding W, Lin H, Zheng J, Pan L, Zheng B, Huang L. Interpretative computer-aided lung cancer diagnosis: From radiology analysis to malignancy evaluation. COMPUTER METHODS AND PROGRAMS IN BIOMEDICINE 2021; 210:106363. [PMID: 34478913 DOI: 10.1016/j.cmpb.2021.106363] [Citation(s) in RCA: 11] [Impact Index Per Article: 2.8] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 01/26/2021] [Accepted: 08/13/2021] [Indexed: 06/13/2023]
Abstract
BACKGROUND AND OBJECTIVE Computer-aided diagnosis (CAD) systems promote accurate diagnosis and reduce the burden on radiologists. A CAD system for lung cancer diagnosis includes nodule candidate detection and nodule malignancy evaluation. Recently, deep learning-based pulmonary nodule detection has reached performance ready for clinical application. However, deep learning-based nodule malignancy evaluation depends on heuristic inference from low-dose computed tomography (LDCT) volumes to malignancy probabilities and lacks clinical grounding. METHODS In this paper, we propose a joint radiology analysis and malignancy evaluation network called R2MNet, which evaluates pulmonary nodule malignancy through the analysis of radiological characteristics. Radiological features are extracted as channel descriptors to highlight the regions of the input volume that are critical for nodule malignancy evaluation. In addition, for model explanation, we propose channel-dependent activation mapping (CDAM) to visualize features and shed light on the decision process of deep neural networks (DNNs). RESULTS Experimental results on the Lung Image Database Consortium image collection (LIDC-IDRI) dataset demonstrate that the proposed method achieved areas under the curve (AUC) of 96.27% and 97.52% for nodule radiology analysis and nodule malignancy evaluation, respectively. In addition, explanations based on CDAM features showed that the shape and density of nodule regions are two critical factors influencing whether a nodule is inferred to be malignant, which is consistent with the diagnostic reasoning of experienced radiologists. CONCLUSION The network inference process conforms to the diagnostic procedure of radiologists and increases the confidence in the evaluation results by incorporating radiology analysis into nodule malignancy evaluation. Moreover, model interpretation with CDAM features sheds light on the regions DNNs focus on when estimating nodule malignancy probabilities.
Collapse
Affiliation(s)
- Shaohua Zheng
- College of Physics and Information Engineering, Fuzhou University, Fuzhou 350108, China
| | - Zhiqiang Shen
- College of Physics and Information Engineering, Fuzhou University, Fuzhou 350108, China
| | - Chenhao Pei
- College of Physics and Information Engineering, Fuzhou University, Fuzhou 350108, China
| | - Wangbin Ding
- College of Physics and Information Engineering, Fuzhou University, Fuzhou 350108, China
| | - Haojin Lin
- College of Physics and Information Engineering, Fuzhou University, Fuzhou 350108, China
| | - Jiepeng Zheng
- Thoracic Department, Fujian Medical University Union Hospital, Fuzhou 350001, China
| | - Lin Pan
- College of Physics and Information Engineering, Fuzhou University, Fuzhou 350108, China.
| | - Bin Zheng
- Thoracic Department, Fujian Medical University Union Hospital, Fuzhou 350001, China.
| | - Liqin Huang
- College of Physics and Information Engineering, Fuzhou University, Fuzhou 350108, China
| |
Collapse
|
180
|
Fu Y, Xue P, Li N, Zhao P, Xu Z, Ji H, Zhang Z, Cui W, Dong E. Fusion of 3D lung CT and serum biomarkers for diagnosis of multiple pathological types on pulmonary nodules. COMPUTER METHODS AND PROGRAMS IN BIOMEDICINE 2021; 210:106381. [PMID: 34496322 DOI: 10.1016/j.cmpb.2021.106381] [Citation(s) in RCA: 4] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 12/10/2020] [Accepted: 08/24/2021] [Indexed: 06/13/2023]
Abstract
BACKGROUND AND OBJECTIVE Current research on pulmonary nodules has mainly focused on the binary classification of benign and malignant nodules. In clinical applications, however, it is not enough to judge whether pulmonary nodules are benign or malignant. In this paper, we propose a fusion model, based on the Lung Information Dataset Containing 3D CT Images and Serum Biomarkers (LIDCCISB) that we constructed, to accurately diagnose pulmonary nodule types: squamous cell carcinoma, adenocarcinoma, inflammation, and other benign diseases. METHODS Using the single-modality information of lung 3D CT images and the single-modality information of lung tumor biomarkers (LTBs) in LIDCCISB, a multi-resolution 3D multi-classification deep learning model (Mr-Mc) and a multi-layer perceptron machine learning model (MLP) were constructed to diagnose multiple pathological types of pulmonary nodules, respectively. To make comprehensive use of the dual-modality information of CT images and LTBs, we used transfer learning to fuse Mr-Mc and MLP into a multimodal information fusion model that can classify multiple pathological types of benign and malignant pulmonary nodules. RESULTS Experiments showed that the Mr-Mc model achieves an average accuracy of 0.805 and the MLP model an average accuracy of 0.887. The fusion model was verified on a dataset of 64 samples and achieved an average accuracy of 0.906. CONCLUSIONS This is the first study to use CT images and LTBs simultaneously to diagnose multiple pathological types of benign and malignant pulmonary nodules, and the experiments show that the approach is well suited to practical clinical applications.
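A schematic of this kind of image-plus-biomarker late fusion is sketched below; the feature sizes, class count, and layer choices are assumptions, not the paper's architecture.

```python
import torch
import torch.nn as nn

class LateFusionNet(nn.Module):
    """Fuse 3D-CNN image features with a biomarker MLP (illustrative sketch)."""
    def __init__(self, n_biomarkers: int = 6, n_classes: int = 4):
        super().__init__()
        self.cnn = nn.Sequential(                       # tiny 3D image branch
            nn.Conv3d(1, 16, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool3d(1), nn.Flatten(),      # -> (B, 16)
        )
        self.mlp = nn.Sequential(                       # biomarker branch
            nn.Linear(n_biomarkers, 16), nn.ReLU(),
        )
        self.head = nn.Linear(16 + 16, n_classes)       # joint classifier

    def forward(self, ct: torch.Tensor, markers: torch.Tensor) -> torch.Tensor:
        z = torch.cat([self.cnn(ct), self.mlp(markers)], dim=1)
        return self.head(z)

# Example: a batch of 2 CT patches (1 x 32^3) with 6 serum biomarkers each.
logits = LateFusionNet()(torch.randn(2, 1, 32, 32, 32), torch.randn(2, 6))
```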
Collapse
Affiliation(s)
- Yu Fu
- School of Mechanical, Electrical & Information Engineering, Shandong University, Weihai 264209, China
| | - Peng Xue
- School of Mechanical, Electrical & Information Engineering, Shandong University, Weihai 264209, China
| | - Ning Li
- Department of Radiology, Shandong Provincial Hospital Affiliated to Shandong First Medical University, Jinan 250021, China
| | - Peng Zhao
- Department of Radiology, Shandong Provincial Hospital Affiliated to Shandong First Medical University, Jinan 250021, China
| | - Zhuodong Xu
- Department of Radiology, Shandong Provincial Hospital Affiliated to Shandong First Medical University, Jinan 250021, China
| | - Huizhong Ji
- School of Mechanical, Electrical & Information Engineering, Shandong University, Weihai 264209, China
| | - Zhili Zhang
- School of Mechanical, Electrical & Information Engineering, Shandong University, Weihai 264209, China
| | - Wentao Cui
- School of Mechanical, Electrical & Information Engineering, Shandong University, Weihai 264209, China.
| | - Enqing Dong
- School of Mechanical, Electrical & Information Engineering, Shandong University, Weihai 264209, China.
| |
Collapse
|
181
|
Ghosh R. Determining Top Fully Connected Layer's Hidden Neuron Count for Transfer Learning, Using Knowledge Distillation: a Case Study on Chest X-Ray Classification of Pneumonia and COVID-19. J Digit Imaging 2021; 34:1349-1358. [PMID: 34590199 PMCID: PMC8480458 DOI: 10.1007/s10278-021-00518-2] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/28/2020] [Revised: 02/20/2021] [Accepted: 09/14/2021] [Indexed: 11/30/2022] Open
Abstract
Deep convolutional neural network (CNN)-assisted classification of images is one of the most discussed topics of recent years, and continuous innovation in neural network architectures is making models more accurate and efficient. But training a neural network from scratch is very time-consuming and requires sophisticated computational equipment and power. So, using a pre-trained neural network as a feature extractor for an image classification task, i.e., "transfer learning," is a very popular approach that saves time and computational power in the practical use of CNNs. In this paper, an efficient way of building a full model from any pre-trained model, with high accuracy and low memory, is proposed using knowledge distillation. The distilled knowledge of the last layer of the pre-trained network is passed through fully connected layers with varying hidden-neuron counts, followed by a softmax layer. The accuracies of the student networks are slightly lower than those of the full models, but the accuracy of a student model clearly indicates the accuracy of the corresponding full network. In this way, the best hidden-neuron count for the dense layer of that pre-trained network, giving the best accuracy without overfitting, can be found in less time. Here, VGG16 and VGG19 (pre-trained on the "ImageNet" dataset) are tested on chest X-rays (pneumonia and COVID-19). In finding the best hidden-neuron count, the approach saves nearly 44 min for the VGG19 feature extractor and 36 min 37 s for VGG16.
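The distillation step rests on the standard knowledge-distillation loss, sketched here under common defaults; the temperature and weighting are assumptions, not the paper's settings.

```python
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels,
                      T: float = 4.0, alpha: float = 0.7):
    """Standard KD loss: soft-target KL term plus hard-label cross-entropy."""
    soft = F.kl_div(
        F.log_softmax(student_logits / T, dim=1),
        F.softmax(teacher_logits / T, dim=1),
        reduction="batchmean",
    ) * (T * T)                                    # rescale gradient magnitude
    hard = F.cross_entropy(student_logits, labels)
    return alpha * soft + (1 - alpha) * hard

# Example with random logits for a 3-class problem.
loss = distillation_loss(torch.randn(8, 3), torch.randn(8, 3),
                         torch.randint(0, 3, (8,)))
```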
Collapse
Affiliation(s)
- Ritwick Ghosh
- Department of Mining Engineering, Indian Institute of Engineering Science and Technology, Shibpur, P.O, Botanical Garden, Howrah, West Bengal, 711103, India.
| |
Collapse
|
182
|
Xu Y, Li Y, Yin H, Tang W, Fan G. Consecutive Serial Non-Contrast CT Scan-Based Deep Learning Model Facilitates the Prediction of Tumor Invasiveness of Ground-Glass Nodules. Front Oncol 2021; 11:725599. [PMID: 34568054 PMCID: PMC8461974 DOI: 10.3389/fonc.2021.725599] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/15/2021] [Accepted: 08/19/2021] [Indexed: 01/31/2023] Open
Abstract
Introduction Tumors are continuously evolving biological systems that can be monitored by medical imaging. Previous studies have focused only on single-timepoint images; whether performance could be further improved by using the serial noncontrast CT images obtained during nodule follow-up management remained unclear. In this study, we evaluated deep learning (DL) models for predicting the tumor invasiveness of GGNs by analyzing time-series CT images. Methods A total of 168 pathologically confirmed GGN cases (48 noninvasive lesions and 120 invasive lesions) were retrospectively collected and randomly assigned to a development dataset (n = 123) and an independent testing dataset (n = 45). All patients underwent consecutive noncontrast CT examinations, and the baseline CT and 3-month follow-up CT images were collected. Gross region-of-interest (ROI) patches containing only the tumor region and full ROI patches including both tumor and peritumor regions were cropped from the CT images. A baseline model was built on the image features and demographic features. Four DL models were proposed: two single-timepoint DL models using gross ROI (model 1) or full ROI patches (model 3) from baseline CT images, and two serial DL models using gross ROI (model 2) or full ROI patches (model 4) from consecutive CT images (baseline scan and 3-month follow-up scan). In addition, a combined model integrating serial full ROI patches and clinical information was also constructed. The performance of these predictive models was assessed with respect to discrimination and clinical usefulness. Results The areas under the curve (AUC) of the baseline model and models 1, 2, 3, and 4 were 0.562 (95% confidence interval (CI), 0.406-0.710), 0.693 (95% CI, 0.538-0.822), 0.787 (95% CI, 0.639-0.895), 0.727 (95% CI, 0.573-0.849), and 0.811 (95% CI, 0.667-0.912) in the independent testing dataset, respectively. The results indicated that the peritumor region has the potential to contribute to tumor invasiveness prediction and that model performance is further improved by integrating imaging scans at multiple timepoints. Furthermore, the combined model showed the best discrimination ability, with AUC, sensitivity, specificity, and accuracy of 0.831 (95% CI, 0.690-0.926), 86.7%, 73.3%, and 82.2%, respectively. Conclusion The DL model integrating full ROIs from serial CT images shows better predictive performance in differentiating noninvasive from invasive GGNs than the model using only baseline CT images, which could benefit the clinical management of GGNs.
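One common way to realize such a serial-image model is a shared encoder applied to both timepoints with concatenated features, as in this hypothetical sketch; the shapes and layer sizes are assumptions, not the paper's architecture.

```python
import torch
import torch.nn as nn

class SerialCTNet(nn.Module):
    """Shared 3D encoder over baseline and follow-up patches (illustrative)."""
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv3d(1, 8, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool3d(1), nn.Flatten(),      # -> (B, 8) per timepoint
        )
        self.classifier = nn.Linear(16, 1)              # invasive vs noninvasive

    def forward(self, t0, t1):
        z = torch.cat([self.encoder(t0), self.encoder(t1)], dim=1)
        return self.classifier(z)                       # logit

model = SerialCTNet()
logit = model(torch.randn(2, 1, 32, 32, 32), torch.randn(2, 1, 32, 32, 32))
```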
Collapse
Affiliation(s)
- Yao Xu
- Department of Radiology, Second Affiliated Hospital of Soochow University, Suzhou, China
| | - Yu Li
- Department of Radiology, Dushuhu Public Hospital Affiliated of Soochow University, Suzhou, China
| | - Hongkun Yin
- Department of Advanced Research, Infervision Medical Technology Co. Ltd, Beijing, China
| | - Wen Tang
- Department of Advanced Research, Infervision Medical Technology Co. Ltd, Beijing, China
| | - Guohua Fan
- Department of Radiology, Second Affiliated Hospital of Soochow University, Suzhou, China
| |
Collapse
|
184
|
Zhang YN, Xia KR, Li CY, Wei BL, Zhang B. Review of Breast Cancer Pathological Image Processing. BIOMED RESEARCH INTERNATIONAL 2021; 2021:1994764. [PMID: 34595234 PMCID: PMC8478535 DOI: 10.1155/2021/1994764] [Citation(s) in RCA: 24] [Impact Index Per Article: 6.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 07/12/2021] [Accepted: 08/24/2021] [Indexed: 11/17/2022]
Abstract
Breast cancer is one of the most common malignancies. Pathological image processing of the breast has become an important means for the early diagnosis of breast cancer. Using medical image processing to help doctors detect potential breast cancer as early as possible has long been a hot topic in the field of medical image diagnosis. In this paper, a breast cancer recognition method based on image processing is systematically expounded from four aspects: breast cancer detection, image segmentation, image registration, and image fusion. The achievements and scope of application of supervised learning, unsupervised learning, deep learning, CNNs, and related techniques in breast cancer examination are described. The prospects of unsupervised learning and transfer learning for breast cancer diagnosis are discussed. Finally, the privacy protection of breast cancer patients is addressed.
Collapse
Affiliation(s)
- Ya-nan Zhang
- School of Computer Science and Technology, Harbin University of Science and Technology, Harbin 150080, China
- HRG International Institute (Hefei) of Research and Innovation, Hefei 230000, China
| | - Ke-rui XIA
- HRG International Institute (Hefei) of Research and Innovation, Hefei 230000, China
| | - Chang-yi LI
- School of Computer Science and Technology, Harbin University of Science and Technology, Harbin 150080, China
| | - Ben-li WEI
- School of Computer Science and Technology, Harbin University of Science and Technology, Harbin 150080, China
| | - Bing Zhang
- School of Computer Science and Technology, Harbin University of Science and Technology, Harbin 150080, China
| |
Collapse
|
185
|
Imbalance Modelling for Defect Detection in Ceramic Substrate by Using Convolutional Neural Network. Processes (Basel) 2021. [DOI: 10.3390/pr9091678] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/17/2022] Open
Abstract
The complexity of defect detection in ceramic substrates causes interclass and intraclass imbalance problems. Identifying flaws in ceramic substrates has traditionally relied on abnormal material occurrences and characteristic quantities. However, defects in ceramic substrates are typically small and exhibit a wide variety of distributions, making defect detection challenging and difficult. Thus, we propose a defect detection method based on unsupervised learning and deep learning. First, the proposed method conducts K-means clustering to group instances according to their inherent complex characteristics. Second, the distribution of rarely occurring instances is balanced by using augmentation filters. Finally, a convolutional neural network is trained on the balanced dataset. The effectiveness of the proposed method was validated by comparing its results with those of other methods. Experimental results show that the proposed method outperforms the alternatives.
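The cluster-then-balance idea can be sketched roughly as follows; the feature representation and the substitution of raw duplication for the paper's augmentation filters are simplifying assumptions.

```python
import numpy as np
from sklearn.cluster import KMeans

def balance_by_cluster(features: np.ndarray, images: np.ndarray, k: int = 5):
    """Group samples by K-means, then oversample small clusters to the largest size."""
    labels = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(features)
    counts = np.bincount(labels, minlength=k)
    target = counts.max()
    rng = np.random.default_rng(0)
    balanced = []
    for c in range(k):
        idx = np.where(labels == c)[0]
        extra = rng.choice(idx, size=target - len(idx), replace=True)
        # In the paper's pipeline the extra copies would pass through
        # augmentation filters (flips, rotations, etc.) rather than raw duplication.
        balanced.append(images[np.concatenate([idx, extra])])
    return np.concatenate(balanced)
```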
Collapse
|
186
|
Lalehzarian SP, Gowd AK, Liu JN. Machine learning in orthopaedic surgery. World J Orthop 2021; 12:685-699. [PMID: 34631452 PMCID: PMC8472446 DOI: 10.5312/wjo.v12.i9.685] [Citation(s) in RCA: 41] [Impact Index Per Article: 10.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 03/17/2021] [Revised: 05/12/2021] [Accepted: 08/05/2021] [Indexed: 02/06/2023] Open
Abstract
Artificial intelligence and machine learning in orthopaedic surgery have gained broad interest over the last decade. Prior studies have demonstrated that machine learning in orthopaedics can be used for applications such as fracture detection, bone tumor diagnosis, detecting mechanical loosening of hip implants, and grading osteoarthritis. The utility of artificial intelligence and machine learning algorithms, such as deep learning, continues to grow and expand in orthopaedic surgery. The purpose of this review is to provide an understanding of the concepts of machine learning and a background on current and future orthopaedic applications of machine learning in risk assessment, outcomes assessment, imaging, and basic science. In most cases, machine learning has proven to be just as effective, if not more effective, than prior methods such as logistic regression in assessment and prediction. With the help of deep learning algorithms, such as artificial neural networks and convolutional neural networks, artificial intelligence in orthopaedics has been able to improve diagnostic accuracy and speed, flag the most critical and urgent patients for immediate attention, reduce human error, reduce the strain on medical professionals, and improve care. Because machine learning has shown diagnostic and prognostic uses in orthopaedic surgery, physicians should continue to research these techniques and be trained to use them effectively in order to improve orthopaedic treatment.
Collapse
Affiliation(s)
- Simon P Lalehzarian
- The Chicago Medical School, Rosalind Franklin University of Medicine and Science, North Chicago, IL 60064, United States
| | - Anirudh K Gowd
- Department of Orthopaedic Surgery, Wake Forest Baptist Medical Center, Winston-Salem, NC 27157, United States
| | - Joseph N Liu
- USC Epstein Family Center for Sports Medicine, Keck Medicine of USC, Los Angeles, CA 90033, United States
| |
Collapse
|
187
|
An [18F]FDG-PET/CT deep learning method for fully automated detection of pathological mediastinal lymph nodes in lung cancer patients. Eur J Nucl Med Mol Imaging 2021; 49:881-888. [PMID: 34519888 PMCID: PMC8803782 DOI: 10.1007/s00259-021-05513-x] [Citation(s) in RCA: 20] [Impact Index Per Article: 5.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/02/2021] [Accepted: 07/28/2021] [Indexed: 12/24/2022]
Abstract
Purpose The identification of pathological mediastinal lymph nodes is an important step in the staging of lung cancer, with the presence of metastases significantly affecting survival rates. Nodes are currently identified by a physician, but this process is time-consuming and prone to error. In this paper, we investigate the use of artificial intelligence-based methods to increase the accuracy and consistency of this process. Methods Whole-body 18F-labelled fluoro-2-deoxyglucose ([18F]FDG) positron emission tomography/computed tomography ([18F]FDG-PET/CT) scans (Philips Gemini TF) from 134 patients were retrospectively analysed. The thorax was automatically located, and slices were then fed into a U-Net to identify candidate regions. These regions were split into overlapping 3D cubes, each predicted as positive or negative by a 3D CNN. From these predictions, pathological mediastinal nodes could be identified. A second cohort of 71 patients was then acquired from a different, newer scanner (GE Discovery MI), and the performance of the model on this dataset was tested with and without transfer learning. Results On the test set from the first scanner, our model achieved a sensitivity of 0.87 (95% confidence intervals [0.74, 0.94]) with 0.41 [0.22, 0.71] false positives/patient. This was comparable to the performance of an expert. Without transfer learning, on the test set from the second scanner, the corresponding results were 0.53 [0.35, 0.70] and 0.24 [0.10, 0.49], respectively. With transfer learning, these metrics were 0.88 [0.73, 0.97] and 0.69 [0.43, 1.04], respectively. Conclusion Model performance was comparable to that of an expert on data from the same scanner, and with transfer learning the model can be applied to data from a different scanner. To our knowledge, this is the first study of its kind to go directly from whole-body [18F]FDG-PET/CT scans to pathological mediastinal lymph node localisation. Supplementary Information The online version contains supplementary material available at 10.1007/s00259-021-05513-x.
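Adapting such a model to a second scanner via transfer learning typically means freezing the early layers and fine-tuning the rest on a small local dataset, roughly as below; the frozen fraction and learning rate are assumptions, not the study's settings.

```python
import torch
import torch.nn as nn

def prepare_for_transfer(model: nn.Module, frozen_fraction: float = 0.8):
    """Freeze the first ~80% of parameters; fine-tune the rest on new-scanner data."""
    params = list(model.named_parameters())
    cutoff = int(len(params) * frozen_fraction)
    for i, (name, p) in enumerate(params):
        p.requires_grad = i >= cutoff               # only later layers stay trainable
    trainable = [p for p in model.parameters() if p.requires_grad]
    return torch.optim.Adam(trainable, lr=1e-4)     # small LR for fine-tuning

# Usage: optimizer = prepare_for_transfer(pretrained_3d_cnn); then train as usual.
```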
Collapse
|
188
|
Singh A, Sharma A, Ahmed A, Sundramoorthy AK, Furukawa H, Arya S, Khosla A. Recent Advances in Electrochemical Biosensors: Applications, Challenges, and Future Scope. BIOSENSORS 2021; 11:336. [PMID: 34562926 PMCID: PMC8472208 DOI: 10.3390/bios11090336] [Citation(s) in RCA: 177] [Impact Index Per Article: 44.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Received: 07/24/2021] [Revised: 08/25/2021] [Accepted: 08/31/2021] [Indexed: 05/11/2023]
Abstract
Electrochemical biosensors are a class of biosensors that convert biological information, such as the analyte concentration captured by a biological recognition element (biochemical receptor), into a current or voltage signal. Electrochemical biosensors represent a promising diagnostic technology that can detect biomarkers in body fluids such as sweat, blood, feces, or urine. Combining suitable immobilization techniques with effective transducers gives rise to an efficient biosensor. They have been employed in the food industry, medical sciences, defense, plant biology, and elsewhere. When sensing complex structures and entities, large amounts of data are obtained, and it becomes difficult to interpret all the data manually. Machine learning helps in interpreting large sensing datasets. In the case of biosensors, the presence of impurities affects the performance of the sensor, and machine learning helps remove the signals originating from contaminants to achieve high sensitivity. In this review, we discuss different types of biosensors along with their applications and the benefits of machine learning. This is followed by a discussion of the challenges, gaps in knowledge, and possible solutions in the field of electrochemical biosensors. This review aims to serve as a valuable resource for scientists and engineers entering the interdisciplinary field of electrochemical biosensors. Furthermore, it provides insight into the types of electrochemical biosensors, their applications, the importance of machine learning (ML) in biosensing, and challenges and future outlook.
Collapse
Affiliation(s)
- Anoop Singh
- Department of Physics, University of Jammu, Jammu 180006, India; (A.S.); (A.S.); (A.A.)
| | - Asha Sharma
- Department of Physics, University of Jammu, Jammu 180006, India; (A.S.); (A.S.); (A.A.)
| | - Aamir Ahmed
- Department of Physics, University of Jammu, Jammu 180006, India; (A.S.); (A.S.); (A.A.)
| | - Ashok K. Sundramoorthy
- Department of Chemistry, SRM Institute of Science and Technology, Kattankulathur 603203, India;
| | - Hidemitsu Furukawa
- Department of Mechanical System Engineering, Graduate School of Science and Engineering, Yamagata University, Yamagata 992-8510, Japan;
| | - Sandeep Arya
- Department of Physics, University of Jammu, Jammu 180006, India; (A.S.); (A.S.); (A.A.)
| | - Ajit Khosla
- Department of Mechanical System Engineering, Graduate School of Science and Engineering, Yamagata University, Yamagata 992-8510, Japan;
| |
Collapse
|
189
|
Abstract
In past years, deep neural networks (DNNs) have become popular in many disciplines such as computer vision (CV) and natural language processing (NLP). The evolution of hardware has helped researchers develop many powerful deep learning (DL) models to face numerous challenging problems. One of the most important challenges in the CV area is medical image analysis, in which DL models process medical images, such as magnetic resonance imaging (MRI), X-ray, and computed tomography (CT) scans, using convolutional neural networks (CNNs) for the diagnosis or detection of diseases. The proper functioning of these models can significantly improve health systems. However, recent studies have shown that CNN models are vulnerable to adversarial attacks with imperceptible perturbations. In this paper, we summarize existing methods for adversarial attacks, detections, and defenses on medical imaging. Finally, we show that many attacks, which are undetectable by the human eye, can significantly degrade the performance of the models. Nevertheless, some effective defense and attack detection methods keep the models safe to an extent. We end with a discussion of the current state of the art and future challenges.
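The canonical example of such an imperceptible attack is the fast gradient sign method (FGSM); a minimal sketch against an arbitrary classifier follows, with the model and epsilon as placeholders.

```python
import torch
import torch.nn.functional as F

def fgsm_attack(model, image, label, eps: float = 0.01):
    """Fast gradient sign method: one signed-gradient step on the input."""
    image = image.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(image), label)
    loss.backward()
    adv = image + eps * image.grad.sign()          # imperceptible perturbation
    return adv.clamp(0, 1).detach()                # keep a valid pixel range
```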
Collapse
|
190
|
Homayoun H, Ebrahimpour-komleh H. Automated Segmentation of Abnormal Tissues in Medical Images. J Biomed Phys Eng 2021; 11:415-424. [PMID: 34458189 PMCID: PMC8385212 DOI: 10.31661/jbpe.v0i0.958] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/31/2018] [Accepted: 08/14/2018] [Indexed: 11/29/2022]
Abstract
Nowadays, medical image modalities are available almost everywhere. These modalities are the basis for diagnosing various diseases sensitive to specific tissue types. Physicians usually look for abnormalities in these modalities during diagnostic procedures. The count and volume of abnormalities are very important for the optimal treatment of patients. Segmentation is a preliminary step for these measurements and for further analysis. Manual segmentation of abnormalities is cumbersome, error-prone, and subjective; as a result, automated segmentation of abnormal tissue is needed. In this study, representative techniques for the segmentation of abnormal tissues are reviewed. The main focus is on the segmentation of multiple sclerosis lesions, breast cancer masses, lung nodules, and skin lesions. As the experimental results demonstrate, methods based on deep learning techniques perform better than other methods, which are usually based on hand-crafted feature engineering. Finally, the most common measures for evaluating automated abnormal tissue segmentation methods are reported.
Collapse
Affiliation(s)
- Hassan Homayoun
- PhD, Department of Computer Engineering, Faculty of Electrical and Computer Engineering, University of Kashan, Kashan, Iran
| | - Hossein Ebrahimpour-komleh
- PhD, Department of Computer Engineering, Faculty of Electrical and Computer Engineering, University of Kashan, Kashan, Iran
| |
Collapse
|
191
|
Gu Y, Chi J, Liu J, Yang L, Zhang B, Yu D, Zhao Y, Lu X. A survey of computer-aided diagnosis of lung nodules from CT scans using deep learning. Comput Biol Med 2021; 137:104806. [PMID: 34461501 DOI: 10.1016/j.compbiomed.2021.104806] [Citation(s) in RCA: 54] [Impact Index Per Article: 13.5] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/10/2021] [Revised: 08/23/2021] [Accepted: 08/23/2021] [Indexed: 12/17/2022]
Abstract
Lung cancer has one of the highest mortality rates of all cancers. According to the National Lung Screening Trial, patients who underwent low-dose computed tomography (CT) scanning once a year for 3 years showed a 20% decline in lung cancer mortality. To further improve the survival rate of lung cancer patients, computer-aided diagnosis (CAD) technology shows great potential. In this paper, we summarize existing CAD approaches that apply deep learning to CT scan data for pre-processing, lung segmentation, false-positive reduction, lung nodule detection, segmentation, classification, and retrieval. Selected papers are drawn from academic journals and conferences up to November 2020. We discuss the development of deep learning, describe several important aspects of lung nodule CAD systems, and assess the performance of the selected studies on various datasets, including LIDC-IDRI, LUNA16, LIDC, DSB2017, NLST, TianChi, and ELCAP. Overall, in the detection studies reviewed, the sensitivity of these techniques ranges from 61.61% to 98.10%, with 0.125 to 32 FPs per scan. In the selected classification studies, accuracy ranges from 75.01% to 97.58%. The precision of the selected retrieval studies is between 71.43% and 87.29%. Based on performance, deep learning-based CAD technologies for the detection and classification of pulmonary nodules achieve satisfactory results. However, many challenges and limitations remain, including over-fitting, lack of interpretability, and insufficient annotated data. This review helps researchers and radiologists to better understand CAD technology for pulmonary nodule detection, segmentation, classification, and retrieval. We summarize the performance of current techniques, consider the challenges, and propose directions for future high-impact research.
Collapse
Affiliation(s)
- Yu Gu
- Inner Mongolia Key Laboratory of Pattern Recognition and Intelligent Image Processing, School of Information Engineering, Inner Mongolia University of Science and Technology, Baotou, 014010, China.
| | - Jingqian Chi
- Inner Mongolia Key Laboratory of Pattern Recognition and Intelligent Image Processing, School of Information Engineering, Inner Mongolia University of Science and Technology, Baotou, 014010, China.
| | - Jiaqi Liu
- Inner Mongolia Key Laboratory of Pattern Recognition and Intelligent Image Processing, School of Information Engineering, Inner Mongolia University of Science and Technology, Baotou, 014010, China
| | - Lidong Yang
- Inner Mongolia Key Laboratory of Pattern Recognition and Intelligent Image Processing, School of Information Engineering, Inner Mongolia University of Science and Technology, Baotou, 014010, China
| | - Baohua Zhang
- Inner Mongolia Key Laboratory of Pattern Recognition and Intelligent Image Processing, School of Information Engineering, Inner Mongolia University of Science and Technology, Baotou, 014010, China
| | - Dahua Yu
- Inner Mongolia Key Laboratory of Pattern Recognition and Intelligent Image Processing, School of Information Engineering, Inner Mongolia University of Science and Technology, Baotou, 014010, China
| | - Ying Zhao
- Inner Mongolia Key Laboratory of Pattern Recognition and Intelligent Image Processing, School of Information Engineering, Inner Mongolia University of Science and Technology, Baotou, 014010, China
| | - Xiaoqi Lu
- Inner Mongolia Key Laboratory of Pattern Recognition and Intelligent Image Processing, School of Information Engineering, Inner Mongolia University of Science and Technology, Baotou, 014010, China; College of Information Engineering, Inner Mongolia University of Technology, Hohhot, 010051, China
| |
Collapse
|
192
|
Yang Y, Zhang Q. Multiview framework using a 3D residual network for pulmonary micronodule malignancy risk classification. Biomed Mater Eng 2021; 31:253-267. [PMID: 32894237 DOI: 10.3233/bme-206005] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/21/2022]
Abstract
BACKGROUND Pulmonary micronodules account for 80% of all lung nodules. Generally, pulmonary micronodules in the early stages can be detected on thoracic computed tomography (CT) scans. Early diagnosis is crucial for improving the patient's chances of survival. OBJECTIVE This paper aims to estimate the malignancy risk of pulmonary micronodules and potentially improve the survival rate. METHODS We extract 3D features of the CT images to obtain richer characteristics. Because superior performance can be achieved with deeper layers, we apply a 3D residual network (3D-ResNet) to classify pulmonary micronodules. We construct a framework using three parallel ResNets whose inputs are CT images of different regions of interest, i.e., multiple views of the image. To further evaluate the applicability of the framework, we also perform a five-category classification and achieve good performance. RESULTS By fusing the characteristics from the three views, we achieve an area under the receiver operating characteristic curve (AUC) of 0.9681. Based on the experimental results, our 3D-ResNet performs better than 3D-VGG and 3D-Inception in terms of precision (increases of 13.7% and 7.4%), AUC (increases of 15.8% and 5.3%), and accuracy (increases of 14.3% and 4.5%), while the recall performance is close to that of the 3D-Inception network. CONCLUSION Overall, the proposed framework is applicable and feasible for pulmonary micronodule classification.
Collapse
Affiliation(s)
- Yujie Yang
- Institute of Computer and Information Engineering, Henan Normal University, Henan Province, Xinxiang, China.,Big Data Engineering Laboratory for Teaching Resources and Assessment of Education Quality, Henan Province, Xinxiang, China
| | - Qianqian Zhang
- Institute of Computer and Information Engineering, Henan Normal University, Henan Province, Xinxiang, China.,Big Data Engineering Laboratory for Teaching Resources and Assessment of Education Quality, Henan Province, Xinxiang, China
| |
Collapse
|
193
|
Yuan H, Fan Z, Wu Y, Cheng J. An efficient multi-path 3D convolutional neural network for false-positive reduction of pulmonary nodule detection. Int J Comput Assist Radiol Surg 2021; 16:2269-2277. [PMID: 34449037 DOI: 10.1007/s11548-021-02478-y] [Citation(s) in RCA: 9] [Impact Index Per Article: 2.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/08/2021] [Accepted: 08/10/2021] [Indexed: 12/19/2022]
Abstract
PURPOSE Considering that false-positive and true pulmonary nodules are highly similar in shape and size in lung computed tomography scans, we develop and evaluate a false-positive nodule reduction method for computer-aided diagnosis systems. METHODS To improve pulmonary nodule diagnosis quality, a 3D convolutional neural network (CNN) model is constructed to effectively extract the spatial information of candidate nodule features through a hierarchical architecture. Furthermore, three paths corresponding to three receptive field sizes are adopted and concatenated in the network model, so that feature information is fully extracted and fused, adapting to the variations in shape, size, and contextual information between pulmonary nodules. In this way, false-positive reduction is well implemented in pulmonary nodule detection. RESULTS The multi-path 3D CNN is evaluated on the LUNA16 dataset, achieving an average competitive performance metric (CPM) score of 0.881, with excellent sensitivities of 0.952 and 0.962 at 4 and 8 FPs/scan, respectively. CONCLUSION By constructing a multi-path 3D CNN to fully extract candidate target features, the method accurately identifies pulmonary nodules of different sizes, shapes, and background contexts. In addition, the proposed general framework is also suitable for similar 3D medical image classification tasks.
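A multi-path block with three receptive fields can be sketched as parallel 3D convolutions with different kernel sizes whose outputs are concatenated; the channel counts here are assumptions, not the paper's configuration.

```python
import torch
import torch.nn as nn

class MultiPathBlock3D(nn.Module):
    """Three parallel 3D conv paths with different receptive fields (illustrative)."""
    def __init__(self, c_in: int = 1, c_out: int = 8):
        super().__init__()
        self.paths = nn.ModuleList([
            nn.Conv3d(c_in, c_out, kernel_size=k, padding=k // 2)
            for k in (3, 5, 7)                     # small/medium/large receptive fields
        ])
        self.relu = nn.ReLU()

    def forward(self, x):
        return self.relu(torch.cat([p(x) for p in self.paths], dim=1))

out = MultiPathBlock3D()(torch.randn(2, 1, 24, 24, 24))   # -> (2, 24, 24, 24, 24)
```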
Affiliation(s)
- Haiying Yuan
- Faculty of Information Technology, Beijing University of Technology, Beijing, 100124, People's Republic of China.
| | - Zhongwei Fan
- Faculty of Information Technology, Beijing University of Technology, Beijing, 100124, People's Republic of China
| | - Yanrui Wu
- Faculty of Information Technology, Beijing University of Technology, Beijing, 100124, People's Republic of China
| | - Junpeng Cheng
- Faculty of Information Technology, Beijing University of Technology, Beijing, 100124, People's Republic of China
| |
|
194
|
Wada K, Watanabe M, Shinchi M, Noguchi K, Mukoyoshi T, Matsuyama M, Arimura T, Ogino T. [A Study on Radiation Dermatitis Grading Support System Based on Deep Learning by Hybrid Generation Method]. Nihon Hoshasen Gijutsu Gakkai Zasshi 2021; 77:787-794. [PMID: 34421066 DOI: 10.6009/jjrt.2021_jsrt_77.8.787] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/11/2022]
Abstract
PURPOSE Radiation dermatitis is one of the most common adverse events in patients undergoing radiotherapy. However, objective evaluation of this condition is difficult because the clinical evaluation of radiation dermatitis relies on visual assessment based on the Common Terminology Criteria for Adverse Events (CTCAE). We therefore created a radiation dermatitis grading support system (RDGS) using a deep convolutional neural network (DCNN) and evaluated its effectiveness. METHODS The DCNN was trained on a dataset of 647 clinical skin images graded for radiation dermatitis (Grades 1-4) at our center from April 2011 to May 2019. To compensate for the low volume of severe dermatitis (Grade 4) cases, we created the datasets with a hybrid generation method (Hyb) that mixes data-augmentation images produced by image conversion with images produced by Poisson image editing. We then evaluated the classification accuracy of the RDGS based on the hybrid generation method (Hyb-RDGS). RESULTS The overall accuracy of the Hyb-RDGS was 85.1%, higher than that achieved with the data augmentation methods generally used for image generation. CONCLUSION These results suggest the effectiveness of the Hyb-RDGS using Poisson image editing and its potential as a supporting system for objective grading of radiation dermatitis.
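OpenCV exposes Poisson image editing as cv2.seamlessClone, so the hybrid generation idea can be sketched roughly as below. The stand-in images, patch size, and blend placement are assumptions for illustration, not the authors' pipeline.

    import cv2
    import numpy as np

    def poisson_composite(lesion_patch, donor_image, center):
        """Blend a severe-lesion patch into a donor skin image by Poisson editing."""
        mask = 255 * np.ones(lesion_patch.shape[:2], dtype=np.uint8)
        return cv2.seamlessClone(lesion_patch, donor_image, mask, center, cv2.NORMAL_CLONE)

    def conventional_augment(image):
        """Standard image-conversion augmentation: horizontal flip plus a small rotation."""
        flipped = cv2.flip(image, 1)
        h, w = image.shape[:2]
        rot = cv2.getRotationMatrix2D((w / 2, h / 2), 10, 1.0)
        return cv2.warpAffine(flipped, rot, (w, h))

    # Hybrid generation: pool both kinds of synthetic images for the rare Grade 4 class.
    donor = np.full((256, 256, 3), 180, dtype=np.uint8)   # stand-in skin image
    lesion = np.full((64, 64, 3), 120, dtype=np.uint8)    # stand-in Grade 4 lesion crop
    synthetic = [poisson_composite(lesion, donor, (128, 128)),
                 conventional_augment(donor)]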
Affiliation(s)
- Kiyotaka Wada
- Medipolis Proton Therapy and Research Center
- Graduate School of Science and Engineering, Kagoshima University
| | - Mutsumi Watanabe
- Graduate School of Science and Engineering, Kagoshima University
| | - Masahiro Shinchi
- Graduate School of Science and Engineering, Kagoshima University
| | - Kousuke Noguchi
- Graduate School of Science and Engineering, Kagoshima University
| |
|
195
|
Li J, Zhao X, Zhou G, Zhang M, Li D, Zhou Y. Evaluating the Work Productivity of Assembling Reinforcement through the Objects Detected by Deep Learning. SENSORS 2021; 21:s21165598. [PMID: 34451038 PMCID: PMC8402301 DOI: 10.3390/s21165598] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 06/21/2021] [Revised: 08/06/2021] [Accepted: 08/17/2021] [Indexed: 11/30/2022]
Abstract
With the rapid development of deep learning, computer vision has helped solve a variety of problems in engineering construction. However, very few computer vision-based approaches have been proposed for evaluating work productivity. Therefore, taking a super high-rise project as a research case and using object information detected by a deep learning algorithm, we propose a computer vision-based method for evaluating the productivity of assembling reinforcement. First, a CenterNet-based detector that accurately distinguishes the various entities involved in assembling reinforcement is established, with DLA34 selected as the backbone; its mAP reaches 0.9682, and detecting a single image takes as little as 0.076 s. Second, the trained detector is applied to the video frames, yielding images with detection boxes and documents with coordinates. The positional relationship between the detected work objects and detected workers determines how many workers (N) participate in a task, and the time (T) to perform the process is obtained from the change in the work object's coordinates. Finally, productivity is evaluated from N and T. We validate the method on four actual construction videos, and the results show that the productivity evaluation is generally consistent with actual conditions. The contribution of this research to construction management is twofold: on the one hand, without affecting the normal behavior of workers, it establishes a connection between individual workers and the work object and realizes work productivity evaluation; on the other hand, the proposed method can improve the efficiency of construction management.
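One workable reading of the N/T rule, workers counted when their boxes lie near the work object and T taken from frames in which the object's coordinates change, can be sketched as follows. The class names, proximity margin, and movement threshold are our assumptions, not the paper's values.

    import math
    from dataclasses import dataclass

    @dataclass
    class Det:
        frame: int
        label: str   # e.g., "worker" or "rebar_cage" (hypothetical class names)
        box: tuple   # (x1, y1, x2, y2) in pixels

    def center(box):
        x1, y1, x2, y2 = box
        return ((x1 + x2) / 2, (y1 + y2) / 2)

    def near(worker, obj, margin=50):
        """True if the worker box intersects the object box expanded by `margin` px."""
        ax1, ay1, ax2, ay2 = worker
        bx1, by1, bx2, by2 = obj[0] - margin, obj[1] - margin, obj[2] + margin, obj[3] + margin
        return ax1 < bx2 and bx1 < ax2 and ay1 < by2 and by1 < ay2

    def evaluate(dets, fps=25.0, move_thresh=5.0):
        """Return (N, T): peak worker count at the object, and time the object moves."""
        frames = sorted({d.frame for d in dets})
        counts, centers = [], {}
        for f in frames:
            objs = [d for d in dets if d.frame == f and d.label == "rebar_cage"]
            ppl = [d for d in dets if d.frame == f and d.label == "worker"]
            if not objs:
                continue
            centers[f] = center(objs[0].box)
            counts.append(sum(any(near(p.box, o.box) for o in objs) for p in ppl))
        ordered = sorted(centers)
        T = sum((f2 - f1) / fps for f1, f2 in zip(ordered, ordered[1:])
                if math.dist(centers[f1], centers[f2]) > move_thresh)
        return max(counts, default=0), T

    dets = [Det(0, "rebar_cage", (100, 100, 300, 200)), Det(0, "worker", (320, 120, 380, 260)),
            Det(50, "rebar_cage", (120, 100, 320, 200)), Det(50, "worker", (330, 120, 390, 260))]
    print(evaluate(dets))  # -> (1, 2.0) at 25 fps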
Affiliation(s)
- Jiaqi Li
- Faculty of Infrastructure Engineering, Dalian University of Technology, Dalian 116024, China; (J.L.); (G.Z.); (M.Z.); (D.L.)
| | - Xuefeng Zhao
- Faculty of Infrastructure Engineering, Dalian University of Technology, Dalian 116024, China; (J.L.); (G.Z.); (M.Z.); (D.L.)
- State Key Laboratory of Coastal and Offshore Engineering, Dalian University of Technology, Dalian 116024, China
- Correspondence:
| | - Guangyi Zhou
- Faculty of Infrastructure Engineering, Dalian University of Technology, Dalian 116024, China; (J.L.); (G.Z.); (M.Z.); (D.L.)
- Northeast Branch China Construction Eighth Engineering Division Corp., Ltd., Dalian 116019, China;
| | - Mingyuan Zhang
- Faculty of Infrastructure Engineering, Dalian University of Technology, Dalian 116024, China; (J.L.); (G.Z.); (M.Z.); (D.L.)
| | - Dongfang Li
- Faculty of Infrastructure Engineering, Dalian University of Technology, Dalian 116024, China; (J.L.); (G.Z.); (M.Z.); (D.L.)
- Northeast Branch China Construction Eighth Engineering Division Corp., Ltd., Dalian 116019, China;
| | - Yaochen Zhou
- Northeast Branch China Construction Eighth Engineering Division Corp., Ltd., Dalian 116019, China;
| |
|
196
|
Lung Nodule Detection from Feature Engineering to Deep Learning in Thoracic CT Images: a Comprehensive Review. J Digit Imaging 2021; 33:655-677. [PMID: 31997045 DOI: 10.1007/s10278-020-00320-6] [Citation(s) in RCA: 45] [Impact Index Per Article: 11.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 02/06/2023] Open
Abstract
This paper presents a systematic review of the literature on lung nodule detection in chest computed tomography (CT) images. Manual detection of lung nodules by a radiologist is a sequential and time-consuming process, and the detection is subjective, depending on the radiologist's experience. Owing to the variation in the shapes and appearances of lung nodules, it is very difficult to identify the proper location of a nodule among the huge number of slices generated by a CT scanner, and small nodules (< 10 mm in diameter) may be missed in this manual process. A computer-aided diagnosis (CAD) system therefore acts as a "second opinion" for radiologists, making final decisions quickly with higher accuracy and greater confidence. The goal of this survey is to present the current state of the art and its progress toward lung nodule detection to researchers and readers in this domain. The review covers work published from 2009 to April 2018 and describes the different nodule detection approaches in detail. Since deep learning (DL)-based approaches have recently been applied extensively to nodule detection and characterization, emphasis is given to convolutional neural network (CNN)-based DL approaches by describing different CNN-based networks.
|
197
|
Logan R, Williams BG, Ferreira da Silva M, Indani A, Schcolnicov N, Ganguly A, Miller SJ. Deep Convolutional Neural Networks With Ensemble Learning and Generative Adversarial Networks for Alzheimer's Disease Image Data Classification. Front Aging Neurosci 2021; 13:720226. [PMID: 34483890 PMCID: PMC8416107 DOI: 10.3389/fnagi.2021.720226] [Citation(s) in RCA: 16] [Impact Index Per Article: 4.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/03/2021] [Accepted: 07/15/2021] [Indexed: 11/26/2022] Open
Abstract
Recent advancements in deep learning (DL) have made possible new methodologies for analyzing massive datasets with intriguing implications in healthcare. Convolutional neural networks (CNN), which have proven to be successful supervised algorithms for classifying imaging data, are of particular interest in the neuroscience community for their utility in the classification of Alzheimer's disease (AD). AD is the leading cause of dementia in the aging population. There remains a critical unmet need for early detection of AD pathogenesis based on non-invasive neuroimaging techniques, such as magnetic resonance imaging (MRI) and positron emission tomography (PET). In this comprehensive review, we explore potential interdisciplinary approaches for early detection and provide insight into recent advances on AD classification using 3D CNN architectures for multi-modal PET/MRI data. We also consider the application of generative adversarial networks (GANs) to overcome pitfalls associated with limited data. Finally, we discuss increasing the robustness of CNNs by combining them with ensemble learning (EL).
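As a toy illustration of the ensemble-learning step mentioned at the end, soft voting over several independently trained CNNs can be written in a few lines of PyTorch. The member architecture and class set below are arbitrary stand-ins, not any model from the review.

    import torch
    import torch.nn as nn

    def make_member():
        """A stand-in CNN member; in practice each member is a separately trained classifier."""
        return nn.Sequential(nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
                             nn.AdaptiveAvgPool2d(1), nn.Flatten(),
                             nn.Linear(16, 3))  # e.g., control / MCI / AD

    class SoftVotingEnsemble(nn.Module):
        """Average the softmax outputs of all members (soft voting)."""
        def __init__(self, n_members=5):
            super().__init__()
            self.members = nn.ModuleList(make_member() for _ in range(n_members))

        def forward(self, x):
            probs = torch.stack([m(x).softmax(dim=1) for m in self.members])
            return probs.mean(dim=0)

    slices = torch.randn(8, 1, 96, 96)      # stand-in imaging inputs
    consensus = SoftVotingEnsemble()(slices)

Averaging probabilities rather than hard votes lets uncertain members down-weight themselves, which is one reason ensembles tend to be more robust than any single CNN.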
Affiliation(s)
- Robert Logan
- Pluripotent Diagnostics Corp. (PDx), Molecular Medicine Research Institute, Sunnyvale, CA, United States
- Eastern Nazarene College, Quincy, MA, United States
| | - Brian G. Williams
- Pluripotent Diagnostics Corp. (PDx), Molecular Medicine Research Institute, Sunnyvale, CA, United States
| | - Maria Ferreira da Silva
- Pluripotent Diagnostics Corp. (PDx), Molecular Medicine Research Institute, Sunnyvale, CA, United States
| | - Akash Indani
- Pluripotent Diagnostics Corp. (PDx), Molecular Medicine Research Institute, Sunnyvale, CA, United States
| | - Nicolas Schcolnicov
- Pluripotent Diagnostics Corp. (PDx), Molecular Medicine Research Institute, Sunnyvale, CA, United States
| | - Anjali Ganguly
- Pluripotent Diagnostics Corp. (PDx), Molecular Medicine Research Institute, Sunnyvale, CA, United States
| | - Sean J. Miller
- Pluripotent Diagnostics Corp. (PDx), Molecular Medicine Research Institute, Sunnyvale, CA, United States
| |
|
198
|
Zuo W, Zhou F, He Y. An Embedded Multi-branch 3D Convolution Neural Network for False Positive Reduction in Lung Nodule Detection. J Digit Imaging 2021; 33:846-857. [PMID: 32095944 DOI: 10.1007/s10278-020-00326-0] [Citation(s) in RCA: 11] [Impact Index Per Article: 2.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/19/2022] Open
Abstract
Numerous lung nodule candidates can be produced by an automated lung nodule detection system, and classifying these candidates to reduce false positives is an important step in the detection process. The objective of this paper is to identify real nodules among a large number of pulmonary nodule candidates. To address this classification task, we propose a novel 3D convolutional neural network (CNN) for reducing false positives in lung nodule detection. The novel 3D CNN embeds multiple branches in its structure, each processing a feature map from a layer at a different depth. All of these branches are cascaded at their ends, so features from layers at different depths are combined to predict the category of each candidate. The proposed method obtains a competitive score in lung nodule candidate classification on the LUNA16 dataset, with an accuracy of 0.9783, a sensitivity of 0.8771, a precision of 0.9426, and a specificity of 0.9925. A good score of 0.830 is also obtained on the competition performance metric (CPM). As a 3D CNN, the proposed model can learn complete, three-dimensional discriminative information about nodules and non-nodules, avoiding misidentifications caused by the lack of spatial correlation information in traditional methods and 2D networks. As an embedded multi-branch structure, the model is also more effective at recognizing nodules of various shapes and sizes. As a result, the proposed method achieves a competitive score on false-positive reduction in lung nodule detection and can serve as a reference for classifying nodule candidates.
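A hedged sketch of the embedded multi-branch idea, branches tapping feature maps at different depths and concatenated ("cascaded") at their ends, might look like the following. The stage widths and 32-unit branch projections are illustrative assumptions.

    import torch
    import torch.nn as nn

    def conv_stage(cin, cout):
        return nn.Sequential(nn.Conv3d(cin, cout, 3, padding=1),
                             nn.BatchNorm3d(cout), nn.ReLU(inplace=True),
                             nn.MaxPool3d(2))

    class EmbeddedMultiBranch3DCNN(nn.Module):
        """Backbone with three stages; each stage feeds its own branch, and the
        branch outputs are concatenated for the final nodule / non-nodule decision."""
        def __init__(self):
            super().__init__()
            self.stage1 = conv_stage(1, 16)
            self.stage2 = conv_stage(16, 32)
            self.stage3 = conv_stage(32, 64)
            self.pool = nn.AdaptiveAvgPool3d(1)
            # one branch per depth, each reducing its feature map to 32 features
            self.branches = nn.ModuleList(nn.Linear(c, 32) for c in (16, 32, 64))
            self.head = nn.Linear(3 * 32, 2)

        def forward(self, x):
            f1 = self.stage1(x)
            f2 = self.stage2(f1)
            f3 = self.stage3(f2)
            feats = [b(self.pool(f).flatten(1))
                     for b, f in zip(self.branches, (f1, f2, f3))]
            return self.head(torch.cat(feats, dim=1))

    logits = EmbeddedMultiBranch3DCNN()(torch.randn(2, 1, 32, 32, 32))

Shallow stages keep fine spatial detail while deep stages carry semantics, so fusing both plausibly helps with the variety of nodule shapes and sizes the abstract mentions.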
Affiliation(s)
- Wangxia Zuo
- The School of Instrumentation and Optoelectronics Engineering, Beihang University, 37 Xueyuan Road, Haidian District, Beijing, 100083, China
- The College of Electrical Engineering, University of South China, Hengyang, 421001, Hunan, China
| | - Fuqiang Zhou
- The School of Instrumentation and Optoelectronics Engineering, Beihang University, 37 Xueyuan Road, Haidian District, Beijing, 100083, China.
| | - Yuzhu He
- The School of Instrumentation and Optoelectronics Engineering, Beihang University, 37 Xueyuan Road, Haidian District, Beijing, 100083, China
| |
|
199
|
Li X, Luo G, Wang W, Wang K, Gao Y, Li S. Hematoma Expansion Context Guided Intracranial Hemorrhage Segmentation and Uncertainty Estimation. IEEE J Biomed Health Inform 2021; 26:1140-1151. [PMID: 34375295 DOI: 10.1109/jbhi.2021.3103850] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/06/2022]
Abstract
Accurate segmentation of intracranial hemorrhage (ICH) in non-contrast CT images is significant for computer-aided diagnosis. Although existing methods have achieved remarkable results, none has incorporated ICH's prior information. In this work, we propose a novel SLice EXpansion Network (SLEX-Net) that, for the first time, incorporates hematoma expansion into the segmentation architecture by directly modeling its spatial variation. First, we built a new module named the Slice Expansion Module (SEM), which effectively transfers contextual information between two adjacent slices by mapping predictions from one slice to another. Second, to perceive label-correlation information from both upper and lower slices, we designed two information transmission paths: forward and backward slice expansion. By further exploiting intra-slice and inter-slice context along these paths, the network significantly improves the accuracy and continuity of segmentation results. Moreover, SLEX-Net enables uncertainty estimation with a single inference pass, which is much more efficient than existing methods. We evaluated SLEX-Net against several state-of-the-art methods; experimental results demonstrate that our method significantly improves all segmentation metrics and outperforms existing uncertainty estimation methods on several metrics. The code will be available at https://github.com/JohnleeHIT/SLEX-Net.
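Our reading of the slice-expansion mechanism, as a sketch only and not the released code at the link above: a small network maps the prediction of one slice onto its neighbor, and forward and backward passes over the stack are fused with the per-slice predictions. The module architecture and averaging fusion are assumptions.

    import torch
    import torch.nn as nn

    class SliceExpansionModule(nn.Module):
        """Maps (neighbor prediction, current slice) -> expanded prediction."""
        def __init__(self):
            super().__init__()
            self.net = nn.Sequential(nn.Conv2d(2, 16, 3, padding=1), nn.ReLU(inplace=True),
                                     nn.Conv2d(16, 1, 3, padding=1), nn.Sigmoid())

        def forward(self, prev_pred, cur_slice):
            return self.net(torch.cat([prev_pred, cur_slice], dim=1))

    def expand(sem, slices, preds):
        """One directional pass: propagate each prediction onto the next slice."""
        out = [preds[0]]
        for i in range(1, len(slices)):
            out.append(sem(out[-1], slices[i]))
        return out

    sem = SliceExpansionModule()
    slices = [torch.randn(1, 1, 64, 64) for _ in range(5)]   # CT slices
    preds = [torch.rand(1, 1, 64, 64) for _ in range(5)]     # per-slice masks
    fwd = expand(sem, slices, preds)                         # forward path
    bwd = expand(sem, slices[::-1], preds[::-1])[::-1]       # backward path
    fused = [(p + f + b) / 3 for p, f, b in zip(preds, fwd, bwd)]

The spread among the per-slice, forward, and backward predictions also gives a cheap disagreement signal, which hints at how a single inference pass can yield an uncertainty estimate.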
|
200
|
Pennisi M, Kavasidis I, Spampinato C, Schinina V, Palazzo S, Salanitri FP, Bellitto G, Rundo F, Aldinucci M, Cristofaro M, Campioni P, Pianura E, Di Stefano F, Petrone A, Albarello F, Ippolito G, Cuzzocrea S, Conoci S. An explainable AI system for automated COVID-19 assessment and lesion categorization from CT-scans. Artif Intell Med 2021; 118:102114. [PMID: 34412837 PMCID: PMC8139171 DOI: 10.1016/j.artmed.2021.102114] [Citation(s) in RCA: 17] [Impact Index Per Article: 4.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/25/2020] [Revised: 05/06/2021] [Accepted: 05/12/2021] [Indexed: 01/20/2023]
Abstract
The COVID-19 pandemic caused by the SARS-CoV-2 pathogen has been a catastrophic worldwide outbreak, with an exponential increase in confirmed cases and, unfortunately, deaths. In this work we propose an AI-powered pipeline, based on the deep learning paradigm, for automated COVID-19 detection and lesion categorization from CT scans. We first propose a new segmentation module aimed at automatically identifying lung parenchyma and lobes. Next, we combine the segmentation network with classification networks for COVID-19 identification and lesion categorization. We compare the model's classification results with those of three expert radiologists on a dataset of 166 CT scans. The results show a sensitivity of 90.3% and a specificity of 93.5% for COVID-19 detection, at least on par with the expert radiologists, and an average lesion categorization accuracy of about 84%. Moreover, prior lung and lobe segmentation plays a significant role, enhancing classification performance by over 6 percentage points. Interpretation of the trained AI models reveals that the areas most significant for supporting the COVID-19 decision are consistent with the lesions clinically associated with the virus, i.e., crazy paving, consolidation, and ground glass; the models thus discriminate a positive patient from a negative one (both controls and patients with interstitial pneumonia who tested negative for COVID) by evaluating the presence of those lesions in the CT scans. Finally, the AI models are integrated into a user-friendly GUI that supports AI explainability for radiologists, publicly available at http://perceivelab.com/covid-ai. To the best of our knowledge, this is the first publicly available AI-based software that attempts to explain to radiologists what information the AI methods use to make decisions and that proactively involves them in the decision loop to further improve COVID-19 understanding.
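The benefit of prior segmentation is easy to see in miniature: if the classifier receives only the voxels inside the predicted lung mask, non-pulmonary structures cannot drive the decision. The toy modules below are stand-ins for the paper's networks, not the perceivelab release.

    import torch
    import torch.nn as nn

    # Stand-in lung segmenter: per-pixel probability of lung parenchyma.
    segmenter = nn.Sequential(nn.Conv2d(1, 8, 3, padding=1), nn.ReLU(),
                              nn.Conv2d(8, 1, 1), nn.Sigmoid())
    # Stand-in classifier: COVID vs. non-COVID on the masked slice.
    classifier = nn.Sequential(nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
                               nn.AdaptiveAvgPool2d(1), nn.Flatten(),
                               nn.Linear(16, 2))

    ct_slice = torch.randn(1, 1, 128, 128)
    lung_mask = (segmenter(ct_slice) > 0.5).float()  # binarized lung parenchyma
    logits = classifier(ct_slice * lung_mask)        # classify masked voxels only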
Affiliation(s)
| | | | | | - Vincenzo Schinina
- National Institute for infectious disease, "Lazzaro Spallanzani" Department, Rome, Italy
| | | | | | | | | | - Marco Aldinucci
- Department of Computer Science, University of Turin, Turin, Italy
| | - Massimo Cristofaro
- National Institute for infectious disease, "Lazzaro Spallanzani" Department, Rome, Italy
| | - Paolo Campioni
- National Institute for infectious disease, "Lazzaro Spallanzani" Department, Rome, Italy
| | - Elisa Pianura
- National Institute for infectious disease, "Lazzaro Spallanzani" Department, Rome, Italy
| | - Federica Di Stefano
- National Institute for infectious disease, "Lazzaro Spallanzani" Department, Rome, Italy
| | - Ada Petrone
- National Institute for infectious disease, "Lazzaro Spallanzani" Department, Rome, Italy
| | - Fabrizio Albarello
- National Institute for infectious disease, "Lazzaro Spallanzani" Department, Rome, Italy
| | - Giuseppe Ippolito
- National Institute for infectious disease, "Lazzaro Spallanzani" Department, Rome, Italy
| | | | - Sabrina Conoci
- ChimBioFaram Department, University of Messina, Messina, Italy
| |
|