1
Gao C, Wu L, Wu W, Huang Y, Wang X, Sun Z, Xu M, Gao C. Deep learning in pulmonary nodule detection and segmentation: a systematic review. Eur Radiol 2025; 35:255-266. [PMID: 38985185; PMCID: PMC11632000; DOI: 10.1007/s00330-024-10907-0]
Abstract
OBJECTIVES The accurate detection and precise segmentation of lung nodules on computed tomography are key prerequisites for early diagnosis and appropriate treatment of lung cancer. This study was designed to compare detection and segmentation methods for pulmonary nodules using deep-learning techniques and to address methodological gaps and biases in the existing literature. METHODS This systematic review followed the Preferred Reporting Items for Systematic Reviews and Meta-Analyses guidelines, searching PubMed, Embase, Web of Science Core Collection, and the Cochrane Library databases up to May 10, 2023. The Quality Assessment of Diagnostic Accuracy Studies 2 criteria, adjusted with the Checklist for Artificial Intelligence in Medical Imaging, were used to assess the risk of bias. Model performance, data sources, and task-focus information were extracted and analyzed. RESULTS After screening, nine studies met our inclusion criteria. These studies were published between 2019 and 2023 and predominantly used public datasets, with the Lung Image Database Consortium Image Collection and Image Database Resource Initiative and Lung Nodule Analysis 2016 being the most common. The studies focused on detection, segmentation, and other tasks, primarily utilizing convolutional neural networks for model development. Performance evaluation covered multiple metrics, including sensitivity and the Dice coefficient. CONCLUSIONS This study highlights the potential power of deep learning in lung nodule detection and segmentation. It underscores the importance of standardized data processing, code and data sharing, the value of external test datasets, and the need to balance model complexity and efficiency in future research. CLINICAL RELEVANCE STATEMENT Deep learning demonstrates significant promise in autonomously detecting and segmenting pulmonary nodules. Future research should address methodological shortcomings and variability to enhance its clinical utility. KEY POINTS Deep learning shows potential in the detection and segmentation of pulmonary nodules. There are methodological gaps and biases present in the existing literature. Factors such as external validation and transparency affect clinical application.
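As a point of reference for the metrics summarized above, the sketch below shows how the Dice coefficient and sensitivity are typically computed from binary masks and detection counts; it is a generic illustration with made-up arrays, not code from any of the reviewed studies.

```python
import numpy as np

def dice_coefficient(pred: np.ndarray, target: np.ndarray, eps: float = 1e-8) -> float:
    """Dice = 2*|A intersect B| / (|A| + |B|) for binary segmentation masks."""
    pred, target = pred.astype(bool), target.astype(bool)
    intersection = np.logical_and(pred, target).sum()
    return (2.0 * intersection + eps) / (pred.sum() + target.sum() + eps)

def sensitivity(true_positives: int, false_negatives: int) -> float:
    """Sensitivity (recall) = TP / (TP + FN), e.g., detected nodules over all nodules."""
    return true_positives / (true_positives + false_negatives)

# Toy example with a hypothetical 2D slice (real studies use 3D CT masks).
gt = np.zeros((64, 64), dtype=np.uint8); gt[20:30, 20:30] = 1
pr = np.zeros((64, 64), dtype=np.uint8); pr[22:32, 21:31] = 1
print(f"Dice: {dice_coefficient(pr, gt):.3f}")
print(f"Sensitivity: {sensitivity(true_positives=45, false_negatives=5):.2f}")
```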
Affiliation(s)
- Chuan Gao
- The First Affiliated Hospital of Zhejiang Chinese Medical University (Zhejiang Provincial Hospital of Chinese Medicine), Hangzhou, China
- The First School of Clinical Medicine, Zhejiang Chinese Medical University, Hangzhou, China
- Linyu Wu
- The First Affiliated Hospital of Zhejiang Chinese Medical University (Zhejiang Provincial Hospital of Chinese Medicine), Hangzhou, China
- The First School of Clinical Medicine, Zhejiang Chinese Medical University, Hangzhou, China
- Wei Wu
- The First Affiliated Hospital of Zhejiang Chinese Medical University (Zhejiang Provincial Hospital of Chinese Medicine), Hangzhou, China
- The First School of Clinical Medicine, Zhejiang Chinese Medical University, Hangzhou, China
- Yichao Huang
- The First Affiliated Hospital of Zhejiang Chinese Medical University (Zhejiang Provincial Hospital of Chinese Medicine), Hangzhou, China
- The First School of Clinical Medicine, Zhejiang Chinese Medical University, Hangzhou, China
- Xinyue Wang
- The First Affiliated Hospital of Zhejiang Chinese Medical University (Zhejiang Provincial Hospital of Chinese Medicine), Hangzhou, China
- The First School of Clinical Medicine, Zhejiang Chinese Medical University, Hangzhou, China
- Zhichao Sun
- The First Affiliated Hospital of Zhejiang Chinese Medical University (Zhejiang Provincial Hospital of Chinese Medicine), Hangzhou, China.
- The First School of Clinical Medicine, Zhejiang Chinese Medical University, Hangzhou, China.
- Maosheng Xu
- The First Affiliated Hospital of Zhejiang Chinese Medical University (Zhejiang Provincial Hospital of Chinese Medicine), Hangzhou, China.
- The First School of Clinical Medicine, Zhejiang Chinese Medical University, Hangzhou, China.
- Chen Gao
- The First Affiliated Hospital of Zhejiang Chinese Medical University (Zhejiang Provincial Hospital of Chinese Medicine), Hangzhou, China.
- The First School of Clinical Medicine, Zhejiang Chinese Medical University, Hangzhou, China.
2
Sang AY, Wang X, Paxton L. Technological Advancements in Augmented, Mixed, and Virtual Reality Technologies for Surgery: A Systematic Review. Cureus 2024; 16:e76428. [PMID: 39867005; PMCID: PMC11763273; DOI: 10.7759/cureus.76428]
Abstract
Recent advancements in artificial intelligence (AI) have shown significant potential in the medical field, although many applications are still in the research phase. This paper provides a comprehensive review of advancements in augmented reality (AR), mixed reality (MR), and virtual reality (VR) for surgical applications from 2019 to 2024, with the aim of accelerating the transition of AI from the research phase to the clinical phase. This paper also provides an overview of proposed databases for further use in extended reality (XR), which encompasses AR, MR, and VR, as well as a summary of typical research applications involving XR in surgical practice. Additionally, this paper concludes by discussing challenges and proposed solutions for the application of XR in the medical field. Although the areas of focus and specific implementations vary among AR, MR, and VR, current trends in XR focus mainly on reducing workload and minimizing surgical errors through navigation, training, and machine learning-based visualization. Analysis of these trends shows that AR and MR have greater advantages for intraoperative surgical functions, whereas VR is largely limited to preoperative training and surgical preparation. VR also faces additional limitations, and its share of XR research has declined since the earliest applications of XR, a trend that will likely continue with further development. Nonetheless, with increased access to technology and the ability to overcome the black box problem, XR applications in medicine and surgery will increase, providing further accuracy and precision while reducing risk and workload.
Affiliation(s)
- Ashley Y Sang
- Biomedical Engineering, Miramonte High School, Orinda, USA
- Xinyao Wang
- Biomedical Engineering, The Harker School, San Jose, USA
- Lamont Paxton
- Private Practice, General Vascular Surgery Medical Group, Inc., San Leandro, USA
3
Wang J, Liu G, Zhou C, Cui X, Wang W, Wang J, Huang Y, Jiang J, Wang Z, Tang Z, Zhang A, Cui D. Application of artificial intelligence in cancer diagnosis and tumor nanomedicine. Nanoscale 2024; 16:14213-14246. [PMID: 39021117; DOI: 10.1039/d4nr01832j]
Abstract
Cancer is a major health concern due to its high incidence and mortality rates. Advances in cancer research, particularly in artificial intelligence (AI) and deep learning, have shown significant progress. The swift evolution of AI in healthcare, especially in tools like computer-aided diagnosis, has the potential to revolutionize early cancer detection. This technology offers improved speed, accuracy, and sensitivity, bringing a transformative impact on cancer diagnosis, treatment, and management. This paper provides a concise overview of the application of artificial intelligence in the realms of medicine and nanomedicine, with a specific emphasis on the significance and challenges associated with cancer diagnosis. It explores the pivotal role of AI in cancer diagnosis, leveraging structured, unstructured, and multimodal fusion data. Additionally, the article delves into the applications of AI in nanomedicine sensors and nano-oncology drugs. The fundamentals of deep learning and convolutional neural networks are clarified, underscoring their relevance to AI-driven cancer diagnosis. A comparative analysis is presented, highlighting the accuracy and efficiency of traditional methods juxtaposed with AI-based approaches. The discussion not only assesses the current state of AI in cancer diagnosis but also delves into the challenges faced by AI in this context. Furthermore, the article envisions the future development direction and potential application of artificial intelligence in cancer diagnosis, offering a hopeful prospect for enhanced cancer detection and improved patient prognosis.
Affiliation(s)
- Junhao Wang
- School of Electronic Information and Electrical Engineering, Shanghai Jiao Tong University, Shanghai, China.
- Guan Liu
- School of Electronic Information and Electrical Engineering, Shanghai Jiao Tong University, Shanghai, China.
- Cheng Zhou
- School of Electronic Information and Electrical Engineering, Shanghai Jiao Tong University, Shanghai, China.
- Xinyuan Cui
- Imaging Department of Rui Jin Hospital, Medical School of Shanghai Jiao Tong University, Shanghai, China
- Wei Wang
- School of Electronic Information and Electrical Engineering, Shanghai Jiao Tong University, Shanghai, China.
- Jiulin Wang
- School of Electronic Information and Electrical Engineering, Shanghai Jiao Tong University, Shanghai, China.
- Yixin Huang
- School of Electronic Information and Electrical Engineering, Shanghai Jiao Tong University, Shanghai, China.
- Jinlei Jiang
- School of Electronic Information and Electrical Engineering, Shanghai Jiao Tong University, Shanghai, China.
- Zhitao Wang
- School of Electronic Information and Electrical Engineering, Shanghai Jiao Tong University, Shanghai, China.
- Zengyi Tang
- School of Electronic Information and Electrical Engineering, Shanghai Jiao Tong University, Shanghai, China.
- Amin Zhang
- Department of Food Science & Technology, School of Agriculture & Biology, Shanghai Jiao Tong University, Shanghai, China.
- Daxiang Cui
- School of Electronic Information and Electrical Engineering, Shanghai Jiao Tong University, Shanghai, China.
- School of Medicine, Henan University, Henan, China
4
Tandon R, Agrawal S, Rathore NPS, Mishra AK, Jain SK. A systematic review on deep learning-based automated cancer diagnosis models. J Cell Mol Med 2024; 28:e18144. [PMID: 38426930; PMCID: PMC10906380; DOI: 10.1111/jcmm.18144]
Abstract
Deep learning is gaining importance due to its wide range of applications. Many researchers have utilized deep learning (DL) models for the automated diagnosis of cancer patients, and this paper provides a systematic review of such models. Initially, various DL models for cancer diagnosis are presented. Five major categories of cancer are considered (breast, lung, liver, brain, and cervical) because these cancers have a very high incidence and mortality rate. A comparative analysis of different types of DL models for early-stage cancer diagnosis is drawn from the latest research articles, published from 2016 to 2022. This comprehensive comparison shows that most researchers achieved appreciable accuracy with convolutional neural network models, typically by using pretrained models for automated diagnosis. Various shortcomings of the existing DL-based automated cancer diagnosis models are also presented. Finally, future directions are discussed to facilitate further research on the automated diagnosis of cancer patients.
Affiliation(s)
- Abhinava K. Mishra
- Molecular, Cellular and Developmental Biology Department, University of California Santa Barbara, Santa Barbara, California, USA
5
Haque AU, Ghani S, Saeed M, Schloer H. Pneumonia classification: A limited data approach for global understanding. Heliyon 2024; 10:e26177. [PMID: 38390159; PMCID: PMC10881372; DOI: 10.1016/j.heliyon.2024.e26177]
Abstract
As the human race has advanced, so too have the ailments that afflict it. Diseases such as pneumonia, once considered to be basic flu or allergies, have evolved into more severe forms, including SARS and COVID-19, presenting significant risks to people worldwide. In our study, we focused on categorizing pneumonia-related inflammation in chest X-rays (CXR) using a relatively small dataset. Our approach encompasses a comprehensive view, addressing every potential area of inflammation in the CXR. We employed enhanced class activation maps (mCAM) to meet the clinical criteria for classification rationale. Our model incorporates capsule network clusters (CNsC), which aid in learning different aspects such as the geometry, orientation, and position of the inflammation seen in the CXR. The CNsC rapidly interpret various perspectives in a single CXR without needing image augmentation, a common necessity in existing detection models. This approach significantly cuts down on training and evaluation durations. We conducted thorough testing using the RSNA pneumonia dataset of CXR images, achieving accuracy and recall rates as high as 98.3% and 99.5%, respectively, in our final tests. Additionally, we observed encouraging outcomes when applying our trained model to standard X-ray images obtained from medical clinics.
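The class-activation-map idea mentioned above (the authors' enhanced mCAM is not specified here) can be illustrated with a plain CAM on a standard classifier; the sketch below uses a torchvision ResNet-18 purely as a stand-in backbone with random input, so all names, shapes, and the two-class head are illustrative assumptions rather than the paper's model.

```python
import torch
import torch.nn.functional as F
from torchvision import models

# Stand-in backbone and a hypothetical single-image batch (random values).
model = models.resnet18(weights=None, num_classes=2).eval()
x = torch.randn(1, 3, 224, 224)

feats = {}
def hook(_module, _inputs, output):
    feats["maps"] = output          # final conv feature maps, shape (1, 512, 7, 7)

model.layer4.register_forward_hook(hook)
logits = model(x)
cls = logits.argmax(dim=1).item()

# CAM: class-specific weighted sum of the final feature maps, then upsample.
w = model.fc.weight[cls]                                   # (512,)
cam = (w[:, None, None] * feats["maps"][0]).sum(dim=0)
cam = F.relu(cam)
cam = (cam - cam.min()) / (cam.max() - cam.min() + 1e-8)   # normalize to [0, 1]
cam = F.interpolate(cam[None, None], size=x.shape[-2:], mode="bilinear")[0, 0]
print(cam.shape)   # torch.Size([224, 224]) heatmap over the input
```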
Affiliation(s)
- Anwar Ul Haque
- SMCS, Institute of Business Administration Karachi, Pakistan
- Sayeed Ghani
- SMCS, Institute of Business Administration Karachi, Pakistan
- Muhammad Saeed
- Department of Computer Science, University of Karachi, Pakistan
6
Siddiqui EA, Chaurasia V, Shandilya M. Classification of lung cancer computed tomography images using a 3-dimensional deep convolutional neural network with multi-layer filter. J Cancer Res Clin Oncol 2023; 149:11279-11294. [PMID: 37368121; DOI: 10.1007/s00432-023-04992-9]
Abstract
Lung cancer creates pulmonary nodules in the patient's lung, which may be diagnosed early using computer-aided diagnostics. A novel automated pulmonary nodule diagnosis technique using three-dimensional deep convolutional neural networks and a multi-layered filter is presented in this paper. Volumetric computed tomographic images are employed for the suggested automated diagnosis of lung nodules. The proposed approach generates three-dimensional feature layers, which retain the temporal links between adjacent slices of computed tomographic images. The use of several activation functions at different levels of the proposed network results in richer feature extraction and efficient classification. The suggested approach classifies volumetric lung computed tomography images into malignant and benign categories. The suggested technique's performance is evaluated using three datasets commonly used in the domain: LUNA16, LIDC-IDRI, and TCIA. The proposed method outperforms the state of the art in terms of accuracy, sensitivity, specificity, F1 score, false-positive rate, false-negative rate, and error rate.
Affiliation(s)
- Madhu Shandilya
- Maulana Azad National Institute of Technology, Bhopal, 462003, India
7
Javed MA, Bin Liaqat H, Meraj T, Alotaibi A, Alshammari M. Identification and Classification of Lungs Focal Opacity Using CNN Segmentation and Optimal Feature Selection. Comput Intell Neurosci 2023; 2023:6357252. [PMID: 37538561; PMCID: PMC10396675; DOI: 10.1155/2023/6357252]
Abstract
Lung cancer is one of the deadliest cancers around the world, with a high mortality rate in comparison to other cancers. A lung cancer patient's survival probability in the late stages is very low; however, if the disease can be detected early, the survival rate can be improved. Diagnosing lung cancer early is a complicated task because lung nodules are visually similar to the trachea, vessels, and other surrounding tissues, which leads to misclassification of lung nodules. Therefore, correct identification and classification of nodules is required. Previous studies have used noisy features, which compromises the results. To address this problem, a predictive model is proposed to accurately detect and classify lung nodules. In the proposed framework, semantic segmentation is first performed to identify the nodules in images from the Lung Image Database Consortium (LIDC) dataset. Optimal features for classification, including histograms of oriented gradients (HOG), local binary patterns (LBP), and geometric features, are extracted after segmentation of the nodules. The results show that support vector machines performed better than other classifiers in identifying the nodules, achieving the highest accuracy of 97.8% with a sensitivity of 100%, specificity of 93%, and false positive rate of 6.7%.
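A minimal sketch of the feature-plus-classifier pipeline described above, assuming pre-cropped grayscale nodule patches; the HOG/LBP parameters and the synthetic data are placeholders, not the values used in the paper.

```python
import numpy as np
from skimage.feature import hog, local_binary_pattern
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

def extract_features(patch: np.ndarray) -> np.ndarray:
    """Concatenate HOG and uniform-LBP histogram features for one grayscale patch."""
    hog_feat = hog(patch, orientations=9, pixels_per_cell=(8, 8), cells_per_block=(2, 2))
    lbp = local_binary_pattern(patch, P=8, R=1, method="uniform")
    lbp_hist, _ = np.histogram(lbp, bins=10, range=(0, 10), density=True)
    return np.concatenate([hog_feat, lbp_hist])

# Synthetic stand-in patches (in practice: segmented nodule ROIs from LIDC CT slices).
rng = np.random.default_rng(0)
X = np.stack([extract_features(rng.random((64, 64))) for _ in range(200)])
y = rng.integers(0, 2, size=200)   # 0 = benign, 1 = malignant (placeholder labels)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
clf = SVC(kernel="rbf", C=1.0).fit(X_tr, y_tr)
print("toy accuracy:", accuracy_score(y_te, clf.predict(X_te)))
```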
Affiliation(s)
- Hannan Bin Liaqat
- Department of Information Technology, Division of Science and Technology University of Education, Township Campus Lahore, Lahore, Pakistan
- Talha Meraj
- Department of Computer Science, COMSATS University Islamabad—Wah Campus, Wah Cantt, Rawalpindi 47040, Pakistan
- Aziz Alotaibi
- Department of Computer Science, College of Computers and Information Technology, Taif University, P.O. Box 11099, Taif 21944, Saudi Arabia
- Majid Alshammari
- Department of Information Technology, College of Computers and Information Technology, Taif University, P.O. Box 11099, Taif 21944, Saudi Arabia
8
Niu C, Wang G. Unsupervised contrastive learning based transformer for lung nodule detection. Phys Med Biol 2022; 67. [PMID: 36113445; PMCID: PMC10040209; DOI: 10.1088/1361-6560/ac92ba]
Abstract
Objective. Early detection of lung nodules with computed tomography (CT) is critical for the longer survival of lung cancer patients and better quality of life. Computer-aided detection/diagnosis (CAD) has proven valuable as a second or concurrent reader in this context. However, accurate detection of lung nodules remains a challenge for such CAD systems and even radiologists due to not only the variability in size, location, and appearance of lung nodules but also the complexity of lung structures. This leads to a high false-positive rate with CAD, compromising its clinical efficacy. Approach. Motivated by recent computer vision techniques, here we present a self-supervised region-based 3D transformer model to identify lung nodules among a set of candidate regions. Specifically, a 3D vision transformer is developed that divides a CT volume into a sequence of non-overlapping cubes, extracts embedding features from each cube with an embedding layer, and analyzes all embedding features with a self-attention mechanism for the prediction. To effectively train the transformer model on a relatively small dataset, a region-based contrastive learning method is used to boost the performance by pre-training the 3D transformer with public CT images. Results. Our experiments show that the proposed method can significantly improve the performance of lung nodule screening in comparison with the commonly used 3D convolutional neural networks. Significance. This study demonstrates a promising direction for improving the performance of current CAD systems for lung nodule detection.
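A compact sketch of the cube-tokenization step described above: a CT sub-volume is split into non-overlapping cubes via a strided Conv3d embedding and passed through a standard transformer encoder. The cube size, layer sizes, and two-class head are illustrative assumptions; positional embeddings and the contrastive pre-training stage are omitted for brevity.

```python
import torch
import torch.nn as nn

class Cube3DTransformer(nn.Module):
    """Tokenize a 3D volume into non-overlapping cubes and classify with self-attention."""
    def __init__(self, cube=8, dim=128, depth=4, heads=4, num_classes=2):
        super().__init__()
        # Conv3d with kernel = stride = cube acts as a non-overlapping cube embedding layer.
        self.embed = nn.Conv3d(1, dim, kernel_size=cube, stride=cube)
        self.cls_token = nn.Parameter(torch.zeros(1, 1, dim))
        layer = nn.TransformerEncoderLayer(d_model=dim, nhead=heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=depth)
        self.head = nn.Linear(dim, num_classes)

    def forward(self, x):                                     # x: (B, 1, D, H, W)
        tokens = self.embed(x).flatten(2).transpose(1, 2)     # (B, N_cubes, dim)
        cls = self.cls_token.expand(x.size(0), -1, -1)
        z = self.encoder(torch.cat([cls, tokens], dim=1))
        return self.head(z[:, 0])                             # prediction from the class token

model = Cube3DTransformer()
logits = model(torch.randn(2, 1, 32, 64, 64))   # hypothetical candidate sub-volumes
print(logits.shape)                              # torch.Size([2, 2])
```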
Affiliation(s)
- Chuang Niu
- Biomedical Imaging Center, Department of Biomedical Engineering, Rensselaer Polytechnic Institute, Troy, New York, United States of America
- Ge Wang
- Biomedical Imaging Center, Department of Biomedical Engineering, Rensselaer Polytechnic Institute, Troy, New York, United States of America
9
Chen J, Li Y, Guo L, Zhou X, Zhu Y, He Q, Han H, Feng Q. Machine learning techniques for CT imaging diagnosis of novel coronavirus pneumonia: a review. Neural Comput Appl 2022; 36:1-19. [PMID: 36159188; PMCID: PMC9483435; DOI: 10.1007/s00521-022-07709-0]
Abstract
Since 2020, novel coronavirus pneumonia has been spreading rapidly around the world, bringing tremendous pressure on medical diagnosis and treatment in hospitals. Medical imaging methods, such as computed tomography (CT), play a crucial role in diagnosing and treating COVID-19. A large number of CT images (with large volume) are produced during CT-based medical diagnosis. In such a situation, diagnostic judgement of thousands of CT images by the human eye is inefficient and time-consuming. Recently, in order to improve diagnostic efficiency, machine learning technology has been widely used in computer-aided diagnosis and treatment systems based on CT imaging to help doctors perform accurate analysis and to provide them with effective diagnostic decision support. In this paper, we comprehensively review the machine learning methods frequently applied in CT imaging diagnosis of COVID-19 and discuss machine learning-based applications from various aspects, including image acquisition and pre-processing, image segmentation, quantitative analysis and diagnosis, and disease follow-up and prognosis. Moreover, we also discuss the limitations of current machine learning technology in the context of CT imaging computer-aided diagnosis.
Affiliation(s)
- Jingjing Chen
- Zhejiang University City College, Hangzhou, China
- Zhijiang College of Zhejiang University of Technology, Shaoxing, China
- Yixiao Li
- Faculty of Science, Zhejiang University of Technology, Hangzhou, China
- Lingling Guo
- College of Chemical Engineering, Zhejiang University of Technology, Hangzhou, China
- Xiaokang Zhou
- Faculty of Data Science, Shiga University, Hikone, Japan
- RIKEN Center for Advanced Intelligence Project, Tokyo, Japan
- Yihan Zhu
- College of Chemical Engineering, Zhejiang University of Technology, Hangzhou, China
- Qingfeng He
- School of Pharmacy, Fudan University, Shanghai, China
- Haijun Han
- School of Medicine, Zhejiang University City College, Hangzhou, China
- Qilong Feng
- College of Chemical Engineering, Zhejiang University of Technology, Hangzhou, China
10
Neural architecture search for pneumonia diagnosis from chest X-rays. Sci Rep 2022; 12:11309. [PMID: 35788644; PMCID: PMC9252574; DOI: 10.1038/s41598-022-15341-0]
Abstract
Pneumonia is one of the diseases that causes the most fatalities worldwide, especially in children. Recently, pneumonia-caused deaths have increased dramatically due to the novel coronavirus global pandemic. Chest X-ray (CXR) images are one of the most readily available and common imaging modalities for the detection and identification of pneumonia. However, the detection of pneumonia from chest radiography is a difficult task even for experienced radiologists. Artificial Intelligence (AI) based systems have great potential in assisting in quick and accurate diagnosis of pneumonia from chest X-rays. The aim of this study is to develop a Neural Architecture Search (NAS) method to find the best convolutional architecture capable of detecting pneumonia from chest X-rays. We propose a Learning by Teaching framework inspired by the teaching-driven learning methodology of humans, and conduct experiments on a pneumonia chest X-ray dataset with over 5000 images. Our proposed method yields an area under the ROC curve (AUC) of 97.6% for pneumonia detection, which improves upon previous NAS methods by 5.1% (absolute).
11
Tomassini S, Falcionelli N, Sernani P, Burattini L, Dragoni AF. Lung nodule diagnosis and cancer histology classification from computed tomography data by convolutional neural networks: A survey. Comput Biol Med 2022; 146:105691. [PMID: 35691714; DOI: 10.1016/j.compbiomed.2022.105691]
Abstract
Lung cancer is among the deadliest cancers. Besides lung nodule classification and diagnosis, developing non-invasive systems to classify lung cancer histological types/subtypes may help clinicians to make targeted treatment decisions in a timely manner, having a positive impact on patients' comfort and survival rate. As convolutional neural networks have proven to be responsible for the significant improvement of accuracy in lung cancer diagnosis, with this survey we intend to: show the contribution of convolutional neural networks not only in identifying malignant lung nodules but also in classifying lung cancer histological types/subtypes directly from computed tomography data; point out the strengths and weaknesses of slice-based and scan-based approaches employing convolutional neural networks; and highlight the challenges and prospective solutions to successfully apply convolutional neural networks for such classification tasks. To this aim, we conducted a comprehensive analysis of relevant Scopus-indexed studies involved in lung nodule diagnosis and cancer histology classification up to January 2022, dividing the investigation into convolutional neural network-based approaches fed with planar or volumetric computed tomography data. Although the application of convolutional neural networks in lung nodule diagnosis and cancer histology classification is a valid strategy, some challenges remain, mainly the lack of publicly accessible annotated data, together with the lack of reproducibility and clinical interpretability. We believe that this survey will be helpful for future studies involved in lung nodule diagnosis and cancer histology classification prior to lung biopsy by means of convolutional neural networks.
Affiliation(s)
- Selene Tomassini
- Department of Information Engineering, Engineering Faculty, Università Politecnica delle Marche, Ancona, Italy.
- Nicola Falcionelli
- Department of Information Engineering, Engineering Faculty, Università Politecnica delle Marche, Ancona, Italy.
- Paolo Sernani
- Department of Information Engineering, Engineering Faculty, Università Politecnica delle Marche, Ancona, Italy.
- Laura Burattini
- Department of Information Engineering, Engineering Faculty, Università Politecnica delle Marche, Ancona, Italy.
- Aldo Franco Dragoni
- Department of Information Engineering, Engineering Faculty, Università Politecnica delle Marche, Ancona, Italy.
12
Guail AAA, Jinsong G, Oloulade BM, Al-Sabri R. A Principal Neighborhood Aggregation-Based Graph Convolutional Network for Pneumonia Detection. Sensors (Basel) 2022; 22:3049. [PMID: 35459035; PMCID: PMC9026930; DOI: 10.3390/s22083049]
Abstract
Pneumonia is one of the main causes of child mortality in the world and has been reported by the World Health Organization (WHO) to be the cause of one-third of child deaths in India. Designing an automated classification system to detect pneumonia has therefore become a worthwhile research topic. Numerous deep learning models have attempted to detect pneumonia by applying convolutional neural networks (CNNs) to X-ray radiographs, which are essentially images, and have achieved great performance. However, these models fail to capture higher-order feature information of all objects in the X-ray images, because the topology of the X-ray images' dimensions does not always exhibit spatially regular locality properties, which makes defining a spatial kernel filter for X-ray images non-trivial. This paper proposes a principal neighborhood aggregation-based graph convolutional network (PNA-GCN) for pneumonia detection. In PNA-GCN, we propose a new graph-based feature construction that utilizes the transfer learning technique to extract features and then constructs the graph from images. We then propose a graph convolutional network with principal neighborhood aggregation, integrating multiple aggregation functions with degree scalers in a single layer to capture more effective information and exploit the underlying properties of the graph structure. The experimental results show that PNA-GCN performs best in the pneumonia detection task on a real-world dataset against the state-of-the-art baseline methods.
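A minimal dense-adjacency sketch of the principal-neighborhood-aggregation idea: several aggregators (mean, sum, max) combined with degree scalers inside a single graph layer. It is written in plain PyTorch for clarity rather than with graph-specific tooling, and all dimensions and the toy graph are illustrative, not the paper's PNA-GCN.

```python
import torch
import torch.nn as nn

class MultiAggLayer(nn.Module):
    """One graph layer combining mean/sum/max aggregation with a degree scaler."""
    def __init__(self, in_dim, out_dim):
        super().__init__()
        # 3 aggregators x 2 scalers (identity, log-degree) + the node's own features.
        self.lin = nn.Linear(in_dim * (3 * 2 + 1), out_dim)

    def forward(self, x, adj):                    # x: (N, F), adj: (N, N) with 0/1 entries
        deg = adj.sum(dim=1, keepdim=True).clamp(min=1)            # node degrees
        mean_agg = adj @ x / deg
        sum_agg = adj @ x
        # Max over neighbors: mask non-neighbors with a large negative value.
        masked = torch.where(adj.unsqueeze(-1).bool(),
                             x.unsqueeze(0),
                             torch.full_like(x, -1e9).unsqueeze(0))
        max_agg = masked.max(dim=1).values
        aggs = [mean_agg, sum_agg, max_agg]
        scale = torch.log(deg + 1)                                  # simple degree scaler
        feats = [x] + aggs + [a * scale for a in aggs]
        return torch.relu(self.lin(torch.cat(feats, dim=-1)))

# Toy graph whose node features stand in for image-derived features.
x = torch.randn(6, 16)
adj = (torch.rand(6, 6) > 0.5).float(); adj.fill_diagonal_(0)
layer = MultiAggLayer(16, 32)
print(layer(x, adj).shape)   # torch.Size([6, 32])
```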
Affiliation(s)
- Gui Jinsong
- School of Computer Science and Engineering, Central South University, Changsha 410083, China; (A.A.A.G.); (B.M.O.); (R.A.-S.)
13
COVID-19 Detection in Chest X-ray Images Using a New Channel Boosted CNN. Diagnostics (Basel) 2022; 12:267. [PMID: 35204358; PMCID: PMC8871483; DOI: 10.3390/diagnostics12020267]
Abstract
COVID-19 is a respiratory illness that has affected a large population worldwide and continues to have devastating consequences. It is imperative to detect COVID-19 at the earliest opportunity to limit the span of infection. In this work, we developed a new CNN architecture STM-RENet to interpret the radiographic patterns from X-ray images. The proposed STM-RENet is a block-based CNN that employs the idea of split–transform–merge in a new way. In this regard, we have proposed a new convolutional block STM that implements the region and edge-based operations separately, as well as jointly. The systematic use of region and edge implementations in combination with convolutional operations helps in exploring region homogeneity, intensity inhomogeneity, and boundary-defining features. The learning capacity of STM-RENet is further enhanced by developing a new CB-STM-RENet that exploits channel boosting and learns textural variations to effectively screen the X-ray images of COVID-19 infection. The idea of channel boosting is exploited by generating auxiliary channels from the two additional CNNs using Transfer Learning, which are then concatenated to the original channels of the proposed STM-RENet. A significant performance improvement is shown by the proposed CB-STM-RENet in comparison to the standard CNNs on three datasets, especially on the stringent CoV-NonCoV-15k dataset. The good detection rate (97%), accuracy (96.53%), and reasonable F-score (95%) of the proposed technique suggest that it can be adapted to detect COVID-19 infected patients.
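One generic reading of the split-transform-merge idea behind the STM block described above: the input is processed by parallel branches, one emphasizing region smoothing (average pooling) and one emphasizing edges and texture (convolution), and the branch outputs are merged by concatenation. This is a hedged re-interpretation for illustration, not the published STM-RENet block definition.

```python
import torch
import torch.nn as nn

class SplitTransformMergeBlock(nn.Module):
    """Parallel region/edge branches whose outputs are merged by concatenation."""
    def __init__(self, channels):
        super().__init__()
        # Region-oriented branch: local average pooling smooths homogeneous regions.
        self.region = nn.Sequential(
            nn.AvgPool2d(kernel_size=3, stride=1, padding=1),
            nn.Conv2d(channels, channels, kernel_size=1),
            nn.ReLU(inplace=True),
        )
        # Edge-oriented branch: a 3x3 convolution responds to boundaries and texture.
        self.edge = nn.Sequential(
            nn.Conv2d(channels, channels, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
        )
        # Merge: concatenate the branches, fuse back to the original width, add a skip.
        self.merge = nn.Conv2d(2 * channels, channels, kernel_size=1)

    def forward(self, x):
        merged = self.merge(torch.cat([self.region(x), self.edge(x)], dim=1))
        return torch.relu(merged + x)

block = SplitTransformMergeBlock(32)
print(block(torch.randn(1, 32, 56, 56)).shape)   # torch.Size([1, 32, 56, 56])
```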
14
Hireš M, Gazda M, Drotár P, Pah ND, Motin MA, Kumar DK. Convolutional neural network ensemble for Parkinson's disease detection from voice recordings. Comput Biol Med 2021; 141:105021. [PMID: 34799077; DOI: 10.1016/j.compbiomed.2021.105021]
Abstract
The computerized detection of Parkinson's disease (PD) will facilitate population screening and frequent monitoring and provide a more objective measure of symptoms, benefiting both patients and healthcare providers. Dysarthria is an early symptom of the disease and examining it for computerized diagnosis and monitoring has been proposed. Deep learning-based approaches have advantages for such applications because they do not require manual feature extraction, and while this approach has achieved excellent results in speech recognition, its utilization in the detection of pathological voices is limited. In this work, we present an ensemble of convolutional neural networks (CNNs) for the detection of PD from the voice recordings of 50 healthy people and 50 people with PD obtained from PC-GITA, a publicly available database. We propose a multiple-fine-tuning method to train the base CNN. This approach reduces the semantical gap between the source task that has been used for network pretraining and the target task by expanding the training process by including training on another dataset. Training and testing were performed for each vowel separately, and a 10-fold validation was performed to test the models. The performance was measured by using accuracy, sensitivity, specificity and area under the ROC curve (AUC). The results show that this approach was able to distinguish between the voices of people with PD and those of healthy people for all vowels. While there were small differences between the different vowels, the best performance was when /a/ was considered; we achieved 99% accuracy, 86.2% sensitivity, 93.3% specificity and 89.6% AUC. This shows that the method has potential for use in clinical practice for the screening, diagnosis and monitoring of PD, with the advantage that vowel-based voice recordings can be performed online without requiring additional hardware.
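The multiple-fine-tuning idea above (pretraining, then fine-tuning on an intermediate dataset before the target PD recordings) can be sketched as two successive training passes over the same backbone; the ResNet-18 backbone, the random spectrogram-like inputs, and the loaders are placeholder assumptions, not the authors' exact CNN or data.

```python
import torch
import torch.nn as nn
from torch.utils.data import DataLoader, TensorDataset
from torchvision import models

def fine_tune(model, loader, epochs=1, lr=1e-4):
    """One fine-tuning stage: train all weights on the given dataset."""
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    loss_fn = nn.CrossEntropyLoss()
    model.train()
    for _ in range(epochs):
        for xb, yb in loader:
            opt.zero_grad()
            loss = loss_fn(model(xb), yb)
            loss.backward()
            opt.step()
    return model

# Backbone (pretraining omitted here); 2-class head for healthy vs PD (placeholder).
model = models.resnet18(weights=None)
model.fc = nn.Linear(model.fc.in_features, 2)

# Stage 1: intermediate dataset; Stage 2: target PC-GITA-like data (both synthetic here).
fake = lambda n: TensorDataset(torch.randn(n, 3, 224, 224), torch.randint(0, 2, (n,)))
intermediate_loader = DataLoader(fake(16), batch_size=8)
target_loader = DataLoader(fake(16), batch_size=8)

model = fine_tune(model, intermediate_loader)   # narrows the source/target semantic gap
model = fine_tune(model, target_loader)         # final adaptation to the target task
```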
Affiliation(s)
- Máté Hireš
- Intelligent Information Systems Lab, Technical University of Košice, Letná 9, 42001, Košice, Slovakia
- Matej Gazda
- Intelligent Information Systems Lab, Technical University of Košice, Letná 9, 42001, Košice, Slovakia
- Peter Drotár
- Intelligent Information Systems Lab, Technical University of Košice, Letná 9, 42001, Košice, Slovakia.
15
Joshi A, Sivaswamy J, Joshi GD. Lung nodule malignancy classification with weakly supervised explanation generation. J Med Imaging (Bellingham) 2021; 8:044502. [PMID: 34423071; DOI: 10.1117/1.jmi.8.4.044502]
Abstract
Purpose: Explainable AI aims to build systems that not only give high performance but also are able to provide insights that drive the decision making. However, deriving this explanation is often dependent on fully annotated (class label and local annotation) data, which are not readily available in the medical domain. Approach: This paper addresses the above-mentioned aspects and presents an innovative approach to classifying a lung nodule in a CT volume as malignant or benign, and generating a morphologically meaningful explanation for the decision in the form of attributes such as nodule margin, sphericity, and spiculation. A deep learning architecture that is trained using a multi-phase training regime is proposed. The nodule class label (benign/malignant) is learned with full supervision and is guided by semantic attributes that are learned in a weakly supervised manner. Results: Results of an extensive evaluation of the proposed system on the LIDC-IDRI dataset show good performance compared with state-of-the-art, fully supervised methods. The proposed model is able to label nodules (after full supervision) with an accuracy of 89.1% and an area under curve of 0.91 and to provide eight attributes scores as an explanation, which is learned from a much smaller training set. The proposed system's potential to be integrated with a sub-optimal nodule detection system was also tested, and our system handled 95% of false positive or random regions in the input well by labeling them as benign, which underscores its robustness. Conclusions: The proposed approach offers a way to address computer-aided diagnosis system design under the constraint of sparse availability of fully annotated images.
Affiliation(s)
- Aniket Joshi
- International Institute of Information Technology, Hyderabad, India
16
Jia Z, Luo Y, Wang D, Dinh QN, Lin S, Sharma A, Block EM, Yang M, Gu T, Pearlstein AJ, Yu H, Zhang B. Nondestructive multiplex detection of foodborne pathogens with background microflora and symbiosis using a paper chromogenic array and advanced neural network. Biosens Bioelectron 2021; 183:113209. [PMID: 33836430; DOI: 10.1016/j.bios.2021.113209]
Abstract
We have developed an inexpensive, standardized paper chromogenic array (PCA) integrated with a machine learning approach to accurately identify single pathogens (Listeria monocytogenes, Salmonella Enteritidis, or Escherichia coli O157:H7) or multiple pathogens (either in multiple monocultures, or in a single cocktail culture), in the presence of background microflora on food. Cantaloupe, a commodity with significant volatile organic compound (VOC) emission and large diverse populations of background microflora, was used as the model food. The PCA was fabricated from a paper microarray via photolithography and paper microfluidics, into which 22 chromogenic dye spots were infused and to which three red/green/blue color-standard dots were taped. When exposed to VOCs emitted by pathogens of interest, dye spots exhibited distinguishable color changes and pattern shifts, which were automatically segmented and digitized into a ΔR/ΔG/ΔB database. We developed an advanced deep feedforward neural network with a learning rate scheduler, L2 regularization, and shortcut connections. After training on the ΔR/ΔG/ΔB database, the network demonstrated excellent performance in identifying pathogens in single monocultures, multiple monocultures, and in cocktail culture, and in distinguishing them from the background signal on cantaloupe, providing accuracy of up to 93% and 91% under ambient and refrigerated conditions, respectively. With its combination of speed, reliability, portability, and low cost, this nondestructive approach holds great potential to significantly advance culture-free pathogen detection and identification on food, and is readily extendable to other food commodities with complex microflora.
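The network ingredients listed above (a deep feedforward network with shortcut connections, L2 regularization, and a learning-rate scheduler) can be assembled as in the sketch below; the 22 x 3 ΔR/ΔG/ΔB input size follows the dye-spot description, but the layer widths, schedule, class count, and data are illustrative assumptions.

```python
import torch
import torch.nn as nn

class ResidualBlock(nn.Module):
    """Two dense layers with a shortcut (skip) connection."""
    def __init__(self, dim):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(dim, dim), nn.ReLU(), nn.Linear(dim, dim))
    def forward(self, x):
        return torch.relu(x + self.net(x))

# 22 dye spots x 3 color channels (delta R/G/B) -> pathogen class (placeholder: 4 classes).
model = nn.Sequential(
    nn.Linear(22 * 3, 128), nn.ReLU(),
    ResidualBlock(128), ResidualBlock(128),
    nn.Linear(128, 4),
)

# L2 regularization via weight_decay; step decay as a simple learning-rate scheduler.
opt = torch.optim.Adam(model.parameters(), lr=1e-3, weight_decay=1e-4)
sched = torch.optim.lr_scheduler.StepLR(opt, step_size=10, gamma=0.5)
loss_fn = nn.CrossEntropyLoss()

x, y = torch.randn(32, 66), torch.randint(0, 4, (32,))   # synthetic stand-in batch
for epoch in range(3):
    opt.zero_grad()
    loss = loss_fn(model(x), y)
    loss.backward()
    opt.step()
    sched.step()   # decays the learning rate every `step_size` epochs
```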
Affiliation(s)
- Zhen Jia
- Department of Biomedical and Nutritional Sciences, University of Massachusetts, Lowell, 01854, MA, USA
- Yaguang Luo
- Environmental Microbial and Food Safety Lab and Food Quality Lab, U.S. Department of Agriculture, Agricultural Research Service, Beltsville, 20705, MD, USA
- Dayang Wang
- Department of Electrical and Computer Engineering, University of Massachusetts, Lowell, 01854, MA, USA
- Quynh N Dinh
- Department of Biomedical and Nutritional Sciences, University of Massachusetts, Lowell, 01854, MA, USA
- Sophia Lin
- Department of Biomedical and Nutritional Sciences, University of Massachusetts, Lowell, 01854, MA, USA
- Arnav Sharma
- Department of Physiology and Neurobiology, University of Connecticut, Storrs, 06269, CT, USA
- Ethan M Block
- Department of Biomedical and Nutritional Sciences, University of Massachusetts, Lowell, 01854, MA, USA
- Manyun Yang
- Department of Biomedical and Nutritional Sciences, University of Massachusetts, Lowell, 01854, MA, USA
- Tingting Gu
- Department of Biomedical and Nutritional Sciences, University of Massachusetts, Lowell, 01854, MA, USA
- Arne J Pearlstein
- Department of Mechanical Science and Engineering, University of Illinois at Urbana-Champaign, Urbana, 61801, IL, USA
- Hengyong Yu
- Department of Electrical and Computer Engineering, University of Massachusetts, Lowell, 01854, MA, USA
- Boce Zhang
- Department of Biomedical and Nutritional Sciences, University of Massachusetts, Lowell, 01854, MA, USA.
17
Kumar N, Gupta M, Gupta D, Tiwari S. Novel deep transfer learning model for COVID-19 patient detection using X-ray chest images. J Ambient Intell Humaniz Comput 2021; 14:469-478. [PMID: 34025813; PMCID: PMC8123104; DOI: 10.1007/s12652-021-03306-6]
Abstract
Around the world, more than 250 countries are affected by the COVID-19 pandemic, which is caused by the severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2). This outbreak can be controlled only by diagnosing COVID-19 infection in its early stages. Radiographic images have been found to be well suited for the rapid diagnosis of COVID-19 infection. This paper proposes an ensemble model that detects COVID-19 infection at an early stage using chest X-ray images. Transfer learning enables the reuse of pretrained models, and ensemble learning integrates various transfer learning models, i.e., EfficientNet, GoogLeNet, and XceptionNet, to design the proposed model. These models can categorize patients as COVID-19 (+), pneumonia (+), tuberculosis (+), or healthy. The proposed model enhances the classifier's generalization ability for both binary and multiclass COVID-19 datasets. Two popular datasets are used to evaluate the performance of the proposed ensemble model. The comparative analysis validates that the proposed model outperforms the state-of-the-art models in terms of various performance metrics.
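A minimal sketch of combining several pretrained backbones into one ensemble by averaging their class probabilities. The EfficientNet/GoogLeNet/XceptionNet members named above are represented here by torchvision stand-ins (DenseNet and ResNet replace GoogLeNet and Xception, which torchvision does not ship), and the four-class head mirrors the COVID/pneumonia/TB/healthy split in the abstract.

```python
import torch
import torch.nn as nn
from torchvision import models

NUM_CLASSES = 4  # COVID-19 / pneumonia / tuberculosis / healthy, per the abstract

def efficientnet_member():
    m = models.efficientnet_b0(weights=None)
    m.classifier[-1] = nn.Linear(m.classifier[-1].in_features, NUM_CLASSES)
    return m.eval()

def densenet_member():   # stand-in for the GoogLeNet member
    m = models.densenet121(weights=None)
    m.classifier = nn.Linear(m.classifier.in_features, NUM_CLASSES)
    return m.eval()

def resnet_member():     # stand-in for the Xception member
    m = models.resnet50(weights=None)
    m.fc = nn.Linear(m.fc.in_features, NUM_CLASSES)
    return m.eval()

ensemble = [efficientnet_member(), densenet_member(), resnet_member()]

@torch.no_grad()
def ensemble_predict(x):
    """Unweighted average of softmax outputs across the member networks."""
    probs = torch.stack([torch.softmax(m(x), dim=1) for m in ensemble]).mean(dim=0)
    return probs.argmax(dim=1), probs

preds, probs = ensemble_predict(torch.randn(2, 3, 224, 224))   # hypothetical CXR batch
print(preds, probs.shape)
```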
Affiliation(s)
- N. Kumar
- Department of Computer Science & Engineering, Maharaja Surajmal Institute of Technology, C-4, Janakpuri, New Delhi, India
- M. Gupta
- Department of Computer Science & Engineering, Moradabad Institute of Technology, Moradabad, India
- D. Gupta
- Chitkara University Institute of Engineering and Technology, Chitkara University, Rajpura, Punjab, India
- S. Tiwari
- Department of Computer Science and Engineering, Thapar Institute of Engineering and Technology, Patiala, India
18
Li J, Zhang X, Zhou X. ALBERT-Based Self-Ensemble Model With Semisupervised Learning and Data Augmentation for Clinical Semantic Textual Similarity Calculation: Algorithm Validation Study. JMIR Med Inform 2021; 9:e23086. [PMID: 33480858; PMCID: PMC7864778; DOI: 10.2196/23086]
Abstract
Background In recent years, with increases in the amount of information available and the importance of information screening, increased attention has been paid to the calculation of textual semantic similarity. In the field of medicine, electronic medical records and medical research documents have become important data resources for clinical research, and medical textual semantic similarity calculation has become an urgent problem to be solved. Objective This research aims to solve 2 problems: (1) small medical data sets lead to insufficient learning and understanding by the models, and (2) information lost during long-distance propagation leaves the models unable to grasp key information. Methods This paper combines a text data augmentation method and a self-ensemble ALBERT model under semisupervised learning to perform clinical textual semantic similarity calculations. Results Compared with the methods in the 2019 National Natural Language Processing Clinical Challenges Open Health Natural Language Processing shared task Track on Clinical Semantic Textual Similarity, our method surpasses the best result by 2 percentage points and achieves a Pearson correlation coefficient of 0.92. Conclusions When the size of a medical data set is small, data augmentation can increase the size of the data set, and improved semisupervised learning can boost the learning efficiency of the model. Additionally, self-ensemble methods improve model performance. Our method had excellent performance and has great potential to improve related medical problems.
Affiliation(s)
- Junyi Li
- School of Information Science and Engineering, Yunnan University, Kunming, China
- Xuejie Zhang
- School of Information Science and Engineering, Yunnan University, Kunming, China
- Xiaobing Zhou
- School of Information Science and Engineering, Yunnan University, Kunming, China
19
Multiscale CNN with compound fusions for false positive reduction in lung nodule detection. Artif Intell Med 2021; 113:102017. [PMID: 33685584; DOI: 10.1016/j.artmed.2021.102017]
Abstract
Pulmonary nodules are often benign at an early stage, but they can become malignant and metastasize to other locations in later stages. The morphological characteristics of these nodules vary widely in terms of size, shape, and texture. Co-existing anatomical structures such as lung walls and blood vessels surround these nodules, resulting in complex contextual information. As a result, early diagnosis with Computer-Aided Diagnosis (CAD) systems to enable decisive intervention faces serious challenges, especially at low false-positive rates. In this paper, we propose a new Convolutional Neural Network (CNN) architecture called Multiscale CNN with Compound Fusions (MCNN-CF) for this purpose, which uses multiscale 3D patches as inputs and fuses intermediate features at two different depths of the network in two diverse fashions. The network is trained with a new iterative training procedure adapted to circumvent the class-imbalance problem and obtained a Competitive Performance Metric (CPM) score of 0.948 when tested on the LUNA16 dataset. Experimental results illustrate the robustness of the proposed system, which increases the confidence of the prediction probabilities in detecting a wide variety of nodules.
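A toy sketch of the multiscale idea described above: two 3D patch sizes around the same candidate are processed by separate convolutional streams whose features are fused before classification. The patch sizes, channel counts, and the single fusion point are simplifications of the paper's two compound fusions.

```python
import torch
import torch.nn as nn

class MultiScale3DCNN(nn.Module):
    """Two 3D patch scales -> separate conv streams -> feature fusion -> nodule/non-nodule."""
    def __init__(self):
        super().__init__()
        def stream():
            return nn.Sequential(
                nn.Conv3d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool3d(2),
                nn.Conv3d(16, 32, 3, padding=1), nn.ReLU(),
                nn.AdaptiveAvgPool3d(1), nn.Flatten(),
            )
        self.small_stream = stream()   # e.g., a 16^3 patch centered on the candidate
        self.large_stream = stream()   # e.g., a 32^3 patch giving more context
        self.classifier = nn.Sequential(nn.Linear(64, 32), nn.ReLU(), nn.Linear(32, 2))

    def forward(self, small_patch, large_patch):
        fused = torch.cat([self.small_stream(small_patch),
                           self.large_stream(large_patch)], dim=1)
        return self.classifier(fused)

model = MultiScale3DCNN()
logits = model(torch.randn(4, 1, 16, 16, 16), torch.randn(4, 1, 32, 32, 32))
print(logits.shape)   # torch.Size([4, 2])
```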
20
Blanc D, Racine V, Khalil A, Deloche M, Broyelle JA, Hammouamri I, Sinitambirivoutin E, Fiammante M, Verdier E, Besson T, Sadate A, Lederlin M, Laurent F, Chassagnon G, Ferretti G, Diascorn Y, Brillet PY, Cassagnes L, Caramella C, Loubet A, Abassebay N, Cuingnet P, Ohana M, Behr J, Ginzac A, Veyssiere H, Durando X, Bousaïd I, Lassau N, Brehant J. Artificial intelligence solution to classify pulmonary nodules on CT. Diagn Interv Imaging 2020; 101:803-810. [PMID: 33168496; DOI: 10.1016/j.diii.2020.10.004]
Abstract
PURPOSE The purpose of this study was to create an algorithm to detect pulmonary nodules and classify them into two categories according to whether their volume is greater than 100 mm3, using machine learning and deep learning techniques. MATERIALS AND METHODS The dataset used to train the model was provided by the organization team of the SFR (French Radiological Society) Data Challenge 2019. An asynchronous, parallel three-stage pipeline was developed to process all the data (a data "pre-processing" stage, a "nodule detection" stage, and a "classifier" stage). Lung segmentation was achieved using a 3D U-NET algorithm, nodule detection using a 3D Retina-UNET, and the classifier stage using a support vector machine algorithm on selected features. Performance was assessed using the area under the receiver operating characteristic curve (AUROC). RESULTS The pipeline showed good performance for pathological nodule detection and patient diagnosis. With the preparation dataset, an AUROC of 0.9058 (95% confidence interval [CI]: 0.8746-0.9362) was obtained, yielding 87% accuracy (95% CI: 84.83%-91.03%) for the "nodule detection" stage, corresponding to 86% specificity (95% CI: 82%-92%) and 89% sensitivity (95% CI: 84.83%-91.03%). CONCLUSION A fully functional pipeline using a 3D U-NET, a 3D Retina-UNET, and a support vector machine classifier stage was developed, resulting in high capability for pulmonary nodule classification.
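The AUROC-with-confidence-interval reporting above can be reproduced generically with scikit-learn and a nonparametric bootstrap, as sketched below on synthetic scores; the resampling count, seed, and data are arbitrary placeholders.

```python
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(42)
y_true = rng.integers(0, 2, size=500)                            # placeholder labels
y_score = np.clip(y_true * 0.4 + rng.random(500) * 0.8, 0, 1)    # placeholder scores

auc = roc_auc_score(y_true, y_score)

# Nonparametric bootstrap for a 95% confidence interval on the AUROC.
boot = []
for _ in range(1000):
    idx = rng.integers(0, len(y_true), len(y_true))
    if len(np.unique(y_true[idx])) < 2:    # a resample needs both classes
        continue
    boot.append(roc_auc_score(y_true[idx], y_score[idx]))
lo, hi = np.percentile(boot, [2.5, 97.5])
print(f"AUROC = {auc:.3f} (95% CI: {lo:.3f}-{hi:.3f})")
```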
Affiliation(s)
- D Blanc
- QuantaCell, IRMB, Hôpital Saint-Eloi, 34090 Montpellier, France
- V Racine
- QuantaCell, IRMB, Hôpital Saint-Eloi, 34090 Montpellier, France
- A Khalil
- Department of Radiology, Neuroradiology unit, Assistance Publique-Hôpitaux de Paris, Hôpital Bichat Claude Bernard, 75018 Paris, France; Université de Paris, 75010, Paris, France
- M Deloche
- IBM Cognitive Systems Lab, 34000 Montpellier, France
- J-A Broyelle
- IBM Cognitive Systems Lab, 34000 Montpellier, France
- I Hammouamri
- IBM Cognitive Systems Lab, 34000 Montpellier, France
- M Fiammante
- IBM Cognitive Systems France, 92270 Bois-Colombes, France
- E Verdier
- IBM Cognitive Systems France, 92270 Bois-Colombes, France
- T Besson
- IBM Cognitive Systems France, 92270 Bois-Colombes, France
- A Sadate
- Department of Radiology and Medical Imaging, CHU Nîmes, University Montpellier, EA2415, 30029 Nîmes, France
- M Lederlin
- Department of Radiology, Hôpital Universitaire Pontchaillou, 35000 Rennes, France
- F Laurent
- Department of thoracic and cardiovascular Imaging, Respiratory Diseases Service, Respiratory Functional Exploration Service, Hôpital universitaire de Bordeaux, CIC 1401, 33600 Pessac, France
- G Chassagnon
- Department of Radiology, Hôpital Cochin, Assistance Publique-Hôpitaux de Paris, 75014, Paris, France & Université de Paris, 75006 Paris, France
- G Ferretti
- Department of Radiology and Medical Imaging, CHU Grenoble Alpes, 38700 Grenoble, France
- Y Diascorn
- Department of Radiology, Hôpital Universitaire Pasteur, Nice, France
- P-Y Brillet
- Inserm UMR 1272, Université Sorbonne Paris Nord, Assistance Publique-Hôpitaux de Paris, Department of Radiology, Hôpital Avicenne, 93430 Bobigny, France
- Lucie Cassagnes
- Department of radiology B, CHU Gabriel Montpied, 63003 Clermont-Ferrand, France
- C Caramella
- Department of Radiology, Institut Gustave Roussy, 94800 Villejuif, France
- A Loubet
- Department of Neuroradiology, Hôpital Gui-de-Chauliac, CHRU de Montpellier, 34000 Montpellier, France
- N Abassebay
- Department of Radiology, CH Douai, 59507 Douai, France
- P Cuingnet
- Department of Radiology, CH Douai, 59507 Douai, France
- M Ohana
- Department of Radiology, Nouvel Hôpital Civil, 67000 Strasbourg, France
- J Behr
- Department of Radiology, CHRU de Jean-Minjoz Besançon, 25030 Besançon, France
- A Ginzac
- Clinical Research Unit, Clinical Research and Innovation Delegation, Centre de Lutte contre le Cancer, Centre Jean Perrin, 63011 Clermont-Ferrand Cedex 1, France; Université Clermont Auvergne, INSERM, U1240 Imagerie Moléculaire et Stratégies Théranostiques, Centre Jean Perrin, 63011 Clermont-Ferrand, France; Clinical Investigation Center, UMR501, 63011 Clermont-Ferrand, France
- H Veyssiere
- Clinical Research Unit, Clinical Research and Innovation Delegation, Centre de Lutte contre le Cancer, Centre Jean Perrin, 63011 Clermont-Ferrand Cedex 1, France; Université Clermont Auvergne, INSERM, U1240 Imagerie Moléculaire et Stratégies Théranostiques, Centre Jean Perrin, 63011 Clermont-Ferrand, France; Clinical Investigation Center, UMR501, 63011 Clermont-Ferrand, France
- X Durando
- Clinical Research Unit, Clinical Research and Innovation Delegation, Centre de Lutte contre le Cancer, Centre Jean Perrin, 63011 Clermont-Ferrand Cedex 1, France; Université Clermont Auvergne, INSERM, U1240 Imagerie Moléculaire et Stratégies Théranostiques, Centre Jean Perrin, 63011 Clermont-Ferrand, France; Clinical Investigation Center, UMR501, 63011 Clermont-Ferrand, France; Department of Medical Oncology, Centre Jean Perrin, 63011 Clermont-Ferrand, France
- I Bousaïd
- Digital Transformation and Information Systems Division, Gustave Roussy, 94800 Villejuif, France
- N Lassau
- Multimodal Biomedical Imaging Laboratory Paris-Saclay, BIOMAPS, UMR 1281, Université Paris-Saclay, Inserm, CNRS, CEA, Department of Radiology, Institut Gustave Roussy, 94800, Villejuif, France
- J Brehant
- Department of Radiology, Centre Jean Perrin, 63011 Clermont-Ferrand, France.
21
Yang J, Chen Z, Liu W, Wang X, Ma S, Jin F, Wang X. Development of a Malignancy Potential Binary Prediction Model Based on Deep Learning for the Mitotic Count of Local Primary Gastrointestinal Stromal Tumors. Korean J Radiol 2020; 22:344-353. [PMID: 33169545; PMCID: PMC7909867; DOI: 10.3348/kjr.2019.0851]
Abstract
Objective: The mitotic count of gastrointestinal stromal tumors (GIST) is closely associated with the risk of tumor seeding and metastasis. The purpose of this study was to develop a predictive model for the mitotic count of local primary GIST based on a deep learning algorithm.
Materials and Methods: Abdominal contrast-enhanced CT images of 148 pathologically confirmed GIST cases were retrospectively collected for the development of a deep learning classification algorithm. The areas of the GIST masses on the CT images were retrospectively labelled by an experienced radiologist. The postoperative pathological mitotic count was considered the gold standard (high mitotic count, > 5/50 high-power fields [HPFs]; low mitotic count, ≤ 5/50 HPFs). A binary classification model was trained on the basis of the VGG16 convolutional neural network, using the CT images in a training set (n = 108), validation set (n = 20), and test set (n = 20). The sensitivity, specificity, positive predictive value (PPV), and negative predictive value (NPV) were calculated at both the image level and the patient level. Receiver operating characteristic curves were generated from the model predictions, and the areas under the curves (AUCs) were calculated. The risk categories of the tumors were predicted according to the Armed Forces Institute of Pathology criteria.
Results: At the image level, the classification results for the mitotic counts in the test cohort were as follows: sensitivity 85.7% (95% confidence interval [CI]: 0.834–0.877), specificity 67.5% (95% CI: 0.636–0.712), PPV 82.1% (95% CI: 0.797–0.843), NPV 73.0% (95% CI: 0.691–0.766), and AUC 0.771 (95% CI: 0.750–0.791). At the patient level, the classification results in the test cohort were as follows: sensitivity 90.0% (95% CI: 0.541–0.995), specificity 70.0% (95% CI: 0.354–0.919), PPV 75.0% (95% CI: 0.428–0.933), NPV 87.5% (95% CI: 0.467–0.993), and AUC 0.800 (95% CI: 0.563–0.943).
Conclusion: We developed and preliminarily verified a binary prediction model for the GIST mitotic count based on the VGG16 convolutional neural network. The model displayed good predictive performance.
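To illustrate the kind of model described above, the following is a minimal, hypothetical PyTorch sketch (not the authors' released code) of adapting an ImageNet-pretrained VGG16 to a two-class output; the input size, CT-patch preprocessing, and hyperparameters are placeholder assumptions.

```python
# Hypothetical sketch of a VGG16-based binary classifier (high vs. low mitotic count).
# Not the authors' code: input size, preprocessing, and hyperparameters are placeholders.
import torch
import torch.nn as nn
from torchvision import models

def build_vgg16_binary(pretrained: bool = True) -> nn.Module:
    """Return a VGG16 whose final fully connected layer outputs two classes."""
    weights = models.VGG16_Weights.IMAGENET1K_V1 if pretrained else None
    model = models.vgg16(weights=weights)
    in_features = model.classifier[6].in_features    # 4096 in the stock VGG16 head
    model.classifier[6] = nn.Linear(in_features, 2)  # high vs. low mitotic count
    return model

if __name__ == "__main__":
    model = build_vgg16_binary(pretrained=False)     # skip the weight download for this demo
    dummy_patches = torch.randn(4, 3, 224, 224)      # four RGB-replicated CT patches
    probs = torch.softmax(model(dummy_patches), dim=1)
    print(probs.shape)                               # torch.Size([4, 2])
```

Patient-level predictions could then be derived, for example, by averaging or majority-voting the image-level probabilities across all labelled patches of a patient.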
Affiliation(s)
- Jiejin Yang
- Department of Radiology, Peking University First Hospital, Peking University, Beijing, China
- Zeyang Chen
- Department of General Surgery, Peking University First Hospital, Peking University, Beijing, China
- Weipeng Liu
- Beijing Smart Tree Medical Technology Co. Ltd., Beijing, China
- Xiangpeng Wang
- Beijing Smart Tree Medical Technology Co. Ltd., Beijing, China
- Shuai Ma
- Department of Radiology, Peking University First Hospital, Peking University, Beijing, China
- Feifei Jin
- Department of Biostatistics, Peking University First Hospital, Beijing, China
- Xiaoying Wang
- Department of Radiology, Peking University First Hospital, Peking University, Beijing, China.
22
Dif N, Elberrichi Z. Efficient Regularization Framework for Histopathological Image Classification Using Convolutional Neural Networks. INTERNATIONAL JOURNAL OF COGNITIVE INFORMATICS AND NATURAL INTELLIGENCE 2020. [DOI: 10.4018/ijcini.2020100104] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.4] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/09/2022]
Abstract
Deep learning methods are characterized by their capacity to learn data representations, in contrast to traditional machine learning algorithms. However, these methods are prone to overfitting on small volumes of data. The objective of this research is to overcome this limitation by improving generalization in the proposed deep learning framework through various techniques: data augmentation, small models, optimizer selection, and ensemble learning. For ensembling, the authors used models selected from different checkpoints, combined by both voting and unweighted averaging. The experimental study on the lymphoma histopathology dataset highlights the efficiency, in terms of generalization, of the MobileNetV2 network combined with the stochastic gradient descent (SGD) optimizer. The best results were achieved by combining the best three checkpoint models (98.67% accuracy). These findings provide important insights into the efficiency of checkpoint ensemble learning for histopathological image classification.
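The checkpoint ensembling described in this abstract can be illustrated with a short, generic sketch: restore a model from several saved checkpoints (file names hypothetical), then combine the per-checkpoint class probabilities by unweighted averaging or by majority voting.

```python
# Generic sketch of checkpoint ensembling; checkpoint file names are hypothetical.
import torch
import torch.nn.functional as F

@torch.no_grad()
def average_checkpoints(model, checkpoint_paths, images):
    """Unweighted average of class probabilities across checkpoints."""
    prob_sum = None
    for path in checkpoint_paths:
        model.load_state_dict(torch.load(path, map_location="cpu"))
        model.eval()
        probs = F.softmax(model(images), dim=1)          # (N, C) for this checkpoint
        prob_sum = probs if prob_sum is None else prob_sum + probs
    return prob_sum / len(checkpoint_paths)

def majority_vote(per_checkpoint_probs):
    """per_checkpoint_probs: list of (N, C) tensors; returns (N,) predicted labels."""
    votes = torch.stack([p.argmax(dim=1) for p in per_checkpoint_probs])  # (K, N)
    return votes.mode(dim=0).values
```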
Affiliation(s)
- Nassima Dif
- EEDIS Laboratory, Djillali Liabes University, Sidi Bel Abbes, Algeria
23
Yu J, Yang B, Wang J, Leader J, Wilson D, Pu J. 2D CNN versus 3D CNN for false-positive reduction in lung cancer screening. J Med Imaging (Bellingham) 2020; 7:051202. [PMID: 33062802 PMCID: PMC7550796 DOI: 10.1117/1.jmi.7.5.051202] [Citation(s) in RCA: 7] [Impact Index Per Article: 1.4] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/10/2020] [Accepted: 09/28/2020] [Indexed: 11/14/2022] Open
Abstract
Purpose: To clarify whether and to what extent a three-dimensional (3D) convolutional neural network (CNN) is superior to a 2D CNN when applied to reduce false-positive nodule detections in the scenario of low-dose computed tomography (CT) lung cancer screening. Approach: We established a dataset consisting of 1600 chest CT examinations acquired on different subjects from various sources. There were in total 18,280 candidate nodules in these CT examinations, among which 9185 were nodules and 9095 were not. For each candidate nodule, we extracted a number of cubic subvolumes with a dimension of 72 × 72 × 72 mm³ by rotating the CT examinations randomly 25 times prior to the extraction of the axis-aligned subvolumes. These subvolumes were split into three groups in a ratio of 8:1:1 for training, validation, and independent testing purposes. We developed a multiscale CNN architecture and implemented its 2D and 3D versions to classify pulmonary nodules into two categories, namely true positive and false positive. The performance of the 2D/3D-CNN classification schemes was evaluated using the area under the receiver operating characteristic curve (AUC). The p-values and the 95% confidence intervals (CI) were calculated. Results: The AUC for the optimal 2D-CNN model was 0.9307 (95% CI: 0.9285 to 0.9330), with a sensitivity of 92.70% and a specificity of 76.21%. The 3D-CNN model with the best performance had an AUC of 0.9541 (95% CI: 0.9495 to 0.9583), with a sensitivity of 89.98% and a specificity of 87.30%. The developed multiscale CNN architecture performed better than the vanilla architecture. Conclusions: The 3D-CNN model performs better in false-positive reduction than its 2D counterpart; however, the improvement is relatively limited and demands more computational resources for training.
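For readers unfamiliar with the 2D-versus-3D distinction, the sketch below defines a deliberately tiny 3D CNN operating on cubic candidate subvolumes; it is not the paper's multiscale architecture, and the 72-voxel cube size is an assumption chosen only to echo the subvolume dimension quoted above. A 2D analogue would swap Conv3d/MaxPool3d for Conv2d/MaxPool2d applied to individual slices.

```python
# Illustrative only: a tiny 3D CNN for nodule vs. non-nodule classification of subvolumes.
import torch
import torch.nn as nn

class Tiny3DNoduleNet(nn.Module):
    def __init__(self, in_channels: int = 1, num_classes: int = 2):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv3d(in_channels, 16, kernel_size=3, padding=1), nn.ReLU(inplace=True),
            nn.MaxPool3d(2),                  # 72 -> 36 per spatial axis
            nn.Conv3d(16, 32, kernel_size=3, padding=1), nn.ReLU(inplace=True),
            nn.MaxPool3d(2),                  # 36 -> 18
            nn.Conv3d(32, 64, kernel_size=3, padding=1), nn.ReLU(inplace=True),
            nn.AdaptiveAvgPool3d(1),          # global pooling -> (N, 64, 1, 1, 1)
        )
        self.classifier = nn.Linear(64, num_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.classifier(self.features(x).flatten(1))

if __name__ == "__main__":
    net = Tiny3DNoduleNet()
    subvolumes = torch.randn(2, 1, 72, 72, 72)   # two candidate cubes, 72 voxels per side
    print(net(subvolumes).shape)                 # torch.Size([2, 2])
```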
Affiliation(s)
- Juezhao Yu
- University of Pittsburgh, Departments of Radiology and Bioengineering, Pittsburgh, Pennsylvania, United States
- Bohan Yang
- University of Pittsburgh, Departments of Radiology and Bioengineering, Pittsburgh, Pennsylvania, United States
- Jing Wang
- University of Pittsburgh, Departments of Radiology and Bioengineering, Pittsburgh, Pennsylvania, United States
- Joseph Leader
- University of Pittsburgh, Departments of Radiology and Bioengineering, Pittsburgh, Pennsylvania, United States
- David Wilson
- University of Pittsburgh, Department of Medicine, Pittsburgh, Pennsylvania, United States
- Jiantao Pu
- University of Pittsburgh, Departments of Radiology and Bioengineering, Pittsburgh, Pennsylvania, United States
24
Sathyakumar K, Munoz M, Singh J, Hussain N, Babu BA. Automated Lung Cancer Detection Using Artificial Intelligence (AI) Deep Convolutional Neural Networks: A Narrative Literature Review. Cureus 2020; 12:e10017. [PMID: 32989411 PMCID: PMC7518939 DOI: 10.7759/cureus.10017] [Citation(s) in RCA: 6] [Impact Index Per Article: 1.2] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/25/2022] Open
Abstract
Lung cancer is the leading cause of cancer-related deaths in the United States as well as worldwide. Radiologists and physicians experience heavy daily workloads and are thus at high risk of burnout. To alleviate this burden, this narrative literature review compares the performance of four different artificial intelligence (AI) models in lung nodule cancer detection, as well as their performance relative to the reading accuracy of physicians/radiologists. A total of 648 articles were screened by two experienced physicians with over 10 years of experience in the fields of pulmonary critical care and hospital medicine. The databases used to search and select the articles were PubMed/MEDLINE, EMBASE, Cochrane Library, Google Scholar, Web of Science, IEEE Xplore, and DBLP. The selected articles range from 2008 to 2019. Four of the 648 articles were selected using the following inclusion criteria: 1) age 18-65 years, 2) chest CT scans, 3) lung nodule, 4) lung cancer, 5) deep learning, 6) ensemble methods, and 7) classic methods. The exclusion criteria used in this narrative review were: 1) age greater than 65 years, 2) positron emission tomography (PET) hybrid scans, 3) chest X-ray (CXR), and 4) genomics. The model performance outcomes are measured and evaluated in terms of sensitivity, specificity, accuracy, the receiver operating characteristic (ROC) curve, and the area under the curve (AUC). The hybrid deep-learning model is a state-of-the-art architecture with high accuracy and a low false-positive rate. Future studies comparing the accuracy of each model in depth are key. Automated physician-assist systems such as the models in this review can help preserve a quality doctor-patient relationship.
Affiliation(s)
- Kaviya Sathyakumar
- Family Medicine, University of Florida College of Medicine, Gainesville, USA
- Michael Munoz
- Pediatrics, Monmouth Medical Center, Long Branch, USA
- Jaikaran Singh
- Internal Medicine, Saint John Regional Hospital, Saint John, CAN
- Nowair Hussain
- Internal Medicine, Ross University School of Medicine, Bridgetown, BRB
25
Hashmi MF, Katiyar S, Keskar AG, Bokde ND, Geem ZW. Efficient Pneumonia Detection in Chest Xray Images Using Deep Transfer Learning. Diagnostics (Basel) 2020; 10:E417. [PMID: 32575475 PMCID: PMC7345724 DOI: 10.3390/diagnostics10060417] [Citation(s) in RCA: 53] [Impact Index Per Article: 10.6] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/17/2020] [Revised: 06/13/2020] [Accepted: 06/16/2020] [Indexed: 12/27/2022] Open
Abstract
Pneumonia causes the death of around 700,000 children every year and affects 7% of the global population. Chest X-rays are primarily used for the diagnosis of this disease. However, even for a trained radiologist, examining chest X-rays is a challenging task, and there is a need to improve diagnostic accuracy. In this work, an efficient model for the detection of pneumonia, trained on digital chest X-ray images, is proposed to aid radiologists in their decision-making process. A novel approach based on a weighted classifier is introduced, which combines the weighted predictions of state-of-the-art deep learning models such as ResNet18, Xception, InceptionV3, DenseNet121, and MobileNetV3 in an optimal way. This is a supervised learning approach in which the network predicts the result based on the quality of the dataset used. Transfer learning is used to fine-tune the deep learning models to obtain higher training and validation accuracy. Partial data augmentation techniques are employed to increase the training dataset in a balanced way. The proposed weighted classifier outperforms all of the individual models. Finally, the model is evaluated not only in terms of test accuracy but also in terms of the AUC score. The final weighted classifier achieves a test accuracy of 98.43% and an AUC score of 99.76 on unseen data from the Guangzhou Women and Children's Medical Center pneumonia dataset. Hence, the proposed model can be used for a quick diagnosis of pneumonia and can aid radiologists in the diagnostic process.
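A weighted classifier of this kind reduces to a convex combination of each model's class probabilities. The snippet below is a generic sketch with placeholder weights; it does not reproduce the optimization procedure used in the paper.

```python
# Generic weighted-ensemble sketch; the weights here are placeholders, not the paper's values.
import numpy as np

def weighted_ensemble(prob_list, weights):
    """prob_list: list of (N, C) probability arrays from different CNNs."""
    w = np.asarray(weights, dtype=float)
    w = w / w.sum()                                  # normalize to a convex combination
    stacked = np.stack(prob_list)                    # (K, N, C)
    combined = np.tensordot(w, stacked, axes=1)      # (N, C) weighted average
    return combined.argmax(axis=1), combined

# Toy example with three hypothetical models, four images, two classes:
p1 = np.array([[0.9, 0.1], [0.2, 0.8], [0.6, 0.4], [0.4, 0.6]])
p2 = np.array([[0.8, 0.2], [0.3, 0.7], [0.5, 0.5], [0.3, 0.7]])
p3 = np.array([[0.7, 0.3], [0.4, 0.6], [0.7, 0.3], [0.2, 0.8]])
labels, probs = weighted_ensemble([p1, p2, p3], weights=[0.5, 0.3, 0.2])
print(labels)                                        # [0 1 0 1]
```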
Affiliation(s)
- Mohammad Farukh Hashmi
- Department of Electronics and Communication Engineering, National Institute of Technology, Warangal 506004, India
- Satyarth Katiyar
- Department of Electronics and Communication Engineering, Harcourt Butler Technical University, Kanpur 208002, India
- Avinash G Keskar
- Department of Electronics and Communication Engineering, Visvesvaraya National Institute of Technology, Nagpur 440010, India
- Neeraj Dhanraj Bokde
- Department of Engineering-Renewable Energy and Thermodynamics, Aarhus University, 8000 Aarhus, Denmark
- Zong Woo Geem
- Department of Energy IT, Gachon University, Seongnam 13120, Korea
26
Wang Y, Zhou L, Wang M, Shao C, Shi L, Yang S, Zhang Z, Feng M, Shan F, Liu L. Combination of generative adversarial network and convolutional neural network for automatic subcentimeter pulmonary adenocarcinoma classification. Quant Imaging Med Surg 2020; 10:1249-1264. [PMID: 32550134 DOI: 10.21037/qims-19-982] [Citation(s) in RCA: 19] [Impact Index Per Article: 3.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/26/2022]
Abstract
Background: The efficient and accurate diagnosis of pulmonary adenocarcinoma before surgery is of considerable significance to clinicians. Although computed tomography (CT) examinations are widely used in practice, it is still challenging and time-consuming for radiologists to distinguish between different types of subcentimeter pulmonary nodules. Although many deep learning algorithms have been proposed, their performance largely depends on vast amounts of data, which are difficult to collect in medical imaging. We therefore propose an automatic classification system for subcentimeter pulmonary adenocarcinoma, combining a convolutional neural network (CNN) and a generative adversarial network (GAN), to optimize clinical decision-making and to provide design ideas for algorithms built on small datasets. Methods: A total of 206 nodules with postoperative pathological labels were analyzed, comprising 30 adenocarcinomas in situ (AISs), 119 minimally invasive adenocarcinomas (MIAs), and 57 invasive adenocarcinomas (IACs). Our system consists of two parts: GAN-based image synthesis and CNN classification. First, several popular existing GAN techniques were employed to augment the datasets, and comprehensive experiments were conducted to evaluate the quality of the GAN synthesis. Additionally, our classification system was based on two-dimensional (2D) nodule-centered CT patches and required no manual labeling information. Results: For GAN-based image synthesis, the visual Turing test showed that even radiologists could not reliably distinguish the GAN-synthesized images from the raw images (accuracy: primary radiologist 56%, senior radiologist 65%). For CNN classification, the progressive-growing WGAN improved the performance of the CNN most effectively (area under the curve = 0.83). The experiments indicated that the proposed GAN augmentation method improved the classification accuracy by 23.5% (from 37.0% to 60.5%) and 7.3% (from 53.2% to 60.5%) compared with training on raw and conventionally augmented images, respectively. The performance of the combined GAN and CNN method (accuracy: 60.5% ± 2.6%) was comparable to state-of-the-art methods, and our CNN was also more lightweight. Conclusions: The experiments revealed that GAN synthesis techniques can effectively alleviate the problem of insufficient data in medical imaging. The proposed GAN plus CNN framework can be generalized to build other computer-aided diagnosis (CADx) algorithms and thus assist in diagnosis.
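As a conceptual sketch of GAN-based augmentation (not the authors' pipeline), the code below assumes an already-trained generator that maps latent vectors to nodule-centered patches and simply concatenates the synthesized samples with the real training set; the generator, latent dimension, and class labels are all hypothetical.

```python
# Conceptual sketch: augment a small real dataset with GAN-synthesized patches.
# `generator`, `latent_dim`, and the class labels are hypothetical placeholders.
import torch
from torch.utils.data import TensorDataset, ConcatDataset

@torch.no_grad()
def synthesize_patches(generator, n_samples, latent_dim, label):
    z = torch.randn(n_samples, latent_dim)
    fake_patches = generator(z)                      # assumed output shape (N, 1, H, W)
    labels = torch.full((n_samples,), label, dtype=torch.long)
    return TensorDataset(fake_patches, labels)

def build_augmented_dataset(real_dataset, generator, n_fake_per_class, latent_dim, class_ids):
    fake_sets = [synthesize_patches(generator, n_fake_per_class, latent_dim, c)
                 for c in class_ids]
    return ConcatDataset([real_dataset] + fake_sets)

# Usage (names hypothetical): wrap the result in a torch.utils.data.DataLoader and
# train the CNN classifier on the mixture of real and synthesized patches, e.g.
# augmented = build_augmented_dataset(real_ds, trained_generator, 500, 128, class_ids=[0, 1, 2])
```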
Affiliation(s)
- Yunpeng Wang
- Shanghai Public Health Clinical Center and Institutes of Biomedical Sciences, Fudan University, Shanghai, China
- Lingxiao Zhou
- Shanghai Public Health Clinical Center and Institutes of Biomedical Sciences, Fudan University, Shanghai, China; Department of Respiratory Medicine, Zhongshan-Xuhui Hospital, Fudan University, Shanghai, China
- Mingming Wang
- School of Computer Science, Fudan University, Shanghai, China
- Cheng Shao
- School of Computer Science, Fudan University, Shanghai, China
- Lili Shi
- Shanghai Public Health Clinical Center and Institutes of Biomedical Sciences, Fudan University, Shanghai, China
- Shuyi Yang
- Shanghai Public Health Clinical Center and Institutes of Biomedical Sciences, Fudan University, Shanghai, China
- Zhiyong Zhang
- Shanghai Public Health Clinical Center and Institutes of Biomedical Sciences, Fudan University, Shanghai, China
- Mingxiang Feng
- Chest Surgery Department, Zhongshan Hospital, Fudan University, Shanghai, China
- Fei Shan
- Shanghai Public Health Clinical Center and Institutes of Biomedical Sciences, Fudan University, Shanghai, China
- Lei Liu
- Shanghai Public Health Clinical Center and Institutes of Biomedical Sciences, Fudan University, Shanghai, China; Shanghai University of Medicine & Health Sciences, Shanghai, China
27
Revathi M, Jeya IJS, Deepa SN. Deep learning-based soft computing model for image classification application. Soft comput 2020. [DOI: 10.1007/s00500-020-05048-7] [Citation(s) in RCA: 7] [Impact Index Per Article: 1.4] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/24/2022]
28
Transfer Learning with Deep Convolutional Neural Network (CNN) for Pneumonia Detection Using Chest X-ray. APPLIED SCIENCES-BASEL 2020. [DOI: 10.3390/app10093233] [Citation(s) in RCA: 115] [Impact Index Per Article: 23.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 12/18/2022]
Abstract
Pneumonia is a life-threatening lung disease caused by either bacterial or viral infection. It can be life-endangering if not acted upon at the right time, and thus the early diagnosis of pneumonia is vital. This paper aims to automatically detect bacterial and viral pneumonia using digital X-ray images. It provides a detailed report on advances in the accurate detection of pneumonia and then presents the methodology adopted by the authors. Four different pre-trained deep convolutional neural networks (CNNs): AlexNet, ResNet18, DenseNet201, and SqueezeNet, were used for transfer learning. A total of 5247 chest X-ray images, consisting of bacterial, viral, and normal chest X-rays, were preprocessed and used for the transfer learning-based classification task. In this study, the authors report three classification schemes: normal vs. pneumonia, bacterial vs. viral pneumonia, and normal vs. bacterial vs. viral pneumonia. The classification accuracies for these three schemes were 98%, 95%, and 93.3%, respectively. For each scheme, this is the highest accuracy reported in the literature. Therefore, the proposed study can be useful for the more rapid diagnosis of pneumonia by radiologists and can help in the fast airport screening of pneumonia patients.
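The transfer-learning recipe summarized above, reusing an ImageNet-pretrained backbone and retraining a new classification head, might look roughly like the sketch below for a three-class problem (normal, bacterial, viral); the choice of DenseNet201 and the frozen-feature setting are illustrative assumptions rather than the paper's exact configuration.

```python
# Illustrative transfer-learning sketch: frozen pretrained features, new 3-class head.
import torch.nn as nn
from torchvision import models

def densenet_transfer(num_classes: int = 3, pretrained: bool = True,
                      freeze_features: bool = True) -> nn.Module:
    weights = models.DenseNet201_Weights.IMAGENET1K_V1 if pretrained else None
    model = models.densenet201(weights=weights)
    if freeze_features:
        for p in model.features.parameters():
            p.requires_grad = False                  # keep the pretrained features fixed
    in_features = model.classifier.in_features       # 1920 for DenseNet201
    model.classifier = nn.Linear(in_features, num_classes)
    return model
```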
29
A Two-Stage Framework for Automated Malignant Pulmonary Nodule Detection in CT Scans. Diagnostics (Basel) 2020; 10:diagnostics10030131. [PMID: 32121281 PMCID: PMC7151085 DOI: 10.3390/diagnostics10030131] [Citation(s) in RCA: 16] [Impact Index Per Article: 3.2] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/07/2020] [Revised: 02/13/2020] [Accepted: 02/18/2020] [Indexed: 11/17/2022] Open
Abstract
This research is concerned with malignant pulmonary nodule detection (PND) in low-dose CT scans. Due to its crucial role in the early diagnosis of lung cancer, PND has considerable potential for improving the survival rate of patients. We propose a two-stage framework that exploits the ever-growing advances in deep neural network models and that comprises a semantic segmentation stage followed by localization and classification. We employ the recently published DeepLab model for semantic segmentation and show that it significantly improves the accuracy of nodule detection compared with the classical U-Net model and its most recent variants. Using the widely adopted Lung Nodule Analysis dataset (LUNA16), we evaluate the performance of the semantic segmentation stage with two network backbones, namely MobileNet-V2 and Xception. We present the impact of various model training parameters and the computational time on the detection accuracy, achieving a 79.1% mean intersection-over-union (mIoU) and an 88.34% Dice coefficient. This represents a 60% increase in mIoU and a 30% increase in the Dice coefficient compared with U-Net. The second stage feeds the output of the DeepLab-based semantic segmentation to a localization-then-classification stage, realized using Faster RCNN and SSD with an Inception-V2 backbone. On LUNA16, the two-stage framework attained a sensitivity of 96.4%, outperforming other recent models in the literature, including deep models. Finally, we show that adopting a transfer learning approach, in particular the DeepLab model weights from the first stage of the framework, to infer binary (malignant-benign) labels on the Kaggle dataset for pulmonary nodules achieves a classification accuracy of 95.66%, approximately a 4% improvement over the recent literature.
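Because the reported figures hinge on the Dice coefficient and mean intersection-over-union (mIoU), minimal reference implementations for binary masks are given below; they follow the common definitions rather than the paper's exact evaluation code.

```python
# Common definitions of the Dice coefficient and mean IoU for binary masks.
import numpy as np

def dice_coefficient(pred: np.ndarray, target: np.ndarray, eps: float = 1e-7) -> float:
    pred, target = pred.astype(bool), target.astype(bool)
    intersection = np.logical_and(pred, target).sum()
    return (2.0 * intersection + eps) / (pred.sum() + target.sum() + eps)

def mean_iou(pred: np.ndarray, target: np.ndarray, eps: float = 1e-7) -> float:
    """Mean of the background and foreground IoU, as usually reported for 2-class masks."""
    ious = []
    for cls in (0, 1):
        p, t = pred == cls, target == cls
        inter = np.logical_and(p, t).sum()
        union = np.logical_or(p, t).sum()
        ious.append((inter + eps) / (union + eps))
    return float(np.mean(ious))

pred = np.array([[0, 1, 1], [0, 1, 0]])
gt   = np.array([[0, 1, 0], [0, 1, 1]])
print(round(dice_coefficient(pred, gt), 3))   # 0.667
print(round(mean_iou(pred, gt), 3))           # 0.5
```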
30
A Novel Transfer Learning Based Approach for Pneumonia Detection in Chest X-ray Images. APPLIED SCIENCES-BASEL 2020. [DOI: 10.3390/app10020559] [Citation(s) in RCA: 192] [Impact Index Per Article: 38.4] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 12/17/2022]
Abstract
Pneumonia is among the leading causes of death worldwide. Viruses, bacteria, and fungi can all cause pneumonia. However, it is difficult to diagnose pneumonia just by looking at chest X-rays. The aim of this study is to simplify the pneumonia detection process for experts as well as for novices. We suggest a novel deep learning framework for the detection of pneumonia using the concept of transfer learning. In this approach, features are extracted from images using different neural network models pretrained on ImageNet and are then fed into a classifier for prediction. We prepared five different models and analyzed their performance. Thereafter, we proposed an ensemble model that combines the outputs of all pretrained models and that outperformed the individual models, reaching state-of-the-art performance in pneumonia recognition. Our ensemble model reached an accuracy of 96.4% with a recall of 99.62% on unseen data from the Guangzhou Women and Children's Medical Center dataset.
31
Kim TJ, Kim CH, Lee HY, Chung MJ, Shin SH, Lee KJ, Lee KS. Management of incidental pulmonary nodules: current strategies and future perspectives. Expert Rev Respir Med 2019; 14:173-194. [PMID: 31762330 DOI: 10.1080/17476348.2020.1697853] [Citation(s) in RCA: 17] [Impact Index Per Article: 2.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 02/06/2023]
Abstract
Introduction: The detection and characterization of pulmonary nodules is an important issue, because it is the first step in the management of lung cancer. Areas covered: A literature review was performed on May 15, 2019, using PubMed (US National Library of Medicine, National Institutes of Health) and the National Center for Biotechnology Information. CT features that help identify druggable mutations and predict the prognosis of malignant nodules are presented. Technical advances in MRI and PET/CT are introduced for providing functional information about malignant nodules. Advances in various tissue biopsy techniques enabling molecular analysis and histologic diagnosis of indeterminate nodules are also presented. New techniques such as radiomics, deep learning (DL), and artificial intelligence that show promise in differentiating between malignant and benign nodules are summarized. Recently updated management guidelines for solid and subsolid nodules incidentally detected on CT are described, and risk stratification and prediction models for indeterminate nodules under active investigation are briefly summarized. Expert opinion: Advances in CT knowledge have led to a better correlation between CT features and genomic alterations or tumor histology. Recent advances such as PET/CT, MRI, radiomics, and DL-based approaches have shown promising results in the characterization and prognostication of pulmonary nodules.
Affiliation(s)
- Tae Jung Kim
- Department of Radiology, Samsung Medical Center, Sungkyunkwan University School of Medicine (SKKU-SOM), Seoul, South Korea
- Cho Hee Kim
- Department of Radiology, Samsung Medical Center, Sungkyunkwan University School of Medicine (SKKU-SOM), Seoul, South Korea
- Ho Yun Lee
- Department of Radiology, Samsung Medical Center, Sungkyunkwan University School of Medicine (SKKU-SOM), Seoul, South Korea
- Myung Jin Chung
- Department of Radiology, Samsung Medical Center, Sungkyunkwan University School of Medicine (SKKU-SOM), Seoul, South Korea
- Sun Hye Shin
- Respiratory and Critical Care Division of Department of Internal Medicine, Samsung Medical Center, Sungkyunkwan University School of Medicine (SKKU-SOM), Seoul, South Korea
- Kyung Jong Lee
- Respiratory and Critical Care Division of Department of Internal Medicine, Samsung Medical Center, Sungkyunkwan University School of Medicine (SKKU-SOM), Seoul, South Korea
- Kyung Soo Lee
- Department of Radiology, Samsung Medical Center, Sungkyunkwan University School of Medicine (SKKU-SOM), Seoul, South Korea
32
Using deep learning techniques in medical imaging: a systematic review of applications on CT and PET. Artif Intell Rev 2019. [DOI: 10.1007/s10462-019-09788-3] [Citation(s) in RCA: 17] [Impact Index Per Article: 2.8] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/19/2022]
33
[Basis and perspectives of artificial intelligence in radiation therapy]. Cancer Radiother 2019; 23:913-916. [PMID: 31645301 DOI: 10.1016/j.canrad.2019.08.005] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/06/2019] [Revised: 08/15/2019] [Accepted: 08/20/2019] [Indexed: 11/23/2022]
Abstract
Artificial intelligence is a highly polysemic term. In computer science, with the objective of solving entirely new problems in new contexts, artificial intelligence includes connectionism (neural networks) for learning and logics for reasoning. Artificial intelligence algorithms mimic tasks normally requiring human intelligence, such as deduction, induction, and abduction, all of which apply to radiation oncology. Combined with radiomics, neural networks have obtained good results in image classification, natural language processing, phenotyping based on electronic health records, and adaptive radiation therapy. Generative adversarial networks have been tested for generating synthetic data. Logic-based systems have been developed for providing formal domain ontologies, supporting clinical decisions, and checking the consistency of systems. Artificial intelligence must integrate both deep learning and logic approaches to perform complex tasks and go beyond so-called narrow artificial intelligence, which is tailored to perform a single highly specialized task. Combined with mechanistic models, artificial intelligence has the potential to provide new tools such as digital twins for precision oncology.
34
Zhang G, Yang Z, Gong L, Jiang S, Wang L, Cao X, Wei L, Zhang H, Liu Z. An Appraisal of Nodule Diagnosis for Lung Cancer in CT Images. J Med Syst 2019; 43:181. [PMID: 31093830 DOI: 10.1007/s10916-019-1327-0] [Citation(s) in RCA: 13] [Impact Index Per Article: 2.2] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/23/2019] [Accepted: 05/08/2019] [Indexed: 12/17/2022]
Abstract
As "the second eyes" of radiologists, computer-aided diagnosis systems play a significant role in nodule detection and diagnosis for lung cancer. In this paper, we aim to provide a systematic survey of state-of-the-art techniques (both traditional techniques and deep learning techniques) for nodule diagnosis from computed tomography images. This review first introduces the current progress and the popular structure used for nodule diagnosis. In particular, we provide a detailed overview of the five major stages in the computer-aided diagnosis systems: data acquisition, nodule segmentation, feature extraction, feature selection and nodule classification. Second, we provide a detailed report of the selected works and make a comprehensive comparison between selected works. The selected papers are from the IEEE Xplore, Science Direct, PubMed, and Web of Science databases up to December 2018. Third, we discuss and summarize the better techniques used in nodule diagnosis and indicate the existing future challenges in this field, such as improving the area under the receiver operating characteristic curve and accuracy, developing new deep learning-based diagnosis techniques, building efficient feature sets (fusing traditional features and deep features), developing high-quality labeled databases with malignant and benign nodules and promoting the cooperation between medical organizations and academic institutions.
Affiliation(s)
- Guobin Zhang
- School of Mechanical Engineering, Tianjin University, Tianjin, 300350, China
- Zhiyong Yang
- School of Mechanical Engineering, Tianjin University, Tianjin, 300350, China
- Li Gong
- School of Mechanical Engineering, Tianjin University, Tianjin, 300350, China
- Shan Jiang
- School of Mechanical Engineering, Tianjin University, Tianjin, 300350, China; Centre for Advanced Mechanisms and Robotics, Tianjin University, 135 Yaguan Road, Jinnan District, Tianjin, 300350, China
- Lu Wang
- School of Mechanical Engineering, Tianjin University, Tianjin, 300350, China
- Xi Cao
- School of Mechanical Engineering, Tianjin University, Tianjin, 300350, China
- Lin Wei
- School of Mechanical Engineering, Tianjin University, Tianjin, 300350, China
- Hongyun Zhang
- School of Mechanical Engineering, Tianjin University, Tianjin, 300350, China
- Ziqi Liu
- School of Mechanical Engineering, Tianjin University, Tianjin, 300350, China