51
Rahman H, Khan AR, Sadiq T, Farooqi AH, Khan IU, Lim WH. A Systematic Literature Review of 3D Deep Learning Techniques in Computed Tomography Reconstruction. Tomography 2023; 9:2158-2189. [PMID: 38133073] [PMCID: PMC10748093] [DOI: 10.3390/tomography9060169]
Abstract
Computed tomography (CT) is used in a wide range of medical imaging diagnoses. However, the reconstruction of CT images from raw projection data is inherently complex and is subject to artifacts and noise, which compromise image quality and accuracy. To address these challenges, developments in deep learning have the potential to improve the reconstruction of computed tomography images. In this regard, our research aim is to determine the techniques that are used for 3D deep learning in CT reconstruction and to identify the training and validation datasets that are accessible. This research was performed on five databases. After a careful assessment of each record based on the objective and scope of the study, we selected 60 research articles for this review. This systematic literature review revealed that convolutional neural networks (CNNs), 3D convolutional neural networks (3D CNNs), and deep learning reconstruction (DLR) were the most suitable deep learning algorithms for CT reconstruction. Additionally, two major datasets appropriate for training and developing deep learning systems were identified: 2016 NIH-AAPM-Mayo and MSCT. These datasets are important resources for the creation and assessment of CT reconstruction models. According to the results, 3D deep learning may increase the effectiveness of CT image reconstruction, boost image quality, and lower radiation exposure. By using these deep learning approaches, CT image reconstruction may be made more precise and efficient, improving patient outcomes, diagnostic accuracy, and healthcare system productivity.
Affiliation(s)
- Hameedur Rahman
- Department of Computer Games Development, Faculty of Computing & AI, Air University, E9, Islamabad 44000, Pakistan
- Abdur Rehman Khan
- Department of Creative Technologies, Faculty of Computing & AI, Air University, E9, Islamabad 44000, Pakistan
- Touseef Sadiq
- Centre for Artificial Intelligence Research, Department of Information and Communication Technology, University of Agder, Jon Lilletuns vei 9, 4879 Grimstad, Norway
- Ashfaq Hussain Farooqi
- Department of Computer Science, Faculty of Computing & AI, Air University, Islamabad 44000, Pakistan
- Inam Ullah Khan
- Department of Electronic Engineering, School of Engineering & Applied Sciences (SEAS), Isra University, Islamabad Campus, Islamabad 44000, Pakistan
- Wei Hong Lim
- Faculty of Engineering, Technology and Built Environment, UCSI University, Kuala Lumpur 56000, Malaysia
52
Zhang X, Liu B, Liu K, Wang L. The diagnosis performance of convolutional neural network in the detection of pulmonary nodules: a systematic review and meta-analysis. Acta Radiol 2023; 64:2987-2998. [PMID: 37743663] [DOI: 10.1177/02841851231201514]
Abstract
BACKGROUND Pulmonary nodules are an early imaging indication of lung cancer, and early detection of pulmonary nodules can improve the prognosis of lung cancer. As one application of machine learning, the convolutional neural network (CNN) applied to computed tomography (CT) imaging data improves diagnostic accuracy, but reported results have been inconsistent. PURPOSE To evaluate the diagnostic performance of CNNs in assisting in the detection of pulmonary nodules in CT images. MATERIAL AND METHODS The PubMed, Cochrane Library, Web of Science, Elsevier, CNKI, and Wanfang databases were systematically searched for records published before 30 April 2023. Two reviewers searched and checked the full text of articles that might meet the criteria. The reference standard was joint diagnosis by experienced physicians. The pooled sensitivity, specificity, and area under the summary receiver operating characteristic curve (AUC) were calculated with a random-effects model. Meta-regression analysis was performed to explore potential sources of heterogeneity. RESULTS Twenty-six studies were included in this meta-analysis, involving 2,391,702 regions of interest, comprising segmented images a few pixels wide. The combined sensitivity and specificity of the CNN model in detecting pulmonary nodules were 0.93 and 0.95, respectively. The pooled diagnostic odds ratio was 291. The AUC was 0.98. There was heterogeneity in sensitivity and specificity among the studies. The results suggested that data sources, preprocessing methods, reconstruction slice thickness, population source, and locality might contribute to the heterogeneity of the eligible studies. CONCLUSION The CNN model can be a valuable diagnostic tool with high accuracy in detecting pulmonary nodules.
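The pooled sensitivity and specificity reported above come from a random-effects meta-analysis. As an illustration only (not the authors' code), a minimal NumPy sketch of DerSimonian-Laird pooling of logit-transformed per-study sensitivities might look like the following; the study counts are hypothetical:

```python
import numpy as np

def pool_logit_dl(successes, totals):
    """DerSimonian-Laird random-effects pooling of proportions
    (e.g. per-study sensitivities) on the logit scale."""
    s = np.asarray(successes, dtype=float)
    n = np.asarray(totals, dtype=float)
    p = (s + 0.5) / (n + 1.0)                  # continuity-corrected proportions
    y = np.log(p / (1.0 - p))                  # logit transform
    v = 1.0 / (s + 0.5) + 1.0 / (n - s + 0.5)  # within-study variances
    w = 1.0 / v                                # fixed-effect (inverse-variance) weights
    y_fe = np.sum(w * y) / np.sum(w)
    q = np.sum(w * (y - y_fe) ** 2)            # Cochran's Q heterogeneity statistic
    tau2 = max(0.0, (q - (len(y) - 1)) / (np.sum(w) - np.sum(w ** 2) / np.sum(w)))
    w_re = 1.0 / (v + tau2)                    # random-effects weights
    y_re = np.sum(w_re * y) / np.sum(w_re)
    return 1.0 / (1.0 + np.exp(-y_re))         # back-transform to a proportion

# Hypothetical per-study true positives and numbers of diseased cases
tp = [90, 45, 180]
diseased = [100, 50, 200]
pooled_sens = pool_logit_dl(tp, diseased)
```

A bivariate model jointly pooling sensitivity and specificity is standard practice for diagnostic meta-analysis; the univariate sketch above only conveys the weighting idea.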
Affiliation(s)
- Xinyue Zhang
- Key Laboratory of Environmental Medicine Engineering, Ministry of Education, Department of Epidemiology & Biostatistics, School of Public Health, Southeast University, Nanjing, China
- Bo Liu
- Key Laboratory of Environmental Medicine Engineering, Ministry of Education, Department of Epidemiology & Biostatistics, School of Public Health, Southeast University, Nanjing, China
- Kefu Liu
- Department of Radiology, The Affiliated Suzhou Hospital of Nanjing Medical University, Suzhou Municipal Hospital, Gusu School, Nanjing Medical University, Suzhou, China
- Lina Wang
- Key Laboratory of Environmental Medicine Engineering, Ministry of Education, Department of Epidemiology & Biostatistics, School of Public Health, Southeast University, Nanjing, China
53
Abdollahifard S, Farrokhi A, Kheshti F, Jalali M, Mowla A. Application of convolutional network models in detection of intracranial aneurysms: A systematic review and meta-analysis. Interv Neuroradiol 2023; 29:738-747. [PMID: 35549574] [PMCID: PMC10680951] [DOI: 10.1177/15910199221097475]
Abstract
INTRODUCTION Intracranial aneurysms are highly prevalent in the human population and carry a heavy burden of disease, with a high mortality rate in the case of rupture. The convolutional neural network (CNN) is a type of deep learning architecture that has proven powerful for detecting intracranial aneurysms. METHODS Four databases were searched using "artificial intelligence", "intracranial aneurysms", and synonyms to find eligible studies. Articles that applied a CNN for the detection of intracranial aneurysms were included in this review. Sensitivity and specificity of the models and of human readers with respect to modality, size, and location of aneurysms were extracted. A random-effects model was preferred for the analyses, using CMA 2 to determine pooled sensitivity and specificity. RESULTS Overall, 20 studies were included in this review. Deep learning models detected intracranial aneurysms with a sensitivity of 90.6% (CI: 87.2-93.2%) and a specificity of 94.6% (CI: 91.4-96.6%). CTA was the most sensitive modality (92.0% (CI: 85.2-95.8%)). Overall sensitivity of the models was above 98% (98-100%) for aneurysms larger than 3 mm and 74.6% for aneurysms smaller than 3 mm. With the aid of AI, the clinicians' sensitivity increased by 12.8% and interrater agreement by 0.193. CONCLUSION CNN models had an acceptable sensitivity for the detection of intracranial aneurysms, surpassing human readers in some respects. The logical approach to applying deep learning models would be to use them as highly capable assistants. In essence, deep learning models are a groundbreaking technology that can assist clinicians and allow them to diagnose intracranial aneurysms more accurately.
Affiliation(s)
- Saeed Abdollahifard
- Research Center for Neuromodulation and Pain, Shiraz, Iran
- Student Research Committee, Shiraz University of Medical Sciences, Shiraz, Iran
- Amirmohammad Farrokhi
- Research Center for Neuromodulation and Pain, Shiraz, Iran
- Student Research Committee, Shiraz University of Medical Sciences, Shiraz, Iran
- Fatemeh Kheshti
- Research Center for Neuromodulation and Pain, Shiraz, Iran
- Student Research Committee, Shiraz University of Medical Sciences, Shiraz, Iran
- Mahtab Jalali
- Research Center for Neuromodulation and Pain, Shiraz, Iran
- Student Research Committee, Shiraz University of Medical Sciences, Shiraz, Iran
- Ashkan Mowla
- Division of Stroke and Endovascular Neurosurgery, Department of Neurological Surgery, Keck School of Medicine, University of Southern California (USC), Los Angeles, CA, USA
54
Gandhi Z, Gurram P, Amgai B, Lekkala SP, Lokhandwala A, Manne S, Mohammed A, Koshiya H, Dewaswala N, Desai R, Bhopalwala H, Ganti S, Surani S. Artificial Intelligence and Lung Cancer: Impact on Improving Patient Outcomes. Cancers (Basel) 2023; 15:5236. [PMID: 37958411] [PMCID: PMC10650618] [DOI: 10.3390/cancers15215236]
Abstract
Lung cancer remains one of the leading causes of cancer-related deaths worldwide, emphasizing the need for improved diagnostic and treatment approaches. In recent years, the emergence of artificial intelligence (AI) has sparked considerable interest in its potential role in lung cancer. This review aims to provide an overview of the current state of AI applications in lung cancer screening, diagnosis, and treatment. AI approaches such as machine learning, deep learning, and radiomics have shown remarkable capabilities in the detection and characterization of lung nodules, thereby aiding accurate lung cancer screening and diagnosis. These systems can analyze various imaging modalities, such as low-dose CT scans, PET-CT imaging, and even chest radiographs, accurately identifying suspicious nodules and facilitating timely intervention. AI models have also shown promise in utilizing biomarkers and tumor markers as supplementary screening tools, effectively enhancing the specificity and accuracy of early detection. Such models can distinguish between benign and malignant lung nodules, assisting radiologists in making more accurate and informed diagnostic decisions. Additionally, AI algorithms hold the potential to integrate multiple imaging modalities and clinical data, providing a more comprehensive diagnostic assessment. By utilizing high-quality data, including patient demographics, clinical history, and genetic profiles, AI models can predict treatment responses and guide the selection of optimal therapies. Notably, these models have shown considerable success in predicting the likelihood of response and recurrence following targeted therapies and in optimizing radiation therapy for lung cancer patients. Implementing these AI tools in clinical practice can aid in the early diagnosis and timely management of lung cancer and can potentially improve patient outcomes, including mortality and morbidity.
Affiliation(s)
- Zainab Gandhi
- Department of Internal Medicine, Geisinger Wyoming Valley Medical Center, Wilkes-Barre, PA 18711, USA
- Priyatham Gurram
- Department of Medicine, Mamata Medical College, Khammam 507002, India
- Birendra Amgai
- Department of Internal Medicine, Geisinger Community Medical Center, Scranton, PA 18510, USA
- Sai Prasanna Lekkala
- Department of Medicine, Mamata Medical College, Khammam 507002, India
- Alifya Lokhandwala
- Department of Medicine, Jawaharlal Nehru Medical College, Wardha 442001, India
- Suvidha Manne
- Department of Medicine, Mamata Medical College, Khammam 507002, India
- Adil Mohammed
- Department of Internal Medicine, Central Michigan University College of Medicine, Saginaw, MI 48602, USA
- Hiren Koshiya
- Department of Internal Medicine, Prime West Consortium, Inglewood, CA 92395, USA
- Nakeya Dewaswala
- Department of Cardiology, University of Kentucky, Lexington, KY 40536, USA
- Rupak Desai
- Independent Researcher, Atlanta, GA 30079, USA
- Huzaifa Bhopalwala
- Department of Internal Medicine, Appalachian Regional Hospital, Hazard, KY 41701, USA
- Shyam Ganti
- Department of Internal Medicine, Appalachian Regional Hospital, Hazard, KY 41701, USA
- Salim Surani
- Department of Pulmonary and Critical Care Medicine, Texas A&M University, College Station, TX 77845, USA
55
Dong Y, Li X, Yang Y, Wang M, Gao B. A Synthesizing Semantic Characteristics Lung Nodules Classification Method Based on 3D Convolutional Neural Network. Bioengineering (Basel) 2023; 10:1245. [PMID: 38002369] [PMCID: PMC10669569] [DOI: 10.3390/bioengineering10111245]
Abstract
Early detection is crucial for the survival and recovery of lung cancer patients. Computer-aided diagnosis (CAD) systems can assist in the early diagnosis of lung cancer by providing decision support. While deep learning methods are increasingly being applied to CAD tasks, these models lack interpretability. In this paper, we propose a convolutional neural network model that synthesizes semantic characteristics (SCCNN) to predict whether a given pulmonary nodule is malignant. The model combines the advantages of multi-view, multi-task, and attention modules in order to fully simulate the actual diagnostic process of radiologists. The 3D (three-dimensional) multi-view samples of lung nodules are extracted by a spatial sampling method. Meanwhile, semantic characteristics commonly used in radiology reports serve as an auxiliary task and help explain how the model reaches its interpretation. The introduction of an attention module in the feature fusion stage improves the classification of lung nodules as benign or malignant. Our experimental results on the LIDC-IDRI (Lung Image Database Consortium and Image Database Resource Initiative) dataset show that this method achieves 95.45% accuracy and an area under the ROC (receiver operating characteristic) curve of 97.26%. The results show that the proposed method not only achieves benign-malignant classification competitive with standard 3D CNN approaches but can also intuitively explain how the model makes predictions, which can assist clinical diagnosis.
Affiliation(s)
- Xiaoqin Li
- Faculty of Environment and Life, Beijing University of Technology, Beijing 100124, China
56
Küstner T, Hepp T, Seith F. Multiparametric Oncologic Hybrid Imaging: Machine Learning Challenges and Opportunities. Nuklearmedizin 2023; 62:306-313. [PMID: 37802058] [DOI: 10.1055/a-2157-6670]
Abstract
BACKGROUND Machine learning (ML) is considered an important technology for future data analysis in health care. METHODS The inherently technology-driven fields of diagnostic radiology and nuclear medicine will both benefit from ML in terms of image acquisition and reconstruction. Within the next few years, this will lead to accelerated image acquisition, improved image quality, a reduction of motion artifacts and, for PET imaging, reduced radiation exposure and new approaches for attenuation correction. Furthermore, ML has the potential to support decision making through a combined analysis of data derived from different modalities, especially in oncology. In this context, we see great potential for ML in multiparametric hybrid imaging and the development of imaging biomarkers. RESULTS AND CONCLUSION In this review, we describe the basics of ML, present approaches in hybrid imaging with MRI, CT, and PET, and discuss the specific challenges associated with them and the steps ahead to make ML a diagnostic and clinical tool in the future. KEY POINTS ML provides a viable clinical solution for the reconstruction, processing, and analysis of hybrid imaging obtained from MRI, CT, and PET.
Affiliation(s)
- Thomas Küstner
- Medical Image and Data Analysis (MIDAS.lab), Department of Diagnostic and Interventional Radiology, University Hospitals Tübingen, Germany
- Tobias Hepp
- Medical Image and Data Analysis (MIDAS.lab), Department of Diagnostic and Interventional Radiology, University Hospitals Tübingen, Germany
- Ferdinand Seith
- Department of Diagnostic and Interventional Radiology, University Hospitals Tübingen, Germany
57
Siddiqui EA, Chaurasia V, Shandilya M. Classification of lung cancer computed tomography images using a 3-dimensional deep convolutional neural network with multi-layer filter. J Cancer Res Clin Oncol 2023; 149:11279-11294. [PMID: 37368121] [DOI: 10.1007/s00432-023-04992-9]
Abstract
Lung cancer creates pulmonary nodules in the patient's lung, which may be diagnosed early using computer-aided diagnostics. A novel automated pulmonary nodule diagnosis technique using three-dimensional deep convolutional neural networks and a multi-layer filter is presented in this paper. Volumetric computed tomography images are employed for the suggested automated diagnosis of lung nodules. The proposed approach generates three-dimensional feature layers, which retain the temporal links between adjacent slices of the computed tomography images. The use of several activation functions at different levels of the proposed network results in richer feature extraction and efficient classification. The suggested approach divides lung volumetric computed tomography images into malignant and benign categories. Its performance is evaluated on three commonly used datasets in the domain: LUNA 16, LIDC-IDRI, and TCIA. The proposed method outperforms the state of the art in terms of accuracy, sensitivity, specificity, F1 score, false-positive rate, false-negative rate, and error rate.
Affiliation(s)
- Madhu Shandilya
- Maulana Azad National Institute of Technology, Bhopal, 462003, India
58
Jenkin Suji R, Bhadauria SS, Wilfred Godfrey W. A survey and taxonomy of 2.5D approaches for lung segmentation and nodule detection in CT images. Comput Biol Med 2023; 165:107437. [PMID: 37717526] [DOI: 10.1016/j.compbiomed.2023.107437]
Abstract
CAD systems for lung cancer diagnosis and detection can offer unbiased, indefatigable diagnostics with minimal variance, decreasing the mortality rate and improving the five-year survival rate. Lung segmentation and lung nodule detection are critical steps in the lung cancer CAD system pipeline. The literature on lung segmentation and lung nodule detection mostly comprises techniques that process 3D volumes or 2D slices, together with surveys of such techniques. However, surveys highlighting 2.5D techniques for lung segmentation and lung nodule detection are still lacking. This paper presents a background and discussion on 2.5D methods to fill this gap. It also gives a taxonomy of 2.5D approaches and a detailed description of each. Based on the taxonomy, various 2.5D techniques for lung segmentation and lung nodule detection are clustered into these approaches, followed by possible future work in this direction.
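A common 2.5D idea surveyed in work like this is to feed a 2D network a small stack of adjacent slices, giving it limited through-plane context at low cost. As a minimal NumPy sketch of that slice-stacking step (an illustration only, not any specific surveyed method):

```python
import numpy as np

def make_25d_slabs(volume, k=1):
    """Build per-slice 2.5D inputs from a CT volume of shape (D, H, W):
    each slice is stacked with its k neighbours above and below, giving an
    array of shape (D, 2k+1, H, W). Edge slices are replicate-padded."""
    d = volume.shape[0]
    idx = np.arange(d)[:, None] + np.arange(-k, k + 1)[None, :]  # neighbour index per slice
    idx = np.clip(idx, 0, d - 1)                                 # clamp at the first/last slice
    return volume[idx]

vol = np.random.rand(10, 64, 64).astype(np.float32)  # toy stand-in for a CT volume
slabs = make_25d_slabs(vol, k=1)                     # shape (10, 3, 64, 64)
# slabs[i, 1] is slice i itself; slabs[i, 0] and slabs[i, 2] are its neighbours
```

The stacked neighbours are then typically treated as input channels of a 2D segmentation or detection network.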
59
Zhi L, Jiang W, Zhang S, Zhou T. Deep neural network pulmonary nodule segmentation methods for CT images: Literature review and experimental comparisons. Comput Biol Med 2023; 164:107321. [PMID: 37595518] [DOI: 10.1016/j.compbiomed.2023.107321]
Abstract
Automatic and accurate segmentation of pulmonary nodules in CT images can help physicians perform more accurate quantitative analysis, diagnose diseases, and improve patient survival. In recent years, with the development of deep learning technology, pulmonary nodule segmentation methods based on deep neural networks have gradually replaced traditional segmentation methods. This paper reviews recent pulmonary nodule segmentation algorithms based on deep neural networks. First, the heterogeneity of pulmonary nodules, the interpretability of segmentation results, and external environmental factors are discussed; then, the open-source 2D and 3D models used in medical segmentation tasks in recent years are applied to the Lung Image Database Consortium and Image Database Resource Initiative (LIDC) and Lung Nodule Analysis 16 (Luna16) datasets for comparison, and the visual diagnostic features marked by radiologists are evaluated one by one. From the analysis of the experimental data, the following conclusions are drawn: (1) in the pulmonary nodule segmentation task, the DSC performance of 2D segmentation models is generally better than that of 3D segmentation models; (2) 'Subtlety', 'Sphericity', 'Margin', 'Texture', and 'Size' have more influence on pulmonary nodule segmentation, while 'Lobulation', 'Spiculation', and 'Benign and Malignant' features have less influence; (3) higher pulmonary nodule segmentation accuracy can be achieved on better-quality CT images; (4) good contextual information acquisition and attention mechanism design positively affect pulmonary nodule segmentation.
Affiliation(s)
- Lijia Zhi
- School of Computer Science and Engineering, North Minzu University, Yinchuan, 750021, China; Medical Imaging Center, Ningxia Hui Autonomous Region People's Hospital, Yinchuan, 750000, China; The Key Laboratory of Images & Graphics Intelligent Processing of State Ethnic Affairs Commission, Yinchuan, 750021, China
- Wujun Jiang
- School of Computer Science and Engineering, North Minzu University, Yinchuan, 750021, China
- Shaomin Zhang
- School of Computer Science and Engineering, North Minzu University, Yinchuan, 750021, China; Medical Imaging Center, Ningxia Hui Autonomous Region People's Hospital, Yinchuan, 750000, China; The Key Laboratory of Images & Graphics Intelligent Processing of State Ethnic Affairs Commission, Yinchuan, 750021, China
- Tao Zhou
- School of Computer Science and Engineering, North Minzu University, Yinchuan, 750021, China; The Key Laboratory of Images & Graphics Intelligent Processing of State Ethnic Affairs Commission, Yinchuan, 750021, China
60
Zhang S, Wu J, Shi E, Yu S, Gao Y, Li LC, Kuo LR, Pomeroy MJ, Liang ZJ. MM-GLCM-CNN: A multi-scale and multi-level based GLCM-CNN for polyp classification. Comput Med Imaging Graph 2023; 108:102257. [PMID: 37301171] [DOI: 10.1016/j.compmedimag.2023.102257]
Abstract
Distinguishing malignant from benign lesions has significant clinical impact on both early detection and the optimal management of those detections. The convolutional neural network (CNN) has shown great potential in medical imaging applications due to its powerful feature learning capability. However, it is very challenging to obtain pathological ground truth, in addition to the collected in vivo medical images, to construct objective training labels for feature learning, which makes lesion diagnosis difficult. This runs contrary to the requirement that CNN algorithms need large datasets for training. To explore the ability to learn features from small pathologically proven datasets for the differentiation of malignant from benign polyps, we propose a Multi-scale and Multi-level based Gray-level Co-occurrence Matrix CNN (MM-GLCM-CNN). Specifically, instead of inputting the lesions' medical images, the GLCM, which characterizes lesion heterogeneity in terms of image texture characteristics, is fed into the MM-GLCM-CNN model for training. This aims to improve feature extraction by introducing multi-scale and multi-level analysis into the construction of lesion texture characteristic descriptors (LTCDs). To learn and fuse multiple sets of LTCDs from small datasets for lesion diagnosis, we further propose an adaptive multi-input CNN learning framework. Furthermore, an Adaptive Weight Network is used to highlight important information and suppress redundant information after the fusion of the LTCDs. We evaluated the performance of MM-GLCM-CNN by the area under the receiver operating characteristic curve (AUC) on small private lesion datasets of colon polyps. The AUC score reaches 93.99%, a gain of 1.49% over current state-of-the-art lesion classification methods on the same dataset. This gain indicates the importance of incorporating lesion characteristic heterogeneity for the prediction of lesion malignancy using small pathologically proven datasets.
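As background for the GLCM input used by MM-GLCM-CNN: a gray-level co-occurrence matrix tabulates how often pairs of gray levels co-occur at a fixed pixel offset, and Haralick-style statistics such as contrast are then read off it. A minimal NumPy sketch (an illustration of the basic descriptor, not the paper's multi-scale, multi-level pipeline):

```python
import numpy as np

def glcm(image, levels, offset=(0, 1), symmetric=True):
    """Gray-level co-occurrence matrix: P[i, j] is the probability that
    gray level i has gray level j as its neighbour at the given offset."""
    di, dj = offset
    h, w = image.shape
    p = np.zeros((levels, levels), dtype=np.float64)
    for i in range(max(0, -di), min(h, h - di)):
        for j in range(max(0, -dj), min(w, w - dj)):
            p[image[i, j], image[i + di, j + dj]] += 1
    if symmetric:              # count each pair in both directions
        p = p + p.T
    return p / p.sum()

def contrast(p):
    """Haralick contrast: expected squared gray-level difference."""
    i, j = np.indices(p.shape)
    return float(np.sum(p * (i - j) ** 2))

img = np.array([[0, 0, 1],
                [0, 1, 1]])
P = glcm(img, levels=2)
# uniform regions give low contrast; a checkerboard gives high contrast
```

In practice such matrices are computed at several offsets and quantization levels, which is roughly what "multi-scale and multi-level" refers to in the abstract above.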
Affiliation(s)
- Shu Zhang
- Center for Brain and Brain-Inspired Computing Research, School of Computer Science, Northwestern Polytechnical University, Xi'an 710000, China
- Jinru Wu
- Center for Brain and Brain-Inspired Computing Research, School of Computer Science, Northwestern Polytechnical University, Xi'an 710000, China
- Enze Shi
- Center for Brain and Brain-Inspired Computing Research, School of Computer Science, Northwestern Polytechnical University, Xi'an 710000, China
- Sigang Yu
- Center for Brain and Brain-Inspired Computing Research, School of Computer Science, Northwestern Polytechnical University, Xi'an 710000, China
- Yongfeng Gao
- Department of Radiology, Stony Brook University, Stony Brook, NY 11794, USA
- Lihong Connie Li
- Department of Engineering & Environmental Science, City University of New York, Staten Island, NY 10314, USA
- Licheng Ryan Kuo
- Department of Radiology, Stony Brook University, Stony Brook, NY 11794, USA; Department of Biomedical Engineering, Stony Brook University, Stony Brook, NY 11794, USA
- Marc Jason Pomeroy
- Department of Radiology, Stony Brook University, Stony Brook, NY 11794, USA; Department of Biomedical Engineering, Stony Brook University, Stony Brook, NY 11794, USA
- Zhengrong Jerome Liang
- Department of Radiology, Stony Brook University, Stony Brook, NY 11794, USA; Department of Biomedical Engineering, Stony Brook University, Stony Brook, NY 11794, USA
61
Riaz Z, Khan B, Abdullah S, Khan S, Islam MS. Lung Tumor Image Segmentation from Computer Tomography Images Using MobileNetV2 and Transfer Learning. Bioengineering (Basel) 2023; 10:981. [PMID: 37627866] [PMCID: PMC10451633] [DOI: 10.3390/bioengineering10080981]
Abstract
BACKGROUND Lung cancer is one of the most fatal cancers worldwide, and malignant tumors are characterized by the growth of abnormal cells in the tissues of the lungs. Usually, symptoms of lung cancer do not appear until it is already at an advanced stage. The proper segmentation of cancerous lesions in CT images is the primary step toward a completely automated diagnostic system. METHOD In this work, we developed an improved hybrid neural network via the fusion of two architectures, MobileNetV2 and U-Net, for the semantic segmentation of malignant lung tumors from CT images. A transfer learning technique was employed, and the pre-trained MobileNetV2 was used as the encoder of a conventional U-Net model for feature extraction. The proposed network is an efficient segmentation approach that performs lightweight filtering to reduce computation and pointwise convolution to build more features. Skip connections with the ReLU activation function were established to connect the encoder layers of MobileNetV2 to the decoder layers of the U-Net, improving model convergence and allowing the concatenation of feature maps with different resolutions from encoder to decoder. Furthermore, the model was trained and fine-tuned on the training dataset acquired from the Medical Segmentation Decathlon (MSD) 2018 Challenge. RESULTS The proposed network was tested and evaluated on 25% of the dataset obtained from the MSD, and it achieved a Dice score of 0.8793, a recall of 0.8602, and a precision of 0.93. Notably, our technique outperforms the currently available networks, which involve several phases of training and testing.
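The Dice, recall, and precision figures reported above are standard overlap metrics for binary segmentation masks. A minimal NumPy sketch of how they are computed, with toy masks (an illustration, not the authors' evaluation code):

```python
import numpy as np

def segmentation_scores(pred, target, eps=1e-7):
    """Dice, precision, and recall for binary segmentation masks.
    `pred` and `target` are 0/1 arrays of the same shape."""
    pred = pred.astype(bool)
    target = target.astype(bool)
    tp = np.logical_and(pred, target).sum()   # correctly segmented pixels
    fp = np.logical_and(pred, ~target).sum()  # over-segmented pixels
    fn = np.logical_and(~pred, target).sum()  # missed pixels
    dice = 2 * tp / (2 * tp + fp + fn + eps)
    precision = tp / (tp + fp + eps)
    recall = tp / (tp + fn + eps)
    return dice, precision, recall

target = np.zeros((8, 8), dtype=int); target[2:6, 2:6] = 1  # 16-pixel toy "tumor"
pred = np.zeros((8, 8), dtype=int);   pred[2:6, 2:5] = 1    # prediction misses one column
dice, prec, rec = segmentation_scores(pred, target)
# covers 12 of 16 target pixels with no false positives, so recall is 0.75
```

The small `eps` only guards against division by zero when a mask is empty.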
Affiliation(s)
- Zainab Riaz
- Hong Kong Center for Cerebro-Cardiovascular Health Engineering (COCHE), Hong Kong SAR, China
- Bangul Khan
- Hong Kong Center for Cerebro-Cardiovascular Health Engineering (COCHE), Hong Kong SAR, China
- Department of Biomedical Engineering, City University of Hong Kong, Hong Kong SAR, China
- Saad Abdullah
- Division of Intelligent Future Technologies, School of Innovation, Design and Engineering, Mälardalen University, P.O. Box 883, 721 23 Västerås, Sweden
- Samiullah Khan
- Center for Eye & Vision Research, 17W Science Park, Hong Kong SAR, China
- Md Shohidul Islam
- Hong Kong Center for Cerebro-Cardiovascular Health Engineering (COCHE), Hong Kong SAR, China
62
Thanoon MA, Zulkifley MA, Mohd Zainuri MAA, Abdani SR. A Review of Deep Learning Techniques for Lung Cancer Screening and Diagnosis Based on CT Images. Diagnostics (Basel) 2023; 13:2617. [PMID: 37627876] [PMCID: PMC10453592] [DOI: 10.3390/diagnostics13162617]
Abstract
One of the most common and deadly diseases in the world is lung cancer. Only early identification of lung cancer can increase a patient's probability of survival. A frequently used modality for the screening and diagnosis of lung cancer is computed tomography (CT) imaging, which provides a detailed scan of the lung. In line with the advancement of computer-assisted systems, deep learning techniques have been extensively explored to help in interpreting the CT images for lung cancer identification. Hence, the goal of this review is to provide a detailed review of the deep learning techniques that were developed for screening and diagnosing lung cancer. This review covers an overview of deep learning (DL) techniques, the suggested DL techniques for lung cancer applications, and the novelties of the reviewed methods. This review focuses on two main methodologies of deep learning in screening and diagnosing lung cancer, which are classification and segmentation methodologies. The advantages and shortcomings of current deep learning models will also be discussed. The resultant analysis demonstrates that there is a significant potential for deep learning methods to provide precise and effective computer-assisted lung cancer screening and diagnosis using CT scans. At the end of this review, a list of potential future works regarding improving the application of deep learning is provided to spearhead the advancement of computer-assisted lung cancer diagnosis systems.
Affiliation(s)
- Mohammad A. Thanoon
- Department of Electrical, Electronic and Systems Engineering, Faculty of Engineering and Built Environment, University Kebangsaan Malaysia, Bangi 43600, Malaysia;
- System and Control Engineering Department, College of Electronics Engineering, Ninevah University, Mosul 41002, Iraq
- Mohd Asyraf Zulkifley
- Department of Electrical, Electronic and Systems Engineering, Faculty of Engineering and Built Environment, University Kebangsaan Malaysia, Bangi 43600, Malaysia;
- Muhammad Ammirrul Atiqi Mohd Zainuri
- Department of Electrical, Electronic and Systems Engineering, Faculty of Engineering and Built Environment, University Kebangsaan Malaysia, Bangi 43600, Malaysia;
- Siti Raihanah Abdani
- School of Computing Sciences, College of Computing, Informatics and Media, Universiti Teknologi MARA, Shah Alam 40450, Malaysia;
63
Bishnoi V, Goel N. Tensor-RT-Based Transfer Learning Model for Lung Cancer Classification. J Digit Imaging 2023; 36:1364-1375. [PMID: 37059889] [PMCID: PMC10407002] [DOI: 10.1007/s10278-023-00822-z]
Abstract
Cancer is a leading cause of death across the globe, with lung cancer accounting for the highest mortality rate. Early diagnosis through computed tomography (CT) imaging helps identify the stage of lung cancer. Several deep learning-based classification methods have been employed to develop automatic systems for the diagnosis and detection of lung CT slices. However, diagnosis based on nodule detection is challenging because it requires manual annotation of nodule regions, and these computer-aided systems have not yet achieved the desired performance in real-time lung cancer classification. In the present paper, a high-speed, real-time transfer learning-based framework is proposed for classifying CT lung cancer slices as benign or malignant. The proposed framework comprises three modules: (i) pre-processing and segmentation of lung images using K-means clustering based on cosine distance and morphological operations; (ii) tuning and regularization of the proposed model, named the weighted VGG deep network (WVDN); and (iii) model inference in Nvidia TensorRT during post-processing for deployment in real-time applications. In this study, two pre-trained CNN models were evaluated and compared with the proposed model. All models were trained on 19,419 CT lung slices obtained from the publicly available Lung Image Database Consortium and Image Database Resource Initiative (LIDC-IDRI) dataset. The proposed model achieved the best classification metrics: an accuracy of 0.932; precision, recall, and F1 score of 0.93; and a Cohen's kappa score of 0.85. A statistical evaluation of the classification parameters yielded a p-value < 0.0001 for the proposed model. The quantitative and statistical results validate the improved performance of the proposed model compared to state-of-the-art methods. The proposed framework operates on complete CT slices rather than marked annotations and may help improve clinical diagnosis.
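The entry above reports a Cohen's kappa of 0.85 alongside accuracy. As a point of reference, kappa measures agreement between predicted and true labels corrected for chance agreement; a minimal stdlib sketch (illustrative only, not the authors' code):

```python
def cohen_kappa(y_true, y_pred):
    """Agreement between two label lists, corrected for chance.

    Assumes chance agreement < 1 (i.e., labels are not degenerate).
    """
    n = len(y_true)
    # observed agreement: fraction of positions where labels match
    po = sum(t == p for t, p in zip(y_true, y_pred)) / n
    # chance agreement from the marginal label frequencies
    labels = set(y_true) | set(y_pred)
    pe = sum((y_true.count(c) / n) * (y_pred.count(c) / n) for c in labels)
    return (po - pe) / (1 - pe)
```

Perfect agreement gives 1.0; agreement no better than chance gives 0.0.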
Affiliation(s)
- Vidhi Bishnoi
- Indira Gandhi Delhi Technical University for Women, Delhi, India
- Nidhi Goel
- Indira Gandhi Delhi Technical University for Women, Delhi, India

64
Zhang Z, Tie Y, Zhang D, Liu F, Qi L. Quantum-Involution inspire false positive reduction in pulmonary nodule detection. Biomed Signal Process Control 2023. [DOI: 10.1016/j.bspc.2023.104850]
65
Wang X, Su R, Xie W, Wang W, Xu Y, Mann R, Han J, Tan T. 2.75D: Boosting learning by representing 3D Medical imaging to 2D features for small data. Biomed Signal Process Control 2023. [DOI: 10.1016/j.bspc.2023.104858]
66
Ahmed I, Chehri A, Jeon G, Piccialli F. Automated Pulmonary Nodule Classification and Detection Using Deep Learning Architectures. IEEE/ACM Transactions on Computational Biology and Bioinformatics 2023; 20:2445-2456. [PMID: 35853048] [DOI: 10.1109/tcbb.2022.3192139]
Abstract
Recent advancements in biomedical imaging technologies have created tremendous opportunities for the health care sector and the biomedical community. However, collecting, measuring, and analyzing large volumes of health-related data such as images is a laborious and time-consuming job for medical experts. Artificial intelligence applications, including machine and deep learning systems, therefore help in the early diagnosis of various contagious/cancerous diseases such as lung cancer. As lung or pulmonary cancer may have no apparent or clear initial symptoms, it is essential to develop and promote Computer Aided Detection (CAD) systems that can support medical experts in classifying and detecting lung nodules at early stages. In this article, we analyze the problem of lung cancer diagnosis by classifying and detecting pulmonary nodules, i.e., benign and malignant, in CT images. To achieve this objective, an automated deep learning-based system is introduced for classifying and detecting lung nodules, employing state-of-the-art detection architectures including Faster-RCNN, YOLOv3, and SSD. All deep learning models are evaluated on the publicly available benchmark LIDC-IDRI dataset. The experimental outcomes reveal that the False Positive Rate (FPR) is reduced and the accuracy is enhanced.
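Detectors such as Faster-RCNN, YOLOv3, and SSD are conventionally scored by matching predicted boxes to ground truth via intersection-over-union (IoU); a prediction typically counts as a true positive when IoU exceeds a threshold such as 0.5. A minimal sketch of box IoU (illustrative, not from the paper):

```python
def box_iou(a, b):
    """IoU of two axis-aligned boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    # clamp to zero when the boxes do not overlap
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter) if inter else 0.0
```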
67
Asiri AA, Shaf A, Ali T, Aamir M, Irfan M, Alqahtani S, Mehdar KM, Halawani HT, Alghamdi AH, Alshamrani AFA, Alqhtani SM. Brain Tumor Detection and Classification Using Fine-Tuned CNN with ResNet50 and U-Net Model: A Study on TCGA-LGG and TCIA Dataset for MRI Applications. Life (Basel) 2023; 13:1449. [PMID: 37511824] [PMCID: PMC10381218] [DOI: 10.3390/life13071449]
Abstract
Nowadays, brain tumors are a leading cause of mortality worldwide. Tumor cells grow abnormally and adversely affect the surrounding brain tissue; they can be cancerous or non-cancerous, and their symptoms vary with location, size, and type. Because of this complex and varying structure, accurately detecting and classifying a brain tumor at an early stage, when intervention can reduce mortality, is challenging. This research proposes an improved fine-tuned model based on a CNN with ResNet50 and U-Net to address this problem. The model works on the publicly available TCGA-LGG dataset from TCIA, which consists of 120 patients. The proposed CNN and fine-tuned ResNet50 model are used to classify images as tumor or no-tumor, and the U-Net model is integrated to segment the tumor regions correctly. The performance evaluation metrics are accuracy, intersection over union (IoU), dice similarity coefficient (DSC), and similarity index (SI). The fine-tuned ResNet50 model achieved IoU: 0.91, DSC: 0.95, SI: 0.95, while U-Net with ResNet50 outperformed all other models in correctly classifying and segmenting the tumor region.
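The IoU and dice similarity coefficient (DSC) reported above are standard overlap measures between a predicted segmentation mask and the ground truth. A stdlib sketch, representing each binary mask as a set of voxel coordinates (the representation and function name are illustrative, not the authors'):

```python
def dice_and_iou(pred, truth):
    """Overlap between two binary masks given as sets of voxel coords."""
    inter = len(pred & truth)
    union = len(pred | truth)
    # both empty counts as perfect agreement
    iou = inter / union if union else 1.0
    dice = 2 * inter / (len(pred) + len(truth)) if (pred or truth) else 1.0
    return dice, iou
```

Dice weights the intersection twice, so for partial overlaps it is always at least as large as IoU.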
Affiliation(s)
- Abdullah A Asiri
- Radiological Sciences Department, College of Applied Medical Sciences, Najran University, Najran 61441, Saudi Arabia
- Ahmad Shaf
- Department of Computer Science, Sahiwal Campus, COMSATS University Islamabad, Sahiwal 57000, Pakistan
- Tariq Ali
- Department of Computer Science, Sahiwal Campus, COMSATS University Islamabad, Sahiwal 57000, Pakistan
- Muhammad Aamir
- Department of Computer Science, Sahiwal Campus, COMSATS University Islamabad, Sahiwal 57000, Pakistan
- Muhammad Irfan
- Electrical Engineering Department, College of Engineering, Najran University, Najran 61441, Saudi Arabia
- Saeed Alqahtani
- Radiological Sciences Department, College of Applied Medical Sciences, Najran University, Najran 61441, Saudi Arabia
- Khlood M Mehdar
- Anatomy Department, Medicine College, Najran University, Najran 61441, Saudi Arabia
- Hanan Talal Halawani
- Computer Science Department, College of Computer Science and Information Systems, Najran University, Najran 61441, Saudi Arabia
- Ali H Alghamdi
- Department of Radiological Sciences, Faculty of Applied Medical Sciences, The University of Tabuk, Tabuk 47512, Saudi Arabia
- Abdullah Fahad A Alshamrani
- Department of Diagnostic Radiology Technology, College of Applied Medical Sciences, Taibah University, Madinah 42353, Saudi Arabia
- Samar M Alqhtani
- Department of Information Systems, College of Computer Science and Information Systems, Najran University, Najran 61441, Saudi Arabia

68
Li H, Yang Z. Torsional nystagmus recognition based on deep learning for vertigo diagnosis. Front Neurosci 2023; 17:1160904. [PMID: 37360163] [PMCID: PMC10288185] [DOI: 10.3389/fnins.2023.1160904]
Abstract
Introduction: Detection of torsional nystagmus can help identify the canal of origin in benign paroxysmal positional vertigo (BPPV). Most currently available pupil trackers do not detect torsional nystagmus. In view of this, a new deep learning network model was designed to determine torsional nystagmus. Methods: The dataset comes from the Eye, Ear, Nose and Throat (Eye&ENT) Hospital of Fudan University. The infrared videos were obtained from an eye movement recorder, yielding 24,521 nystagmus videos. All torsional nystagmus videos were annotated by the hospital's ophthalmologists. 80% of the dataset was used to train the model and 20% to test it. Results: Experiments indicate that the designed method can effectively identify torsional nystagmus, with higher recognition accuracy than comparable methods. It realizes automatic recognition of torsional nystagmus and provides support for diagnosing posterior and anterior canal BPPV. Discussion: Our present work complements existing methods of 2D nystagmus analysis and could improve the diagnostic capabilities of VNG in multiple vestibular disorders. Automatically detecting BPPV requires detection of nystagmus in all three planes and identification of a paroxysm; this is the next research work to be carried out.
69
Mei M, Ye Z, Zha Y. An integrated convolutional neural network for classifying small pulmonary solid nodules. Front Neurosci 2023; 17:1152222. [PMID: 37332867] [PMCID: PMC10272407] [DOI: 10.3389/fnins.2023.1152222]
Abstract
Achieving accurate classification of benign and malignant pulmonary nodules is essential for treating some diseases. However, traditional typing methods have difficulty obtaining satisfactory results on small pulmonary solid nodules, mainly for two reasons: (1) noise interference from other tissue information; and (2) missing features of small nodules caused by downsampling in traditional convolutional neural networks. To solve these problems, this paper proposes a new typing method to improve the diagnosis rate of small pulmonary solid nodules in CT images. Specifically, we first introduce the Otsu thresholding algorithm to preprocess the data and filter out interference information. Then, to acquire more small-nodule features, we add parallel radiomics, which can extract a large number of quantitative features from medical images, to the 3D convolutional neural network. Finally, the classifier generates more accurate results by combining the visual and radiomic features. In the experiments, the proposed method was tested on multiple datasets and outperformed other methods on the small pulmonary solid nodule classification task. In addition, several groups of ablation experiments demonstrated that the Otsu thresholding algorithm and radiomics are helpful for judging small nodules and that the Otsu thresholding algorithm is more flexible than manual thresholding.
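Otsu's method, used here for preprocessing, picks the gray-level threshold that maximizes the between-class variance of the intensity histogram. A minimal pure-Python sketch (illustrative; a real pipeline would typically call an optimized library implementation):

```python
def otsu_threshold(pixels, levels=256):
    """Otsu threshold for integer gray levels in [0, levels).

    Returns t such that pixels <= t form the low (background) class.
    """
    hist = [0] * levels
    for p in pixels:
        hist[p] += 1
    total = len(pixels)
    sum_all = sum(i * h for i, h in enumerate(hist))
    best_t, best_var = 0, -1.0
    w0, sum0 = 0, 0
    for t in range(levels):
        w0 += hist[t]               # weight of the low class
        if w0 == 0:
            continue
        w1 = total - w0             # weight of the high class
        if w1 == 0:
            break
        sum0 += t * hist[t]
        mu0, mu1 = sum0 / w0, (sum_all - sum0) / w1
        var_between = w0 * w1 * (mu0 - mu1) ** 2
        if var_between > best_var:  # keep the first maximizer
            best_var, best_t = var_between, t
    return best_t
```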
Affiliation(s)
- Mengqing Mei
- School of Computer Science, Hubei University of Technology, Wuhan, China
- Zhiwei Ye
- School of Computer Science, Hubei University of Technology, Wuhan, China
- Yunfei Zha
- Department of Radiology, Renmin Hospital of Wuhan University, Wuhan, China

70
Chang S, Gao Y, Pomeroy MJ, Bai T, Zhang H, Lu S, Pickhardt PJ, Gupta A, Reiter MJ, Gould ES, Liang Z. Exploring Dual-Energy CT Spectral Information for Machine Learning-Driven Lesion Diagnosis in Pre-Log Domain. IEEE Transactions on Medical Imaging 2023; 42:1835-1845. [PMID: 37022248] [PMCID: PMC10238622] [DOI: 10.1109/tmi.2023.3240847]
Abstract
In this study, we proposed a computer-aided diagnosis (CADx) framework for dual-energy spectral CT (DECT), called CADxDE, which operates directly on the transmission data in the pre-log domain to explore spectral information for lesion diagnosis. CADxDE includes material identification and machine learning (ML)-based CADx. Benefiting from DECT's capability of performing virtual monoenergetic imaging with the identified materials, the responses of different tissue types (e.g., muscle, water, and fat) in lesions at each energy can be explored by ML for CADx. Without losing essential factors in the DECT scan, a pre-log domain model-based iterative reconstruction is adopted to obtain decomposed material images, which are then used to generate virtual monoenergetic images (VMIs) at n selected energies. While these VMIs share the same anatomy, their contrast distribution patterns carry rich information across the n energies for tissue characterization. Thus, a corresponding ML-based CADx is developed to exploit the energy-enhanced tissue features for differentiating malignant from benign lesions. Specifically, an original image-driven multi-channel three-dimensional convolutional neural network (CNN) and an extracted lesion feature-based ML CADx method are developed to show the feasibility of CADxDE. Results from three pathologically proven clinical datasets showed 4.01% to 14.25% higher AUC (area under the receiver operating characteristic curve) scores than both the conventional DECT data (high and low energy spectra separately) and conventional CT data. The mean gain of >9.13% in AUC scores indicates that the energy spectral-enhanced tissue features from CADxDE have great potential to improve lesion diagnosis performance.
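The AUC comparisons above can be read through the rank (Mann-Whitney) formulation: AUC is the probability that a randomly chosen positive case is scored higher than a randomly chosen negative one, with ties counted as half. An illustrative stdlib sketch (not the paper's evaluation code):

```python
def auc_score(labels, scores):
    """Rank-based AUC from binary labels (1=positive) and real scores."""
    pos = [s for l, s in zip(labels, scores) if l == 1]
    neg = [s for l, s in zip(labels, scores) if l == 0]
    # each positive/negative pair contributes 1 for a win, 0.5 for a tie
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))
```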
Affiliation(s)
- Shaojie Chang
- Department of Radiology, Stony Brook University, Stony Brook, NY 11794, USA
- Yongfeng Gao
- Department of Radiology, Stony Brook University, Stony Brook, NY 11794, USA
- Marc J. Pomeroy
- Departments of Radiology and Biomedical Engineering, Stony Brook University, Stony Brook, NY 11794, USA
- Ti Bai
- Department of Radiation Oncology, University of Texas Southwestern Medical Centre, Dallas, TX 75390, USA
- Hao Zhang
- Department of Medical Physics, Memorial Sloan Kettering Cancer Center, NY 10065, USA
- Siming Lu
- Departments of Radiology and Biomedical Engineering, Stony Brook University, Stony Brook, NY 11794, USA
- Perry J. Pickhardt
- Department of Radiology, School of Medicine, University of Wisconsin, Madison, WI 53792, USA
- Amit Gupta
- Department of Radiology, Stony Brook University, Stony Brook, NY 11794, USA
- Michael J. Reiter
- Department of Radiology, Stony Brook University, Stony Brook, NY 11794, USA
- Elaine S. Gould
- Department of Radiology, Stony Brook University, Stony Brook, NY 11794, USA
- Zhengrong Liang
- Departments of Radiology and Biomedical Engineering, Stony Brook University, Stony Brook, NY 11794, USA

71
Yang D, Wang J, Long A, Wang X, Zhang Y, Han D. Radiological analysis of coronal angulation of femoral neck fracture. Journal of Radiation Research and Applied Sciences 2023. [DOI: 10.1016/j.jrras.2023.100550]
72
Zhang L, Tang L, Xia M, Cao G. The application of artificial intelligence in glaucoma diagnosis and prediction. Front Cell Dev Biol 2023; 11:1173094. [PMID: 37215077] [PMCID: PMC10192631] [DOI: 10.3389/fcell.2023.1173094]
Abstract
Artificial intelligence is a multidisciplinary and collaborative science, and the ability of deep learning to extract and process image features gives it a unique advantage in addressing problems in ophthalmology. Deep learning systems can assist ophthalmologists in diagnosing characteristic fundus lesions in glaucoma, such as retinal nerve fiber layer defects, optic nerve head damage, and optic disc hemorrhage. Early detection of these lesions can help delay structural damage, protect visual function, and reduce visual field loss. The development of deep learning led to the emergence of deep convolutional neural networks, which are pushing the integration of artificial intelligence with testing devices such as visual field meters, fundus imaging, and optical coherence tomography, driving more rapid advances in clinical glaucoma diagnosis and prediction techniques. This article details advances in artificial intelligence combined with visual field testing, fundus photography, and optical coherence tomography for glaucoma diagnosis and prediction, some of which are familiar and some not widely known, and then explores the challenges at this stage and the prospects for future clinical applications. In the future, deep cooperation between artificial intelligence and medical technology will make datasets and clinical application rules more standardized, and glaucoma diagnosis and prediction tools will become simpler and more standardized, benefiting multiple ethnic groups.
Affiliation(s)
- Linyu Zhang
- The Affiliated Eye Hospital of Nanjing Medical University, Nanjing, China
- The Fourth School of Clinical Medicine, Nanjing Medical University, Nanjing, China
- Li Tang
- The Affiliated Eye Hospital of Nanjing Medical University, Nanjing, China
- Min Xia
- The Affiliated Eye Hospital of Nanjing Medical University, Nanjing, China
- The Fourth School of Clinical Medicine, Nanjing Medical University, Nanjing, China
- Guofan Cao
- The Affiliated Eye Hospital of Nanjing Medical University, Nanjing, China
- The Fourth School of Clinical Medicine, Nanjing Medical University, Nanjing, China

73
Midya A, Chakraborty J, Srouji R, Narayan RR, Boerner T, Zheng J, Pak LM, Creasy JM, Escobar LA, Harrington KA, Gonen M, D'Angelica MI, Kingham TP, Do RKG, Jarnagin WR, Simpson AL. Computerized Diagnosis of Liver Tumors From CT Scans Using a Deep Neural Network Approach. IEEE J Biomed Health Inform 2023; 27:2456-2464. [PMID: 37027632] [PMCID: PMC10245221] [DOI: 10.1109/jbhi.2023.3248489]
Abstract
The liver is a frequent site of benign and malignant, primary and metastatic tumors. Hepatocellular carcinoma (HCC) and intrahepatic cholangiocarcinoma (ICC) are the most common primary liver cancers, and colorectal liver metastasis (CRLM) is the most common secondary liver cancer. Although imaging characterization of these tumors is central to optimal clinical management, it relies on imaging features that are often non-specific, overlapping, and subject to inter-observer variability. Thus, in this study, we aimed to categorize liver tumors automatically from CT scans using a deep learning approach that objectively extracts discriminating features not visible to the naked eye. Specifically, we used a modified Inception v3 network-based classification model to classify HCC, ICC, CRLM, and benign tumors from pretreatment portal venous phase computed tomography (CT) scans. Trained on a multi-institutional dataset of 814 patients, this method achieved an overall accuracy of 96% on an independent dataset, with sensitivities of 96%, 94%, 99%, and 86% for HCC, ICC, CRLM, and benign tumors, respectively. These results demonstrate the feasibility of the proposed computer-assisted system as a novel non-invasive diagnostic tool for objectively classifying the most common liver tumors.
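The per-class sensitivities quoted above (96%, 94%, 99%, 86%) are each class's recall, TP/(TP+FN), computed one-vs-rest from predicted versus true tumor types. A small illustrative helper (function and label names are hypothetical, not from the paper):

```python
def per_class_sensitivity(y_true, y_pred, classes):
    """Sensitivity (recall) per class: TP / (TP + FN), one-vs-rest."""
    out = {}
    for c in classes:
        tp = sum(t == c and p == c for t, p in zip(y_true, y_pred))
        fn = sum(t == c and p != c for t, p in zip(y_true, y_pred))
        # undefined when the class never occurs in y_true
        out[c] = tp / (tp + fn) if (tp + fn) else float("nan")
    return out
```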
74
Park YJ, Cho HS, Kim MN. AI Model for Detection of Abdominal Hemorrhage Lesions in Abdominal CT Images. Bioengineering (Basel) 2023; 10:502. [PMID: 37106689] [PMCID: PMC10136064] [DOI: 10.3390/bioengineering10040502]
Abstract
Information technology has been actively utilized in imaging diagnosis using artificial intelligence (AI), providing benefits to human health. AI readings of abdominal hemorrhage lesions can be utilized when lesions cannot be read due to emergencies or the absence of specialists; however, related research is scarce owing to the difficulty of collecting and acquiring images. In this study, we processed an abdominal computed tomography (CT) database provided by multiple hospitals for use in deep learning and detected abdominal hemorrhage lesions in real time using an AI model designed in a cascade structure. The model uses a detection network to locate lesions of various sizes with high accuracy, preceded by a classification network that screens out images without lesions, addressing the increase in false positives caused by lesion-free images in actual clinical practice. The developed method achieved 93.22% sensitivity and 99.60% specificity.
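The cascade described above, a screening classifier placed ahead of the detector, reduces to simple control flow. The sketch below is illustrative only: the classifier and detector stand-ins are toy placeholders for the trained networks, not the authors' models:

```python
def cascade_read(image, classifier, detector, threshold=0.5):
    """Two-stage cascade: a cheap classifier screens out lesion-free
    images before the (more expensive) detector runs, cutting false
    positives. `classifier` returns P(lesion); `detector` returns boxes."""
    if classifier(image) < threshold:
        return []            # screened out: no detection attempted
    return detector(image)   # only suspicious images reach the detector

# toy stand-ins for the two trained networks (hypothetical)
clf = lambda img: 0.9 if max(img) > 100 else 0.1
det = lambda img: [(i, i) for i, v in enumerate(img) if v > 100]
```

Because most slices are lesion-free, a high-specificity front-end classifier cuts both runtime and false positives in the detector stage.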
Affiliation(s)
- Young-Jin Park
- Division of Electronics and Information System, Daegu Gyeongbuk Institute of Science and Technology (DGIST), Daegu 42988, Republic of Korea
- Hui-Sup Cho
- Division of Electronics and Information System, Daegu Gyeongbuk Institute of Science and Technology (DGIST), Daegu 42988, Republic of Korea
- Myoung-Nam Kim
- Department of Biomedical Engineering, School of Medicine, Kyungpook National University, Daegu 41566, Republic of Korea

75
Tan H, Xu H, Yu N, Yu Y, Duan H, Fan Q, Zhanyu T. The value of deep learning-based computer aided diagnostic system in improving diagnostic performance of rib fractures in acute blunt trauma. BMC Med Imaging 2023; 23:55. [PMID: 37055752] [PMCID: PMC10099632] [DOI: 10.1186/s12880-023-01012-7]
Abstract
BACKGROUND: To evaluate the value of a deep learning-based computer-aided diagnostic system (DL-CAD) in improving the diagnostic performance for acute rib fractures in patients with chest trauma. MATERIALS AND METHODS: CT images of 214 patients with acute blunt chest trauma were retrospectively analyzed by two interns and two attending radiologists, first independently and then, one month later, with the assistance of a DL-CAD, in a blinded and randomized manner. The consensus diagnosis of rib fracture by another two senior thoracic radiologists was regarded as the reference standard. Rib fracture diagnostic sensitivity, specificity, positive predictive value, diagnostic confidence, and mean reading time with and without DL-CAD were calculated and compared. RESULTS: There were 680 rib fracture lesions confirmed by the reference standard among all patients. With the assistance of DL-CAD, the diagnostic sensitivity and positive predictive value of the interns improved significantly from 68.82% and 84.50% to 91.76% and 93.17%, respectively. For the attendings, sensitivity and positive predictive value were 94.56% and 95.67% with DL-CAD versus 86.47% and 93.83% without. In addition, when radiologists were assisted by DL-CAD, mean reading time was significantly reduced and diagnostic confidence significantly enhanced. CONCLUSIONS: DL-CAD improves the diagnostic performance for acute rib fracture in chest trauma patients, increasing radiologists' diagnostic confidence, sensitivity, and positive predictive value, and can improve the diagnostic consistency of radiologists with different levels of experience.
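Sensitivity and positive predictive value, the two reader metrics compared above, derive from lesion-level true positives, false positives, and false negatives. A trivial illustrative helper (not from the paper):

```python
def sensitivity_ppv(tp, fp, fn):
    """Sensitivity = TP/(TP+FN); positive predictive value = TP/(TP+FP)."""
    return tp / (tp + fn), tp / (tp + fp)
```

Note that true negatives do not appear: in lesion-level reading studies there is no natural count of "absent lesions correctly not reported", so specificity requires a per-patient or per-region definition.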
Affiliation(s)
- Hui Tan
- Department of Radiology, Affiliated Hospital of Shaanxi University of Chinese Medicine, Xianyang, China
- Hui Xu
- Peter Boris Centre for Addiction Research, McMaster University & St. Joseph's Health Care Hamilton, 100 West 5th Street, Hamilton, ON, L8P 3R2, Canada
- Nan Yu
- Department of Radiology, Affiliated Hospital of Shaanxi University of Chinese Medicine, Xianyang, China
- Yong Yu
- Department of Radiology, Affiliated Hospital of Shaanxi University of Chinese Medicine, Xianyang, China
- Haifeng Duan
- Department of Radiology, Affiliated Hospital of Shaanxi University of Chinese Medicine, Xianyang, China
- Qiuju Fan
- Department of Radiology, Affiliated Hospital of Shaanxi University of Chinese Medicine, Xianyang, China
- Tian Zhanyu
- Institute of Medical Technology, Shaanxi University of Chinese Medicine, Xianyang, China

76
Gainey JC, He Y, Zhu R, Baek SS, Wu X, Buatti JM, Allen BG, Smith BJ, Kim Y. Predictive power of deep-learning segmentation based prognostication model in non-small cell lung cancer. Front Oncol 2023; 13:868471. [PMID: 37081986] [PMCID: PMC10110903] [DOI: 10.3389/fonc.2023.868471]
Abstract
Purpose: The study aims to create a model to predict survival outcomes for non-small cell lung cancer (NSCLC) after treatment with stereotactic body radiotherapy (SBRT) using deep-learning segmentation based prognostication (DESEP). Methods: The DESEP model was trained using imaging from 108 patients with NSCLC with various clinical stages and treatment histories. The model generated predictions based on unsupervised features learned by a deep-segmentation network from computed tomography imaging to categorize patients into high and low risk groups for overall survival (DESEP-predicted-OS), disease specific survival (DESEP-predicted-DSS), and local progression free survival (DESEP-predicted-LPFS). Serial assessments were also performed using auto-segmentation based volumetric RECISTv1.1 and computer-based unidimensional RECISTv1.1. Results: There was a concordance between the DESEP-predicted-LPFS risk category and manually calculated RECISTv1.1 (φ=0.544, p=0.001). Neither the auto-segmentation based volumetric RECISTv1.1 nor the computer-based unidimensional RECISTv1.1 correlated with manual RECISTv1.1 (p=0.081 and p=0.144, respectively). While manual RECISTv1.1 correlated with LPFS (HR=6.97, 3.51-13.85, c=0.70, p<0.001), it could not provide insight regarding DSS (p=0.942) or OS (p=0.662). In contrast, the DESEP-predicted methods were predictive of LPFS (HR=3.58, 1.66-7.18, c=0.60, p<0.001), OS (HR=6.31, 3.65-10.93, c=0.71, p<0.001), and DSS (HR=9.25, 4.50-19.02, c=0.69, p<0.001). The promising results of the DESEP model were reproduced on an independent external dataset from Stanford University, separating the survival and 'dead' groups in their Kaplan-Meier curves (p=0.019). Conclusion: Deep-learning segmentation based prognostication can predict LPFS as well as OS and DSS after SBRT for NSCLC. It can be used in conjunction with the current standard of care, manual RECISTv1.1, to provide additional insights regarding DSS and OS in NSCLC patients receiving SBRT.
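The concordance φ=0.544 above is a phi coefficient computed on a 2x2 table of risk categories (DESEP-predicted versus RECIST-based). An illustrative stdlib sketch (cell names are hypothetical, not the authors' notation):

```python
def phi_coefficient(a, b, c, d):
    """Phi (Matthews) correlation for a 2x2 contingency table
         [[a, b],   a = concordant high-risk, d = concordant low-risk,
          [c, d]]   b, c = discordant counts."""
    num = a * d - b * c
    den = ((a + b) * (c + d) * (a + c) * (b + d)) ** 0.5
    return num / den if den else 0.0
```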
Affiliation(s)
- Jordan C. Gainey
- Department of Radiation Oncology, The University of Iowa, Iowa City, IA, United States
- Yusen He
- Department of Data Science, Grinnell College, Grinnell, IA, United States
- Robert Zhu
- Department of Radiation Oncology, The University of Iowa, Iowa City, IA, United States
- Stephen S. Baek
- Department of Data Science, University of Virginia, Charlottesville, VA, United States
- Xiaodong Wu
- Department of Radiation Oncology, The University of Iowa, Iowa City, IA, United States
- John M. Buatti
- Department of Radiation Oncology, The University of Iowa, Iowa City, IA, United States
- Bryan G. Allen
- Department of Radiation Oncology, The University of Iowa, Iowa City, IA, United States
- Brian J. Smith
- Department of Radiation Oncology, The University of Iowa, Iowa City, IA, United States
- Yusung Kim
- Department of Radiation Oncology, MD Anderson Cancer Center, Houston, TX, United States
- *Correspondence: Yusung Kim

77
Chen Y, Hou X, Yang Y, Ge Q, Zhou Y, Nie S. A Novel Deep Learning Model Based on Multi-Scale and Multi-View for Detection of Pulmonary Nodules. J Digit Imaging 2023; 36:688-699. [PMID: 36544067] [PMCID: PMC10039158] [DOI: 10.1007/s10278-022-00749-x]
Abstract
Lung cancer manifests as pulmonary nodules in its early stage, so the early and accurate detection of these nodules is crucial for improving patient survival. We propose a novel two-stage model for lung nodule detection. In the candidate nodule detection stage, a deep learning model based on 3D context information coarsely segments the preprocessed image to obtain candidate nodules. 3D image blocks are input into the model, which learns the contextual information between the slices of each block. The parameter count of our model is equivalent to that of a 2D convolutional neural network (CNN), yet the model can effectively learn the 3D context of the nodules. In the false-positive reduction stage, we propose a multi-scale shared convolutional structure model. Our detection model incurs no significant increase in parameters or computation in either the multi-scale or the multi-view stage. The proposed model was evaluated on 888 computed tomography (CT) scans from the LIDC-IDRI dataset and achieved a competition performance metric (CPM) score of 0.957, with an average detection sensitivity of 0.971 at 1.0 FP per scan. Furthermore, an average detection sensitivity of 0.933 at 1.0 FP per scan was achieved on data from Shanghai Pulmonary Hospital. Our model exhibited higher detection sensitivity, a lower false-positive rate, and better generalization than current lung nodule detection methods, and its smaller parameter count and lower computational complexity make it better suited to clinical application.
Affiliation(s)
- Yang Chen
  - School of Health Science and Engineering, University of Shanghai for Science and Technology, Shanghai, 200093, China
- Xuewen Hou
  - School of Health Science and Engineering, University of Shanghai for Science and Technology, Shanghai, 200093, China
- Yifeng Yang
  - School of Health Science and Engineering, University of Shanghai for Science and Technology, Shanghai, 200093, China
- Qianqian Ge
  - School of Health Science and Engineering, University of Shanghai for Science and Technology, Shanghai, 200093, China
- Yan Zhou
  - Department of Radiology, School of Medicine, Renji Hospital, Shanghai Jiao Tong University, Shanghai, 200127, China
- Shengdong Nie
  - School of Health Science and Engineering, University of Shanghai for Science and Technology, Shanghai, 200093, China
78
Kumar VD, Rajesh P, Geman O, Craciun MD, Arif M, Filip R. “Quo Vadis Diagnosis”: Application of Informatics in Early Detection of Pneumothorax. Diagnostics (Basel) 2023; 13:diagnostics13071305. [PMID: 37046523 PMCID: PMC10093601 DOI: 10.3390/diagnostics13071305] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/28/2023] [Revised: 03/22/2023] [Accepted: 03/28/2023] [Indexed: 04/03/2023] Open
Abstract
A pneumothorax occurs when air enters the pleural space, the area between the lung and chest wall, causing the lung to collapse and making it difficult to breathe. This can happen spontaneously or as a result of an injury. Symptoms may include chest pain, shortness of breath, and rapid breathing. Although chest X-rays are commonly used to detect a pneumothorax, visually locating the affected area in X-ray images is time-consuming and error-prone. Existing computational approaches to detecting this disease from X-rays are limited by three major issues: class disparity, which causes overfitting; difficulty in detecting dark portions of the images; and vanishing gradients. To address these issues, we propose an ensemble deep learning model called PneumoNet, which uses synthetic images from data augmentation to address class disparity and a segmentation system to identify dark areas. The vanishing-gradient issue, in which the gradient becomes very small during backpropagation, is addressed by hyperparameter optimization techniques that keep the model from converging slowly and performing poorly. Our model achieved an accuracy of 98.41% on the Society for Imaging Informatics in Medicine pneumothorax dataset, outperforming other deep learning models while reducing the computational complexity of detecting the disease.
Affiliation(s)
- V. Dhilip Kumar
  - School of Computing, Vel Tech Rangarajan Dr. Sagunthala R&D Institute of Science and Technology, Chennai 600062, India
- P. Rajesh
  - School of Computing, Vel Tech Rangarajan Dr. Sagunthala R&D Institute of Science and Technology, Chennai 600062, India
- Oana Geman
  - Department of Computers, Electronics and Automation, Faculty of Electrical Engineering and Computer Science, Stefan cel Mare University of Suceava, 720229 Suceava, Romania
  - Correspondence: (O.G.); (M.D.C.)
- Maria Daniela Craciun
  - Interdisciplinary Research Centre in Motricity Sciences and Human Health, Ştefan cel Mare University of Suceava, 720229 Suceava, Romania
  - Correspondence: (O.G.); (M.D.C.)
- Muhammad Arif
  - Department of Computer Science, Superior University, Lahore 54000, Pakistan
- Roxana Filip
  - Faculty of Medicine and Biological Sciences, Stefan cel Mare University of Suceava, 720229 Suceava, Romania
  - Suceava Emergency County Hospital, 720224 Suceava, Romania
79
Salome P, Sforazzini F, Grugnara G, Kudak A, Dostal M, Herold-Mende C, Heiland S, Debus J, Abdollahi A, Knoll M. MR-Class: A Python Tool for Brain MR Image Classification Utilizing One-vs-All DCNNs to Deal with the Open-Set Recognition Problem. Cancers (Basel) 2023; 15:cancers15061820. [PMID: 36980707 PMCID: PMC10046648 DOI: 10.3390/cancers15061820] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/06/2023] [Revised: 03/11/2023] [Accepted: 03/15/2023] [Indexed: 03/19/2023] Open
Abstract
Background: MR image classification in datasets collected from multiple sources is complicated by inconsistent and missing DICOM metadata. We therefore aimed to establish a method for the efficient automatic classification of brain MR sequences. Methods: Deep convolutional neural networks (DCNN) were trained as one-vs-all classifiers to differentiate between six classes: T1-weighted (w), contrast-enhanced T1w, T2w, T2w-FLAIR, ADC, and SWI. Each classifier yields a probability, allowing threshold-based and relative probability assignment while excluding images with low probability (label: unknown; the open-set recognition problem). Data from three high-grade glioma (HGG) cohorts were assessed; C1 (320 patients, 20,101 MRI images) was used for training, while C2 (197 patients, 11,333 images) and C3 (256 patients, 3,522 images) were used for testing. Two raters manually checked images through an interactive labeling tool. Finally, the added value of MR-Class was evaluated via the performance of a radiomics model for progression-free survival (PFS) prediction in C2, using the concordance index (C-I). Results: Annotation error rates of approximately 10% were observed in each cohort between the DICOM series descriptions and the derived labels. MR-Class accuracy was 96.7% [95% CI: 95.8, 97.3] for C2 and 94.4% [93.6, 96.1] for C3. A total of 620 images were misclassified; manual assessment of these frequently showed motion artifacts or anatomy altered by large tumors. Implementation of MR-Class increased the PFS model C-I by 14.6% on average compared to a model trained without it. Conclusions: We provide a DCNN-based method for the sequence classification of brain MR images and demonstrate its usability in two independent HGG datasets.
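The one-vs-all assignment rule with an "unknown" label described in this abstract can be sketched as follows. This is an illustrative toy, not the MR-Class code: the class names match the abstract, but the threshold value and the input format are assumptions.

```python
# Toy sketch of one-vs-all classification with open-set rejection:
# each class has its own binary classifier emitting a probability, and an
# image is labeled "unknown" when no classifier is sufficiently confident.
# The 0.5 threshold is an assumption, not the paper's tuned value.
def classify(probs, threshold=0.5):
    """probs: dict mapping class name -> probability from its one-vs-all model.
    Returns the most probable class, or "unknown" (open-set recognition)."""
    best = max(probs, key=probs.get)
    if probs[best] < threshold:
        return "unknown"
    return best

print(classify({"T1w": 0.9, "T2w": 0.2, "ADC": 0.1}))  # confident: T1w wins
print(classify({"T1w": 0.3, "T2w": 0.2, "ADC": 0.1}))  # no confident classifier
```

The relative-probability step (taking the argmax) resolves cases where several one-vs-all classifiers fire at once.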
Affiliation(s)
- Patrick Salome
  - Clinical Cooperation Unit Radiation Oncology, German Cancer Research Center, 69120 Heidelberg, Germany
  - Heidelberg Medical Faculty, Heidelberg University, 69117 Heidelberg, Germany
  - German Cancer Consortium Core Center Heidelberg, 69120 Heidelberg, Germany
  - Heidelberg Ion-Beam Therapy Center, 69120 Heidelberg, Germany
  - Correspondence: (P.S.); (M.K.)
- Francesco Sforazzini
  - Clinical Cooperation Unit Radiation Oncology, German Cancer Research Center, 69120 Heidelberg, Germany
  - Heidelberg Medical Faculty, Heidelberg University, 69117 Heidelberg, Germany
  - German Cancer Consortium Core Center Heidelberg, 69120 Heidelberg, Germany
- Gianluca Grugnara
  - Department of Neuroradiology, Heidelberg University Hospital, 69120 Heidelberg, Germany
- Andreas Kudak
  - Heidelberg Ion-Beam Therapy Center, 69120 Heidelberg, Germany
  - Department of Radiation Oncology, Heidelberg University Hospital, 69120 Heidelberg, Germany
  - Clinical Cooperation Unit Radiation Therapy, German Cancer Research Center, 69120 Heidelberg, Germany
- Matthias Dostal
  - Heidelberg Ion-Beam Therapy Center, 69120 Heidelberg, Germany
  - Department of Radiation Oncology, Heidelberg University Hospital, 69120 Heidelberg, Germany
  - Clinical Cooperation Unit Radiation Therapy, German Cancer Research Center, 69120 Heidelberg, Germany
- Christel Herold-Mende
  - Brain Tumour Group, European Organization for Research and Treatment of Cancer, 1200 Brussels, Belgium
  - Division of Neurosurgical Research, Department of Neurosurgery, University of Heidelberg, 69117 Heidelberg, Germany
- Sabine Heiland
  - Department of Neuroradiology, Heidelberg University Hospital, 69120 Heidelberg, Germany
- Jürgen Debus
  - German Cancer Consortium Core Center Heidelberg, 69120 Heidelberg, Germany
  - Heidelberg Ion-Beam Therapy Center, 69120 Heidelberg, Germany
  - Department of Radiation Oncology, Heidelberg University Hospital, 69120 Heidelberg, Germany
- Amir Abdollahi
  - Clinical Cooperation Unit Radiation Oncology, German Cancer Research Center, 69120 Heidelberg, Germany
  - German Cancer Consortium Core Center Heidelberg, 69120 Heidelberg, Germany
  - Heidelberg Ion-Beam Therapy Center, 69120 Heidelberg, Germany
  - Department of Radiation Oncology, Heidelberg University Hospital, 69120 Heidelberg, Germany
- Maximilian Knoll
  - Clinical Cooperation Unit Radiation Oncology, German Cancer Research Center, 69120 Heidelberg, Germany
  - German Cancer Consortium Core Center Heidelberg, 69120 Heidelberg, Germany
  - Heidelberg Ion-Beam Therapy Center, 69120 Heidelberg, Germany
  - Department of Radiation Oncology, Heidelberg University Hospital, 69120 Heidelberg, Germany
  - Correspondence: (P.S.); (M.K.)
80
Sebastian AE, Dua D. Lung Nodule Detection via Optimized Convolutional Neural Network: Impact of Improved Moth Flame Algorithm. Sensing and Imaging 2023; 24:11. [PMID: 36936054 PMCID: PMC10009866 DOI: 10.1007/s11220-022-00406-1] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Subscribe] [Scholar Register] [Received: 11/12/2021] [Revised: 09/30/2022] [Accepted: 11/02/2022] [Indexed: 06/18/2023]
Abstract
Lung cancer is a high-risk disease that affects people all over the world, and lung nodules are the most common sign of early lung cancer. Since early identification of lung cancer can considerably improve a patient's chances of survival, an accurate and efficient nodule detection system is essential; automatic lung nodule recognition reduces radiologists' workload as well as the risk of misdiagnosis and missed diagnoses. Hence, this article develops a new lung nodule detection model with four stages: image pre-processing, segmentation, feature extraction, and classification. Pre-processing is the first step, in which the input image is subjected to a series of operations. The Otsu thresholding model is then used to segment the pre-processed images. In the third stage, local binary pattern (LBP) features are extracted and then classified via an optimized convolutional neural network (CNN), in which the activation function and convolutional layer count are optimally tuned via a proposed algorithm known as Improved Moth Flame Optimization (IMFO). Finally, the scheme is validated through analysis of several performance measures. In particular, the accuracy of the proposed method is 6.85%, 2.91%, 1.75%, 0.73%, 1.83%, and 4.05% higher than that of the existing SVM, KNN, CNN, MFO, WTEEB, and GWO+FRVM methods, respectively.
Affiliation(s)
- Disha Dua
  - Indira Gandhi Delhi Technical University for Women, Delhi, Delhi, India
81
Shamrat FJM, Azam S, Karim A, Ahmed K, Bui FM, De Boer F. High-precision multiclass classification of lung disease through customized MobileNetV2 from chest X-ray images. Comput Biol Med 2023; 155:106646. [PMID: 36805218 DOI: 10.1016/j.compbiomed.2023.106646] [Citation(s) in RCA: 26] [Impact Index Per Article: 13.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/10/2022] [Revised: 01/30/2023] [Accepted: 02/06/2023] [Indexed: 02/12/2023]
Abstract
In this study, multiple lung diseases are diagnosed with the help of a neural network algorithm. Specifically, emphysema, infiltration, mass, pleural thickening, pneumonia, pneumothorax, atelectasis, edema, effusion, hernia, cardiomegaly, pulmonary fibrosis, nodule, and consolidation are studied from the ChestX-ray14 dataset. A proposed fine-tuned MobileLungNetV2 model is employed for analysis. Initially, the X-ray images from the dataset are pre-processed using CLAHE to increase image contrast, a Gaussian filter to denoise images, and data augmentation methods. The pre-processed images are fed into several transfer learning models, including InceptionV3, AlexNet, DenseNet121, VGG19, and MobileNetV2. Among these, MobileNetV2 achieved the highest overall accuracy, 91.6%, in classifying lesions on chest X-ray images. This model was then fine-tuned to produce the MobileLungNetV2 model, which achieves a classification accuracy of 96.97% on the pre-processed data. A confusion matrix over all classes shows that the model has high overall precision, recall, and specificity scores of 96.71%, 96.83%, and 99.78%, respectively. The study employs Grad-CAM output to generate heatmaps of disease detection. The proposed model shows promising results in classifying multiple lesions on chest X-ray images.
Affiliation(s)
- Fm Javed Mehedi Shamrat
  - Department of Software Engineering, Daffodil International University, Birulia, 1216, Dhaka, Bangladesh
- Sami Azam
  - Faculty of Science and Technology, Charles Darwin University, Casuarina, NT 0909, Australia
- Asif Karim
  - Faculty of Science and Technology, Charles Darwin University, Casuarina, NT 0909, Australia
- Kawsar Ahmed
  - Department of Electrical and Computer Engineering, University of Saskatchewan, Saskatoon, SK S7N 5A9, Canada
  - Group of Bio-photomatiχ, Department of Information and Communication Technology, Mawlana Bhashani Science and Technology University, Tangail, 1902, Bangladesh
- Francis M Bui
  - Department of Electrical and Computer Engineering, University of Saskatchewan, Saskatoon, SK S7N 5A9, Canada
- Friso De Boer
  - Faculty of Science and Technology, Charles Darwin University, Casuarina, NT 0909, Australia
82
Yang Y, Li X, Fu J, Han Z, Gao B. 3D multi-view squeeze-and-excitation convolutional neural network for lung nodule classification. Med Phys 2023; 50:1905-1916. [PMID: 36639958 DOI: 10.1002/mp.16221] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/01/2022] [Revised: 11/15/2022] [Accepted: 12/17/2022] [Indexed: 01/15/2023] Open
Abstract
PURPOSE: Early screening is crucial to improving the survival and recovery rates of lung cancer patients, and computer-aided diagnosis (CAD) systems are a powerful tool to assist clinicians in early diagnosis. Lung nodules are characterized by spatial heterogeneity, yet many approaches use a two-dimensional multi-view (MV) framework that simply learns and integrates features from multiple views; such methods fail to capture spatial characteristics effectively and ignore the variability among views. In this paper, we propose a three-dimensional MV convolutional neural network (3D MVCNN) framework and embed a squeeze-and-excitation (SE) module in it to further address the variability of each view.
METHODS: First, 3D multi-view samples of lung nodules are extracted by a spatial sampling method, and a 3D CNN is established to extract 3D abstract features. Second, a 3D MVCNN framework is built from the 3D multi-view samples and the 3D CNN; this framework learns more features from the different views of a nodule, accounting for the spatial heterogeneity of lung nodules. Finally, to further address the variability of each view, a 3D MVSECNN model is constructed by introducing an SE module in the feature fusion stage. Independent subsets of the public LIDC-IDRI dataset were used for training and testing.
RESULTS: On the LIDC-IDRI dataset, this study achieved 96.04% accuracy and 98.59% sensitivity in binary classification, and 87.76% accuracy in ternary classification, higher than other state-of-the-art studies. The consistency score of 0.948 between the model predictions and pathological diagnosis was significantly higher than that between the clinicians' annotations and pathological diagnosis.
CONCLUSIONS: The results show that our proposed method can effectively learn the spatial heterogeneity of nodules and solve the problem of multi-view variability. Moreover, the consistency analysis indicates that our method can provide clinicians with more accurate benign-malignant classification of lung nodules for auxiliary diagnosis, which is important for assisting clinicians in clinical diagnosis.
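The squeeze-and-excitation recalibration used in this entry's feature-fusion stage can be sketched numerically. This is a generic NumPy illustration of the SE mechanism on a 3D feature map, not the authors' model; the channel count, reduction ratio, and random weights are invented for demonstration.

```python
import numpy as np

def squeeze_excite(feat, w1, w2):
    """Channel-wise squeeze-and-excitation on a (C, D, H, W) feature map.
    w1: (C//r, C) reduction weights; w2: (C, C//r) expansion weights."""
    c = feat.shape[0]
    z = feat.reshape(c, -1).mean(axis=1)        # squeeze: global average pool
    s = np.maximum(w1 @ z, 0.0)                 # excitation: FC + ReLU
    s = 1.0 / (1.0 + np.exp(-(w2 @ s)))         # FC + sigmoid -> channel weights
    return feat * s.reshape(c, 1, 1, 1)         # scale each channel by its weight

rng = np.random.default_rng(0)
x = rng.normal(size=(8, 4, 4, 4))               # 8 channels of a small 3D volume
w1 = rng.normal(size=(2, 8))                    # reduction ratio r = 4
w2 = rng.normal(size=(8, 2))
y = squeeze_excite(x, w1, w2)
print(y.shape)                                  # same shape, channels reweighted
```

Each channel of the output is the corresponding input channel multiplied by a learned scalar in (0, 1), which is how the SE module lets the network emphasize informative views and suppress less useful ones.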
Affiliation(s)
- Yang Yang
  - Faculty of Environment and Life, Beijing University of Technology, Beijing, China
- Xiaoqin Li
  - Faculty of Environment and Life, Beijing University of Technology, Beijing, China
- Jipeng Fu
  - Faculty of Environment and Life, Beijing University of Technology, Beijing, China
- Zhenbo Han
  - Faculty of Environment and Life, Beijing University of Technology, Beijing, China
- Bin Gao
  - Faculty of Environment and Life, Beijing University of Technology, Beijing, China
83
Shen Z, Cao P, Yang J, Zaiane OR. WS-LungNet: A two-stage weakly-supervised lung cancer detection and diagnosis network. Comput Biol Med 2023; 154:106587. [PMID: 36709519 DOI: 10.1016/j.compbiomed.2023.106587] [Citation(s) in RCA: 11] [Impact Index Per Article: 5.5] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/07/2022] [Revised: 01/13/2023] [Accepted: 01/22/2023] [Indexed: 01/26/2023]
Abstract
Computer-aided lung cancer diagnosis (CAD) on computed tomography (CT) helps radiologists guide preoperative planning and prognosis assessment, but the flexibility and scalability of deep learning methods in lung CAD remain limited. In essence, two significant challenges must be solved: (1) label scarcity, due to the cost of annotating CT images by experienced domain experts, and (2) label inconsistency between the observed nodule malignancy and the patient's pathology evaluation. Both can be considered weak-label problems. We address them in this paper by introducing a weakly-supervised lung cancer detection and diagnosis network (WS-LungNet), consisting of a semi-supervised computer-aided detection component (Semi-CADe) that can segment 3D pulmonary nodules from unlabeled data through adversarial learning to reduce label scarcity, and a cross-nodule attention computer-aided diagnosis component (CNA-CADx) that evaluates malignancy at the patient level by modeling correlations between nodules via cross-attention mechanisms, thereby eliminating label inconsistency. Through extensive evaluations on the public LIDC-IDRI database, we show that our method achieves an 82.99% competition performance metric (CPM) on pulmonary nodule detection and an 88.63% area under the curve (AUC) on lung cancer diagnosis. These results demonstrate the benefits and flexibility of semi-supervised segmentation with adversarial learning and of nodule-instance correlation learning with the attention mechanism, and suggest that making use of unlabeled data and taking into account the relationships among nodules in a case are essential for lung cancer detection and diagnosis.
Affiliation(s)
- Zhiqiang Shen
  - College of Computer Science and Engineering, Northeastern University, Shenyang, China
  - Key Laboratory of Intelligent Computing in Medical Image, Ministry of Education, Shenyang, China
- Peng Cao
  - College of Computer Science and Engineering, Northeastern University, Shenyang, China
  - Key Laboratory of Intelligent Computing in Medical Image, Ministry of Education, Shenyang, China
- Jinzhu Yang
  - College of Computer Science and Engineering, Northeastern University, Shenyang, China
  - Key Laboratory of Intelligent Computing in Medical Image, Ministry of Education, Shenyang, China
- Osmar R Zaiane
  - Alberta Machine Intelligence Institute, University of Alberta, Canada
84
Application of deep learning ultrasound imaging in monitoring bone healing after fracture surgery. Journal of Radiation Research and Applied Sciences 2023. [DOI: 10.1016/j.jrras.2022.100493] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 12/14/2022]
85
Deep learning ensemble 2D CNN approach towards the detection of lung cancer. Sci Rep 2023; 13:2987. [PMID: 36807576 PMCID: PMC9941084 DOI: 10.1038/s41598-023-29656-z] [Citation(s) in RCA: 28] [Impact Index Per Article: 14.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/04/2022] [Accepted: 02/08/2023] [Indexed: 02/22/2023] Open
Abstract
In recent times, deep learning has emerged as a great resource for research in the medical sciences, and much work has been done applying computer science to the detection and prediction of human disease. This research uses a deep learning convolutional neural network (CNN) to detect lung nodules, which can be cancerous, in CT scan images. Deep learning is based on artificial neural networks, which are loosely inspired by the neurons of the brain. An ensemble approach was developed for lung nodule detection: instead of using only one deep learning model, the predictions of two or more CNNs are combined so that the ensemble predicts the outcome with greater accuracy. The LUNA16 Grand Challenge dataset, which is publicly available online, was utilized; it consists of CT scans with annotations that provide information about each scan. The CNNs were trained on separate training, validation, and testing sets drawn from this dataset to classify cancerous and non-cancerous images. The resulting Deep Ensemble 2D CNN consists of three different CNNs with different layers, kernels, and pooling techniques, and achieved a combined accuracy of 95%, higher than the baseline method.
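The ensemble idea in this abstract, combining the outputs of several CNNs into one prediction, can be sketched as probability averaging. This is a generic illustration, not the paper's code: the three "models" below are dummy stand-ins returning fixed probabilities, and the 0.5 decision threshold is an assumption.

```python
# Minimal sketch of an ensemble classifier: average the cancer probability
# reported by each member model, then threshold the mean.
def ensemble_predict(models, x, threshold=0.5):
    """Return (label, mean probability); label 1 means 'cancerous'."""
    p = sum(m(x) for m in models) / len(models)
    return (1 if p > threshold else 0), p

# Dummy member models standing in for trained CNNs (invented outputs).
models = [lambda x: 0.9, lambda x: 0.7, lambda x: 0.4]
label, p = ensemble_predict(models, x=None)
print(label, round(p, 3))  # 1 0.667
```

Averaging smooths out the errors of any single network, which is why an ensemble of differently configured CNNs can beat each member on its own.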
86
Xie RL, Wang Y, Zhao YN, Zhang J, Chen GB, Fei J, Fu Z. Lung nodule pre-diagnosis and insertion path planning for chest CT images. BMC Med Imaging 2023; 23:22. [PMID: 36737717 PMCID: PMC9896815 DOI: 10.1186/s12880-023-00973-z] [Citation(s) in RCA: 4] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/07/2022] [Accepted: 01/19/2023] [Indexed: 02/05/2023] Open
Abstract
Medical image processing has proven effective and feasible for assisting oncologists in diagnosing lung, thyroid, and other cancers, especially at an early stage. However, there is no fully reliable method for the recognition, screening, classification, and detection of nodules, and even deep learning-based methods have limitations. In this study, we explored the automatic pre-diagnosis of lung nodules, aiming to accurately identify nodules in chest CT images, whether benign or malignant, and to plan the insertion path for suspected malignant nodules for further diagnosis by robotic biopsy puncture. The overall process included lung parenchyma segmentation, classification and pre-diagnosis, 3D reconstruction and path planning, and experimental verification. First, accurate lung parenchyma segmentation in chest CT images was achieved using digital image processing technologies such as adaptive gray thresholding, connected-area labeling, and mathematical-morphology boundary repair. Multi-feature weight assignment was then adopted to establish a multi-level classification criterion and complete the classification and pre-diagnosis of pulmonary nodules. Next, 3D reconstruction of the lung regions was performed using voxelization, on the basis of which a feasible, locally optimal insertion path and insertion point could be found by avoiding the sternum and/or key tissues. Finally, CT images of 900 patients from the Lung Image Database Consortium and Image Database Resource Initiative were chosen to verify the validity of the pulmonary nodule diagnosis, and our previously designed surgical robotic system and a custom thoracic model were used to validate the effectiveness of the insertion path. This work can assist doctors in the pre-diagnosis of pulmonary nodules and provides a reference for the clinical biopsy puncture of suspected malignant nodules.
Affiliation(s)
- Rong-Li Xie
  - Department of General Surgery, Ruijin Hospital, Shanghai Jiao Tong University School of Medicine, Shanghai, 200025, China
- Yao Wang
  - State Key Laboratory of Mechanical System and Vibration, Shanghai Jiao Tong University, Shanghai, 200240, China
- Yan-Na Zhao
  - Department of Ultrasound, Tongji Hospital, School of Medicine, Tongji University, Shanghai, 200065, China
- Jun Zhang
  - Department of General Surgery, Ruijin Hospital, Shanghai Jiao Tong University School of Medicine, Shanghai, 200025, China
- Guang-Biao Chen
  - State Key Laboratory of Mechanical System and Vibration, Shanghai Jiao Tong University, Shanghai, 200240, China
- Jian Fei
  - Department of General Surgery, Ruijin Hospital, Shanghai Jiao Tong University School of Medicine, Shanghai, 200025, China
- Zhuang Fu
  - State Key Laboratory of Mechanical System and Vibration, Shanghai Jiao Tong University, Shanghai, 200240, China
87
Yang L, Liu H, Han J, Xu S, Zhang G, Wang Q, Du Y, Yang F, Zhao X, Shi G. Ultra-low-dose CT lung screening with artificial intelligence iterative reconstruction: evaluation via automatic nodule-detection software. Clin Radiol 2023:S0009-9260(23)00031-4. [PMID: 36948944 DOI: 10.1016/j.crad.2023.01.006] [Citation(s) in RCA: 7] [Impact Index Per Article: 3.5] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/05/2022] [Revised: 01/04/2023] [Accepted: 01/15/2023] [Indexed: 02/05/2023]
Abstract
AIM: To test the feasibility of ultra-low-dose (ULD) computed tomography (CT) combined with an artificial intelligence iterative reconstruction (AIIR) algorithm for screening pulmonary nodules using computer-assisted diagnosis (CAD).
MATERIALS AND METHODS: A chest phantom with artificial pulmonary nodules was first scanned using the routine protocol and the ULD protocol (3.28 versus 0.18 mSv) to compare image quality and test the acceptability of the ULD CT protocol. Next, 147 lung-screening patients were enrolled prospectively, undergoing an additional ULD CT immediately after their routine CT examination for clinical validation. Images were reconstructed with filtered back-projection (FBP), hybrid iterative reconstruction (HIR), and the AIIR, and were imported into the CAD software for preliminary nodule detection. Subjective image quality on the phantom was scored using a five-point scale and compared using the Mann-Whitney U-test. Nodule detection by CAD was evaluated for ULD HIR and AIIR images using the routine-dose image as reference.
RESULTS: Higher image quality was scored for AIIR than for FBP and HIR at ULD (p<0.001). As reported by CAD, 107 patients presented with more than five nodules on routine-dose images and were chosen to represent challenging cases at an early stage of pulmonary disease. Among these, nodule detection by CAD on ULD HIR and AIIR images reached 75.2% and 92.2% of that on the routine-dose image, respectively.
CONCLUSION: Combined with AIIR, it was feasible to use an ULD CT protocol with a 95% dose reduction for CAD-based screening of pulmonary nodules.
Affiliation(s)
- L Yang
  - Department of Radiology, The Fourth Hospital of Hebei Medical University, Shijiazhuang, Hebei, China
- H Liu
  - Department of Radiology, The Fourth Hospital of Hebei Medical University, Shijiazhuang, Hebei, China
- J Han
  - United Imaging Healthcare, Shanghai, China
- S Xu
  - United Imaging Healthcare, Shanghai, China
- G Zhang
  - United Imaging Healthcare, Shanghai, China
- Q Wang
  - Department of Radiology, The Fourth Hospital of Hebei Medical University, Shijiazhuang, Hebei, China
- Y Du
  - Department of Radiology, The Fourth Hospital of Hebei Medical University, Shijiazhuang, Hebei, China
- F Yang
  - Department of Radiology, The Fourth Hospital of Hebei Medical University, Shijiazhuang, Hebei, China
- X Zhao
  - Department of Radiology, The Fourth Hospital of Hebei Medical University, Shijiazhuang, Hebei, China
- G Shi
  - Department of Radiology, The Fourth Hospital of Hebei Medical University, Shijiazhuang, Hebei, China
88
Pawar SP, Talbar SN. Maximization of lung segmentation of generative adversarial network for using taguchi approach. The Imaging Science Journal 2023. [DOI: 10.1080/13682199.2023.2172525] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 02/05/2023]
Affiliation(s)
- Swati P. Pawar
  - SVERI's College of Engineering Pandharpur, Pandharpur, Maharashtra, India
- Sanjay N. Talbar
  - Center of Excellence in Signal and Image Processing, SGGS Nanded, Nanded, Maharashtra, India
89
Maurya S, Tiwari S, Mothukuri MC, Tangeda CM, Nandigam RNS, Addagiri DC. A review on recent developments in cancer detection using Machine Learning and Deep Learning models. Biomed Signal Process Control 2023. [DOI: 10.1016/j.bspc.2022.104398] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/18/2022]
|
90
|
Lin J, She Q, Chen Y. Pulmonary nodule detection based on IR-UNet++. Med Biol Eng Comput 2023; 61:485-495. [PMID: 36522521 DOI: 10.1007/s11517-022-02727-5] [Citation(s) in RCA: 4] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/22/2022] [Accepted: 12/06/2022] [Indexed: 12/23/2022]
Abstract
Lung cancer has among the highest incidence and death rates of any cancer worldwide. An initial lung lesion appears as a nodule on CT images, and early, timely diagnosis can greatly improve the survival rate. Automatic detection of lung nodules can greatly improve work efficiency and accuracy. However, owing to the complex three-dimensional structure of lung CT data and the variation in the shapes and appearances of lung nodules, high-precision detection of pulmonary nodules remains challenging. To address this problem, a new 3D framework, IR-UNet++, is proposed in this paper for automatic pulmonary nodule detection. First, Inception Net and ResNet are combined as the building blocks. Second, the squeeze-and-excitation structure is introduced into the building blocks for better feature extraction. Finally, two short skip pathways are redesigned based on the U-shaped network. To verify the effectiveness of the algorithm, systematic experiments are conducted on the LUNA16 dataset. Experimental results show that the proposed network outperforms several existing lung nodule detection methods, with sensitivities at 1 FP/scan, 4 FPs/scan, and 8 FPs/scan of 90.13%, 94.77%, and 95.78%, respectively. We therefore conclude that the proposed model achieves superior performance for lung nodule detection.
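The operating points reported above (sensitivity at a fixed number of false positives per scan) are standard FROC measurements. A minimal sketch of how such points can be read off a ranked list of candidate detections; the data and function name here are illustrative, not from the paper:

```python
def froc_sensitivities(candidates, n_true, n_scans, fp_rates):
    """Sensitivity at chosen false-positive-per-scan rates.

    candidates: list of (confidence, is_true_positive) over all scans,
    n_true: total number of ground-truth nodules,
    n_scans: number of scans, fp_rates: allowed FPs per scan.
    """
    ranked = sorted(candidates, key=lambda c: c[0], reverse=True)
    out = {}
    for rate in fp_rates:
        allowed_fp = rate * n_scans
        tp = fp = 0
        # Walk down the confidence-ranked list until the FP budget is spent.
        for _conf, is_tp in ranked:
            if is_tp:
                tp += 1
            else:
                fp += 1
                if fp > allowed_fp:
                    break
        out[rate] = tp / n_true
    return out
```

With four true nodules in one scan and the toy detections `[(0.9, True), (0.8, False), (0.7, True), (0.6, False), (0.5, True)]`, the sensitivity is 0.5 at 1 FP/scan and 0.75 at 2 FPs/scan.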
Affiliation(s)
- Jingchao Lin
- School of Automation, Hangzhou Dianzi University, Hangzhou, 310018, China
| | - Qingshan She
- School of Automation, Hangzhou Dianzi University, Hangzhou, 310018, China.
| | - Yun Chen
- School of Automation, Hangzhou Dianzi University, Hangzhou, 310018, China.
|
91
|
Guo Z, Yang J, Zhao L, Yuan J, Yu H. 3D SAACNet with GBM for the classification of benign and malignant lung nodules. Comput Biol Med 2023; 153:106532. [PMID: 36623436 DOI: 10.1016/j.compbiomed.2022.106532] [Citation(s) in RCA: 4] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/19/2022] [Revised: 12/15/2022] [Accepted: 12/31/2022] [Indexed: 01/05/2023]
Abstract
In view of the low diagnostic accuracy of current methods for classifying benign and malignant pulmonary nodules, this paper proposes a 3D segmentation attention network integrating asymmetric convolution (SAACNet) combined with a gradient boosting machine (GBM), which makes full use of the spatial information of pulmonary nodules. First, the asymmetric convolution (AC) designed in SAACNet not only strengthens feature extraction but also improves the network's robustness to object flips and rotations. Second, the segmentation attention block integrating AC (SAAC) effectively extracts fine-grained multiscale spatial information while adaptively recalibrating multidimensional channel attention weights. SAACNet also uses dual-path connections for feature reuse. In addition, adjustment factors are added to the loss function so that it pays more attention to difficult and misclassified samples. Third, the GBM splices together the nodule size, the originally cropped nodule pixels, and the deep features learned by SAACNet to improve the prediction accuracy of the overall model. A comprehensive ablation study is carried out on the public LUNA16 dataset and compared with other lung nodule classification models. The classification accuracy (ACC) is 95.18%, and the area under the curve (AUC) is 0.977. The results show that this method effectively improves the classification of benign and malignant pulmonary nodules and can assist radiologists in pulmonary nodule classification.
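The abstract does not give the exact form of its loss "adjustment factors". One common realization of the stated goal (up-weighting difficult, misclassified samples) is a focal-loss-style modulating factor; the sketch below is that generic technique, not the paper's confirmed formula, and the gamma value is illustrative:

```python
import math

def focal_weighted_ce(p_correct, gamma=2.0):
    """Cross-entropy scaled by (1 - p)^gamma.

    p_correct: model probability assigned to the true class.
    Confidently correct samples (p near 1) get near-zero weight;
    misclassified samples (p near 0) keep almost the full loss.
    """
    return -((1.0 - p_correct) ** gamma) * math.log(p_correct)
```

For example, a well-classified sample with p = 0.9 contributes far less loss than a hard sample with p = 0.3, which is the behavior the adjustment factors are described as producing.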
Affiliation(s)
- Zhitao Guo
- School of Electronic and Information Engineering, Hebei University of Technology, Tianjin, 300401, China.
| | - Jikai Yang
- School of Electronic and Information Engineering, Hebei University of Technology, Tianjin, 300401, China.
| | - Linlin Zhao
- School of Electronic and Information Engineering, Hebei University of Technology, Tianjin, 300401, China.
| | - Jinli Yuan
- School of Electronic and Information Engineering, Hebei University of Technology, Tianjin, 300401, China.
| | - Hengyong Yu
- Department of Electrical and Computer Engineering, University of Massachusetts Lowell, Lowell, MA, 01854, USA.
|
92
|
Guo Q, Wang C, Guo J, Bai H, Xu X, Yang L, Wang J, Chen N, Wang Z, Gan Y, Liu L, Li W, Yi Z. The Gap in the Thickness: Estimating Effectiveness of Pulmonary Nodule Detection in Thick- and Thin-Section CT Images with 3D Deep Neural Networks. COMPUTER METHODS AND PROGRAMS IN BIOMEDICINE 2023; 229:107290. [PMID: 36502546 DOI: 10.1016/j.cmpb.2022.107290] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 04/13/2022] [Revised: 08/31/2022] [Accepted: 11/28/2022] [Indexed: 06/17/2023]
Abstract
BACKGROUND AND OBJECTIVES There is a noticeable gap in the strength of diagnostic evidence between thick and thin scans of low-dose CT (LDCT) for pulmonary nodule detection. When thin scans are actually needed is unknown, especially when detection is aided by an artificial intelligence system. METHODS A case study is conducted on a set of 1,000 pulmonary nodule screening LDCT scans with both thick (5.0 mm) and thin (1.0 mm) section scans available. Pulmonary nodule detection is performed by humans and by artificial intelligence models developed using 3D convolutional neural networks (CNNs). Intra-sample consistency between thick and thin scans is evaluated for both clinicians and neural network (NN) models. Free-response receiver operating characteristic (FROC) analysis is used to measure the accuracy of humans and NNs. RESULTS Trained NNs outperform humans on small nodules (< 6.0 mm), a good complement to human ability. For nodules > 6.0 mm, humans and NNs perform similarly, with humans holding a slight advantage. By allowing a few more FPs, a significant sensitivity improvement can be achieved with NNs. CONCLUSIONS There is a performance gap between thick and thin scans for pulmonary nodule detection in terms of both false negatives and false positives. NNs can help reduce false negatives when nodules are small, trading a small increase in false positives for higher sensitivity. A combination of humans and trained NNs is a promising way to achieve fast and accurate diagnosis.
Affiliation(s)
- Quan Guo
- Machine Intelligence Laboratory, College of Computer Science, Sichuan University, 24 South Section 1, Yihuan Road, Chengdu, 610065, China
| | - Chengdi Wang
- Department of Respiratory and Critical Care Medicine, West China School/West China Hospital, Sichuan University, Chengdu, 610041, China
| | - Jixiang Guo
- Machine Intelligence Laboratory, College of Computer Science, Sichuan University, 24 South Section 1, Yihuan Road, Chengdu, 610065, China
| | - Hongli Bai
- Department of Radiology, West China hospital, Sichuan University, Chengdu, 610041, China
| | - Xiuyuan Xu
- Machine Intelligence Laboratory, College of Computer Science, Sichuan University, 24 South Section 1, Yihuan Road, Chengdu, 610065, China
| | - Lan Yang
- Department of Respiratory and Critical Care Medicine, West China School/West China Hospital, Sichuan University, Chengdu, 610041, China
| | - Jianyong Wang
- Machine Intelligence Laboratory, College of Computer Science, Sichuan University, 24 South Section 1, Yihuan Road, Chengdu, 610065, China
| | - Nan Chen
- Department of Thoracic Surgery, West China hospital, Sichuan University, Chengdu, 610041, China
| | - Zihuai Wang
- Department of Thoracic Surgery, West China hospital, Sichuan University, Chengdu, 610041, China
| | - Yuncui Gan
- Department of Respiratory and Critical Care Medicine, West China School/West China Hospital, Sichuan University, Chengdu, 610041, China
| | - Lunxu Liu
- Department of Thoracic Surgery, West China hospital, Sichuan University, Chengdu, 610041, China
| | - Weimin Li
- Department of Respiratory and Critical Care Medicine, West China School/West China Hospital, Sichuan University, Chengdu, 610041, China.
| | - Zhang Yi
- Machine Intelligence Laboratory, College of Computer Science, Sichuan University, 24 South Section 1, Yihuan Road, Chengdu, 610065, China.
|
93
|
Yu Z, Shi Y. Centralized Space Learning for open-set computer-aided diagnosis. Sci Rep 2023; 13:1630. [PMID: 36717731 PMCID: PMC9886916 DOI: 10.1038/s41598-023-28589-x] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/29/2022] [Accepted: 01/20/2023] [Indexed: 01/31/2023] Open
Abstract
In computer-aided diagnosis (CAD), diagnosing untrained diseases as known categories can cause serious medical accidents, so it is crucial to distinguish new classes (open set) while preserving performance on known classes (closed set) to enhance robustness. However, how to accurately define the decision boundary between known and unknown classes remains an open problem, as unknown classes are never seen during training, especially in the medical domain. Moreover, manipulating the latent distribution of the known classes also influences that of the unknown classes, making the problem even harder. In this paper, we propose the Centralized Space Learning (CSL) method to address open-set recognition in CAD by learning a centralized space that separates known and unknown classes, assisted by proxy images generated by a generative adversarial network (GAN). Through three steps, known-space initialization, unknown-anchor generation, and centralized-space refinement, CSL learns an optimized space in which unknown samples cluster around the center while known samples spread away from it, achieving clear separation between the known and the unknown. Extensive experiments on multiple datasets and tasks illustrate CSL's practicability in CAD and its state-of-the-art open-set recognition performance.
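Given the learned geometry described above (unknowns clustered at the center, knowns pushed outward), the test-time decision reduces to thresholding a distance to the center. A minimal sketch of that rule; the function name and the idea of a validation-tuned radius are assumptions for illustration, not details from the paper:

```python
def open_set_decision(embedding, center, radius):
    """CSL-style open-set rule (sketch).

    Embeddings within `radius` of the learned center are flagged as
    unknown; embeddings farther away are passed on to the closed-set
    classifier. `radius` would be tuned on held-out validation data.
    """
    dist = sum((e - c) ** 2 for e, c in zip(embedding, center)) ** 0.5
    return "unknown" if dist < radius else "known"
```

For example, an embedding near the origin-centered space is rejected as unknown, while one far from the center is routed to normal classification.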
Affiliation(s)
- Zhongzhi Yu
- Beijing Academy of Artificial Intelligence Institution, Beijing, China
| | - Yemin Shi
- Beijing Academy of Artificial Intelligence Institution, Beijing, China.
|
94
|
Afriyie Y, Weyori BA, Opoku AA. A scaling up approach: a research agenda for medical imaging analysis with applications in deep learning. J EXP THEOR ARTIF IN 2023. [DOI: 10.1080/0952813x.2023.2165721] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 01/26/2023]
Affiliation(s)
- Yaw Afriyie
- Department of Computer Science and Informatics, University of Energy and Natural Resources, School of Sciences, Sunyani, Ghana
- Department of Computer Science, Faculty of Information and Communication Technology, SD Dombo University of Business and Integrated Development Studies, Wa, Ghana
| | - Benjamin A. Weyori
- Department of Computer Science and Informatics, University of Energy and Natural Resources, School of Sciences, Sunyani, Ghana
| | - Alex A. Opoku
- Department of Mathematics & Statistics, University of Energy and Natural Resources, School of Sciences, Sunyani, Ghana
|
95
|
Zhang G, Luo L, Zhang L, Liu Z. Research Progress of Respiratory Disease and Idiopathic Pulmonary Fibrosis Based on Artificial Intelligence. Diagnostics (Basel) 2023; 13:diagnostics13030357. [PMID: 36766460 PMCID: PMC9914063 DOI: 10.3390/diagnostics13030357] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/22/2022] [Revised: 01/06/2023] [Accepted: 01/16/2023] [Indexed: 01/21/2023] Open
Abstract
Machine learning (ML) comprises algorithms that learn patterns from previously observed data through classification, prediction, and optimization to accomplish specific tasks. In recent years, ML has developed rapidly in medicine, including lung imaging analysis, intensive care monitoring, mechanical ventilation and prediction of the need for intubation, pulmonary function evaluation and prediction, and the monitoring of conditions such as obstructive sleep apnea. ML can perform well and is a tool of great potential, especially in the imaging diagnosis of interstitial lung disease. Idiopathic pulmonary fibrosis (IPF) is a major problem in the treatment of respiratory diseases: abnormal proliferation of fibroblasts leads to the destruction of lung tissue. Diagnosis depends mainly on early imaging detection, and early treatment can effectively prolong patients' lives. If computers can assist in interpreting examination findings related to fibrosis, timely diagnosis of such diseases will be of great value to both doctors and patients. We previously proposed a machine learning model that can play a useful clinical guiding role in early imaging-based prediction of idiopathic pulmonary fibrosis. At present, AI and machine learning have great potential to transform many aspects of respiratory medicine and are a focus of current research. AI should become an invisible, seamless, and impartial auxiliary tool that helps patients and doctors make better decisions in an efficient, effective, and acceptable way. The purpose of this paper is to review current applications of machine learning to respiratory diseases, in the hope of providing help and guidance to clinicians applying these models.
Affiliation(s)
- Gerui Zhang
- Department of Critical Care Unit, The First Affiliated Hospital of Dalian Medical University, 222, Zhongshan Road, Dalian 116011, China
| | - Lin Luo
- Department of Critical Care Unit, The Second Hospital of Dalian Medical University, 467 Zhongshan Road, Shahekou District, Dalian 116023, China
| | - Limin Zhang
- Department of Respiratory, The First Affiliated Hospital of Dalian Medical University, 222, Zhongshan Road, Dalian 116011, China
| | - Zhuo Liu
- Department of Respiratory, The First Affiliated Hospital of Dalian Medical University, 222, Zhongshan Road, Dalian 116011, China
|
96
|
Maynord M, Farhangi MM, Fermüller C, Aloimonos Y, Levine G, Petrick N, Sahiner B, Pezeshk A. Semi-supervised training using cooperative labeling of weakly annotated data for nodule detection in chest CT. Med Phys 2023. [PMID: 36630691 DOI: 10.1002/mp.16219] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/12/2022] [Revised: 12/14/2022] [Accepted: 12/23/2022] [Indexed: 01/13/2023] Open
Abstract
PURPOSE Machine learning algorithms are best trained with large quantities of accurately annotated samples. While natural scene images can often be labeled relatively cheaply and at large scale, obtaining accurate annotations for medical images is both time-consuming and expensive. In this study, we propose a cooperative labeling method that allows us to make use of weakly annotated medical imaging data for the training of a machine learning algorithm. As most clinically produced data are weakly annotated (produced for use by humans rather than machines, and lacking information that machine learning depends upon), this approach allows us to incorporate a wider range of clinical data and thereby increase the training set size. METHODS Our pseudo-labeling method consists of multiple stages. In the first stage, a previously established network is trained using a limited number of samples with high-quality expert-produced annotations. This network is used to generate annotations for a separate larger dataset that contains only weakly annotated scans. In the second stage, by cross-checking the two types of annotations against each other, we obtain higher-fidelity annotations. In the third stage, we extract training data from the weakly annotated scans and combine it with the fully annotated data, producing a larger training dataset. We use this larger dataset to develop a computer-aided detection (CADe) system for nodule detection in chest CT. RESULTS We evaluated the proposed approach by presenting the network with different numbers of expert-annotated scans in training and then testing the CADe using an independent expert-annotated dataset. We demonstrate that when the availability of expert annotations is severely limited, the inclusion of weakly labeled data leads to a 5% improvement in the competitive performance metric (CPM), defined as the average of sensitivities at different false-positive rates.
CONCLUSIONS Our proposed approach can effectively merge a weakly-annotated dataset with a small, well-annotated dataset for algorithm training. This approach can help enlarge limited training data by leveraging the large amount of weakly labeled data typically generated in clinical image interpretation.
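The second-stage cross-check described above can be sketched as keeping a weak annotation only where it agrees with a detection from the bootstrap network. The matching criterion below (interval IoU as a one-dimensional stand-in for 3-D box IoU) and all names are illustrative assumptions, not the paper's exact procedure:

```python
def iou_1d(a, b):
    """Intersection-over-union of 1-D intervals (a_start, a_end)."""
    inter = max(0.0, min(a[1], b[1]) - max(a[0], b[0]))
    union = (a[1] - a[0]) + (b[1] - b[0]) - inter
    return inter / union if union else 0.0

def cross_check(weak_labels, model_detections, thresh=0.5):
    """Keep a weak annotation only if some model detection overlaps it
    sufficiently; such agreed-upon labels are treated as higher fidelity
    and merged into the training set."""
    return [w for w in weak_labels
            if any(iou_1d(w, d) >= thresh for d in model_detections)]
```

A weak label with no supporting model detection is discarded rather than propagated into the enlarged training set.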
Affiliation(s)
- Michael Maynord
- University of Maryland, Computer Science Department, Iribe Center for Computer Science and Engineering, College Park, Maryland, USA.,Division of Imaging, Diagnostics, and Software Reliability (DIDSR), OSEL, CDRH, FDA, Silver Spring, Maryland, USA
| | - M Mehdi Farhangi
- Division of Imaging, Diagnostics, and Software Reliability (DIDSR), OSEL, CDRH, FDA, Silver Spring, Maryland, USA
| | - Cornelia Fermüller
- University of Maryland, Institute for Advanced Computer Studies, Iribe Center for Computer Science and Engineering, College Park, Maryland, USA
| | - Yiannis Aloimonos
- University of Maryland, Computer Science Department, Iribe Center for Computer Science and Engineering, College Park, Maryland, USA
| | - Gary Levine
- Division of Radiological Imaging Devices and Electronic Products, CDRH, FDA, Silver Spring, Maryland, USA
| | - Nicholas Petrick
- Division of Imaging, Diagnostics, and Software Reliability (DIDSR), OSEL, CDRH, FDA, Silver Spring, Maryland, USA
| | - Berkman Sahiner
- Division of Imaging, Diagnostics, and Software Reliability (DIDSR), OSEL, CDRH, FDA, Silver Spring, Maryland, USA
| | - Aria Pezeshk
- Division of Imaging, Diagnostics, and Software Reliability (DIDSR), OSEL, CDRH, FDA, Silver Spring, Maryland, USA
|
97
|
Zhang H, Chen L, Gu X, Zhang M, Qin Y, Yao F, Wang Z, Gu Y, Yang GZ. Trustworthy learning with (un)sure annotation for lung nodule diagnosis with CT. Med Image Anal 2023; 83:102627. [PMID: 36283199 DOI: 10.1016/j.media.2022.102627] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/14/2021] [Revised: 07/22/2022] [Accepted: 09/10/2022] [Indexed: 02/04/2023]
Abstract
Recent advances in deep learning have proven valuable for CT-based lung nodule classification. Most current techniques are intrinsically black-box systems that suffer from two generalizability issues in clinical practice. First, benign-malignant discrimination is often assessed by human observers without nodule-level pathologic diagnoses; we term such data "unsure-annotation data". Second, with only patch-level labels, a classifier does not necessarily acquire reliable nodule features for stable learning and robust prediction. In this study, we construct a sure-annotation dataset with pathologically confirmed labels and propose a collaborative learning framework that facilitates sure nodule classification by integrating knowledge from unsure-annotation data through nodule segmentation and malignancy score regression. A loss function is designed to learn reliable features by introducing interpretability constraints regulated by nodule segmentation maps. Furthermore, based on model inference results that reflect the understanding of both machine and experts, we explore a new nodule analysis method for retrieving similar historical nodules and producing interpretable diagnoses. Detailed experimental results demonstrate that our approach achieves improved performance coupled with trustworthy model reasoning for lung cancer prediction with limited data. Extensive cross-evaluation further illustrates the effect of unsure-annotation data on deep-learning-based lung nodule classification.
Affiliation(s)
- Hanxiao Zhang
- Institute of Medical Robotics, Shanghai Jiao Tong University, Shanghai, China
| | - Liang Chen
- Department of Thoracic Surgery, Shanghai Chest Hospital, Shanghai Jiao Tong University, Shanghai, China
| | - Xiao Gu
- Imperial College London, London, UK
| | - Minghui Zhang
- Institute of Medical Robotics, Shanghai Jiao Tong University, Shanghai, China
| | | | - Feng Yao
- Department of Thoracic Surgery, Shanghai Chest Hospital, Shanghai Jiao Tong University, Shanghai, China
| | - Zhexin Wang
- Department of Thoracic Surgery, Shanghai Chest Hospital, Shanghai Jiao Tong University, Shanghai, China.
| | - Yun Gu
- Institute of Medical Robotics, Shanghai Jiao Tong University, Shanghai, China; Shanghai Center for Brain Science and Brain-Inspired Technology, Shanghai, China.
| | - Guang-Zhong Yang
- Institute of Medical Robotics, Shanghai Jiao Tong University, Shanghai, China.
|
98
|
An Efficient Model for Lungs Nodule Classification Using Supervised Learning Technique. JOURNAL OF HEALTHCARE ENGINEERING 2023; 2023:8262741. [PMID: 36785839 PMCID: PMC9922185 DOI: 10.1155/2023/8262741] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Figures] [Subscribe] [Scholar Register] [Received: 03/17/2022] [Revised: 05/14/2022] [Accepted: 11/24/2022] [Indexed: 02/05/2023]
Abstract
Lung cancer has the highest death rate of any cancer in the world, and detecting it early can increase a patient's survival rate. This work presents a method for improving the computer-aided detection (CAD) of nodules in the lung area of computed tomography (CT) images. The main aim was to obtain an overview of the latest tools and technologies used for the acquisition, storage, segmentation, classification, processing, and analysis of biomedical data. After this analysis, a model consisting of three main steps is proposed. In the first step, threshold values and 3D component labeling are used to segment the lung volume. In the second step, candidate nodules are identified and segmented with an optimal threshold value and rule-based trimming, and 2D and 3D features are selected from each segmented candidate. In the final step, the selected features are used to train an SVM that classifies candidates as nodules or non-nodules. To assess the performance of the proposed framework, experiments were performed on the LIDC dataset. The number of false positives among nodule candidates was reduced to 4 FPs per scan at a sensitivity of 95%.
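The first step (thresholding followed by 3D component labeling) can be sketched in pure Python on a nested-list volume; a practical pipeline would instead run `scipy.ndimage.label` on an HU-thresholded array, and the threshold here is a placeholder, not the paper's value:

```python
from collections import deque

def label_components_3d(volume, threshold):
    """Binarize a 3-D volume at `threshold` and label its 6-connected
    components via breadth-first search. Returns (labels, n_components)."""
    Z, Y, X = len(volume), len(volume[0]), len(volume[0][0])
    labels = [[[0] * X for _ in range(Y)] for _ in range(Z)]
    current = 0
    for z in range(Z):
        for y in range(Y):
            for x in range(X):
                if volume[z][y][x] >= threshold and labels[z][y][x] == 0:
                    current += 1          # start a new component
                    labels[z][y][x] = current
                    q = deque([(z, y, x)])
                    while q:
                        cz, cy, cx = q.popleft()
                        for dz, dy, dx in ((1, 0, 0), (-1, 0, 0), (0, 1, 0),
                                           (0, -1, 0), (0, 0, 1), (0, 0, -1)):
                            nz, ny, nx = cz + dz, cy + dy, cx + dx
                            if (0 <= nz < Z and 0 <= ny < Y and 0 <= nx < X
                                    and volume[nz][ny][nx] >= threshold
                                    and labels[nz][ny][nx] == 0):
                                labels[nz][ny][nx] = current
                                q.append((nz, ny, nx))
    return labels, current
```

Components whose size or shape violates the rule-based trimming criteria would then be discarded before feature extraction.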
|
99
|
de Margerie-Mellon C, Chassagnon G. Artificial intelligence: A critical review of applications for lung nodule and lung cancer. Diagn Interv Imaging 2023; 104:11-17. [PMID: 36513593 DOI: 10.1016/j.diii.2022.11.007] [Citation(s) in RCA: 51] [Impact Index Per Article: 25.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/21/2022] [Accepted: 11/22/2022] [Indexed: 12/14/2022]
Abstract
Artificial intelligence (AI) is a broad concept that usually refers to computer programs that can learn from data and perform certain specific tasks. In recent years, the growth of deep learning, a successful technique for computer vision tasks that does not require explicit programming, coupled with the availability of large imaging databases, has fostered the development of multiple applications in medical imaging, especially for lung nodules and lung cancer, mostly through convolutional neural networks (CNNs). Some of the first applications of AI in this field were dedicated to the automated detection of lung nodules on X-ray and computed tomography (CT) examinations, with performance now reaching or exceeding that of radiologists. For lung nodule segmentation, CNN-based algorithms applied to CT images show excellent spatial overlap with manual segmentation, even for irregular and ground-glass nodules. A third application of AI is the classification of lung nodules as malignant or benign, which could limit the number of follow-up CT examinations for less suspicious lesions. Several algorithms have demonstrated excellent capabilities for predicting malignancy risk when a nodule is discovered. These applications of AI to lung nodules are particularly appealing in the context of lung cancer screening. In the field of lung cancer, AI tools applied to lung imaging have been investigated for distinct aims. First, they could play a role in the non-invasive characterization of tumors, especially histological subtype and somatic mutation prediction, with a potential therapeutic impact. Additionally, they could help predict patient prognosis in combination with clinical data.
Despite these encouraging perspectives, clinical implementation of AI tools is only beginning, because published studies lack generalizability, the tools' inner workings are opaque, and data about their impact on radiologists' decisions and patient outcomes remain limited. Radiologists must be active participants in the process of evaluating AI tools, as such tools could support their daily work and free them for high-added-value tasks.
Affiliation(s)
- Constance de Margerie-Mellon
- Université Paris Cité, Laboratory of Imaging Biomarkers, Center for Research on Inflammation, UMR 1149, INSERM, 75018 Paris, France; Department of Radiology, Hôpital Saint-Louis APHP, 75010 Paris, France
| | - Guillaume Chassagnon
- Université Paris Cité, Faculté de Médecine, 75006 Paris, France; Department of Radiology, Hôpital Cochin APHP, 75014 Paris, France
|
100
|
Chen X, Xie H, Li Z, Cheng G, Leng M, Wang FL. Information fusion and artificial intelligence for smart healthcare: a bibliometric study. Inf Process Manag 2023. [DOI: 10.1016/j.ipm.2022.103113] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/24/2022]
|