1
Wang L. Deep Learning Techniques to Diagnose Lung Cancer. Cancers (Basel) 2022; 14:5569. [PMID: 36428662] [PMCID: PMC9688236] [DOI: 10.3390/cancers14225569]
Abstract
Medical imaging tools are essential for early-stage lung cancer diagnosis and for monitoring lung cancer during treatment. Various imaging modalities, such as chest X-ray, magnetic resonance imaging, positron emission tomography, computed tomography, and molecular imaging techniques, have been extensively studied for lung cancer detection. These techniques have limitations, however, including the inability to classify cancer images automatically and reduced suitability for patients with coexisting pathologies. A sensitive and accurate approach to the early diagnosis of lung cancer is therefore urgently needed. Deep learning is one of the fastest-growing topics in medical imaging, with rapidly emerging applications spanning image-based and textual data modalities. With the help of deep learning-based medical imaging tools, clinicians can detect and classify lung nodules more accurately and quickly. This paper reviews recent developments in deep learning-based imaging techniques for early lung cancer detection.
Affiliation(s)
- Lulu Wang
- Biomedical Device Innovation Center, Shenzhen Technology University, Shenzhen 518118, China
2
Benign-malignant classification of pulmonary nodule with deep feature optimization framework. Biomed Signal Process Control 2022. [DOI: 10.1016/j.bspc.2022.103701]
3
Yang Z, Leng L, Li M, Chu J. A computer-aid multi-task light-weight network for macroscopic feces diagnosis. Multimedia Tools and Applications 2022; 81:15671-15686. [PMID: 35250359] [PMCID: PMC8884099] [DOI: 10.1007/s11042-022-12565-0]
Abstract
Abnormal traits and colors of feces typically indicate that a patient is probably suffering from a tumor or a digestive-system disease. A fast, accurate, and automatic feces-based health diagnosis system is therefore urgently needed to improve examination speed and reduce infection risk. The rarity of pathological images degrades the accuracy of trained models; to alleviate this problem, augmentation and over-sampling are employed to expand the under-represented classes in each training batch. To achieve strong recognition performance and exploit the latent correlation between the traits and colors of fecal pathological samples, a multi-task network is developed to recognize the colors and traits of macroscopic feces images. The parameter count of a single multi-task network is generally much smaller than the total parameter count of multiple single-task networks, which reduces storage cost. The loss function of the multi-task network is the weighted sum of the two task losses, with the weights determined by the tasks' difficulty levels as measured by fitted linear functions. Extensive experiments confirm that the proposed method yields higher accuracies and improved efficiency.
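The weighted multi-task loss described in the abstract can be sketched as follows. This is a minimal stand-in: the `cross_entropy` helper and the difficulty normalization are illustrative assumptions, since the paper's fitted linear difficulty functions are not reproduced here.

```python
import math

def cross_entropy(probs, label):
    """Cross-entropy loss for one sample given predicted class probabilities."""
    return -math.log(max(probs[label], 1e-12))

def task_weights(trait_difficulty, color_difficulty):
    """Derive task weights from difficulty estimates; harder tasks get more weight.
    Plain normalization stands in for the paper's linear-fit difficulty measure."""
    total = trait_difficulty + color_difficulty
    return trait_difficulty / total, color_difficulty / total

def multitask_loss(trait_probs, trait_label, color_probs, color_label,
                   w_trait, w_color):
    """Weighted sum of the trait and color classification losses."""
    return (w_trait * cross_entropy(trait_probs, trait_label)
            + w_color * cross_entropy(color_probs, color_label))
```

A single backward pass through this combined loss updates the shared backbone for both tasks at once, which is where the parameter savings over two single-task networks come from.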
Affiliation(s)
- Ziyuan Yang
- School of Software, Nanchang Hangkong University, Nanchang, 330063 People’s Republic of China
- College of Computer Science, Sichuan University, Chengdu, 610065 People’s Republic of China
- Lu Leng
- School of Software, Nanchang Hangkong University, Nanchang, 330063 People’s Republic of China
- School of Electrical and Electronic Engineering, College of Engineering, Yonsei University, Seoul, 120749 Republic of Korea
- Ming Li
- School of Information Engineering, Nanchang Hangkong University, Nanchang, 330063 People’s Republic of China
- Jun Chu
- School of Software, Nanchang Hangkong University, Nanchang, 330063 People’s Republic of China
4
Ghimire S, Yaseen ZM, Farooque AA, Deo RC, Zhang J, Tao X. Streamflow prediction using an integrated methodology based on convolutional neural network and long short-term memory networks. Sci Rep 2021; 11:17497. [PMID: 34471166] [PMCID: PMC8410863] [DOI: 10.1038/s41598-021-96751-4]
Abstract
Streamflow (Qflow) prediction is essential for reliable and robust water resources planning and management, and is vital for hydropower operation, agricultural planning, and flood control. In this study, a convolutional neural network (CNN) and a long short-term memory (LSTM) network are combined into an integrated model, CNN-LSTM, to predict hourly (short-term) Qflow at the Brisbane River and Teewah Creek, Australia. The CNN layers extract features of the Qflow time series, and the LSTM network uses these features for prediction. The proposed CNN-LSTM model is benchmarked against standalone CNN, LSTM, and deep neural network (DNN) models as well as several conventional artificial intelligence (AI) models. Prediction is conducted over intervals of 1 week, 2 weeks, 4 weeks, and 9 months. Performance metrics and graphical analysis reveal that, with small residual error between the actual and predicted Qflow, the CNN-LSTM model outperforms all benchmarked conventional AI and ensemble models over all time intervals. With 84% of Qflow prediction errors below 0.05 m3 s-1, CNN-LSTM outperforms LSTM (80%) and DNN (66%). In summary, the proposed CNN-LSTM model yields more accurate predictions and thus has significant practical value in Qflow prediction.
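The CNN-feeds-LSTM pipeline can be sketched in a few dozen lines of numpy. This is a conceptual sketch only, assuming a 1-D convolutional feature extractor, a single LSTM cell, and a linear readout; the random weights and the filter/state sizes are stand-ins, not the paper's trained model.

```python
import numpy as np

rng = np.random.default_rng(0)

def conv1d(x, kernels):
    """Valid 1-D convolution with ReLU: extracts local features from the
    flow series. x: (T,), kernels: (n_filters, k) -> (T-k+1, n_filters)."""
    out = np.stack([np.convolve(x, kern[::-1], mode="valid")
                    for kern in kernels], axis=1)
    return np.maximum(out, 0.0)

def lstm(features, h_dim=8):
    """Minimal LSTM over the CNN features; returns the final hidden state."""
    t_len, f_dim = features.shape
    Wx = rng.standard_normal((4 * h_dim, f_dim)) * 0.1
    Wh = rng.standard_normal((4 * h_dim, h_dim)) * 0.1
    b = np.zeros(4 * h_dim)
    h, c = np.zeros(h_dim), np.zeros(h_dim)
    sigm = lambda z: 1.0 / (1.0 + np.exp(-z))
    for t in range(t_len):
        z = Wx @ features[t] + Wh @ h + b
        i, f, g, o = (z[:h_dim], z[h_dim:2 * h_dim],
                      z[2 * h_dim:3 * h_dim], z[3 * h_dim:])
        c = sigm(f) * c + sigm(i) * np.tanh(g)   # gated cell update
        h = sigm(o) * np.tanh(c)                 # gated output
    return h

def predict_qflow(series, n_filters=4, k=3):
    """CNN feature extraction followed by LSTM, then a linear readout."""
    kernels = rng.standard_normal((n_filters, k)) * 0.1
    h = lstm(conv1d(series, kernels))
    w_out = rng.standard_normal(h.shape[0]) * 0.1
    return float(w_out @ h)
```

The design choice mirrors the abstract: convolutions capture short local patterns in the hourly series, while the recurrent state carries the longer-horizon temporal dependence.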
Affiliation(s)
- Sujan Ghimire
- School of Sciences, University of Southern Queensland, Toowoomba, QLD, 4350, Australia
- Zaher Mundher Yaseen
- New era and development in civil engineering research group, Scientific Research Center, Al-Ayen University, Thi-Qar, 64001, Iraq
- College of Creative Design, Asia University, Taichung City, Taiwan
- Aitazaz A Farooque
- Faculty of Sustainable Design Engineering, University of Prince Edward Island, Charlottetown, PE, C1A4P3, Canada
- Ravinesh C Deo
- School of Sciences, University of Southern Queensland, Toowoomba, QLD, 4350, Australia
- Ji Zhang
- School of Sciences, University of Southern Queensland, Toowoomba, QLD, 4350, Australia
- Xiaohui Tao
- School of Sciences, University of Southern Queensland, Toowoomba, QLD, 4350, Australia
5
Feng S, Liu B, Zhang Y, Zhang X, Li Y. Two-Stream Compare and Contrast Network for Vertebral Compression Fracture Diagnosis. IEEE Transactions on Medical Imaging 2021; 40:2496-2506. [PMID: 33999815] [DOI: 10.1109/tmi.2021.3080991]
Abstract
Differentiating vertebral compression fractures (VCFs) associated with trauma and osteoporosis (benign VCFs) from those caused by metastatic cancer (malignant VCFs) is critically important for treatment decisions. So far, automatic VCF diagnosis has been solved in a two-step manner: first identify VCFs, then classify them as benign or malignant. In this paper, VCF diagnosis is instead modeled as a three-class classification problem: normal vertebrae, benign VCFs, and malignant VCFs. However, VCF recognition and classification require very different features, both tasks are characterized by high intra-class variation and high inter-class similarity, and the dataset is extremely class-imbalanced. To address these challenges, a novel Two-Stream Compare and Contrast Network (TSCCN) is proposed for VCF diagnosis. The network consists of two streams: a recognition stream that learns to identify VCFs by comparing and contrasting adjacent vertebrae, and a classification stream that compares and contrasts intra-class and inter-class samples to learn features for fine-grained classification. The two streams are integrated via a learnable weight control module that adaptively sets their contributions. Evaluated on a dataset of 239 VCF patients, TSCCN achieves an average sensitivity of 92.56% and specificity of 96.29%.
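The learnable weight-control fusion of the two streams can be sketched as below. The module shape (a linear map over concatenated stream features followed by a softmax gate) is an assumption for illustration, not the paper's exact architecture.

```python
import numpy as np

def softmax(z):
    """Numerically stable softmax."""
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

def weight_control(rec_feat, cls_feat, w):
    """Hypothetical weight-control module: maps the concatenated features of
    the recognition and classification streams to two non-negative weights
    that sum to one."""
    return softmax(w @ np.concatenate([rec_feat, cls_feat]))

def fuse(rec_logits, cls_logits, alpha):
    """Weighted combination of the two streams' 3-class logits
    (normal, benign VCF, malignant VCF)."""
    return alpha[0] * rec_logits + alpha[1] * cls_logits
```

Because the gate is computed from the input's own features, the contribution of each stream adapts per sample, which is the role the abstract assigns to the weight control module.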
6
Zhao Y, Ma J, Peng Z, Xia H, Wan H. Pulmonary Nodule Detection Based on Three-Dimensional Multiscale Convolutional Neural Network with Channel and Spatial Attention. Journal of Medical Imaging and Health Informatics 2021. [DOI: 10.1166/jmihi.2021.3814]
Abstract
Early screening for pulmonary nodules is currently an important means of reducing lung cancer mortality, and in recent years three-dimensional convolutional neural networks have achieved great success in pulmonary nodule detection. This paper proposes a pulmonary nodule detection method based on a three-dimensional multiscale convolutional neural network with channel and spatial attention. First, a multiscale module is designed to extract image features at different scales. Second, a channel and spatial attention module is designed to mine the correlation information between features from the spatial and channel perspectives. The extracted features are then sent to a pyramid-like fusion mechanism, so that they contain both deep semantic information and shallow position information, which aids object positioning and bounding box regression. Experiments on the LUng Nodule Analysis 2016 (LUNA16) dataset show an average free-response receiver operating characteristic (FROC) score of 0.846, making the method competitive with other current advanced methods.
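The channel and spatial attention steps can be sketched in numpy on a 3-D feature map. The bottleneck weights `w1`/`w2` and the mean-plus-max spatial gate are illustrative assumptions standing in for the paper's learned convolutions.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def channel_attention(feat, w1, w2):
    """Squeeze-and-excitation style channel attention on a (C, D, H, W) map:
    global-average-pool each channel, pass the pooled vector through a small
    bottleneck, and rescale each channel by its resulting gate in (0, 1)."""
    pooled = feat.mean(axis=(1, 2, 3))                    # (C,)
    gates = sigmoid(w2 @ np.maximum(w1 @ pooled, 0.0))    # (C,)
    return feat * gates[:, None, None, None]

def spatial_attention(feat):
    """Spatial attention: gate each voxel using a map built from the
    channel-wise mean and max responses (stand-in for a learned conv)."""
    gate = sigmoid(feat.mean(axis=0) + feat.max(axis=0))  # (D, H, W)
    return feat * gate[None]
```

Applying the channel gate first and the spatial gate second lets the network decide both which feature channels and which voxel locations matter for nodule candidates.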
Affiliation(s)
- Yudu Zhao
- Key Laboratory of Medical Physics and Image Processing, School of Physics and Electronics, Shandong Normal University, Jinan 250358, China
- Jun Ma
- Key Laboratory of Medical Physics and Image Processing, School of Physics and Electronics, Shandong Normal University, Jinan 250358, China
- Zhenwei Peng
- Key Laboratory of Medical Physics and Image Processing, School of Physics and Electronics, Shandong Normal University, Jinan 250358, China
- Hao Xia
- Key Laboratory of Medical Physics and Image Processing, School of Physics and Electronics, Shandong Normal University, Jinan 250358, China
- Honglin Wan
- Key Laboratory of Medical Physics and Image Processing, School of Physics and Electronics, Shandong Normal University, Jinan 250358, China
7
Using 2D CNN with Taguchi Parametric Optimization for Lung Cancer Recognition from CT Images. Applied Sciences (Basel) 2020. [DOI: 10.3390/app10072591]
Abstract
Lung cancer is one of the common causes of cancer deaths, so its early detection and treatment are essential. However, lung cancer detection produces many false positives, and increasing the classification accuracy of diagnosis by computed tomography (CT) is a difficult task. Solving this problem using intelligent and automated methods has become a hot research topic in recent years. Hence, a 2D convolutional neural network (2D CNN) with Taguchi parametric optimization is proposed for automatically recognizing lung cancer from CT images. In the Taguchi method, 36 experiments and 8 control factors of mixed levels were selected to determine the optimum parameters of the 2D CNN architecture and improve the classification accuracy. The experimental results show that the average classification accuracies of the original 2D CNN and the 2D CNN with Taguchi parameter optimization are 91.97% and 98.83% on the Lung Image Database Consortium and Image Database Resource Initiative (LIDC-IDRI) dataset, and 94.68% and 99.97% on the International Society for Optics and Photonics with the support of the American Association of Physicists in Medicine (SPIE-AAPM) dataset, respectively. The proposed method is thus 6.86 and 5.29 percentage points more accurate than the original 2D CNN on the two datasets, demonstrating the superiority of the proposed model.
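The core idea of Taguchi parametric optimization, evaluating only an orthogonal subset of factor-level combinations instead of the full grid, can be sketched as below. The `score` function, the three factors, and the toy 4-run array are hypothetical stand-ins for the paper's 36-run mixed-level design and its CNN validation accuracy.

```python
# Stand-in scoring function: in the paper this would be the 2D CNN's
# validation accuracy for one hyperparameter configuration.
def score(config):
    lr, filters, kernel = config
    return (-abs(lr - 0.01) * 10
            - abs(filters - 32) / 64
            - abs(kernel - 3) / 5)

# Candidate levels for three hypothetical 2-level control factors.
levels = {"lr": [0.001, 0.01], "filters": [16, 32], "kernel": [3, 5]}

# A toy orthogonal array: each pair of columns contains every level
# combination equally often, so 4 runs cover the design space evenly
# (the paper's mixed-level design uses 36 runs over 8 factors).
orthogonal_rows = [(0, 0, 0), (0, 1, 1), (1, 0, 1), (1, 1, 0)]

def taguchi_search():
    """Evaluate only the orthogonal-array rows and return the best config."""
    best, best_s = None, float("-inf")
    for r in orthogonal_rows:
        cfg = (levels["lr"][r[0]], levels["filters"][r[1]],
               levels["kernel"][r[2]])
        s = score(cfg)
        if s > best_s:
            best, best_s = cfg, s
    return best
```

Four evaluations here stand in for the eight of the full 2x2x2 grid; the saving grows sharply with more factors, which is what makes the approach attractive for CNN hyperparameter tuning.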
8
Agnes SA, Anitha J. Appraisal of Deep-Learning Techniques on Computer-Aided Lung Cancer Diagnosis with Computed Tomography Screening. J Med Phys 2020; 45:98-106. [PMID: 32831492] [PMCID: PMC7416858] [DOI: 10.4103/jmp.jmp_101_19]
Abstract
Aims: Deep-learning methods are becoming versatile in medical image analysis. Manual examination of smaller nodules in computed tomography (CT) scans is a challenging and time-consuming task due to the limitations of human vision, so a standardized computer-aided diagnosis (CAD) framework is required for rapid and accurate lung cancer diagnosis. The National Lung Screening Trial recommends routine screening with low-dose CT among high-risk patients to reduce the risk of dying from lung cancer through early detection. Developing a clinically acceptable CAD system for lung cancer diagnosis demands accurate models for segmenting the lung region and then identifying nodules with few false positives. Subjects and Methods: In this study, a deep-learning-based CAD framework for lung cancer diagnosis with chest CT images is built using a dilated SegNet and convolutional neural networks (CNNs). The dilated SegNet model segments the lung from chest CT images, and a CNN model with batch normalization identifies the true nodules among all candidate nodules. Both models were trained on sample cases from the LUNA16 dataset. Segmentation performance is measured by the Dice coefficient, and the nodule classifier is evaluated by sensitivity; the discriminative ability of the features learned by the CNN classifier is further confirmed with principal component analysis. Results: The dilated SegNet model segments the lung with an average Dice coefficient of 0.89 ± 0.23, and the customized CNN model yields a sensitivity of 94.8% in categorizing cancerous and noncancerous nodules. Conclusions: The proposed models thus achieve efficient lung segmentation and two-dimensional nodule-patch classification in a CAD system for lung cancer diagnosis with CT screening.
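The Dice coefficient used to evaluate the segmentation model is a standard overlap measure, 2|A∩B| / (|A| + |B|); a minimal implementation:

```python
import numpy as np

def dice_coefficient(pred, target, eps=1e-7):
    """Dice coefficient between two binary masks: 2*|A∩B| / (|A| + |B|).
    eps keeps the ratio defined when both masks are empty."""
    pred = pred.astype(bool)
    target = target.astype(bool)
    inter = np.logical_and(pred, target).sum()
    return (2.0 * inter + eps) / (pred.sum() + target.sum() + eps)
```

A score of 1 means the predicted lung mask matches the ground truth exactly, and 0 means no overlap, which is why the reported 0.89 ± 0.23 indicates good but variable segmentation quality.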
Affiliation(s)
- S Akila Agnes
- Department of CSE, Karunya Institute of Technology and Sciences, Coimbatore, Tamil Nadu, India
- J Anitha
- Department of CSE, Karunya Institute of Technology and Sciences, Coimbatore, Tamil Nadu, India