1. Liu Q, Zheng H, Jia Z, Shi Z. Tumor detection on bronchoscopic images by unsupervised learning. Sci Rep 2025;15:245. [PMID: 39747936; PMCID: PMC11696192; DOI: 10.1038/s41598-024-81786-0]
Abstract
The diagnosis and early identification of intratracheal tumors rely on the experience of the operators and specialists. Operations by physicians with insufficient experience may lead to misdiagnosis or misjudgment of tumors. To address this issue, a dataset for intratracheal tumor detection was constructed to simulate the diagnostic level of experienced specialists, and a Knowledge Distillation-based Memory Feature Unsupervised Anomaly Detection (KD-MFAD) model was proposed to learn from this simulated experience. The unsupervised training approach can effectively handle the irregular appearance of tumors. The Downward Deformable Convolution (DDC) module allows the encoding phase to capture more detailed features of the internal airway environment. The Memory Matrix based on Convolutional Block focusing (CB-Mem) helps the student model store more meaningful normal-sample features during training and disrupts the reconstruction of "tumor" images. Our model achieved an AUC-ROC of 97.60%, an accuracy of 93.33%, and an F1-score of 94.94% on our self-built intratracheal endoscopy dataset, improving on baseline performance by 5 to 10%. Our model also demonstrated superior performance over existing models on public datasets in the same field.
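The KD-MFAD implementation is not reproduced here; as a rough sketch of the underlying idea - a frozen teacher, a student encoder, and a memory of normal-feature prototypes whose attention-based readout disrupts reconstruction of anomalous frames - the following PyTorch fragment may help (all module names, sizes, and the ResNet-18 backbones are our own assumptions, not the paper's architecture):

```python
import torch
import torch.nn as nn
import torch.nn.functional as F
from torchvision import models

class MemoryModule(nn.Module):
    """Learnable prototypes of normal features; a query is re-expressed as an
    attention-weighted mix of them, which disrupts reconstruction of anomalies."""
    def __init__(self, num_slots: int = 100, dim: int = 512):
        super().__init__()
        self.slots = nn.Parameter(torch.randn(num_slots, dim))

    def forward(self, z: torch.Tensor) -> torch.Tensor:
        # cosine attention of the query over the memory slots
        attn = F.softmax(F.normalize(z, dim=1) @ F.normalize(self.slots, dim=1).t(), dim=1)
        return attn @ self.slots  # (B, dim) memory-based reconstruction

teacher = models.resnet18(weights=models.ResNet18_Weights.DEFAULT).eval()
teacher.fc = nn.Identity()                     # frozen 512-d feature extractor
for p in teacher.parameters():
    p.requires_grad_(False)

student = models.resnet18(weights=None)        # student trained on normal frames only
student.fc = nn.Identity()
memory = MemoryModule()

def anomaly_score(x: torch.Tensor) -> torch.Tensor:
    """Distance between teacher features and the memory-reconstructed student
    features; large values suggest a tumorous frame."""
    with torch.no_grad():
        t = teacher(x)
    s = memory(student(x))
    return (s - t).pow(2).mean(dim=1)
```

Training would minimize this score on normal airway frames only; at test time, frames whose score exceeds a chosen threshold are flagged as suspected tumors.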
Affiliation(s)
- Qingqing Liu
- Department of Pulmonary and Critical Care Medicine, The Second Xiangya Hospital, Central South University, Changsha, 410011, Hunan, China
- Research Unit of Respiratory Disease, Central South University, Changsha, 410011, Hunan, China
- Clinical Medical Research Center for Pulmonary and Critical Care Medicine in Hunan Province, Changsha, 410011, Hunan, China
- Diagnosis and Treatment Center of Respiratory Disease, Central South University, Changsha, 410011, Hunan, China
- Haoliang Zheng
- School of Electrical and Information Engineering, Changsha University of Science and Technology, Changsha, 410114, Hunan, China
- Zhiwei Jia
- School of Electrical and Information Engineering, Changsha University of Science and Technology, Changsha, 410114, Hunan, China
- Zhihui Shi
- Department of Pulmonary and Critical Care Medicine, The Second Xiangya Hospital, Central South University, Changsha, 410011, Hunan, China
- Research Unit of Respiratory Disease, Central South University, Changsha, 410011, Hunan, China
- Clinical Medical Research Center for Pulmonary and Critical Care Medicine in Hunan Province, Changsha, 410011, Hunan, China
- Diagnosis and Treatment Center of Respiratory Disease, Central South University, Changsha, 410011, Hunan, China
2. Kancherla R, Sharma A, Garg P. Diagnosing Respiratory Variability: Convolutional Neural Networks for Chest X-ray Classification Across Diverse Pulmonary Conditions. J Imaging Inform Med 2024. [PMID: 39673008; DOI: 10.1007/s10278-024-01355-9]
Abstract
The global burden of lung diseases is a pressing issue, particularly in developing nations with limited healthcare access. Accurate diagnosis of lung conditions is crucial for effective treatment, but diagnosing lung ailments using medical imaging techniques such as chest radiographs and CT scans is challenging due to the complex anatomical intricacies of the lungs. Deep learning methods, particularly convolutional neural networks (CNNs), offer promising solutions for automated disease classification using imaging data. This research has the potential to significantly improve healthcare access in developing countries with limited medical resources, providing hope for better diagnosis and treatment of lung diseases. The study employed a diverse range of CNN models for training, including a baseline model and transfer learning models such as VGG16, VGG19, InceptionV3, and ResNet50. The models were trained on image datasets sourced from the NIH and COVID-19 repositories, containing 8000 chest radiograph images depicting four lung conditions (lung opacity, COVID-19, pneumonia, and pneumothorax) and 2000 healthy chest radiograph images, with a ten-fold cross-validation approach. The VGG19-based model outperformed the baseline model in diagnosing lung diseases, with average accuracies of 0.995 and 0.996 on the validation and external test datasets, respectively. The proposed model also outperformed published lung-disease prediction models; these findings underscore the superior performance of the VGG19 model compared to other architectures in accurately classifying and detecting lung diseases from chest radiograph images. This study highlights AI's potential, especially CNNs like VGG19, in improving diagnostic accuracy for lung disorders, promising better healthcare outcomes. The predictive model is available on GitHub at https://github.com/PGlab-NIPER/Lung_disease_classification.
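The paper's exact training configuration is not given here, but the general transfer-learning recipe it follows - an ImageNet-pretrained VGG19 with its classifier head swapped for the five classes above - can be sketched in a few lines of PyTorch (the function name and the decision to freeze the convolutional stack are our assumptions):

```python
import torch.nn as nn
from torchvision import models

NUM_CLASSES = 5  # lung opacity, COVID-19, pneumonia, pneumothorax, healthy

def build_vgg19_transfer(freeze_features: bool = True) -> nn.Module:
    """VGG19 pretrained on ImageNet, with the final classifier layer
    replaced for 5-way chest-radiograph classification."""
    model = models.vgg19(weights=models.VGG19_Weights.IMAGENET1K_V1)
    if freeze_features:
        for p in model.features.parameters():
            p.requires_grad_(False)              # keep convolutional filters fixed
    in_feats = model.classifier[6].in_features   # 4096 in stock VGG19
    model.classifier[6] = nn.Linear(in_feats, NUM_CLASSES)
    return model
```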
Affiliation(s)
- Rajesh Kancherla
- Department of Pharmacoinformatics, National Institute of Pharmaceutical Education and Research, S. A. S. Nagar, Punjab, 160062, India
- Anju Sharma
- Department of Pharmacoinformatics, National Institute of Pharmaceutical Education and Research, S. A. S. Nagar, Punjab, 160062, India
- Prabha Garg
- Department of Pharmacoinformatics, National Institute of Pharmaceutical Education and Research, S. A. S. Nagar, Punjab, 160062, India
3. Yang H, Song Y, Li Y, Hong Z, Liu J, Li J, Zhang D, Fu L, Lu J, Qiu L. A Dual-Branch Residual Network with Attention Mechanisms for Enhanced Classification of Vaginal Lesions in Colposcopic Images. Bioengineering (Basel) 2024;11:1182. [PMID: 39768001; PMCID: PMC11673476; DOI: 10.3390/bioengineering11121182]
Abstract
Vaginal intraepithelial neoplasia (VAIN), linked to HPV infection, is a condition that is often overlooked during colposcopy, especially in the vaginal vault area, as clinicians tend to focus more on cervical lesions. This oversight can lead to missed or delayed diagnosis and treatment for patients with VAIN. Timely and accurate classification of VAIN plays a crucial role in the evaluation of vaginal lesions and the formulation of effective diagnostic approaches. The challenge is the high similarity between different classes and the low variability within the same class in colposcopic images, which can affect accuracy, precision, and recall, depending on image quality and the clinician's experience. In this study, a dual-branch lesion-aware residual network (DLRNet), designed for small medical sample sizes, is introduced, which classifies vaginal lesions by examining the relationship between cervical and vaginal lesions. The DLRNet model includes four main components: a lesion localization module, a dual-branch classification module, an attention-guidance module, and a pretrained network module. The dual-branch classification module combines the original images with segmentation maps obtained from the lesion localization module using a pretrained ResNet network to fine-tune parameters at different levels, explore lesion-specific features from both global and local perspectives, and facilitate layered interactions. The attention-guidance module focuses the local branch network on vaginal-specific features by using spatial and channel attention mechanisms. The final integration involves a shared feature extraction module and independent fully connected layers, which represent and merge the dual-branch inputs. The weighted fusion method effectively integrates multiple inputs, enhancing the discriminative and generalization capabilities of the model. Classification experiments on 1142 collected colposcopic images demonstrate that this method improves on existing classification performance, achieving the classification of VAIN into three lesion grades and thus providing a valuable tool for the early screening of vaginal diseases.
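As a hedged illustration of the dual-branch idea - one branch on the full image, one on the lesion-masked image from the localization module, merged by a learnable weighted fusion - a minimal PyTorch sketch follows (the backbones, the sigmoid-weighted fusion, and all names are our assumptions, not DLRNet's exact design):

```python
import torch
import torch.nn as nn
from torchvision import models

class DualBranchClassifier(nn.Module):
    """Global branch sees the full colposcopic image; local branch sees the
    image masked by the lesion-localization map; logits are merged by a
    learnable weighted fusion."""
    def __init__(self, num_classes: int = 3):
        super().__init__()
        self.global_branch = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
        self.local_branch = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
        for branch in (self.global_branch, self.local_branch):
            branch.fc = nn.Linear(branch.fc.in_features, num_classes)
        self.alpha = nn.Parameter(torch.tensor(0.5))   # learnable fusion weight

    def forward(self, image: torch.Tensor, lesion_mask: torch.Tensor) -> torch.Tensor:
        g = self.global_branch(image)
        l = self.local_branch(image * lesion_mask)     # focus on lesion region
        w = torch.sigmoid(self.alpha)
        return w * g + (1.0 - w) * l                   # fused class logits
```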
Affiliation(s)
- Haima Yang
- School of Optical Electrical and Computer Engineering, University of Shanghai for Science and Technology, Shanghai 200093, China
- Key Laboratory of Space Active Opto-Electronics Technology, Chinese Academy of Sciences, Shanghai 200083, China
- Yeye Song
- School of Optical Electrical and Computer Engineering, University of Shanghai for Science and Technology, Shanghai 200093, China
- Yuling Li
- Department of Obstetrics and Gynecology, Ren Ji Hospital, School of Medicine, Shanghai Jiao Tong University, Shanghai 200030, China
- Shanghai Key Laboratory of Gynecologic Oncology, Ren Ji Hospital, School of Medicine, Shanghai Jiao Tong University, Shanghai 200030, China
- Department of Obstetrics and Gynecology, Shanxi Bethune Hospital, Taiyuan 050081, China
- Zubei Hong
- Department of Obstetrics and Gynecology, Ren Ji Hospital, School of Medicine, Shanghai Jiao Tong University, Shanghai 200030, China
- Shanghai Key Laboratory of Gynecologic Oncology, Ren Ji Hospital, School of Medicine, Shanghai Jiao Tong University, Shanghai 200030, China
- Jin Liu
- School of Electronic and Electrical Engineering, Shanghai University of Engineering Science, Shanghai 201620, China
- Jun Li
- School of Optical Electrical and Computer Engineering, University of Shanghai for Science and Technology, Shanghai 200093, China
- Dawei Zhang
- School of Optical Electrical and Computer Engineering, University of Shanghai for Science and Technology, Shanghai 200093, China
- Le Fu
- Shanghai First Maternity and Infant Hospital, School of Medicine, Tongji University, Shanghai 200092, China
- Jinyu Lu
- School of Optical Electrical and Computer Engineering, University of Shanghai for Science and Technology, Shanghai 200093, China
- Lihua Qiu
- Department of Obstetrics and Gynecology, Ren Ji Hospital, School of Medicine, Shanghai Jiao Tong University, Shanghai 200030, China
- Shanghai Key Laboratory of Gynecologic Oncology, Ren Ji Hospital, School of Medicine, Shanghai Jiao Tong University, Shanghai 200030, China
4. Sun W, Yan P, Li M, Li X, Jiang Y, Luo H, Zhao Y. An accurate prediction for respiratory diseases using deep learning on bronchoscopy diagnosis images. J Adv Res 2024:S2090-1232(24)00542-3. [PMID: 39571731; DOI: 10.1016/j.jare.2024.11.023]
Abstract
INTRODUCTION: Bronchoscopy is of great significance in diagnosing and treating respiratory illness. Using deep learning, a diagnostic system for bronchoscopy images can improve the accuracy of tracheal, bronchial, and pulmonary disease diagnoses for physicians and ensure timely pathological or etiological examinations for patients. Improving the diagnostic accuracy of the algorithms remains the key to this technology.
OBJECTIVES: To address this problem, we proposed a multiscale attention residual network (MARN) for diagnosing lung conditions from bronchoscopic images. The multiscale convolutional block attention module (MCBAM) was designed to enable accurate focus on lesion regions by enhancing spatial and channel features. Gradient-weighted Class Activation Mapping (Grad-CAM) was provided to increase the interpretability of diagnostic results.
METHODS: We collected 615 cases from Harbin Medical University Cancer Hospital, comprising 2900 images. The dataset was partitioned randomly into training, validation, and test sets to update model parameters, evaluate training performance, select the network architecture and parameters, and estimate the final model. In addition, we compared MARN with other algorithms. Furthermore, three physicians with different qualifications were invited to diagnose the same test images, and their results were compared with those of the model.
RESULTS: On the dataset of normal and lesion images, our model achieved an accuracy of 97.76% and an AUC of 99.79%. The model recorded 92.26% accuracy and 96.82% AUC on the dataset of benign and malignant lesion images, and 93.10% accuracy and 99.02% AUC on the dataset of normal, benign, and malignant lesion images.
CONCLUSION: These results demonstrate that our network outperforms other methods in diagnostic performance. The accuracy of our model is roughly the same as that of experienced physicians, and its efficiency is much higher than that of doctors. MARN has great potential for assisting physicians in assessing bronchoscopic images precisely.
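MCBAM itself is not reproduced in this listing; a standard CBAM-style block, the single-scale building unit that MCBAM extends to multiple scales, can be sketched as follows (the reduction ratio and kernel size are conventional choices, assumed here):

```python
import torch
import torch.nn as nn

class CBAMBlock(nn.Module):
    """CBAM-style attention: channel attention followed by spatial attention."""
    def __init__(self, channels: int, reduction: int = 16):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(channels, channels // reduction), nn.ReLU(),
            nn.Linear(channels // reduction, channels),
        )
        self.spatial = nn.Conv2d(2, 1, kernel_size=7, padding=3)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, c, _, _ = x.shape
        # channel attention from average- and max-pooled descriptors
        avg = self.mlp(x.mean(dim=(2, 3)))
        mx = self.mlp(x.amax(dim=(2, 3)))
        x = x * torch.sigmoid(avg + mx).view(b, c, 1, 1)
        # spatial attention from channel-wise average and max maps
        s = torch.cat([x.mean(dim=1, keepdim=True),
                       x.amax(dim=1, keepdim=True)], dim=1)
        return x * torch.sigmoid(self.spatial(s))
```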
Affiliation(s)
- Weiling Sun
- Department of Medical Oncology, Harbin Medical University Cancer Hospital, Harbin 150040, China; Department of Endoscope, Harbin Medical University Cancer Hospital, Harbin 150040, China
- Pengfei Yan
- Department of Control Science and Engineering, Harbin Institute of Technology, Harbin 150001, China
- Minglei Li
- Department of Control Science and Engineering, Harbin Institute of Technology, Harbin 150001, China
- Xiang Li
- Department of Control Science and Engineering, Harbin Institute of Technology, Harbin 150001, China
- Yuchen Jiang
- Department of Control Science and Engineering, Harbin Institute of Technology, Harbin 150001, China
- Hao Luo
- Department of Control Science and Engineering, Harbin Institute of Technology, Harbin 150001, China
- Yanbin Zhao
- Department of Medical Oncology, Harbin Medical University Cancer Hospital, Harbin 150040, China
5. Li L, Pan C, Zhang M, Shen D, He G, Meng M. Predicting malignancy in breast lesions: enhancing accuracy with fine-tuned convolutional neural network models. BMC Med Imaging 2024;24:303. [PMID: 39529003; PMCID: PMC11552211; DOI: 10.1186/s12880-024-01484-1]
Abstract
BACKGROUND: This study aims to explore the accuracy of convolutional neural network (CNN) models in predicting malignancy on dynamic contrast-enhanced breast magnetic resonance imaging (DCE-BMRI).
METHODS: A total of 273 benign lesions (benign group) and 274 malignant lesions (malignant group) were collected and randomly divided into a training set (246 benign and 245 malignant lesions) and a testing set (28 benign and 28 malignant lesions) at a 9:1 ratio. An additional 53 lesions from 53 patients were designated as the validation set. Five models - VGG16, VGG19, DenseNet201, ResNet50, and MobileNetV2 - were evaluated. Model performance was assessed using accuracy (Ac) in the training and testing sets, and precision (Pr), recall (Rc), F1 score (F1), and area under the receiver operating characteristic curve (AUC) in the validation set.
RESULTS: The accuracy of VGG19 on the test set (0.96) was higher than that of VGG16 (0.91), DenseNet201 (0.91), ResNet50 (0.67), and MobileNetV2 (0.88). On the validation set, VGG19 achieved higher performance metrics (Pr 0.75, Rc 0.76, F1 0.73, AUC 0.76) than VGG16 (Pr 0.73, Rc 0.75, F1 0.70, AUC 0.73), DenseNet201 (Pr 0.71, Rc 0.74, F1 0.69, AUC 0.71), ResNet50 (Pr 0.65, Rc 0.68, F1 0.60, AUC 0.65), and MobileNetV2 (Pr 0.73, Rc 0.75, F1 0.71, AUC 0.73). Among the five fine-tuned models, S4 achieved higher performance metrics (Pr 0.89, Rc 0.88, F1 0.87, AUC 0.89) than S1 (Pr 0.75, Rc 0.76, F1 0.74, AUC 0.75), S2 (Pr 0.77, Rc 0.79, F1 0.75, AUC 0.77), S3 (Pr 0.76, Rc 0.76, F1 0.73, AUC 0.75), and S5 (Pr 0.77, Rc 0.79, F1 0.75, AUC 0.77). Additionally, the S4 model showed the lowest loss value on the testing set. Notably, the AUC of S4 for BI-RADS 3 was 0.90 and for BI-RADS 4 was 0.86, both significantly higher than the 0.65 AUC for BI-RADS 5.
CONCLUSIONS: The proposed S4 model demonstrated superior performance in predicting the likelihood of malignancy on DCE-BMRI, making it a promising candidate for clinical application in patients with breast diseases. However, further validation with additional data is essential to confirm its efficacy.
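For readers reproducing this kind of comparison, the reported metrics (Ac, Pr, Rc, F1, AUC) can be computed from held-out predictions with scikit-learn; a minimal sketch follows (the function name and the 0.5 threshold are our own choices):

```python
import numpy as np
from sklearn.metrics import (accuracy_score, precision_score, recall_score,
                             f1_score, roc_auc_score)

def evaluate_binary(y_true: np.ndarray, y_prob: np.ndarray, thr: float = 0.5) -> dict:
    """Compute Ac/Pr/Rc/F1/AUC from predicted malignancy probabilities."""
    y_pred = (y_prob >= thr).astype(int)
    return {
        "Ac": accuracy_score(y_true, y_pred),
        "Pr": precision_score(y_true, y_pred),
        "Rc": recall_score(y_true, y_pred),
        "F1": f1_score(y_true, y_pred),
        "AUC": roc_auc_score(y_true, y_prob),  # uses probabilities, not labels
    }
```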
Affiliation(s)
- Li Li
- Department of Radiology, The Affiliated Changzhou No.2 People's Hospital of Nanjing Medical University, Changzhou, 213164, China
- Changjie Pan
- Department of Radiology, The Affiliated Changzhou No.2 People's Hospital of Nanjing Medical University, Changzhou, 213164, China
- Ming Zhang
- Department of Radiology, The Affiliated Changzhou No.2 People's Hospital of Nanjing Medical University, Changzhou, 213164, China
- Dong Shen
- Department of Radiology, The Affiliated Changzhou No.2 People's Hospital of Nanjing Medical University, Changzhou, 213164, China
- Guangyuan He
- Department of Radiology, The Affiliated Changzhou No.2 People's Hospital of Nanjing Medical University, Changzhou, 213164, China
- Mingzhu Meng
- Department of Radiology, The Affiliated Changzhou No.2 People's Hospital of Nanjing Medical University, Changzhou, 213164, China
6. Kang C, Kang SU. Deep Transfer Learning Method Using Self-Pixel and Global Channel Attentive Regularization. Sensors (Basel) 2024;24:3522. [PMID: 38894313; PMCID: PMC11175273; DOI: 10.3390/s24113522]
Abstract
The purpose of this paper is to propose a novel transfer learning regularization method based on knowledge distillation. Recently, transfer learning methods have been used in various fields. However, problems such as knowledge loss still occur during transfer learning to a new target dataset. To solve these problems, various regularization methods based on knowledge distillation techniques have been proposed. In this paper, we propose a transfer learning regularization method based on feature map alignment, a technique used in the field of knowledge distillation. The proposed method is composed of two attention-based submodules: self-pixel attention (SPA) and global channel attention (GCA). The self-pixel attention submodule utilizes the feature maps of both the source and target models, providing an opportunity to jointly consider the features of the target and the knowledge of the source. The global channel attention submodule determines the importance of channels across all layers, unlike existing methods that calculate it only within a single layer. Accordingly, transfer learning regularization is performed by considering both the interior of each single layer and the depth of the entire network. Consequently, the proposed method using both submodules showed higher overall classification accuracy than existing methods in classification experiments on commonly used datasets.
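The exact SPA/GCA formulation is in the paper; as a hedged sketch of the family it belongs to - a feature-map alignment regularizer in which per-layer channel weights decide which source knowledge to retain - consider the following PyTorch fragment (the weighting scheme shown is an assumption, simpler than the paper's two attention submodules):

```python
import torch

def feature_alignment_loss(target_feats: list[torch.Tensor],
                           source_feats: list[torch.Tensor],
                           channel_weights: list[torch.Tensor]) -> torch.Tensor:
    """Regularizer keeping selected target feature maps close to the frozen
    source model's maps; channel_weights (one vector per layer) decide which
    channels' knowledge is worth preserving during fine-tuning."""
    loss = torch.zeros((), device=target_feats[0].device)
    for t, s, w in zip(target_feats, source_feats, channel_weights):
        # per-channel squared distance between feature maps, shape (B, C)
        d = (t - s.detach()).pow(2).mean(dim=(2, 3))
        loss = loss + (w * d).mean()
    return loss
```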
Affiliation(s)
- Sang-ug Kang
- Department of Computer Science, Sangmyung University, Seoul 03016, Republic of Korea
7. Elazab N, Gab-Allah WA, Elmogy M. A multi-class brain tumor grading system based on histopathological images using a hybrid YOLO and RESNET networks. Sci Rep 2024;14:4584. [PMID: 38403597; PMCID: PMC10894864; DOI: 10.1038/s41598-024-54864-6]
Abstract
Gliomas are primary brain tumors arising from glial cells. The classification and grading of these cancers are crucial for prognosis and treatment planning. Deep learning (DL) can potentially improve the digital pathology investigation of brain tumors. In this paper, we developed a technique for visualizing a predictive tumor grading model on histopathology images to help guide doctors by emphasizing characteristics and heterogeneity in forecasts. The proposed technique is a hybrid model based on YOLOv5 and ResNet50. The function of YOLOv5 is to localize and classify the tumor in large histopathological whole slide images (WSIs). The suggested technique incorporates ResNet into the feature extraction of the YOLOv5 framework, and the detection results show that our hybrid network is effective for identifying brain tumors from histopathological images. Next, we estimate the glioma grades using an extreme gradient boosting classifier, which handles well the high-dimensional characteristics and nonlinear interactions present in histopathology images. DL techniques have been used in previous computer-aided diagnosis systems for brain tumor diagnosis. However, by combining the YOLOv5 and ResNet50 architectures into a hybrid model specifically designed for accurate tumor localization and predictive grading within histopathological WSIs, our study presents a new approach that advances the field. By utilizing the advantages of both models, this integration goes beyond traditional techniques to produce improved tumor localization accuracy and thorough feature extraction. Additionally, our method ensures stable training dynamics and strong model performance by integrating ResNet50 into the YOLOv5 framework, addressing concerns about gradient explosion. The proposed technique is tested using The Cancer Genome Atlas dataset. In our experiments, the model outperforms other standard methods on the same dataset. Our results indicate that the proposed hybrid model substantially improves tumor subtype discrimination between low-grade glioma (LGG) II and LGG III. With an accuracy of 97.2%, a precision of 97.8%, a sensitivity of 98.6%, and a Dice similarity coefficient of 97%, the proposed model performs well in classifying four grades. These results outperform current approaches for distinguishing LGG from high-grade glioma and provide competitive performance in classifying the four categories of glioma reported in the literature.
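As a sketch of the final grading stage - an extreme gradient boosting classifier fitted on deep features pooled from the hybrid detector - the following Python fragment shows the general pattern (the file names, array shapes, and hyperparameters are hypothetical):

```python
import numpy as np
from xgboost import XGBClassifier

# Hypothetical inputs: feature vectors pooled from the hybrid detector's
# backbone, plus integer labels for the four glioma grades.
deep_features = np.load("glioma_features.npy")   # shape (N, D), assumed file
grades = np.load("glioma_grades.npy")            # shape (N,), values 0..3

clf = XGBClassifier(n_estimators=300, max_depth=4, learning_rate=0.1,
                    eval_metric="mlogloss")
clf.fit(deep_features, grades)
grade_probs = clf.predict_proba(deep_features[:5])  # per-grade probabilities
```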
Affiliation(s)
- Naira Elazab
- Information Technology Department, Faculty of Computers and Information, Mansoura University, Mansoura, 35516, Egypt
- Wael A Gab-Allah
- Information Technology Department, Faculty of Computers and Information, Mansoura University, Mansoura, 35516, Egypt
- Mohammed Elmogy
- Information Technology Department, Faculty of Computers and Information, Mansoura University, Mansoura, 35516, Egypt
8. Yan P, Sun W, Li X, Li M, Jiang Y, Luo H. PKDN: Prior Knowledge Distillation Network for bronchoscopy diagnosis. Comput Biol Med 2023;166:107486. [PMID: 37757599; DOI: 10.1016/j.compbiomed.2023.107486]
Abstract
Bronchoscopy plays a crucial role in diagnosing and treating lung diseases. The deep learning-based diagnostic system for bronchoscopic images can assist physicians in accurately and efficiently diagnosing lung diseases, enabling patients to undergo timely pathological examinations and receive appropriate treatment. However, the existing diagnostic methods overlook the utilization of prior knowledge of medical images, and the limited feature extraction capability hinders precise focus on lesion regions, consequently affecting the overall diagnostic effectiveness. To address these challenges, this paper proposes a prior knowledge distillation network (PKDN) for identifying lung diseases through bronchoscopic images. The proposed method extracts color and edge features from lesion images using the prior knowledge guidance module, and subsequently enhances spatial and channel features by employing the dynamic spatial attention module and gated channel attention module, respectively. Finally, the extracted features undergo refinement and self-regulation through feature distillation. Furthermore, decoupled distillation is implemented to balance the importance of target and non-target class distillation, thereby enhancing the diagnostic performance of the network. The effectiveness of the proposed method is validated on the bronchoscopic dataset provided by Harbin Medical University Cancer Hospital, which consists of 2,029 bronchoscopic images from 200 patients. Experimental results demonstrate that the proposed method achieves an accuracy of 94.78% and an AUC of 98.17%, outperforming other methods significantly in diagnostic performance. These results indicate that the computer-aided diagnostic system based on PKDN provides satisfactory accuracy in diagnosing lung diseases during bronchoscopy.
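Decoupled distillation, mentioned above, splits the usual KL distillation term into target-class and non-target-class parts so their weights can be balanced independently; a minimal PyTorch sketch of that generic loss follows (the weights alpha and beta and the temperature T are illustrative, not PKDN's values):

```python
import torch
import torch.nn.functional as F

def decoupled_kd_loss(student_logits: torch.Tensor,
                      teacher_logits: torch.Tensor,
                      target: torch.Tensor,
                      alpha: float = 1.0, beta: float = 8.0, T: float = 4.0) -> torch.Tensor:
    """Decoupled KD: a target-class term (TCKD) plus a non-target-class
    term (NCKD), weighted independently."""
    gt = F.one_hot(target, student_logits.size(1)).float()
    p_s = F.softmax(student_logits / T, dim=1)
    p_t = F.softmax(teacher_logits / T, dim=1)

    # TCKD: KL between binary (target vs. rest) distributions
    pt_s = (p_s * gt).sum(dim=1, keepdim=True)
    pt_t = (p_t * gt).sum(dim=1, keepdim=True)
    b_s = torch.cat([pt_s, 1.0 - pt_s], dim=1).clamp_min(1e-8)
    b_t = torch.cat([pt_t, 1.0 - pt_t], dim=1)
    tckd = F.kl_div(b_s.log(), b_t, reduction="batchmean") * T * T

    # NCKD: KL over the non-target classes only (target logit masked out)
    n_s = F.log_softmax(student_logits / T - 1000.0 * gt, dim=1)
    n_t = F.softmax(teacher_logits / T - 1000.0 * gt, dim=1)
    nckd = F.kl_div(n_s, n_t, reduction="batchmean") * T * T

    return alpha * tckd + beta * nckd
```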
Affiliation(s)
- Pengfei Yan
- Department of Control Science and Engineering, Harbin Institute of Technology, Harbin 150001, China
- Weiling Sun
- Department of Endoscope, Harbin Medical University Cancer Hospital, Harbin 150040, China
- Xiang Li
- Department of Control Science and Engineering, Harbin Institute of Technology, Harbin 150001, China
- Minglei Li
- Department of Control Science and Engineering, Harbin Institute of Technology, Harbin 150001, China
- Yuchen Jiang
- Department of Control Science and Engineering, Harbin Institute of Technology, Harbin 150001, China
- Hao Luo
- Department of Control Science and Engineering, Harbin Institute of Technology, Harbin 150001, China
9. Chang YH, Lin MY, Hsieh MT, Ou MC, Huang CR, Sheu BS. Multiple Field-of-View Based Attention Driven Network for Weakly Supervised Common Bile Duct Stone Detection. IEEE J Transl Eng Health Med 2023;11:394-404. [PMID: 37465459; PMCID: PMC10351611; DOI: 10.1109/jtehm.2023.3286423]
Abstract
OBJECTIVE: Diseases caused by common bile duct (CBD) stones are life-threatening. Because CBD stones are located in the distal part of the CBD and are relatively small, detecting them from CT scans is a challenging issue in the medical domain.
METHODS AND PROCEDURES: We propose a deep learning based weakly supervised method called the multiple field-of-view based attention driven network (MFADNet) to detect CBD stones from CT scans using image-level labels. Three dominant modules, including a multiple field-of-view encoder, an attention driven decoder, and a classification network, collaborate in the network. The encoder learns features of multi-scale contextual information, while the decoder with the classification network locates the CBD stones based on spatial-channel attention. To drive the learning of the whole network in a weakly supervised, end-to-end trainable manner, four losses are proposed: a foreground loss, a background loss, a consistency loss, and a classification loss.
RESULTS: Compared with state-of-the-art weakly supervised methods in our experiments, the proposed method can accurately classify and locate CBD stones, as shown by quantitative and qualitative results.
CONCLUSION: We propose a novel multiple field-of-view based attention driven network for a new medical application, CBD stone detection from CT scans, where only image-level labels are required, reducing the burden of labeling and helping physicians diagnose CBD stones automatically. The source code is available at https://github.com/nchucvml/MFADNet after acceptance.
CLINICAL IMPACT: Our deep learning method can help physicians localize relatively small CBD stones for effectively diagnosing CBD stone caused diseases.
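The four losses are specified in the paper; as a rough illustration of how image-level labels can drive both a classifier and an attention map, the fragment below combines analogous terms (the exact forms here - max-based foreground, mean-based background, MSE consistency between augmented views - are our assumptions, not MFADNet's definitions):

```python
import torch
import torch.nn.functional as F

def weak_supervision_loss(cls_logits: torch.Tensor,     # (B,) image-level logits
                          image_label: torch.Tensor,    # (B,) 0/1 stone present
                          attn_map: torch.Tensor,       # (B, 1, H, W) in [0, 1]
                          attn_map_aug: torch.Tensor) -> torch.Tensor:
    """Illustrative combination of classification, foreground, background,
    and consistency terms driven only by image-level labels."""
    l_cls = F.binary_cross_entropy_with_logits(cls_logits, image_label)
    fg = attn_map.flatten(1)
    # foreground: positive images should contain some salient region
    l_fg = (image_label * (1.0 - fg.amax(dim=1))).mean()
    # background: negative images should stay quiet everywhere
    l_bg = ((1.0 - image_label) * fg.mean(dim=1)).mean()
    # consistency between attention maps of two augmented views
    l_con = F.mse_loss(attn_map, attn_map_aug)
    return l_cls + l_fg + l_bg + l_con
```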
Affiliation(s)
- Ya-Han Chang
- Department of Computer Science and Engineering, National Chung Hsing University, Taichung 402202, Taiwan
- Meng-Ying Lin
- Department of Internal Medicine, National Cheng Kung University Hospital, College of Medicine, National Cheng Kung University, Tainan 701401, Taiwan
- Ming-Tsung Hsieh
- Department of Internal Medicine, National Cheng Kung University Hospital, College of Medicine, National Cheng Kung University, Tainan 701401, Taiwan
- Ming-Ching Ou
- Department of Medical Image, National Cheng Kung University Hospital, College of Medicine, National Cheng Kung University, Tainan 701401, Taiwan
- Chun-Rong Huang
- Department of Computer Science and Engineering, National Chung Hsing University, Taichung 402202, Taiwan
- Cross College Elite Program, and Academy of Innovative Semiconductor and Sustainable Manufacturing, National Cheng Kung University, Tainan 701401, Taiwan
- Bor-Shyang Sheu
- Department of Internal Medicine, National Cheng Kung University Hospital, College of Medicine, National Cheng Kung University, Tainan 701401, Taiwan
10. Mridha MF, Prodeep AR, Hoque ASMM, Islam MR, Lima AA, Kabir MM, Hamid MA, Watanobe Y. A Comprehensive Survey on the Progress, Process, and Challenges of Lung Cancer Detection and Classification. J Healthc Eng 2022;2022:5905230. [PMID: 36569180; PMCID: PMC9788902; DOI: 10.1155/2022/5905230]
Abstract
Lung cancer is the leading cause of cancer deaths worldwide, and its death rate continues to rise. Detecting lung cancer early improves the chances of recovery. However, because the number of radiologists is limited and they are often overworked, the growth in image data makes it hard for them to evaluate images accurately. As a result, many researchers have developed automated methods to predict the growth of cancer cells from medical imaging quickly and accurately. Previously, much work was done on computer-aided detection (CADe) and computer-aided diagnosis (CADx) in computed tomography (CT), magnetic resonance imaging (MRI), and X-ray, with the goal of effective detection and segmentation of pulmonary nodules, as well as classifying nodules as malignant or benign. However, no comprehensive review covering all aspects of lung cancer had been done. In this paper, every aspect of lung cancer detection is discussed in detail, including datasets, image preprocessing, segmentation methods, optimal feature extraction and selection methods, evaluation metrics, and classifiers. Finally, the study examines several lung cancer-related issues and possible solutions.
Affiliation(s)
- M. F. Mridha
- Department of Computer Science and Engineering, American International University Bangladesh, Dhaka 1229, Bangladesh
- Akibur Rahman Prodeep
- Department of Computer Science and Engineering, Bangladesh University of Business and Technology, Dhaka 1216, Bangladesh
- A. S. M. Morshedul Hoque
- Department of Computer Science and Engineering, Bangladesh University of Business and Technology, Dhaka 1216, Bangladesh
- Md. Rashedul Islam
- Department of Computer Science and Engineering, University of Asia Pacific, Dhaka 1216, Bangladesh
- Aklima Akter Lima
- Department of Computer Science and Engineering, Bangladesh University of Business and Technology, Dhaka 1216, Bangladesh
- Muhammad Mohsin Kabir
- Department of Computer Science and Engineering, Bangladesh University of Business and Technology, Dhaka 1216, Bangladesh
- Md. Abdul Hamid
- Department of Information Technology, King Abdulaziz University, Jeddah 21589, Saudi Arabia
- Yutaka Watanobe
- Department of Computer Science and Engineering, University of Aizu, Aizuwakamatsu 965-8580, Japan
11. Zang Q, Cui H, Guo X, Lu Y, Zou Z, Liu H. Clinical value of video-assisted single-lumen endotracheal intubation and application of artificial intelligence in it. Am J Transl Res 2022;14:7643-7652. [PMID: 36505300; PMCID: PMC9730106]
Abstract
Visualization techniques and artificial intelligence (AI) are currently used in intubation devices. By providing airway visualization during tracheal intubation, these technologies enable safe and accurate access to the trachea. The ability of AI to automatically identify airways from intubation device images makes it attractive for use in such devices. The purpose of this review is to describe the state of application of visualization techniques and AI in certain intubation devices. We reviewed the evidence on the clinical implications of video-assisted intubation devices with respect to intubation time, first-attempt success rate, and intubation of the difficult airway. In particular, the VivaSight single-lumen tube, with incorporated optics, allows direct viewing of the airway and offers advantages in tracheal intubation. AI has been applied to fiberoptic bronchoscopy (FOB) and video laryngoscopy with automatic airway image recognition, and has achieved notable results. Further, we discuss the possibility of applying AI to the VivaSight single-lumen tube and propose future directions for research and application.
Affiliation(s)
- Qinglai Zang
- Shanghai Institute for Minimally Invasive Therapy, University of Shanghai for Science and Technology, Shanghai 200093, PR China
- Haipo Cui
- Shanghai Institute for Minimally Invasive Therapy, University of Shanghai for Science and Technology, Shanghai 200093, PR China
- Xudong Guo
- Shanghai Institute for Minimally Invasive Therapy, University of Shanghai for Science and Technology, Shanghai 200093, PR China
- Yingxi Lu
- Shanghai Institute for Minimally Invasive Therapy, University of Shanghai for Science and Technology, Shanghai 200093, PR China
- Zui Zou
- School of Anesthesiology, Naval Medical University, Shanghai 200433, PR China
- Hong Liu
- Information Center, The Second Affiliated Hospital of Naval Medical University, No. 415, Fengyang Road, Huangpu District, Shanghai 200003, PR China
12. Meng M, Zhang M, Shen D, He G. Differentiation of breast lesions on dynamic contrast-enhanced magnetic resonance imaging (DCE-MRI) using deep transfer learning based on DenseNet201. Medicine (Baltimore) 2022;101:e31214. [PMID: 36397422; PMCID: PMC9666147; DOI: 10.1097/md.0000000000031214]
Abstract
In order to achieve better performance, artificial intelligence is used in breast cancer diagnosis. In this study, we evaluated the efficacy of different fine-tuning strategies of deep transfer learning (DTL) based on the DenseNet201 model to differentiate malignant from benign lesions on breast dynamic contrast-enhanced magnetic resonance imaging (DCE-MRI). We collected 4260 images of benign lesions and 4140 images of malignant lesions from pathologically confirmed cases. The benign and malignant groups were each randomly divided into a training set and a testing set at a ratio of 9:1. A DTL model based on DenseNet201 was established, and the effectiveness of four fine-tuning strategies (S0, S1, S2, and S3) was compared. Additionally, DCE-MRI images of 48 breast lesions were selected to verify the robustness of the model. Ten images were obtained for each lesion, and the classification was considered correct if more than 5 images were correctly classified. The metrics for model performance evaluation included accuracy (Ac) in the training and testing sets, and precision (Pr), recall rate (Rc), F1 score (f1), and area under the receiver operating characteristic curve (AUROC) in the validation set. The Ac of all four fine-tuning strategies reached 100.00% in the training set. The S2 strategy exhibited good convergence in the testing set, where its Ac was 98.01%, higher than those of S0 (93.10%), S1 (90.45%), and S3 (93.90%). The average classification Pr, Rc, f1, and AUROC of S2 in the validation set (89.00%, 80.00%, 0.81, and 0.79, respectively) were higher than those of S0 (76.00%, 67.00%, 0.69, and 0.65, respectively), S1 (60.00%, 60.00%, 0.60, and 0.66, respectively), and S3 (77.00%, 73.00%, 0.74, and 0.72, respectively). The agreement between S2 and the histopathological method for differentiating benign from malignant breast lesions was high (κ = 0.749). The S2 strategy can improve the robustness of the DenseNet201 model on relatively small breast DCE-MRI datasets and is a reliable way to increase the accuracy of discriminating benign from malignant breast lesions on DCE-MRI.
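One common way to realize a family of graded fine-tuning strategies like S0-S3 is to vary how many of DenseNet201's dense blocks are unfrozen; the sketch below shows that pattern in PyTorch (the mapping from S0-S3 to block counts is an assumed interpretation, since the paper's strategy definitions are not reproduced here):

```python
import torch.nn as nn
from torchvision import models

def densenet201_finetune(num_trainable_blocks: int) -> nn.Module:
    """Freeze the whole DenseNet201 backbone, then unfreeze only the last
    few dense blocks (0 = train the classifier head only)."""
    model = models.densenet201(weights=models.DenseNet201_Weights.DEFAULT)
    for p in model.parameters():
        p.requires_grad_(False)
    blocks = [model.features.denseblock4, model.features.denseblock3,
              model.features.denseblock2, model.features.denseblock1]
    for block in blocks[:num_trainable_blocks]:
        for p in block.parameters():
            p.requires_grad_(True)
    model.classifier = nn.Linear(model.classifier.in_features, 2)  # benign/malignant
    return model
```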
Affiliation(s)
- Mingzhu Meng
- Department of Radiology, The Affiliated Changzhou No. 2 People’s Hospital of Nanjing Medical University, Changzhou, China
- Ming Zhang
- Department of Radiology, The Affiliated Changzhou No. 2 People’s Hospital of Nanjing Medical University, Changzhou, China
- Dong Shen
- Department of Radiology, The Affiliated Changzhou No. 2 People’s Hospital of Nanjing Medical University, Changzhou, China
- Guangyuan He
- Department of Radiology, The Affiliated Changzhou No. 2 People’s Hospital of Nanjing Medical University, Changzhou, China
- * Correspondence: Guangyuan He, Department of Radiology, The Affiliated Changzhou No. 2 People’s Hospital of Nanjing Medical University, No. 68 Gehuzhong Rd, Changzhou 213164, Jiangsu Province, China
13. Li Y, Zheng X, Xie F, Ye L, Bignami E, Tandon YK, Rodríguez M, Gu Y, Sun J. Development and validation of the artificial intelligence (AI)-based diagnostic model for bronchial lumen identification. Transl Lung Cancer Res 2022;11:2261-2274. [PMID: 36519015; PMCID: PMC9742630; DOI: 10.21037/tlcr-22-761]
Abstract
BACKGROUND: Bronchoscopy is a key step in the diagnosis and treatment of respiratory diseases, but the level of expertise varies among bronchoscopists. Artificial intelligence (AI) may help them identify bronchial lumens. Thus, an AI-based bronchoscopy quality-control system was built to improve the performance of bronchoscopists.
METHODS: This single-center observational study consecutively collected bronchoscopy videos from Shanghai Chest Hospital and segmented each video into 31 different anatomical locations to develop an AI-assisted system based on a convolutional neural network (CNN) model. We then designed a single-center trial to compare the accuracy of lumen recognition by bronchoscopists with and without the assistance of the AI system.
RESULTS: A total of 28,441 qualified images of bronchial lumens were used to train the CNNs. In the cross-validation set, the optimal accuracy of the six models ranged from 91.83% to 96.62%. In the test set, the Visual Geometry Group 16 (VGG-16) model achieved optimal performance with an accuracy of 91.88% and an area under the curve of 0.995. In the clinical evaluation, the accuracy of the AI system alone was 54.30% (202/372); for the identification of bronchi other than segmental bronchi, the accuracy was 82.69% (129/156). In group 1, the recognition accuracies of doctors A, B, a, and b alone were 42.47%, 34.68%, 28.76%, and 29.57%, respectively, but increased to 57.53%, 54.57%, 54.57%, and 46.24%, respectively, when combined with the AI system. Similarly, in group 2, the recognition accuracies of doctors C, D, c, and d were 37.90%, 41.40%, 30.91%, and 33.60%, respectively, but increased to 51.61%, 47.85%, 53.49%, and 54.30%, respectively, when combined with the AI system. Except for doctor D, the accuracy of doctors in recognizing lumens was significantly higher with AI assistance than without, regardless of their experience (P<0.001).
CONCLUSIONS: Our AI system could better recognize bronchial lumens and reduce differences in the operating levels of different bronchoscopists. It could be used to improve the quality of everyday bronchoscopies.
Affiliation(s)
- Ying Li
- Department of Respiratory Endoscopy, Shanghai Chest Hospital, Shanghai Jiao Tong University, Shanghai, China
- Department of Respiratory and Critical Care Medicine, Shanghai Chest Hospital, Shanghai Jiao Tong University, Shanghai, China
- Shanghai Engineering Research Center of Respiratory Endoscopy, Shanghai, China
- Xiaoxuan Zheng
- Department of Respiratory Endoscopy, Shanghai Chest Hospital, Shanghai Jiao Tong University, Shanghai, China
- Department of Respiratory and Critical Care Medicine, Shanghai Chest Hospital, Shanghai Jiao Tong University, Shanghai, China
- Shanghai Engineering Research Center of Respiratory Endoscopy, Shanghai, China
- Fangfang Xie
- Department of Respiratory Endoscopy, Shanghai Chest Hospital, Shanghai Jiao Tong University, Shanghai, China
- Department of Respiratory and Critical Care Medicine, Shanghai Chest Hospital, Shanghai Jiao Tong University, Shanghai, China
- Shanghai Engineering Research Center of Respiratory Endoscopy, Shanghai, China
- Lin Ye
- Department of Respiratory Endoscopy, Shanghai Chest Hospital, Shanghai Jiao Tong University, Shanghai, China
- Department of Respiratory and Critical Care Medicine, Shanghai Chest Hospital, Shanghai Jiao Tong University, Shanghai, China
- Shanghai Engineering Research Center of Respiratory Endoscopy, Shanghai, China
- Elena Bignami
- Anesthesiology, Critical Care and Pain Medicine Division, Department of Medicine and Surgery, University of Parma, Parma, Italy
- María Rodríguez
- Department of Thoracic Surgery, Clínica Universidad de Navarra, Madrid, Spain
- Yun Gu
- Institute of Medical Robotics, Shanghai Jiao Tong University, Shanghai, China
- Institute of Image Processing and Pattern Recognition, Shanghai Jiao Tong University, Shanghai, China
- Jiayuan Sun
- Department of Respiratory Endoscopy, Shanghai Chest Hospital, Shanghai Jiao Tong University, Shanghai, China
- Department of Respiratory and Critical Care Medicine, Shanghai Chest Hospital, Shanghai Jiao Tong University, Shanghai, China
- Shanghai Engineering Research Center of Respiratory Endoscopy, Shanghai, China
14. Deng Y, Chen Y, Xie L, Wang L, Zhan J. The investigation of construction and clinical application of image recognition technology assisted bronchoscopy diagnostic model of lung cancer. Front Oncol 2022;12:1001840. [PMID: 36387178; PMCID: PMC9647035; DOI: 10.3389/fonc.2022.1001840]
Abstract
Background: The incidence and mortality of lung cancer rank first among cancers in China. Bronchoscopy is one of the most common diagnostic methods for lung cancer. In recent years, image recognition technology (IRT) has been increasingly studied and applied in the medical field. We developed a deep learning based diagnostic model for lung cancer under bronchoscopy and attempted to classify pathological types.
Methods: A total of 2238 lesion images were collected retrospectively from 666 cases of lung cancer diagnosed by pathology in the bronchoscopy center of the Third Xiangya Hospital from October 1, 2017 to December 31, 2020, and from 152 benign cases from June 1, 2015 to December 31, 2020. The benign and malignant images were divided into training, validation, and test sets at a 7:1:2 ratio. The model was trained and tested using deep learning methods, and we also attempted to classify different pathological types of lung cancer with it. Furthermore, nine clinicians with varying experience were invited to diagnose the same test images, and their results were compared with the model's.
Results: The diagnostic model took a total of 30 s to diagnose 467 test images. The overall accuracy, sensitivity, specificity, and area under the curve (AUC) of the model for differentiating benign and malignant lesions were 0.951, 0.978, 0.833, and 0.940, respectively, equivalent to the judgment of the two doctors in the senior group and higher than those of the other doctors. In the classification of squamous cell carcinoma (SCC) and adenocarcinoma (AC), the overall accuracy was 0.745, including 0.790 for SCC and 0.667 for AC, with an AUC of 0.728.
Conclusion: The performance of our diagnostic model in distinguishing benign and malignant lesions in bronchoscopy is roughly the same as that of experienced clinicians, and its efficiency is much higher than manual assessment. Our study demonstrates the feasibility of applying IRT to the diagnosis of lung cancer during white light bronchoscopy.
Affiliation(s)
- Yihong Deng
- Department of Pulmonary and Critical Care Medicine, the Third Xiangya Hospital, Central South University, Changsha, Hunan, China
- Yuan Chen
- Department of Computer Science, School of Informatics, Xiamen University, Xiamen, Fujian, China
- Lihua Xie
- Department of Pulmonary and Critical Care Medicine, the Third Xiangya Hospital, Central South University, Changsha, Hunan, China
- Liansheng Wang
- Department of Computer Science, School of Informatics, Xiamen University, Xiamen, Fujian, China
- Juan Zhan
- Department of Oncology, Zhongshan Hospital affiliated to Xiamen University, Xiamen, Fujian, China
- *Correspondence: Lihua Xie; Liansheng Wang; Juan Zhan
15. Chen Y, Chen X. A brain-like classification method for computed tomography images based on adaptive feature matching dual-source domain heterogeneous transfer learning. Front Hum Neurosci 2022;16:1019564. [PMID: 36304588; PMCID: PMC9592699; DOI: 10.3389/fnhum.2022.1019564]
Abstract
Transfer learning can improve the robustness of deep learning when samples are small. However, when the semantic difference between the source domain data and the target domain data is large, transfer learning easily introduces redundant features and leads to negative transfer. Following the mechanism by which the human brain focuses on effective features while ignoring redundant ones in recognition tasks, a brain-like classification method based on adaptive feature matching dual-source domain heterogeneous transfer learning is proposed for the preoperative aided diagnosis of lung granuloma and lung adenocarcinoma in patients with a solitary pulmonary solid nodule when samples are small. The method includes two parts: (1) feature extraction and (2) feature classification. In the feature extraction part, by simulating the feature selection mechanism the human brain uses when drawing inferences from one instance, an adaptive selection-based dual-source domain feature matching network is proposed to determine the matching weight of each pair of feature maps and each pair of convolution layers between the two source networks and the target network, respectively. These two weights adaptively select the features in the source networks that benefit the target task and the destination of feature transfer, improving the robustness of the target network. Meanwhile, a target network based on diverse branch blocks is proposed, giving the target network different receptive fields and complex paths to further improve its feature expression ability. The convolution kernels of the target network are then used as the feature extractor. In the feature classification part, an ensemble classifier based on a sparse Bayesian extreme learning machine is proposed that automatically decides how to combine the outputs of the base classifiers to improve classification performance. Finally, experimental results on data from two centers (AUCs of 0.9542 and 0.9356, respectively) show that this method can provide a better diagnostic reference for doctors.
Affiliation(s)
- Yehang Chen
- Laboratory of Artificial Intelligence of Biomedicine, Guilin University of Aerospace Technology, Guilin, China
- School of Electronic Information and Automation, Guilin University of Aerospace Technology, Guilin, China
- Xiangmeng Chen
- Department of Radiology, Jiangmen Central Hospital, Jiangmen, China
- *Correspondence: Xiangmeng Chen
16. Zaalouk AM, Ebrahim GA, Mohamed HK, Hassan HM, Zaalouk MMA. A Deep Learning Computer-Aided Diagnosis Approach for Breast Cancer. Bioengineering (Basel) 2022;9:391. [PMID: 36004916; PMCID: PMC9405040; DOI: 10.3390/bioengineering9080391]
Abstract
Breast cancer is an enormous burden on humanity, causing the loss of countless lives and substantial economic costs. It is the world's leading type of cancer among women and a leading cause of mortality and morbidity. The histopathological examination of breast tissue biopsies is the gold standard for diagnosis. In this paper, a computer-aided diagnosis (CAD) system based on deep learning is developed to ease the pathologist's mission. For this target, five pre-trained convolutional neural network (CNN) models are analyzed and tested - Xception, DenseNet201, InceptionResNetV2, VGG19, and ResNet152 - with the help of data augmentation techniques, and a new approach for transfer learning is introduced. These models are trained and tested with histopathological images obtained from the BreakHis dataset. Multiple experiments analyze the performance of these models through magnification-dependent and magnification-independent binary and eight-class classifications. The Xception model showed promising performance, achieving the highest classification accuracies in all experiments: from 93.32% to 98.99% for magnification-independent experiments and from 90.22% to 100% for magnification-dependent experiments.
Affiliation(s)
- Ahmed M. Zaalouk
- Computer and Systems Engineering Department, Faculty of Engineering, Ain Shams University, Cairo 11517, Egypt
- School of Computing, Coventry University—Egypt Branch, Hosted at the Knowledge Hub Universities, Cairo, Egypt
- Gamal A. Ebrahim
- Computer and Systems Engineering Department, Faculty of Engineering, Ain Shams University, Cairo 11517, Egypt
- Hoda K. Mohamed
- Computer and Systems Engineering Department, Faculty of Engineering, Ain Shams University, Cairo 11517, Egypt
- Hoda Mamdouh Hassan
- Department of Information Sciences and Technology, College of Engineering and Computing, George Mason University, Fairfax, VA 22030, USA
- Correspondence: (A.M.Z.); (G.A.E.)
17. A computer-aided diagnosis system for detecting various diabetic retinopathy grades based on a hybrid deep learning technique. Med Biol Eng Comput 2022;60:2015-2038. [PMID: 35545738; PMCID: PMC9225981; DOI: 10.1007/s11517-022-02564-6]
Abstract
Diabetic retinopathy (DR) is a serious disease that may cause vision loss without warning. It is therefore essential to screen for and monitor DR progression continuously. In this respect, deep learning techniques have achieved great success in medical image analysis. Deep convolutional neural network (CNN) architectures are widely used in multi-label (ML) classification and help in diagnosing normal cases and the various DR grades: mild, moderate, and severe non-proliferative DR (NPDR), and proliferative DR (PDR). DR grades are determined by multiple DR lesions appearing simultaneously on color retinal fundus images. Many lesion types have features that are difficult to segment and distinguish using conventional hand-crafted methods, so the practical solution is an effective CNN model. In this paper, we present a novel hybrid deep learning technique called E-DenseNet, which integrates the EyeNet and DenseNet models via transfer learning. We customized the traditional EyeNet by inserting dense blocks and optimized the hyperparameters of the resulting hybrid E-DenseNet model. The proposed system based on the E-DenseNet model can accurately diagnose healthy eyes and different DR grades from various small and large ML color fundus images. We trained and tested our model on four different datasets published from 2006 to 2019. The proposed system achieved an average accuracy (ACC), sensitivity (SEN), specificity (SPE), Dice similarity coefficient (DSC), quadratic Kappa score (QKS), and calculation time (T) in minutes of [Formula: see text], [Formula: see text], [Formula: see text], [Formula: see text], 0.883, and 3.5 m, respectively. The experiments show promising results compared with other systems.
18. Munthuli A, Intanai J, Tossanuch P, Pooprasert P, Ingpochai P, Boonyasatian S, Kittithammo K, Thammarach P, Boonmak T, Khaengthanyakan S, Yaemsuk A, Vanichvarodom P, Phienphanich P, Pongcharoen P, Sakonlaya D, Sitthiwatthanawong P, Wetchawalit S, Chakkavittumrong P, Thongthawee B, Pathomjaruwat T, Tantibundhit C. Extravasation Screening and Severity Prediction from Skin Lesion Image using Deep Neural Networks. Annu Int Conf IEEE Eng Med Biol Soc 2022;2022:1827-1833. [PMID: 36086628; DOI: 10.1109/embc48229.2022.9871115]
Abstract
Extravasation occurs secondary to the leakage of medication from blood vessels into the surrounding tissue during intravenous administration, resulting in significant soft tissue injury and necrosis. If treatment is delayed, invasive management such as surgical debridement, skin grafting, and even amputation may be required. Thus, it is imperative to develop a smartphone application for predicting extravasation severity from skin images. Two deep neural network (DNN) architectures, U-Net and DenseNet-121, were used to segment skin and lesions and to classify extravasation severity. Sensitivity and specificity for distinguishing between asymptomatic and abnormal cases were 77.78% and 90.24%. Among the abnormal cases, mild extravasation attained the highest F1-score of 0.8049, followed by severe extravasation at 0.6429 and moderate extravasation at 0.6250. The F1-score for moderate-to-severe extravasation classification can be improved by applying our proposed rule-based approach for multi-class classification. These findings propose a novel and feasible DNN approach for screening extravasation from skin images. The implementation of DNN-based applications on mobile devices has strong potential for clinical application in low-resource countries. Clinical relevance: the application can serve as a valuable tool for monitoring when extravasation occurs during intravenous administration. It can also help in scheduling across worksites to reduce the risks associated with working shifts.
Collapse
|
19
|
AIM in Respiratory Disorders. Artif Intell Med 2022. [DOI: 10.1007/978-3-030-64573-1_178] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/19/2022]
|
20
|
Shock due to an Obstructed Endotracheal Tube. J Crit Care Med (Targu Mures) 2021; 7:308-311. [PMID: 34934822 PMCID: PMC8647666 DOI: 10.2478/jccm-2021-0027] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/13/2021] [Accepted: 07/20/2021] [Indexed: 11/20/2022] Open
Abstract
Endotracheal tube obstruction by a mucus plug causing a ball-valve effect is a rare but significant complication. The inability to pass a suction catheter through the endotracheal tube, together with high peak-to-plateau pressure differences, is a classical feature of endotracheal tube obstruction. We describe a case of endotracheal tube obstruction from a mucus plug that compounded severe respiratory acidosis and hypotension in a patient who simultaneously had abdominal compartment syndrome. The mucus plug was not identified until bronchoscopic assessment of the airway was performed. In the absence of classical signs, the delayed identification of the obstructing mucus plug caused diagnostic confusion: various treatments were trialed while the patient continued to deteriorate from the elusive offending culprit. We suggest that bronchoscopy should be employed earlier and more routinely in the intensive care unit, especially as a definitive way to rule out endotracheal obstruction.
Collapse
|
21
|
Kavithaa G, Balakrishnan P, Yuvaraj SA. Lung Cancer Detection and Improving Accuracy Using Linear Subspace Image Classification Algorithm. Interdiscip Sci 2021; 13:779-786. [PMID: 34351570 DOI: 10.1007/s12539-021-00468-x] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/29/2020] [Revised: 07/15/2021] [Accepted: 07/23/2021] [Indexed: 06/13/2023]
Abstract
The ability to identify lung cancer at an early stage is critical because it can help patients live longer. However, locating the affected area during diagnosis is a major challenge. An intelligent computer-aided diagnostic system can detect and diagnose lung cancer by identifying the damaged region. The suggested Linear Subspace Image Classification Algorithm (LSICA) classifies images in a linear subspace. The methodology identifies the damaged region accurately in three steps: image enhancement, segmentation, and classification. A spatial image clustering technique is used to quickly segment and identify the affected area in the image, and LSICA then determines the accuracy of the affected region for classification purposes. The resulting lung cancer detection system applies classification-dependent image processing to lung cancer CT imaging; this work therefore proposes a new LSICA-based detection method that overcomes the deficiencies of existing approaches. All programs were implemented in MATLAB. The proposed system is designed to identify the affected region easily, with the classification technique used to enhance the results and improve accuracy.
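The paper's exact algorithm is not reproduced here, but the generic linear-subspace classification idea (CLAFIC-style: fit a low-dimensional subspace per class, assign a test image to the class whose subspace reconstructs it best) can be sketched in a few lines:

```python
# Hedged sketch of the linear-subspace family LSICA belongs to, on toy data.
import numpy as np

def fit_subspace(X, k=5):
    """X: (n_samples, n_pixels) flattened images of one class."""
    mu = X.mean(axis=0)
    _, _, Vt = np.linalg.svd(X - mu, full_matrices=False)
    return mu, Vt[:k]                          # class mean and top-k axes

def residual(x, mu, V):
    z = (x - mu) @ V.T                         # project onto the subspace
    return np.linalg.norm((x - mu) - z @ V)    # distance to the subspace

rng = np.random.default_rng(0)
train = {c: rng.normal(c, 1.0, (50, 64)) for c in (0, 1)}   # toy "images"
subspaces = {c: fit_subspace(X) for c, X in train.items()}
x = rng.normal(1, 1.0, 64)                     # toy test image
print("predicted class:", min(subspaces, key=lambda c: residual(x, *subspaces[c])))
```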
Collapse
Affiliation(s)
- G Kavithaa
- Department of Electronics and Communication Engineering, Government College of Engineering, Salem, Tamilnadu, India.
| | - P Balakrishnan
- Malla Reddy Engineering College for Women (Autonomous), Hyderabad, 500100, India
| | - S A Yuvaraj
- Department of ECE, GRT Institute of Engineering and Technology, Tiruttani, Tamilnadu, India
| |
Collapse
|
22
|
Xing J, Li Z, Wang B, Qi Y, Yu B, Zanjani FG, Zheng A, Duits R, Tan T. Lesion Segmentation in Ultrasound Using Semi-Pixel-Wise Cycle Generative Adversarial Nets. IEEE/ACM TRANSACTIONS ON COMPUTATIONAL BIOLOGY AND BIOINFORMATICS 2021; 18:2555-2565. [PMID: 32149651 DOI: 10.1109/tcbb.2020.2978470] [Citation(s) in RCA: 13] [Impact Index Per Article: 3.3] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/10/2023]
Abstract
Breast cancer is the most common invasive cancer and has the highest cancer incidence in females. Handheld ultrasound is one of the most efficient ways to identify and diagnose breast cancer, and the area and shape information of a lesion is very helpful to clinicians making diagnostic decisions. In this study we propose a new deep-learning scheme, the semi-pixel-wise cycle generative adversarial net (SPCGAN), for segmenting lesions in 2D ultrasound. The method takes advantage of a fully convolutional neural network (FCN) and a generative adversarial net to segment a lesion using prior knowledge. We compared the proposed method to an FCN and the level set segmentation method on a test dataset consisting of 32 malignant lesions and 109 benign lesions. Our proposed method achieved a Dice similarity coefficient (DSC) of 0.92, while the FCN and the level set method achieved 0.90 and 0.79, respectively. In particular, for malignant lesions, our method significantly increases the DSC of the FCN from 0.90 to 0.93 (p < 0.001). The results show that SPCGAN can obtain robust segmentation results, and its framework is particularly effective compared with an FCN when sufficient training samples are not available. Our proposed method may help relieve radiologists' annotation burden.
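The comparison above hinges on the Dice similarity coefficient; a minimal reference implementation of the standard definition for binary masks (not paper-specific code):

```python
# DSC = 2|A ∩ B| / (|A| + |B|) for binary segmentation masks.
import numpy as np

def dice(pred: np.ndarray, truth: np.ndarray, eps: float = 1e-8) -> float:
    pred, truth = pred.astype(bool), truth.astype(bool)
    inter = np.logical_and(pred, truth).sum()
    return (2.0 * inter + eps) / (pred.sum() + truth.sum() + eps)

a = np.zeros((64, 64)); a[16:48, 16:48] = 1    # predicted lesion mask
b = np.zeros((64, 64)); b[20:52, 20:52] = 1    # ground-truth mask
print(round(dice(a, b), 3))
```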
Collapse
|
23
|
Chen H, Guo S, Hao Y, Fang Y, Fang Z, Wu W, Liu Z, Li S. Auxiliary Diagnosis for COVID-19 with Deep Transfer Learning. J Digit Imaging 2021; 34:231-241. [PMID: 33634413 PMCID: PMC7906243 DOI: 10.1007/s10278-021-00431-8] [Citation(s) in RCA: 8] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/03/2020] [Revised: 01/21/2021] [Accepted: 02/02/2021] [Indexed: 12/30/2022] Open
Abstract
To assist physicians in identifying COVID-19 and its manifestations through automatic COVID-19 recognition and classification in chest CT images with deep transfer learning. In this retrospective study, the chest CT image dataset covered 422 subjects: 72 confirmed COVID-19 subjects (260 studies, 30,171 images); 252 other pneumonia subjects (252 studies, 26,534 images), comprising 158 viral pneumonia subjects and 94 pulmonary tuberculosis subjects; and 98 normal subjects (98 studies, 29,838 images). In the experiment, subjects were split into training (70%), validation (15%) and testing (15%) sets. We utilized the convolutional blocks of ResNets pretrained on public social image collections and modified the top fully connected layer to suit our task (COVID-19 recognition). In addition, we tested the proposed method on a fine-grained classification task in which the COVID-19 images were further split into three main manifestations (ground-glass opacity, 12,924 images; consolidation, 7,418 images; and fibrotic streaks, 7,338 images). The same 70%-15%-15% data partitioning strategy was adopted. The best performance, obtained by the pretrained ResNet50 model, was 94.87% sensitivity, 88.46% specificity and 91.21% accuracy for COVID-19 versus all other groups, with an overall accuracy of 89.01% for the three-category classification on the testing set. Consistent performance was observed on the image-level COVID-19 manifestation classification task, where the best overall accuracy of 94.08% and an AUC of 0.993 were obtained by the pretrained ResNet18 (P < 0.05). All the proposed models achieved satisfactory performance and are thus promising in both practical and statistical terms. Transfer learning is worth exploring for the recognition and classification of COVID-19 on CT images with limited training data. It not only achieved higher sensitivity (COVID-19 vs the rest) but also took far less time than radiologists, so it is expected to provide auxiliary diagnoses and reduce radiologists' workload.
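The fine-tuning recipe the abstract describes, reusing pretrained convolutional blocks and replacing only the top fully connected layer, is a few lines in modern frameworks. A hedged sketch (ImageNet weights stand in for the paper's pretraining source; the split sizes mirror the 70/15/15 strategy on dummy data):

```python
# Hedged sketch: ResNet-50 with a new 3-way head
# (COVID-19 / other pneumonia / normal) and a 70/15/15 subject split.
import torch
import torch.nn as nn
from torch.utils.data import TensorDataset, random_split
from torchvision import models

model = models.resnet50(weights="IMAGENET1K_V2")
model.fc = nn.Linear(model.fc.in_features, 3)     # task-specific top layer

data = TensorDataset(torch.randn(100, 3, 224, 224), torch.randint(0, 3, (100,)))
train_set, val_set, test_set = random_split(data, [70, 15, 15])
```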
Collapse
Affiliation(s)
- Hongtao Chen
- The Cancer Center of The Fifth Affiliated Hospital of Sun Yat-Sen University, Zhuhai, 519000, Guangdong, China
| | - Shuanshuan Guo
- The Cancer Center of The Fifth Affiliated Hospital of Sun Yat-Sen University, Zhuhai, 519000, Guangdong, China
| | - Yanbin Hao
- School of Data Science, University of Science and Technology of China, Hefei, 230026, Anhui, China.
- Department of Computer Science, City University of Hong Kong, Hong Kong, 999077, China.
| | - Yijie Fang
- Department of Radiology, The Fifth Affiliated Hospital of Sun Yat-sen University, Zhuhai, 519000, Guangdong, China
| | - Zhaoxiong Fang
- The Cancer Center of The Fifth Affiliated Hospital of Sun Yat-Sen University, Zhuhai, 519000, Guangdong, China
| | - Wenhao Wu
- Department of Radiology, The Fifth Affiliated Hospital of Sun Yat-sen University, Zhuhai, 519000, Guangdong, China
| | - Zhigang Liu
- The Cancer Center of The Fifth Affiliated Hospital of Sun Yat-Sen University, Zhuhai, 519000, Guangdong, China
| | - Shaolin Li
- Department of Radiology, The Fifth Affiliated Hospital of Sun Yat-sen University, Zhuhai, 519000, Guangdong, China.
| |
Collapse
|
24
|
Das N, Topalovic M, Janssens W. AIM in Respiratory Disorders. Artif Intell Med 2021. [DOI: 10.1007/978-3-030-58080-3_178-1] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/20/2022]
|
25
|
Zhou T, Tan T, Pan X, Tang H, Li J. Fully automatic deep learning trained on limited data for carotid artery segmentation from large image volumes. Quant Imaging Med Surg 2021; 11:67-83. [PMID: 33392012 PMCID: PMC7719941 DOI: 10.21037/qims-20-286] [Citation(s) in RCA: 17] [Impact Index Per Article: 4.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/15/2020] [Accepted: 07/21/2020] [Indexed: 11/06/2022]
Abstract
BACKGROUND The objectives of this study were to develop a 3D convolutional deep learning framework (CarotidNet) for fully automatic segmentation of carotid bifurcations in computed tomography angiography (CTA) images, and thereby to facilitate the quantification of carotid stenosis and risk assessment of stroke. METHODS Our pipeline was a two-stage cascade network comprising a localization phase and a segmentation phase. The network framework was based on the 3D version of U-Net, refined in three ways: (I) adding residual connections and a deep supervision strategy to cope with the vanishing gradient problem in back-propagation; (II) adopting dilated convolution to strengthen the capacity to capture contextual information; and (III) establishing a hybrid objective function to address the extreme imbalance between foreground and background voxels. RESULTS We trained our networks on 15 cases and evaluated their performance on 41 cases from the MICCAI Challenge 2009 dataset. A Dice similarity coefficient of 82.3% was achieved on the test cases. CONCLUSIONS We developed a U-Net-based carotid segmentation method that can segment tiny carotid bifurcation lumens from very large backgrounds with no manual intervention. This was the first attempt to use deep learning for carotid bifurcation segmentation in 3D CTA images. Our results indicate that deep learning is a promising method for automatically extracting carotid bifurcation lumens.
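Refinement (III) is the one most easily made concrete. A hedged sketch of a hybrid objective of that kind, soft-Dice plus cross-entropy against foreground/background imbalance (the weighting and exact formulation are illustrative, not necessarily the authors'):

```python
# Hedged sketch: hybrid Dice + BCE loss for heavily imbalanced voxel masks.
import torch
import torch.nn.functional as F

def hybrid_loss(logits, target, w_dice=0.5, eps=1e-6):
    """logits: (N,1,D,H,W) raw scores; target: same shape, binary float mask."""
    bce = F.binary_cross_entropy_with_logits(logits, target)
    p = torch.sigmoid(logits)
    inter = (p * target).sum()
    dice = 1 - (2 * inter + eps) / (p.sum() + target.sum() + eps)
    return w_dice * dice + (1 - w_dice) * bce

loss = hybrid_loss(torch.randn(1, 1, 8, 32, 32), torch.zeros(1, 1, 8, 32, 32))
```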
Collapse
Affiliation(s)
- Tianshu Zhou
- Engineering Research Center of EMR and Intelligent Expert System, Ministry of Education, Key Laboratory for Biomedical Engineering of Ministry of Education, College of Biomedical Engineering and Instrument Science, Zhejiang University, Hangzhou, China
| | - Tao Tan
- Department of Mathematics and Computer Science, Eindhoven University of Technology and Radiology, Netherlands Cancer Institute, Amsterdam, the Netherlands
| | - Xiaoyan Pan
- Engineering Research Center of EMR and Intelligent Expert System, Ministry of Education, Key Laboratory for Biomedical Engineering of Ministry of Education, College of Biomedical Engineering and Instrument Science, Zhejiang University, Hangzhou, China
| | - Hui Tang
- Biomedical Imaging Group Rotterdam, Departments of Radiology and Medical Informatics, Erasmus MC, 3000 CA Rotterdam, the Netherlands
| | - Jingsong Li
- Engineering Research Center of EMR and Intelligent Expert System, Ministry of Education, Key Laboratory for Biomedical Engineering of Ministry of Education, College of Biomedical Engineering and Instrument Science, Zhejiang University, Hangzhou, China
- Research Center for Healthcare Data Science, Zhejiang Lab, Hangzhou, China
| |
Collapse
|
26
|
Suri JS, Puvvula A, Majhail M, Biswas M, Jamthikar AD, Saba L, Faa G, Singh IM, Oberleitner R, Turk M, Srivastava S, Chadha PS, Suri HS, Johri AM, Nambi V, Sanches JM, Khanna NN, Viskovic K, Mavrogeni S, Laird JR, Bit A, Pareek G, Miner M, Balestrieri A, Sfikakis PP, Tsoulfas G, Protogerou A, Misra DP, Agarwal V, Kitas GD, Kolluri R, Teji J, Porcu M, Al-Maini M, Agbakoba A, Sockalingam M, Sexena A, Nicolaides A, Sharma A, Rathore V, Viswanathan V, Naidu S, Bhatt DL. Integration of cardiovascular risk assessment with COVID-19 using artificial intelligence. Rev Cardiovasc Med 2020; 21:541-560. [PMID: 33387999 DOI: 10.31083/j.rcm.2020.04.236] [Citation(s) in RCA: 14] [Impact Index Per Article: 2.8] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/02/2020] [Revised: 12/03/2020] [Accepted: 12/08/2020] [Indexed: 11/06/2022] Open
Abstract
Artificial Intelligence (AI), in general, refers to machines (or computers) that mimic "cognitive" functions we associate with the human mind, such as "learning" and "problem solving". New biomarkers derived from medical imaging are being discovered and then fused with non-imaging biomarkers (such as office, laboratory, physiological, genetic, epidemiological, and clinical-based biomarkers) in a big data framework to develop AI systems. These systems can support risk prediction and monitoring. This narrative perspective surveys powerful AI methods for tracking cardiovascular risk. We conclude that AI could become an integral part of the COVID-19 disease management system. Countries, large and small, should join hands with the WHO in building biobanks that allow scientists around the world to build AI-based platforms for cardiovascular risk assessment during COVID-19 times and for long-term follow-up of survivors.
Collapse
Affiliation(s)
- Jasjit S Suri
- Stroke Monitoring and Diagnostic Division, AtheroPoint™, Roseville, 95747, CA, USA
| | - Anudeep Puvvula
- Stroke Monitoring and Diagnostic Division, AtheroPoint™, Roseville, 95747, CA, USA
- Annu's Hospitals for Skin and Diabetes, Nellore, 524001, AP, India
| | - Misha Majhail
- Stroke Monitoring and Diagnostic Division, AtheroPoint™, Roseville, 95747, CA, USA
- Oakmount High School and AtheroPoint™, Roseville, 95747, CA, USA
| | | | - Ankush D Jamthikar
- Department of ECE, Visvesvaraya National Institute of Technology, Nagpur, 440010, MH, India
| | - Luca Saba
- Department of Radiology, Azienda Ospedaliero Universitaria (A.O.U.), 09100, Cagliari, Italy
| | - Gavino Faa
- Department of Pathology, 09100, AOU of Cagliari, Italy
| | - Inder M Singh
- Stroke Monitoring and Diagnostic Division, AtheroPoint™, Roseville, 95747, CA, USA
| | | | - Monika Turk
- The Hanse-Wissenschaftskolleg Institute for Advanced Study, 27749, Delmenhorst, Germany
| | - Saurabh Srivastava
- School of Computing Science & Engineering, Galgotias University, 201301, Gr. Noida, India
| | - Paramjit S Chadha
- Stroke Monitoring and Diagnostic Division, AtheroPoint™, Roseville, 95747, CA, USA
| | | | - Amer M Johri
- Department of Medicine, Division of Cardiology, Queen's University, Kingston, B0P 1R0, Ontario, Canada
| | - Vijay Nambi
- Department of Cardiology, Baylor College of Medicine, 77001, TX, USA
| | - J Miguel Sanches
- Institute of Systems and Robotics, Instituto Superior Tecnico, 1000-001, Lisboa, Portugal
| | - Narendra N Khanna
- Department of Cardiology, Indraprastha APOLLO Hospitals, 110001, New Delhi, India
| | | | - Sophie Mavrogeni
- Cardiology Clinic, Onassis Cardiac Surgery Center, 104 31, Athens, Greece
| | - John R Laird
- Heart and Vascular Institute, Adventist Health St. Helena, St Helena, 94574, CA, USA
| | - Arindam Bit
- Department of Biomedical Engineering, NIT, Raipur, 783334, CG, India
| | - Gyan Pareek
- Minimally Invasive Urology Institute, Brown University, Providence, 02901, Rhode Island, USA
| | - Martin Miner
- Men's Health Center, Miriam Hospital Providence, 02901, Rhode Island, USA
| | - Antonella Balestrieri
- Department of Radiology, Azienda Ospedaliero Universitaria (A.O.U.), 09100, Cagliari, Italy
| | - Petros P Sfikakis
- Rheumatology Unit, National Kapodistrian University of Athens, 104 31, Greece
| | - George Tsoulfas
- Aristoteleion University of Thessaloniki, 544 53, Thessaloniki, Greece
| | | | - Durga Prasanna Misra
- Sanjay Gandhi Postgraduate Institute of Medical Sciences, Lucknow, 226001, UP, India
| | - Vikas Agarwal
- Sanjay Gandhi Postgraduate Institute of Medical Sciences, Lucknow, 226001, UP, India
| | - George D Kitas
- Academic Affairs, Dudley Group NHS Foundation Trust, DY1, Dudley, UK
- Arthritis Research UK Epidemiology Unit, Manchester University, M13, Manchester, UK
| | | | - Jagjit Teji
- Ann and Robert H. Lurie Children's Hospital of Chicago, 60601, Chicago, USA
| | - Michele Porcu
- Department of Radiology, Azienda Ospedaliero Universitaria (A.O.U.), 09100, Cagliari, Italy
| | - Mustafa Al-Maini
- Allergy, Clinical Immunology and Rheumatology Institute, M3H 6A7, Toronto, Canada
| | | | | | - Ajit Sexena
- Department of Cardiology, Indraprastha APOLLO Hospitals, 110001, New Delhi, India
| | - Andrew Nicolaides
- Vascular Screening and Diagnostic Centre and University of Nicosia Medical School, 999058, Cyprus
| | - Aditya Sharma
- Division of Cardiovascular Medicine, University of Virginia, Charlottesville, 22901, VA, USA
| | - Vijay Rathore
- Nephrology Department, Kaiser Permanente, Sacramento, 94203, CA, USA
| | - Vijay Viswanathan
- MV Hospital for Diabetes and Professor M Viswanathan Diabetes Research Centre, 600001, Chennai, India
| | - Subbaram Naidu
- Electrical Engineering Department, University of Minnesota, Duluth, 55801, MN, USA
| | - Deepak L Bhatt
- Brigham and Women's Hospital Heart & Vascular Center, Harvard Medical School, Boston, 02108, MA, USA
| |
Collapse
|
27
|
Huang F, Tan T, Dashtbozorg B, Zhou Y, Romeny BMTH. From Local to Global: A Graph Framework for Retinal Artery/Vein Classification. IEEE Trans Nanobioscience 2020; 19:589-597. [PMID: 32746331 DOI: 10.1109/tnb.2020.3004481] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/08/2022]
Abstract
Fundus photography has been widely used by ophthalmologists and computer algorithms for inspecting eye disorders. Biomarkers related to retinal vessels play an essential role in detecting early diabetes, and quantifying vascular biomarkers or their changes requires accurate artery and vein classification. In this work, we propose a new framework that boosts local vessel classification with a global vascular network model using graph convolution. We compare our proposed method with two traditional state-of-the-art methods on a testing dataset of 750 images from the Maastricht Study. After incorporating global information, our model achieves the best accuracy of 86.45%, compared to 85.5% from convolutional neural networks (CNN) and 82.9% from handcrafted pixel feature classification (HPFC). Our model also obtains the best area under the receiver operating characteristic curve (AUC) of 0.95, compared to 0.93 from CNN and 0.90 from HPFC. The new classification framework has the advantage of easy deployment on top of local classification features: it corrects local classification errors by minimizing the global classification error, bringing additional classification performance essentially for free.
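The local-to-global step can be illustrated on a toy vascular graph: per-segment CNN scores are propagated over the vessel adjacency structure with one normalized graph-convolution pass (synthetic data; not the paper's model):

```python
# Hedged sketch: smooth local P(artery) scores over a 4-segment vessel graph.
import numpy as np

A = np.array([[0, 1, 0, 0],                  # adjacency of connected segments
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
A_hat = A + np.eye(4)                         # add self-loops
D_inv_sqrt = np.diag(1 / np.sqrt(A_hat.sum(axis=1)))
local = np.array([[0.9], [0.4], [0.6], [0.2]])        # local CNN scores
global_scores = D_inv_sqrt @ A_hat @ D_inv_sqrt @ local  # one GCN-style pass
print(global_scores.ravel())
```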
Collapse
|
28
|
Liu X, Wang C, Bai J, Liao G. Fine-tuning Pre-trained Convolutional Neural Networks for Gastric Precancerous Disease Classification on Magnification Narrow-band Imaging Images. Neurocomputing 2020. [DOI: 10.1016/j.neucom.2018.10.100] [Citation(s) in RCA: 9] [Impact Index Per Article: 1.8] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/27/2022]
|
29
|
Abstract
Women’s cancers remain a major challenge for many health systems. Between 1991 and 2017, the death rate for all major cancers fell continuously in the United States, excluding uterine cervix and uterine corpus cancers. Together with HPV (Human Papillomavirus) testing and cytology, colposcopy has played a central role in cervical cancer screening. This medical procedure allows physicians to view the cervix at a magnification of up to 10×. This paper presents an automated colposcopy image analysis framework for the classification of precancerous and cancerous lesions of the uterine cervix. The framework is based on an ensemble of MobileNetV2 networks. Our experimental results show that this method achieves accuracies of 83.33% and 91.66% on the four-class and binary classification tasks, respectively. These results are promising for the future use of deep-learning-based automatic classification methods as tools to support medical doctors.
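A hedged sketch of the ensembling pattern the abstract names, averaging the softmax outputs of several independently fine-tuned MobileNetV2 members (member count, head, and the four-class setting below are illustrative; training is omitted):

```python
# Hedged sketch: MobileNetV2 ensemble by softmax averaging, 4 cervigram classes.
import torch
import torch.nn as nn
from torchvision import models

def make_member(num_classes=4):
    m = models.mobilenet_v2(weights="IMAGENET1K_V1")
    m.classifier[1] = nn.Linear(m.last_channel, num_classes)  # new head
    return m.eval()

ensemble = [make_member() for _ in range(3)]
x = torch.randn(1, 3, 224, 224)               # dummy colposcopy image
with torch.no_grad():
    probs = torch.stack([m(x).softmax(dim=1) for m in ensemble]).mean(dim=0)
print(probs.argmax(dim=1))                    # ensemble class decision
```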
Collapse
|
30
|
Zhang T, Luo YM, Li P, Liu PZ, Du YZ, Sun P, Dong B, Xue H. Cervical precancerous lesions classification using pre-trained densely connected convolutional networks with colposcopy images. Biomed Signal Process Control 2020. [DOI: 10.1016/j.bspc.2019.101566] [Citation(s) in RCA: 34] [Impact Index Per Article: 6.8] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 01/06/2023]
|
31
|
Masood A, Yang P, Sheng B, Li H, Li P, Qin J, Lanfranchi V, Kim J, Feng DD. Cloud-Based Automated Clinical Decision Support System for Detection and Diagnosis of Lung Cancer in Chest CT. IEEE JOURNAL OF TRANSLATIONAL ENGINEERING IN HEALTH AND MEDICINE 2019; 8:4300113. [PMID: 31929952 PMCID: PMC6946021 DOI: 10.1109/jtehm.2019.2955458] [Citation(s) in RCA: 27] [Impact Index Per Article: 4.5] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 05/26/2019] [Revised: 09/02/2019] [Accepted: 11/08/2019] [Indexed: 12/29/2022]
Abstract
Lung cancer is a major cause of cancer-related deaths. Detecting pulmonary cancer in the early stages can greatly increase the survival rate, but manual delineation of lung nodules by radiologists is a tedious task. We developed a novel computer-aided decision support system for lung nodule detection based on a 3D Deep Convolutional Neural Network (3DDCNN) to assist radiologists, providing a second opinion in lung cancer diagnostic decision making. To leverage the 3-dimensional information in Computed Tomography (CT) scans, we applied median intensity projection and a multi-Region Proposal Network (mRPN) for automatic selection of potential regions of interest. Our Computer Aided Diagnosis (CAD) system was trained and validated using the LUNA16, ANODE09, and LIDC-IDRI datasets; the experiments demonstrate the superior performance of our system, attaining a sensitivity, specificity, AUROC, and accuracy of 98.4%, 92%, 96%, and 98.51%, respectively, with 2.1 FPs per scan. We then integrated cloud computing and trained and validated our cloud-based 3DDCNN on datasets provided by Shanghai Sixth People's Hospital as well as LUNA16, ANODE09, and LIDC-IDRI. Our system outperformed the state-of-the-art systems and obtained an impressive 98.7% sensitivity at 1.97 FPs per scan. This shows the potential of deep learning, in combination with cloud computing, for accurate and efficient lung nodule detection via CT imaging, which could help doctors and radiologists in treating lung cancer patients.
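The median intensity projection step is easy to state concretely: collapse the CT volume along one axis by taking the per-pixel median, yielding a 2D image that a downstream region proposal network can consume. A minimal version (standard operation, not the paper's code):

```python
# Median intensity projection of a CT volume along the slice axis.
import numpy as np

volume = np.random.rand(64, 512, 512)      # (slices, height, width) dummy CT
mip_median = np.median(volume, axis=0)     # median intensity projection
print(mip_median.shape)                    # (512, 512)
```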
Collapse
Affiliation(s)
- Anum Masood
- Department of Computer Science and Engineering, Shanghai Jiao Tong University, Shanghai, 200240, China
| | - Po Yang
- Department of Computer Science, University of Sheffield, Sheffield, S1 4DP, U.K.
| | - Bin Sheng
- Department of Computer Science and Engineering, Shanghai Jiao Tong University, Shanghai, 200240, China
| | - Huating Li
- Shanghai Jiao Tong University Affiliated Sixth People's Hospital, Shanghai, 200233, China
| | - Ping Li
- Department of Computing, The Hong Kong Polytechnic University, Hong Kong
| | - Jing Qin
- Centre for Smart Health, School of Nursing, The Hong Kong Polytechnic University, Hong Kong
| | | | - Jinman Kim
- Biomedical and Multimedia Information Technology Research Group, School of Information Technologies, The University of Sydney, Sydney, NSW, 2006, Australia
| | - David Dagan Feng
- Biomedical and Multimedia Information Technology Research Group, School of Information Technologies, The University of Sydney, Sydney, NSW, 2006, Australia
| |
Collapse
|
32
|
Sun Y, Shan C, Tan T, Tong T, Wang W, Pourtaherian A, de With PHN. Detecting discomfort in infants through facial expressions. Physiol Meas 2019; 40:115006. [PMID: 31703212 DOI: 10.1088/1361-6579/ab55b3] [Citation(s) in RCA: 5] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/11/2022]
Abstract
OBJECTIVE Detecting discomfort in infants is clinically important: late treatment of discomfort can lead to adverse outcomes such as abnormal brain development, central nervous system damage, and changes in the responsiveness of the neuroendocrine and immune systems to stress at maturity. In this study, we exploit deep convolutional neural network (CNN) algorithms to detect discomfort in infants by analyzing their facial expressions. APPROACH A dataset of 55 facial-expression videos, recorded from 24 infants, is used in our study. Given the limited data available for training, we employ a pre-trained CNN model, fine-tune it on a public dataset with labeled facial expressions (the shoulder-pain dataset), and then further refine the network on our infant data. MAIN RESULTS Using two-fold cross-validation, we achieve an area under the curve (AUC) of 0.96, substantially higher than the result without any pre-training steps (AUC = 0.77). Our method also outperforms an existing method based on handcrafted features. By fusing individual frame results, the AUC is further improved from 0.96 to 0.98. SIGNIFICANCE The proposed system has great potential for continuous discomfort and pain monitoring in clinical practice.
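The frame-fusion step that lifts the AUC from 0.96 to 0.98 can be made concrete: pool per-frame CNN discomfort scores into one video-level score. Mean pooling and the 0.5 threshold below are assumptions for illustration; the paper's fusion rule may differ:

```python
# Hedged sketch: fuse per-frame discomfort scores into a video-level decision.
import numpy as np

frame_scores = np.array([0.2, 0.8, 0.9, 0.7, 0.3])  # CNN output per frame
video_score = frame_scores.mean()                   # fused video-level score
print(video_score > 0.5)                            # discomfort decision
```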
Collapse
Affiliation(s)
- Yue Sun
- Eindhoven University of Technology, Eindhoven, 5612 WH, The Netherlands
| | | | | | | | | | | | | |
Collapse
|