1. Dalky A, Altawalbih M, Alshanik F, Khasawneh RA, Tawalbeh R, Al-Dekah AM, Alrawashdeh A, Quran TO, ALBashtawy M. Global Research Trends, Hotspots, Impacts, and Emergence of Artificial Intelligence and Machine Learning in Health and Medicine: A 25-Year Bibliometric Analysis. Healthcare (Basel) 2025; 13:892. [PMID: 40281841] [PMCID: PMC12026717] [DOI: 10.3390/healthcare13080892]
Abstract
Background/Objectives: The increasing application of artificial intelligence (AI) and machine learning (ML) in health and medicine has attracted considerable research interest in recent decades. This study aims to provide a global, historical picture of research on AI and ML in health and medicine. Methods: We searched the Scopus database and extracted articles published between 2000 and 2024, then generated information about productivity, citations, collaboration, the most impactful research topics, emerging research topics, and author keywords using Microsoft Excel 365 and VOSviewer (version 1.6.20). Results: We retrieved a total of 22,113 research articles, with a notable surge in research activity in recent years. Core journals were Scientific Reports and IEEE Access, core institutions included Harvard Medical School and the Ministry of Education of the People's Republic of China, and core countries comprised the United States, China, India, the United Kingdom, and Saudi Arabia. Citation trends indicated substantial growth in the recognition of AI and ML's impact on health and medicine. Frequent author keywords identified key research hotspots, including specific diseases such as Alzheimer's disease, Parkinson's disease, COVID-19, and diabetes. The author keyword analysis identified "deep learning", "convolutional neural network", and "classification" as dominant research themes. Conclusions: The transformative potential of AI and ML in health and medicine holds promise for improving global health outcomes.
Affiliation(s)
- Alaa Dalky
- Department of Health Management and Policy, Faculty of Medicine, Jordan University of Science and Technology, Irbid 22110, Jordan
- Mahmoud Altawalbih
- Department of Allied Medical Sciences, Faculty of Applied Medical Sciences, Jordan University of Science and Technology, Irbid 22110, Jordan
- Farah Alshanik
- Department of Computer Science, Faculty of Computer & Information Technology, Jordan University of Science and Technology, Irbid 22110, Jordan
- Rawand A. Khasawneh
- Department of Clinical Pharmacy, Faculty of Pharmacy, Jordan University of Science and Technology, Irbid 22110, Jordan
- Rawan Tawalbeh
- Department of Allied Medical Sciences, Faculty of Applied Medical Sciences, Jordan University of Science and Technology, Irbid 22110, Jordan
- Arwa M. Al-Dekah
- Department of Biotechnology and Genetic Engineering, Faculty of Science and Arts, Jordan University of Science and Technology, Irbid 22110, Jordan
- Ahmad Alrawashdeh
- Department of Allied Medical Sciences, Faculty of Applied Medical Sciences, Jordan University of Science and Technology, Irbid 22110, Jordan
- Tamara O. Quran
- Department of Health Management and Policy, Faculty of Medicine, Jordan University of Science and Technology, Irbid 22110, Jordan
- Mohammed ALBashtawy
- Department of Community and Mental Health Nursing, Princess Salma Faculty of Nursing, Al al-Bayt University, Mafraq 25113, Jordan
2. Hatamoto D, Yamakawa M, Shiina T. Improving ultrasound image classification accuracy of liver tumors using deep learning model with hepatitis virus infection information. J Med Ultrason (2001) 2025:10.1007/s10396-025-01528-1. [PMID: 40205118] [DOI: 10.1007/s10396-025-01528-1]
Abstract
PURPOSE In recent years, computer-aided diagnosis (CAD) using deep learning methods for medical images has been widely studied. Although studies have classified ultrasound images of liver tumors into four categories (liver cysts (Cyst), liver hemangiomas (Hemangioma), hepatocellular carcinoma (HCC), and metastatic liver cancer (Meta)), no studies have reported supplying additional clinical information to the deep learning model. We therefore attempted to improve the classification accuracy of ultrasound images of hepatic tumors by adding hepatitis virus infection information to deep learning. METHODS One of four combinations of hepatitis virus infection information (HBs antigen positive or negative, and HCV antibody positive or negative) was assigned to each image, and classification accuracy was compared before and after this information was input and weighted in the fully connected layers. RESULTS With the addition of hepatitis virus infection information, accuracy changed from 0.574 to 0.643. The F1-scores for Cyst, Hemangioma, HCC, and Meta changed from 0.87 to 0.88, 0.55 to 0.57, 0.46 to 0.59, and 0.54 to 0.62, respectively. CONCLUSION Learning hepatitis virus infection information yielded the largest F1-score increase for HCC, improving the classification accuracy of ultrasound images of hepatic tumors.
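The fusion step described above can be sketched minimally: two binary infection flags are appended to the pooled image-feature vector so that a fully connected layer can weight them alongside the CNN features. This is an illustrative stdlib-Python sketch, not the authors' code; `fuse_features` and `linear_layer` are hypothetical names, and the real model feeds the fused vector into trained fully connected layers rather than a single randomly weighted unit.

```python
import random

def fuse_features(image_features, hbs_positive, hcv_positive):
    """Append binary hepatitis-virus flags to a pooled CNN feature vector,
    covering the four combinations: HBs antigen +/- and HCV antibody +/-."""
    return list(image_features) + [1.0 if hbs_positive else 0.0,
                                   1.0 if hcv_positive else 0.0]

def linear_layer(features, weights, bias):
    """One fully connected unit: weighted sum of the fused vector plus bias."""
    return sum(f * w for f, w in zip(features, weights)) + bias

random.seed(0)
img_feat = [0.3, -1.2, 0.8]                      # stand-in for pooled CNN features
fused = fuse_features(img_feat, hbs_positive=False, hcv_positive=True)
weights = [random.gauss(0, 1) for _ in fused]    # untrained, illustrative weights
score = linear_layer(fused, weights, bias=0.1)
print(len(fused), round(score, 3))
```

The clinical flags simply become extra input dimensions, so the network can learn, for example, to raise the HCC score when HCV antibody is positive.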
Affiliation(s)
- Daisuke Hatamoto
- Graduate School of Medicine, Kyoto University, 53 Shogoin Kawahara-Cho, Sakyo-Ku, Kyoto, 606-8397, Japan
- Makoto Yamakawa
- Graduate School of Medicine, Kyoto University, 53 Shogoin Kawahara-Cho, Sakyo-Ku, Kyoto, 606-8397, Japan
- SIT Research Laboratories, Shibaura Institute of Technology, Tokyo, Japan
- Tsuyoshi Shiina
- Graduate School of Medicine, Kyoto University, 53 Shogoin Kawahara-Cho, Sakyo-Ku, Kyoto, 606-8397, Japan
- SIT Research Laboratories, Shibaura Institute of Technology, Tokyo, Japan
3. Zhang Q, Shao D, Lin L, Gong G, Xu R, Kido S, Cui H. Feature Separation in Diffuse Lung Disease Image Classification by Using Evolutionary Algorithm-Based NAS. IEEE J Biomed Health Inform 2025; 29:2706-2717. [PMID: 39405149] [DOI: 10.1109/jbhi.2024.3481012]
Abstract
In the field of diagnosing lung diseases, the application of neural networks (NNs) to image classification exhibits significant potential. However, NNs are considered "black boxes", making it difficult to discern their decision-making processes and leading to skepticism and concern regarding NNs. This compromises model reliability and hampers the development of intelligent medicine. To tackle this issue, we introduce Evolutionary Neural Architecture Search (EvoNAS). In image classification tasks, EvoNAS initially uses an evolutionary algorithm to explore various convolutional neural networks, ultimately yielding an optimized network that excels at separating redundant texture features from the most discriminative ones. Retaining the most discriminative features improves classification accuracy, particularly in distinguishing similar features. This approach illuminates the intrinsic mechanics of classification, thereby enhancing the accuracy of the results. Subsequently, we incorporate a differential evolution algorithm based on distribution estimation, significantly enhancing search efficiency. Employing visualization techniques, we demonstrate the effectiveness of EvoNAS, endowing the model with interpretability. Finally, we conduct experiments on the diffuse lung disease texture dataset using EvoNAS. Compared with the original network, classification accuracy increases by 0.56%. Moreover, EvoNAS demonstrates significant advantages over existing methods on the same dataset.
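The differential evolution component can be illustrated with the classic DE/rand/1/bin loop on a toy objective. This is a generic sketch with standard hyperparameters, not the paper's distribution-estimation variant or architecture-search encoding.

```python
import random

def differential_evolution(fitness, dim, pop_size=20, F=0.8, CR=0.9,
                           generations=100, seed=42):
    """Minimal DE/rand/1/bin: mutate with a scaled difference of two random
    members, binomial crossover, then keep the better of trial vs. target."""
    rng = random.Random(seed)
    pop = [[rng.uniform(-5, 5) for _ in range(dim)] for _ in range(pop_size)]
    for _ in range(generations):
        for i in range(pop_size):
            a, b, c = rng.sample([p for j, p in enumerate(pop) if j != i], 3)
            j_rand = rng.randrange(dim)          # force at least one mutated gene
            trial = [a[k] + F * (b[k] - c[k])
                     if (rng.random() < CR or k == j_rand) else pop[i][k]
                     for k in range(dim)]
            if fitness(trial) <= fitness(pop[i]):
                pop[i] = trial                   # greedy one-to-one selection
    return min(pop, key=fitness)

sphere = lambda x: sum(v * v for v in x)         # toy objective to minimize
best = differential_evolution(sphere, dim=3)
print(round(sphere(best), 6))
```

In a NAS setting the real fitness would be a trained network's validation accuracy rather than this toy function.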
4. Shanmugam K, Rajaguru H. Enhanced Superpixel-Guided ResNet Framework with Optimized Deep-Weighted Averaging-Based Feature Fusion for Lung Cancer Detection in Histopathological Images. Diagnostics (Basel) 2025; 15:805. [PMID: 40218155] [PMCID: PMC11989018] [DOI: 10.3390/diagnostics15070805]
Abstract
Background/Objectives: Lung cancer is a leading cause of cancer-related mortality, and early diagnosis is crucial for survival. While biopsy is the gold standard, manual histopathological analysis is time-consuming. This research enhances lung cancer diagnosis through deep learning-based feature extraction, fusion, optimization, and classification for improved accuracy and efficiency. Methods: The study begins with image preprocessing using an adaptive fuzzy filter, followed by segmentation with a modified simple linear iterative clustering (SLIC) algorithm. The segmented images are input into deep learning architectures, specifically ResNet-50 (RN-50), ResNet-101 (RN-101), and ResNet-152 (RN-152), for feature extraction. The extracted features are fused using a deep-weighted averaging-based feature fusion (DWAFF) technique, producing ResNet-X (RN-X) fused features. To further refine these features, particle swarm optimization (PSO) and red deer optimization (RDO) are employed within the selective feature pooling layer. The optimized features are classified using various machine learning classifiers, including support vector machine (SVM), decision tree (DT), random forest (RF), K-nearest neighbor (KNN), SoftMax discriminant classifier (SDC), Bayesian linear discriminant analysis classifier (BLDC), and multilayer perceptron (MLP). Performance is evaluated using K-fold cross-validation with K values of 2, 4, 5, 8, and 10. Results: The proposed DWAFF technique, combined with feature selection using RDO and classification with MLP, achieved the highest classification accuracy of 98.68% with K = 10 cross-validation. The RN-X features outperformed the individual ResNet variants, and the integration of segmentation and optimization significantly enhanced classification accuracy. Conclusions: The proposed methodology automates lung cancer classification using deep learning, feature fusion, optimization, and advanced classification techniques. Segmentation and feature selection enhance performance, improving diagnostic accuracy. Future work may explore further optimizations and hybrid models.
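Reduced to its core, weighted-averaging fusion combines same-length feature vectors from the three backbones with one normalized weight per backbone. A minimal sketch with arbitrary stand-in weights; the paper's DWAFF presumably derives its weights from the deep models rather than fixing them by hand.

```python
def weighted_average_fusion(feature_sets, weights):
    """Fuse same-length feature vectors from several backbones by a
    normalized weighted average (one weight per backbone)."""
    total = sum(weights)
    norm = [w / total for w in weights]
    return [sum(w * feats[i] for w, feats in zip(norm, feature_sets))
            for i in range(len(feature_sets[0]))]

rn50  = [0.2, 0.9, 0.4]   # stand-ins for pooled ResNet-50/101/152 features
rn101 = [0.1, 0.7, 0.6]
rn152 = [0.3, 0.8, 0.5]
fused = weighted_average_fusion([rn50, rn101, rn152], weights=[1.0, 2.0, 3.0])
print([round(v, 3) for v in fused])
```

The fused vector keeps the original dimensionality, so downstream selection (e.g., PSO/RDO) and classifiers operate on it unchanged.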
5. Hernández-Vázquez N, Santos-Arce SR, Hernández-Gordillo D, Salido-Ruiz RA, Torres-Ramos S, Román-Godínez I. Fibrous Tissue Semantic Segmentation in CT Images of Diffuse Interstitial Lung Disease. J Imaging Inform Med 2025:10.1007/s10278-025-01420-x. [PMID: 39904943] [DOI: 10.1007/s10278-025-01420-x]
Abstract
Assessing interstitial lung disease progression and diagnosis via radiological findings on computed tomography (CT) images requires significant time and effort from expert physicians, and accurate results are critical for treatment decisions. Automatic semantic segmentation of radiological findings has recently been developed using convolutional neural networks (CNNs). However, few works report individual performance scores per radiological finding, which would allow fibrosis segmentation performance to be measured accurately, and the poor annotation quality of available databases may mislead researchers. This study presents a CNN methodology employing three architectures (U-Net, LinkNet, and FPN) with transfer learning and data augmentation to enhance semantic segmentation of fibrosis-related radiological findings (FRF). In addition, given the poor quality of manual CT tagging in available datasets, we use two alternative evaluation strategies: first, evaluating only the fibrosis region of interest; second, having an expert pulmonologist re-tag and validate the test set. Using DICOM images from the Interstitial Lung Diseases Database, the implemented approach achieves a Jaccard index of 0.7355 (standard deviation 0.0699) and a Dice similarity coefficient of 0.8459 (standard deviation 0.0470), comparable to state-of-the-art performance in FRF semantic segmentation. A pulmonologist also visually evaluated the images automatically tagged by our proposal: the method successfully identifies FRF areas, demonstrating its effectiveness, and the review revealed discrepancies in the dataset tags, indicating deficiencies in the FRF annotations.
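For reference, the Jaccard index and Dice coefficient reported above are standard overlap measures between a predicted and a reference binary mask. Generic metric code, unrelated to the authors' pipeline:

```python
def jaccard_and_dice(pred, truth):
    """Overlap scores for two flat binary masks of equal length."""
    inter = sum(p and t for p, t in zip(pred, truth))
    p_sum, t_sum = sum(pred), sum(truth)
    union = p_sum + t_sum - inter
    jaccard = inter / union if union else 1.0
    dice = 2 * inter / (p_sum + t_sum) if (p_sum + t_sum) else 1.0
    return jaccard, dice

pred  = [1, 1, 0, 1, 0, 0]   # toy predicted fibrosis mask (flattened)
truth = [1, 0, 0, 1, 1, 0]   # toy reference mask
j, d = jaccard_and_dice(pred, truth)
print(round(j, 3), round(d, 3))
```

The two metrics are monotonically related (Dice = 2J / (1 + J)), which is why papers often report both from the same segmentations.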
Affiliation(s)
- Natanael Hernández-Vázquez
- División de Tecnologías para la Integración Ciber-Humana, CUCEI-Universidad de Guadalajara, Blvd. Marcelino García Barragán 1421, Olímpica, 44430, Guadalajara, Jalisco, Mexico
- Stewart R Santos-Arce
- División de Tecnologías para la Integración Ciber-Humana, CUCEI-Universidad de Guadalajara, Blvd. Marcelino García Barragán 1421, Olímpica, 44430, Guadalajara, Jalisco, Mexico
- Daniel Hernández-Gordillo
- UMAE, Hospital de Especialidades, CMNO, Av. Belisario Domínguez 1000 Col. Independencia, Guadalajara, 44340, Jalisco, Mexico
- Ricardo A Salido-Ruiz
- División de Tecnologías para la Integración Ciber-Humana, CUCEI-Universidad de Guadalajara, Blvd. Marcelino García Barragán 1421, Olímpica, 44430, Guadalajara, Jalisco, Mexico
- Sulema Torres-Ramos
- División de Tecnologías para la Integración Ciber-Humana, CUCEI-Universidad de Guadalajara, Blvd. Marcelino García Barragán 1421, Olímpica, 44430, Guadalajara, Jalisco, Mexico
- Israel Román-Godínez
- División de Tecnologías para la Integración Ciber-Humana, CUCEI-Universidad de Guadalajara, Blvd. Marcelino García Barragán 1421, Olímpica, 44430, Guadalajara, Jalisco, Mexico
6. Choe J, Hwang HJ, Lee SM, Yoon J, Kim N, Seo JB. CT Quantification of Interstitial Lung Abnormality and Interstitial Lung Disease: From Technical Challenges to Future Directions. Invest Radiol 2025; 60:43-52. [PMID: 39008898] [DOI: 10.1097/rli.0000000000001103]
Abstract
Interstitial lung disease (ILD) encompasses a variety of lung disorders with varying degrees of inflammation or fibrosis, requiring a combination of clinical, imaging, and pathologic data for evaluation. Imaging is essential for noninvasive diagnosis, as well as for assessing disease severity, monitoring progression, and evaluating treatment response. However, traditional visual assessment of ILD with computed tomography (CT) suffers from reader variability. Automated quantitative CT offers a more objective approach by using computer-based analysis to consistently evaluate and measure ILD, and advances in technology have significantly improved the accuracy and reliability of these measurements. Recently, interstitial lung abnormalities (ILAs), potential preclinical ILD found incidentally on CT scans and characterized by abnormalities in over 5% of any lung zone, have gained attention and clinical importance. The challenge lies in identifying ILA accurately and consistently, given that its definition relies on a subjective threshold, making quantitative tools crucial for precise ILA evaluation. This review summarizes the state of CT quantification of ILD and ILA, addresses disparities between clinical practice and research, emphasizes how machine learning and deep learning in quantitative imaging can improve diagnosis and management by providing more accurate assessments, and suggests future directions for quantitative CT in this area.
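The ILA criterion mentioned above (abnormalities in over 5% of any lung zone) lends itself to a simple quantitative check once abnormal voxels have been segmented. A toy sketch under the assumption that per-zone binary masks are already available; zone partitioning and abnormality segmentation are the hard parts a real pipeline must supply.

```python
def flag_ila(zone_masks, threshold=0.05):
    """Return (positive, per-zone fractions): positive when the abnormal
    voxel fraction exceeds `threshold` in any lung zone."""
    fractions = {zone: sum(mask) / len(mask)
                 for zone, mask in zone_masks.items()}
    return any(f > threshold for f in fractions.values()), fractions

zones = {
    "upper": [0] * 97 + [1] * 3,   # 3% abnormal voxels
    "lower": [0] * 92 + [1] * 8,   # 8% abnormal voxels, exceeds 5%
}
positive, fractions = flag_ila(zones)
print(positive, fractions)
```

Quantitative tools matter here precisely because a visual estimate of "more than 5%" is subjective, while a voxel count is reproducible.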
Affiliation(s)
- Jooae Choe
- From the Department of Radiology and Research Institute of Radiology, University of Ulsan College of Medicine, Asan Medical Center, Seoul, South Korea (J.C., H.J.H., S.M.L., J.Y., N.K., J.B.S.); and Department of Convergence Medicine, Biomedical Engineering Research Center, University of Ulsan College of Medicine, Asan Medical Center, Seoul, South Korea (J.Y. and N.K.)
7. Kavousinejad S, Ebadifar A, Tehranchi A, Zakermashhadi F, Dalaie K. Determination of cervical vertebral maturation using machine learning in lateral cephalograms. J Dent Res Dent Clin Dent Prospects 2024; 18:232-241. [PMID: 39895683] [PMCID: PMC11786010] [DOI: 10.34172/joddd.41114]
Abstract
Background Accurate timing of growth modification treatment is crucial for optimal results in orthodontics. However, traditional methods for assessing growth status, such as hand-wrist radiographs and subjective interpretation of lateral cephalograms, have limitations. This study aimed to develop a semi-automated machine learning approach based on cervical vertebral dimensions (CVD) for determining skeletal maturation status. Methods A dataset of 980 lateral cephalograms was collected from the Department of Orthodontics, Shahid Beheshti Dental School in Tehran, Iran. Eight landmarks representing the corners of the third and fourth cervical vertebrae were selected. A ratio-based approach was employed to compute values for C3 and C4, with an auto_error_reduction (AER) function implemented to improve the accuracy of landmark selection. Linear distances and ratios were measured using dedicated software. A novel data augmentation technique was applied to expand the dataset. A stacking model was then developed, trained on the augmented dataset, and evaluated on a separate test set of 196 cephalograms. Results The proposed model achieved an accuracy of 99.49% and a loss of 0.003 on the test set. Conclusion Through feature engineering, simplified landmark selection, the AER function, data augmentation, and the elimination of gender and age features, a model was developed for accurate assessment of skeletal maturation in clinical applications.
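A ratio-based vertebral feature of the kind described is scale-invariant by construction, which is why no calibration of the cephalogram is needed. An illustrative sketch with hypothetical ratio definitions (the paper's exact C3/C4 ratios are not specified here):

```python
import math

def vertebra_ratios(corners):
    """Scale-invariant shape ratios from the four corner landmarks of one
    cervical vertebra, given as (x, y) points ordered upper-left,
    upper-right, lower-left, lower-right."""
    ul, ur, ll, lr = corners
    upper_width = math.dist(ul, ur)
    lower_width = math.dist(ll, lr)
    left_height = math.dist(ul, ll)
    right_height = math.dist(ur, lr)
    return {
        "width_ratio": upper_width / lower_width,
        "height_to_width": (left_height + right_height) / (2 * lower_width),
    }

c3 = [(0.0, 0.0), (10.0, 0.0), (0.0, 8.0), (10.0, 8.0)]  # toy rectangular body
print(vertebra_ratios(c3))
```

Features like these, computed for C3 and C4, would then feed the stacking classifier; any units (pixels or millimeters) cancel in the ratios.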
Affiliation(s)
- Shahab Kavousinejad
- Dentofacial Deformities Research Center, Research Institute for Dental Sciences, School of Dentistry, Shahid Beheshti University of Medical Sciences, Tehran, Iran
- Department of Orthodontics, School of Dentistry, Shahid Beheshti University of Medical Sciences, Tehran, Iran
- Asghar Ebadifar
- Dentofacial Deformities Research Center, Research Institute for Dental Sciences, School of Dentistry, Shahid Beheshti University of Medical Sciences, Tehran, Iran
- Azita Tehranchi
- Dentofacial Deformities Research Center, Research Institute for Dental Sciences, School of Dentistry, Shahid Beheshti University of Medical Sciences, Tehran, Iran
- Farzan Zakermashhadi
- School of Dentistry, Shahid Beheshti University of Medical Sciences, Tehran, Iran
- Kazem Dalaie
- Dentofacial Deformities Research Center, Research Institute for Dental Sciences, School of Dentistry, Shahid Beheshti University of Medical Sciences, Tehran, Iran
- Department of Orthodontics, School of Dentistry, Shahid Beheshti University of Medical Sciences, Tehran, Iran
8. Agarwal S, Arya KV, Kumar Meena Y. CNN-O-ELMNet: Optimized Lightweight and Generalized Model for Lung Disease Classification and Severity Assessment. IEEE Trans Med Imaging 2024; 43:4200-4210. [PMID: 38896522] [DOI: 10.1109/tmi.2024.3416744]
Abstract
The high burden of lung diseases on healthcare necessitates effective detection methods. Current computer-aided diagnosis (CAD) systems are limited by their focus on specific diseases and by computationally demanding deep learning models. To overcome these challenges, we introduce CNN-O-ELMNet, a lightweight classification model designed to efficiently detect various lung diseases, surpassing the limitations of disease-specific CAD systems and the complexity of deep learning models. The model combines a convolutional neural network for deep feature extraction with an optimized extreme learning machine, utilizing the imperialistic competitive algorithm for enhanced predictions. We evaluated CNN-O-ELMNet on benchmark lung disease datasets: pneumothorax vs. non-pneumothorax, tuberculosis vs. normal, and lung cancer vs. healthy. CNN-O-ELMNet significantly outperformed (p < 0.05) state-of-the-art methods in binary classification for tuberculosis and cancer, achieving accuracies of 97.85% and 97.7%, respectively, while maintaining low computational complexity with only 2481 trainable parameters. We also extended the model to categorize lung disease severity based on Brixia scores. Achieving 96.2% accuracy in the multi-class assessment of mild, moderate, and severe cases makes it suitable for deployment in lightweight healthcare devices.
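The extreme learning machine at the heart of such a model trains only its output layer: hidden weights are random and fixed, and output weights are obtained in closed form by least squares, which is what keeps the trainable parameter count so low. A self-contained toy sketch in pure Python (regularized normal equations solved by Gauss-Jordan elimination); the actual model's imperialistic-competitive-algorithm optimization is omitted here.

```python
import math
import random

def solve(A, b):
    """Gauss-Jordan elimination for a small square system A x = b."""
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for col in range(n):
        pivot = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[pivot] = M[pivot], M[col]
        for r in range(n):
            if r != col:
                f = M[r][col] / M[col][col]
                M[r] = [v - f * u for v, u in zip(M[r], M[col])]
    return [M[i][n] / M[i][i] for i in range(n)]

def elm_train(X, y, hidden=8, seed=0, ridge=1e-6):
    """Extreme learning machine: a random, fixed tanh hidden layer followed
    by output weights solved in closed form via regularized normal equations."""
    rng = random.Random(seed)
    W = [[rng.gauss(0, 1) for _ in X[0]] for _ in range(hidden)]
    b = [rng.gauss(0, 1) for _ in range(hidden)]

    def hidden_out(x):
        # Random projection + tanh, with a constant bias unit appended.
        return [math.tanh(sum(wi * xi for wi, xi in zip(w, x)) + bi)
                for w, bi in zip(W, b)] + [1.0]

    H = [hidden_out(x) for x in X]
    k = hidden + 1
    HtH = [[sum(H[r][i] * H[r][j] for r in range(len(H)))
            + (ridge if i == j else 0.0) for j in range(k)] for i in range(k)]
    Hty = [sum(H[r][i] * y[r] for r in range(len(H))) for i in range(k)]
    beta = solve(HtH, Hty)
    return lambda x: sum(h * bt for h, bt in zip(hidden_out(x), beta))

X = [[0.0], [0.2], [0.4], [0.6], [0.8], [1.0]]
y = [0.0, 0.0, 0.0, 1.0, 1.0, 1.0]               # simple threshold target
predict = elm_train(X, y)
print([round(predict(x), 2) for x in X])
```

In the paper's setting, X would hold CNN-extracted deep features rather than raw scalars, and the optimizer would tune the ELM rather than leaving the hidden layer purely random.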
9. Djahnine A, Jupin-Delevaux E, Nempont O, Si-Mohamed SA, Craighero F, Cottin V, Douek P, Popoff A, Boussel L. Weakly-supervised learning-based pathology detection and localization in 3D chest CT scans. Med Phys 2024; 51:8272-8282. [PMID: 39140793] [DOI: 10.1002/mp.17302]
Abstract
BACKGROUND Recent advances in anomaly detection have paved the way for novel radiological reading assistance tools that support the identification of findings and aim to save time. Clinical adoption of such applications requires a low false-positive rate together with high sensitivity. PURPOSE In light of recent interest and development in multi-pathology identification, we present a novel method, based on a recent contrastive self-supervised approach, for identifying multiple chest-related abnormalities: low lung density area ("LLDA"), consolidation ("CONS"), nodules ("NOD"), and interstitial pattern ("IP"). Our approach alerts radiologists to abnormal regions within a computed tomography (CT) scan by providing 3D localization. METHODS We introduce a new method for the classification and localization of multiple chest pathologies in 3D chest CT scans, distinguishing four common abnormalities plus the normal class: "LLDA", "CONS", "NOD", "IP", and "NORMAL". The method is based on a 3D patch-based classifier with a ResNet backbone encoder pretrained with a recent contrastive self-supervised approach and a fine-tuned classification head. We leverage the SimCLR contrastive framework for pretraining on an unannotated dataset of randomly selected patches and then fine-tune it on a labeled dataset. During inference, the classifier generates probability maps for each abnormality across the CT volume, which are aggregated to produce a multi-label patient-level prediction. We compare different training strategies: random initialization, ImageNet weight initialization, frozen SimCLR pretrained weights, and fine-tuned SimCLR pretrained weights. Each strategy is evaluated on a validation set for hyperparameter selection and tested on a test set. Additionally, we explore the fine-tuned SimCLR pretrained classifier for 3D pathology localization and conduct a qualitative evaluation.
RESULTS Validated on 111 chest scans for hyperparameter selection and subsequently tested on 251 chest scans with multiple abnormalities, our method achieves an area under the receiver operating characteristic curve (AUROC) of 0.931 (95% confidence interval [CI]: [0.9034, 0.9557], p-value < 0.001) and 0.963 (95% CI: [0.952, 0.976], p-value < 0.001) in the multi-label and binary (i.e., normal versus abnormal) settings, respectively. Notably, the method surpasses an AUROC of 0.9 for two abnormalities, IP (0.974) and LLDA (0.952), while achieving 0.853 and 0.791 for NOD and CONS, respectively. Furthermore, our results highlight the benefit of contrastive pretraining within the patch classifier, which outperforms ImageNet pretrained weights and non-pretrained counterparts with uninitialized weights (F1 score = 0.943, 0.792, and 0.677, respectively). Qualitatively, the method achieved a satisfactory 88.8% completeness rate in localization and maintained an 88.3% accuracy rate against false positives. CONCLUSIONS The proposed method integrates self-supervised pretraining, a patch-based approach for 3D pathology localization, and an aggregation method for patient-level multi-label prediction. It shows promise for efficiently detecting and localizing multiple anomalies within a single scan.
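The patient-level aggregation step can be sketched as follows. The abstract states only that patch probabilities are aggregated, so max-pooling over patches with per-label thresholds is an assumption made for illustration, not the paper's exact rule.

```python
def aggregate_patient_prediction(patch_probs, thresholds):
    """Patient-level multi-label call from per-patch probabilities: take the
    max probability over all patches for each abnormality, then threshold.
    The patient is 'abnormal' if any label fires."""
    labels = list(thresholds)
    max_prob = {lab: max(p[lab] for p in patch_probs) for lab in labels}
    positive = {lab: max_prob[lab] >= thresholds[lab] for lab in labels}
    return positive, any(positive.values())

patches = [                                  # toy per-patch probability maps
    {"LLDA": 0.10, "CONS": 0.05, "NOD": 0.20, "IP": 0.92},
    {"LLDA": 0.30, "CONS": 0.15, "NOD": 0.85, "IP": 0.40},
]
positive, abnormal = aggregate_patient_prediction(
    patches, thresholds={"LLDA": 0.5, "CONS": 0.5, "NOD": 0.5, "IP": 0.5})
print(positive, abnormal)
```

Because each patch keeps its 3D coordinates, the same per-patch probabilities that drive the patient-level call can also be rendered as a localization map.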
Affiliation(s)
- Aissam Djahnine
- CREATIS UMR5220, INSERM U1044, Claude Bernard University Lyon 1, INSA, Lyon, France
- Philips Health Technology innovation, Paris, France
- Salim Aymeric Si-Mohamed
- CREATIS UMR5220, INSERM U1044, Claude Bernard University Lyon 1, INSA, Lyon, France
- Department of Radiology, Hospices Civils de Lyon, Lyon, France
- Vincent Cottin
- National Reference Center for Rare Pulmonary Diseases, Louis Pradel Hospital, Lyon, France
- Claude Bernard University Lyon 1, Lyon, France
- Philippe Douek
- CREATIS UMR5220, INSERM U1044, Claude Bernard University Lyon 1, INSA, Lyon, France
- Department of Radiology, Hospices Civils de Lyon, Lyon, France
- Loic Boussel
- CREATIS UMR5220, INSERM U1044, Claude Bernard University Lyon 1, INSA, Lyon, France
- Department of Radiology, Hospices Civils de Lyon, Lyon, France
10. Kuang Q, Feng B, Xu K, Chen Y, Chen X, Duan X, Lei X, Chen X, Li K, Long W. Multimodal deep learning radiomics model for predicting postoperative progression in solid stage I non-small cell lung cancer. Cancer Imaging 2024; 24:140. [PMID: 39420411] [PMCID: PMC11487701] [DOI: 10.1186/s40644-024-00783-8]
Abstract
PURPOSE To explore the value of a multimodal deep learning radiomics (MDLR) model in predicting the risk of postoperative progression in solid stage I non-small cell lung cancer (NSCLC). MATERIALS AND METHODS A total of 459 patients with histologically confirmed solid stage I NSCLC who underwent surgical resection at our institution from January 2014 to September 2019 were reviewed retrospectively. At another medical center, 104 patients were reviewed as an external validation cohort according to the same criteria. A univariate analysis was conducted on the clinicopathological characteristics and subjective CT findings of the progression and non-progression groups. The characteristics and findings that exhibited significant differences were used as input variables for an extreme learning machine (ELM) classifier to construct the clinical model. We used a transfer learning strategy to train the ResNet18 model, used it to extract deep learning features from all CT images, and then used the ELM classifier to classify these features and obtain the deep learning signature (DLS). An MDLR model incorporating clinicopathological characteristics, subjective CT findings, and DLS was constructed. The diagnostic efficiency of the clinical, DLS, and MDLR models was evaluated by the area under the curve (AUC). RESULTS Univariate analysis indicated that size (p = 0.004), neuron-specific enolase (NSE) (p = 0.03), carbohydrate antigen 19-9 (CA199) (p = 0.003), and pathological stage (p = 0.027) were significantly associated with progression of solid stage I NSCLC after surgery; these clinical characteristics were therefore incorporated into the clinical model. A total of 294 deep learning features with nonzero coefficients were selected. The DLS in the progression group (0.721 ± 0.371) was higher than in the non-progression group (0.113 ± 0.350) (p < 0.001). The combination of size, NSE, CA199, pathological stage, and DLS demonstrated superior performance in differentiating postoperative progression status. The AUC of the MDLR model was 0.885 (95% confidence interval [CI]: 0.842-0.927), higher than that of the clinical model (0.675, 95% CI: 0.599-0.752) and the DLS model (0.882, 95% CI: 0.835-0.929). The DeLong test and decision curve analysis revealed that the MDLR model was the most predictive and clinically useful. CONCLUSION The MDLR model is effective in predicting the risk of postoperative progression of solid stage I NSCLC and is helpful for the treatment and follow-up of these patients.
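AUC values like those reported can be computed from raw scores with the rank-sum (Mann-Whitney U) formulation. This is a generic metric implementation, not anything specific to the MDLR model:

```python
def auroc(scores, labels):
    """Area under the ROC curve via the rank-sum formulation, with ties
    handled through midranks; labels are 1 (positive) or 0 (negative)."""
    order = sorted(range(len(scores)), key=lambda i: scores[i])
    ranks = [0.0] * len(scores)
    i = 0
    while i < len(order):                     # assign 1-based midranks
        j = i
        while j + 1 < len(order) and scores[order[j + 1]] == scores[order[i]]:
            j += 1
        mid = (i + j) / 2 + 1
        for k in range(i, j + 1):
            ranks[order[k]] = mid
        i = j + 1
    pos = [r for r, lab in zip(ranks, labels) if lab == 1]
    n_pos, n_neg = len(pos), len(labels) - len(pos)
    u = sum(pos) - n_pos * (n_pos + 1) / 2    # Mann-Whitney U statistic
    return u / (n_pos * n_neg)

scores = [0.9, 0.8, 0.7, 0.6, 0.55, 0.3]     # toy model outputs
labels = [1,   1,   0,   1,   0,    0]       # toy progression labels
print(round(auroc(scores, labels), 3))
```

The result equals the probability that a randomly chosen progression case is scored above a randomly chosen non-progression case, which is the interpretation behind comparisons such as the DeLong test.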
Affiliation(s)
- Qionglian Kuang
- Department of Radiology, Hainan General Hospital, 19#, Xiuhua Road, Xiuying District, Haikou, Hainan Province, 570311, PR China
- Bao Feng
- Laboratory of Artificial Intelligence of Biomedicine, Guilin University of Aerospace Technology, Guilin City, Guangxi Province, 541004, China
- Kuncai Xu
- Laboratory of Artificial Intelligence of Biomedicine, Guilin University of Aerospace Technology, Guilin City, Guangxi Province, 541004, China
- Yehang Chen
- Laboratory of Artificial Intelligence of Biomedicine, Guilin University of Aerospace Technology, Guilin City, Guangxi Province, 541004, China
- Xiaojuan Chen
- Department of Radiology, Jiangmen Central Hospital, 23#, North Road, Pengjiang Zone, Jiangmen, Guangdong Province, 529030, PR China
- Xiaobei Duan
- Department of Nuclear Medicine, Jiangmen Central Hospital, Jiangmen, Guangdong Province, 529030, PR China
- Xiaoyan Lei
- Department of Radiology, Hainan General Hospital, 19#, Xiuhua Road, Xiuying District, Haikou, Hainan Province, 570311, PR China
- Xiangmeng Chen
- Department of Radiology, Jiangmen Central Hospital, 23#, North Road, Pengjiang Zone, Jiangmen, Guangdong Province, 529030, PR China
- Kunwei Li
- Department of Radiology, The Fifth Affiliated Hospital of Sun Yat-sen University, Zhuhai, Guangdong Province, 519000, PR China
- Wansheng Long
- Department of Radiology, Jiangmen Central Hospital, 23#, North Road, Pengjiang Zone, Jiangmen, Guangdong Province, 529030, PR China
11. Zhao J, Long Y, Li S, Li X, Zhang Y, Hu J, Han L, Ren L. Use of artificial intelligence algorithms to analyse systemic sclerosis-interstitial lung disease imaging features. Rheumatol Int 2024; 44:2027-2041. [PMID: 39207588] [PMCID: PMC11393027] [DOI: 10.1007/s00296-024-05681-7]
Abstract
The use of artificial intelligence (AI) in high-resolution computed tomography (HRCT) for diagnosing systemic sclerosis-associated interstitial lung disease (SSc-ILD) is relatively limited. This study aimed to analyse lung HRCT images of patients with SSc-ILD using AI, conduct correlation analysis with clinical manifestations and prognosis, and explore the features and prognosis of SSc-ILD. Overall, 72 lung HRCT images and clinical data of 58 patients with SSc-ILD were collected. ILD lesion type, location, and volume on HRCT images were identified and evaluated using AI. The imaging characteristics of diffuse SSc-ILD (dSSc-ILD) and limited SSc-ILD (lSSc-ILD) were statistically analysed. Furthermore, the correlations between lesion type, clinical indicators, and prognosis were investigated. dSSc and lSSc were more prevalent in patients with a disease duration of < 1 and ≥ 5 years, respectively. SSc-ILD mainly comprises non-specific interstitial pneumonia (NSIP), usual interstitial pneumonia (UIP), and unclassifiable idiopathic interstitial pneumonia. HRCT revealed various lesion types in the early stages of the disease, with the number of lesion types increasing as the disease progressed. Lesions appearing as grid, ground-glass, and nodular shadows were dispersed throughout both lungs, while those appearing as consolidation shadows and honeycombing were distributed across the lungs. The ground-glass opacity lesion type was absent on HRCT images of patients with SSc-ILD and pulmonary hypertension. This study showed that AI can efficiently analyse imaging characteristics of SSc-ILD, demonstrating its potential to learn from complex images with high generalisation ability.
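The AI pipeline described above reports lesion type, location, and volume from segmented HRCT. As a minimal sketch of the volume-quantification step, the snippet below computes per-lesion-type volume fractions from a voxel-wise label mask; the label codes, the toy mask, and the voxel size are illustrative assumptions, not the study's data.

```python
# Sketch: per-lesion-type volume fractions from a voxel label mask, the kind
# of summary an AI HRCT segmentation pipeline might report.
# Label codes and the flattened toy mask below are illustrative assumptions.
from collections import Counter

LABELS = {0: "background", 1: "ground_glass", 2: "reticular", 3: "honeycombing"}

def lesion_volume_fractions(mask, voxel_volume_mm3=1.0):
    """Return {type: volume} and each lesion type's share of lesion volume."""
    counts = Counter(mask)
    volumes = {LABELS[k]: n * voxel_volume_mm3 for k, n in counts.items()}
    lesion_total = sum(n for k, n in counts.items() if k != 0) * voxel_volume_mm3
    fractions = {name: vol / lesion_total
                 for name, vol in volumes.items() if name != "background"}
    return volumes, fractions

mask = [0] * 90 + [1] * 6 + [2] * 3 + [3] * 1   # flattened toy mask
volumes, fractions = lesion_volume_fractions(mask)
```

A real pipeline would derive the mask from a segmentation network and use the scanner's voxel spacing; the counting step itself is as simple as shown.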
Affiliation(s)
- Jing Zhao
- Department of Rheumatology, People's Hospital of Xiangxi Tujia and Miao Autonomous Prefecture (The First Affiliated Hospital of Jishou University), Intersection of Shiji Avenue and Jianxin Road, Jishou, 416000, Hunan, People's Republic of China
- Ying Long
- Department of Rheumatology, Xiangya Hospital of Central South University, Changsha, People's Republic of China
- Provincial Clinical Research Center for Rheumatic and Immunologic Diseases, Xiangya Hospital of Central South University, Changsha, People's Republic of China
- Shengtao Li
- Department of Urology, People's Hospital of Xiangxi Tujia and Miao Autonomous Prefecture (The First Affiliated Hospital of Jishou University), Jishou, 416000, Hunan, People's Republic of China
- Xiaozhen Li
- Department of Rheumatology, People's Hospital of Xiangxi Tujia and Miao Autonomous Prefecture (The First Affiliated Hospital of Jishou University), Intersection of Shiji Avenue and Jianxin Road, Jishou, 416000, Hunan, People's Republic of China
- Yi Zhang
- Department of Rheumatology, People's Hospital of Xiangxi Tujia and Miao Autonomous Prefecture (The First Affiliated Hospital of Jishou University), Intersection of Shiji Avenue and Jianxin Road, Jishou, 416000, Hunan, People's Republic of China
- Juan Hu
- Department of Rheumatology, People's Hospital of Xiangxi Tujia and Miao Autonomous Prefecture (The First Affiliated Hospital of Jishou University), Intersection of Shiji Avenue and Jianxin Road, Jishou, 416000, Hunan, People's Republic of China
- Lin Han
- Department of Imaging, People's Hospital of Xiangxi Tujia and Miao Autonomous Prefecture (The First Affiliated Hospital of Jishou University), Jishou, 416000, Hunan, People's Republic of China
- Li Ren
- Department of Rheumatology, People's Hospital of Xiangxi Tujia and Miao Autonomous Prefecture (The First Affiliated Hospital of Jishou University), Intersection of Shiji Avenue and Jianxin Road, Jishou, 416000, Hunan, People's Republic of China
12. Choi G, Ham S, Je BK, Rhie YJ, Ahn KS, Shim E, Lee MJ. Olecranon bone age assessment in puberty using a lateral elbow radiograph and a deep-learning model. Eur Radiol 2024; 34:6396-6406. PMID: 38676732; DOI: 10.1007/s00330-024-10748-x.
Abstract
OBJECTIVES To improve pubertal bone age (BA) evaluation by developing a precise and practical elbow BA classification based on the olecranon, together with a deep-learning AI model. MATERIALS AND METHODS Lateral elbow radiographs taken for BA evaluation in children under 18 years were retrospectively collected from January 2020 to June 2022. A novel classification and the olecranon BA were established based on the morphological changes in the olecranon ossification process during puberty. The olecranon BA was compared with other elbow and hand BA methods using intraclass correlation coefficients (ICCs), and a deep-learning AI model was developed. RESULTS A total of 3508 lateral elbow radiographs (mean age 9.8 ± 1.8 years) were collected. The olecranon BA showed the highest applicability (100%) and interobserver agreement (ICC 0.993) among elbow BA methods. It showed excellent reliability with the Sauvegrain (0.967 in girls, 0.969 in boys) and Dimeglio (0.978 in girls, 0.978 in boys) elbow BA methods, as well as the Korean standard (KS) hand BA in boys (0.917), and good reliability with KS in girls (0.896) and the Greulich-Pyle (GP)/Tanner-Whitehouse (TW)3 hand BA methods (0.835 in girls, 0.895 in boys). The AI model for olecranon BA showed an accuracy of 0.96 and a specificity of 0.98 with EfficientDet-b4. External validation showed an accuracy of 0.86 and a specificity of 0.91. CONCLUSION Olecranon BA evaluation for puberty, requiring only a lateral elbow radiograph, showed the highest applicability and interobserver agreement, excellent reliability with other BA evaluation methods, and high performance of the AI model. CLINICAL RELEVANCE STATEMENT This AI model uses a single lateral elbow radiograph to determine bone age for puberty from the olecranon ossification center and can improve pubertal bone age assessment with the highest applicability and excellent reliability compared to previous methods.
KEY POINTS Elbow bone age is valuable for pubertal bone age assessment, but conventional methods have limitations. Olecranon bone age and its AI model showed high performance for pubertal bone age assessment. The olecranon bone age system is practical and accurate while requiring only a single lateral elbow radiograph.
Affiliation(s)
- Gayoung Choi
- Department of Radiology, Korea University Ansan Hospital, Korea University College of Medicine, Seoul, Korea
- Sungwon Ham
- Healthcare Readiness Institute for Unified Korea, Korea University Ansan Hospital, Korea University College of Medicine, Seoul, Korea
- Bo-Kyung Je
- Department of Radiology, Korea University Ansan Hospital, Korea University College of Medicine, Seoul, Korea
- Young-Jun Rhie
- Department of Pediatrics, Korea University Ansan Hospital, Korea University College of Medicine, Seoul, Korea
- Kyung-Sik Ahn
- Department of Radiology, Korea University Anam Hospital, Korea University College of Medicine, Seoul, Korea
- Euddeum Shim
- Department of Radiology, Korea University Ansan Hospital, Korea University College of Medicine, Seoul, Korea
- Mi-Jung Lee
- Department of Radiology and Research Institute of Radiological Science, Severance Hospital, Yonsei University College of Medicine, Seoul, Korea
13. Amorim FG, Dos Santos ER, Yuji Verrastro CG, Kayser C. Quantitative chest computed tomography predicts mortality in systemic sclerosis: A longitudinal study. PLoS One 2024; 19:e0310892. PMID: 39331602; PMCID: PMC11432915; DOI: 10.1371/journal.pone.0310892.
Abstract
OBJECTIVE Quantitative chest computed tomography (qCT) methods are new tools that objectively measure parenchymal abnormalities and vascular features on CT images in patients with interstitial lung disease (ILD). We aimed to investigate whether qCT measures are predictors of 5-year mortality in patients with systemic sclerosis (SSc). METHODS Patients diagnosed with SSc were retrospectively selected from 2011 to 2022. Patients were required to have volumetric high-resolution CTs (HRCTs) and pulmonary function tests (PFTs) performed at baseline and at 24 months of follow-up. The following parameters were evaluated in HRCTs using Computer-Aided Lung Informatics for Pathology Evaluation and Rating (CALIPER): ground glass opacities, reticular pattern, honeycombing, and pulmonary vascular volume. Factors associated with death were evaluated by Kaplan‒Meier survival curves and multivariate analysis models. Semiquantitative analysis of the HRCT images was also performed. RESULTS Seventy-one patients were included (mean age, 54.2 years). Eleven patients (15.49%) died during the follow-up, all of whom had ILD. As shown by Kaplan‒Meier curves, survival was worse among patients with an ILD extent (ground glass opacities + reticular pattern + honeycombing) ≥ 6.32%, a reticular pattern ≥ 1.41%, and a forced vital capacity (FVC) < 70% at baseline. The independent predictors of mortality by multivariate analysis were a higher reticular pattern (Exp 2.70, 95% CI 1.26-5.82) on qCT at baseline, younger age (Exp 0.906, 95% CI 0.826-0.995), and absolute FVC decline ≥ 5% at follow-up (Exp 15.01, 95% CI 1.90-118.5), but not baseline FVC. Patients with extensive disease (>20% extension) by semiquantitative analysis according to Goh's staging system had higher disease extension on qCT at baseline and follow-up.
CONCLUSION This study showed that the reticular pattern assessed by baseline qCT may be a useful tool in clinical practice for assessing lung damage and predicting mortality in SSc.
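Survival comparisons like the Kaplan‒Meier curves above rest on the product-limit estimator. The sketch below implements it from scratch; the follow-up times and event indicators are toy values, not the study's data.

```python
# Sketch: Kaplan-Meier product-limit estimator, the statistic behind the
# survival curves described above. Times/events are toy values.
def kaplan_meier(times, events):
    """times: follow-up times; events: 1 = death observed, 0 = censored.
    Returns [(event_time, survival_probability)] at each death time."""
    data = sorted(zip(times, events))
    surv, curve = 1.0, []
    for t in sorted(set(times)):
        deaths = sum(1 for tt, e in data if tt == t and e == 1)
        n_at_risk = sum(1 for tt, _ in data if tt >= t)
        if deaths:
            surv *= 1.0 - deaths / n_at_risk   # step down at each death time
            curve.append((t, surv))
    return curve

times  = [6, 12, 12, 18, 24, 30, 36, 60]   # months of follow-up (toy)
events = [1,  1,  0,  1,  0,  1,  0,  0]   # 1 = death, 0 = censored
curve = kaplan_meier(times, events)
```

Comparing curves between groups (e.g., reticular pattern above vs. below a cutoff) would then add a log-rank test on top of this estimator.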
Affiliation(s)
- Fernanda Godinho Amorim
- Rheumatology Division, Escola Paulista de Medicina, Universidade Federal de São Paulo (UNIFESP), São Paulo, Brazil
- Ernandez Rodrigues Dos Santos
- Department of Radiology, Escola Paulista de Medicina, Universidade Federal de São Paulo (UNIFESP), São Paulo, Brazil
- Carlos Gustavo Yuji Verrastro
- Department of Radiology, Escola Paulista de Medicina, Universidade Federal de São Paulo (UNIFESP), São Paulo, Brazil
- Cristiane Kayser
- Rheumatology Division, Escola Paulista de Medicina, Universidade Federal de São Paulo (UNIFESP), São Paulo, Brazil
14. Liu Z, Yin R, Ma W, Li Z, Guo Y, Wu H, Lin Y, Chekhonin VP, Peltzer K, Li H, Mao M, Jian X, Zhang C. Bone metastasis prediction in non-small-cell lung cancer: primary CT-based radiomics signature and clinical feature. BMC Med Imaging 2024; 24:203. PMID: 39103775; DOI: 10.1186/s12880-024-01383-5.
Abstract
BACKGROUND Radiomics provides opportunities to quantify the tumor phenotype non-invasively. This study extracted contrast-enhanced computed tomography (CECT) radiomic signatures and evaluated clinical features of bone metastasis in non-small-cell lung cancer (NSCLC). Combining the revealed radiomics and clinical features, a predictive model for bone metastasis in NSCLC was established. METHODS A total of 318 patients with NSCLC at the Tianjin Medical University Cancer Institute & Hospital were enrolled between January 2009 and December 2019, comprising a feature-learning cohort (n = 223) and a validation cohort (n = 95). We trained a radiomics model on CECT images from the feature-learning cohort to extract the radiomics features of bone metastasis in NSCLC. The Kruskal-Wallis test and least absolute shrinkage and selection operator (LASSO) regression were used to select bone metastasis-related features and construct the CT radiomics score (Rad-score). Multivariate logistic regression was performed on the combination of the Rad-score and clinical data, and a predictive nomogram was subsequently developed. RESULTS Radiomics models using CECT scans were significant for bone metastasis prediction in NSCLC, and model performance improved as each source of information was added. The radiomics nomogram achieved an AUC of 0.745 (95% confidence interval [CI]: 0.68-0.80) for predicting bone metastasis in the training set and an AUC of 0.808 (95% CI: 0.71-0.88) in the validation set. CONCLUSION The revealed image features, invisible to the naked eye, were significant in guiding bone metastasis prediction in NSCLC. Based on the combination of image features and clinical characteristics, a predictive nomogram was established, which can be used for auxiliary screening of bone metastasis in NSCLC.
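In studies of this kind the Rad-score is typically a weighted sum of the LASSO-selected features, which the nomogram then folds into a logistic model together with clinical covariates. A minimal numeric sketch of that combination follows; all weights, coefficients, and features are invented for illustration and are not the study's fitted values.

```python
import math

# Sketch: a radiomics score (Rad-score) as a linear combination of
# LASSO-selected features, combined with a clinical covariate in a logistic
# model, as radiomics nomograms generally work. All numbers are illustrative.
def rad_score(features, weights, intercept=0.0):
    """Weighted sum of selected (standardized) radiomics features."""
    return intercept + sum(w * x for w, x in zip(weights, features))

def metastasis_probability(rad, clinical_stage, b0=-2.0, b_rad=1.5, b_stage=0.8):
    """Logistic combination of the Rad-score and one clinical covariate."""
    z = b0 + b_rad * rad + b_stage * clinical_stage
    return 1.0 / (1.0 + math.exp(-z))

features = [0.42, -0.13, 0.78]   # toy standardized radiomics features
weights  = [0.9, -0.4, 0.6]      # toy LASSO coefficients
rad = rad_score(features, weights)
p = metastasis_probability(rad, clinical_stage=1)
```

A printed nomogram is simply a graphical reading of this same linear predictor, one axis per term.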
Affiliation(s)
- Zheng Liu
- Department of Bone and Soft Tissue Tumor, Key Laboratory of Cancer Prevention and Therapy, Tianjin Medical University Cancer Institute and Hospital, National Clinical Research Center for Cancer, Tianjin's Clinical Research Center for Cancer, Tianjin, China
- The Sino-Russian Joint Research Center for Bone Metastasis in Malignant Tumor, Tianjin, China
- Department of Orthopedics, The Seventh Affiliated Hospital, Sun Yat-sen University, Shenzhen, Guangdong Province, China
- Rui Yin
- The Sino-Russian Joint Research Center for Bone Metastasis in Malignant Tumor, Tianjin, China
- School of Biomedical Engineering & Technology, Tianjin Medical University, Tianjin, China
- Wenjuan Ma
- Department of Bone and Soft Tissue Tumor, Key Laboratory of Cancer Prevention and Therapy, Tianjin Medical University Cancer Institute and Hospital, National Clinical Research Center for Cancer, Tianjin's Clinical Research Center for Cancer, Tianjin, China
- The Sino-Russian Joint Research Center for Bone Metastasis in Malignant Tumor, Tianjin, China
- Zhijun Li
- Department of Bone and Soft Tissue Tumor, Key Laboratory of Cancer Prevention and Therapy, Tianjin Medical University Cancer Institute and Hospital, National Clinical Research Center for Cancer, Tianjin's Clinical Research Center for Cancer, Tianjin, China
- The Sino-Russian Joint Research Center for Bone Metastasis in Malignant Tumor, Tianjin, China
- Yijun Guo
- Department of Bone and Soft Tissue Tumor, Key Laboratory of Cancer Prevention and Therapy, Tianjin Medical University Cancer Institute and Hospital, National Clinical Research Center for Cancer, Tianjin's Clinical Research Center for Cancer, Tianjin, China
- The Sino-Russian Joint Research Center for Bone Metastasis in Malignant Tumor, Tianjin, China
- Haixiao Wu
- Department of Bone and Soft Tissue Tumor, Key Laboratory of Cancer Prevention and Therapy, Tianjin Medical University Cancer Institute and Hospital, National Clinical Research Center for Cancer, Tianjin's Clinical Research Center for Cancer, Tianjin, China
- The Sino-Russian Joint Research Center for Bone Metastasis in Malignant Tumor, Tianjin, China
- Yile Lin
- The Sino-Russian Joint Research Center for Bone Metastasis in Malignant Tumor, Tianjin, China
- Department of Orthopedics, Tianjin Medical University General Hospital, Tianjin, China
- Vladimir P Chekhonin
- The Sino-Russian Joint Research Center for Bone Metastasis in Malignant Tumor, Tianjin, China
- Department of Basic and Applied Neurobiology, Federal Medical Research Center for Psychiatry and Narcology, Moscow, Russian Federation
- Karl Peltzer
- The Sino-Russian Joint Research Center for Bone Metastasis in Malignant Tumor, Tianjin, China
- Department of Psychology, University of the Free State, Turfloop, South Africa
- Huiyang Li
- The Sino-Russian Joint Research Center for Bone Metastasis in Malignant Tumor, Tianjin, China
- Min Mao
- Department of Pathology and Southwest Cancer Center, Southwest Hospital, Third Military Medical University, Chongqing, China
- Xiqi Jian
- School of Biomedical Engineering & Technology, Tianjin Medical University, Tianjin, China
- Chao Zhang
- Department of Bone and Soft Tissue Tumor, Key Laboratory of Cancer Prevention and Therapy, Tianjin Medical University Cancer Institute and Hospital, National Clinical Research Center for Cancer, Tianjin's Clinical Research Center for Cancer, Tianjin, China
- The Sino-Russian Joint Research Center for Bone Metastasis in Malignant Tumor, Tianjin, China
15. Ye RZ, Lipatov K, Diedrich D, Bhattacharyya A, Erickson BJ, Pickering BW, Herasevich V. Automatic ARDS surveillance with chest X-ray recognition using convolutional neural networks. J Crit Care 2024; 82:154794. PMID: 38552452; DOI: 10.1016/j.jcrc.2024.154794.
Abstract
OBJECTIVE This study aims to design, validate, and assess the accuracy of a deep learning model capable of differentiating chest X-rays between pneumonia, acute respiratory distress syndrome (ARDS), and normal lungs. MATERIALS AND METHODS A diagnostic performance study was conducted using chest X-ray images from adult patients admitted to a medical intensive care unit between January 2003 and November 2014. X-ray images from 15,899 patients were assigned one of three prespecified categories: "ARDS", "Pneumonia", or "Normal". RESULTS A two-step convolutional neural network (CNN) pipeline was developed and tested to distinguish between the three patterns, with sensitivity ranging from 91.8% to 97.8% and specificity ranging from 96.6% to 98.8%. The CNN model was validated with a sensitivity of 96.3% and a specificity of 96.6% using a previous dataset of patients with acute lung injury (ALI)/ARDS. DISCUSSION The results suggest that a deep learning model based on chest X-ray pattern recognition can be a useful tool for distinguishing patients with ARDS from patients with normal lungs, providing faster results than digital surveillance tools based on text reports. CONCLUSION A CNN-based deep learning model showed clinically significant performance, offering potential for faster ARDS identification. Future research should prospectively evaluate these tools in a clinical setting.
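A two-step pipeline of this kind can be read as a cascade: a first classifier flags abnormal films, a second separates pneumonia from ARDS. The sketch below shows only that control flow, with the CNNs replaced by stub callables and an invented 0.5 threshold, so it is a structural illustration rather than the paper's models.

```python
# Sketch: two-step cascade classification logic, with the CNNs replaced by
# stub probability functions so the control flow is runnable.
# The threshold and stub outputs are invented for illustration.
def classify_chest_xray(image, abnormal_model, ards_model, threshold=0.5):
    """Step 1: normal vs abnormal. Step 2 (abnormal only): pneumonia vs ARDS."""
    if abnormal_model(image) < threshold:
        return "Normal"
    return "ARDS" if ards_model(image) >= threshold else "Pneumonia"

# Stub "models" returning fixed probabilities for demonstration.
healthy   = classify_chest_xray("img", lambda im: 0.1, lambda im: 0.9)
pneumonia = classify_chest_xray("img", lambda im: 0.8, lambda im: 0.2)
ards      = classify_chest_xray("img", lambda im: 0.9, lambda im: 0.7)
```

The cascade design means the second model only ever sees abnormal films, which simplifies its training distribution.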
Affiliation(s)
- Run Zhou Ye
- Department of Anesthesiology and Perioperative Medicine, Mayo Clinic, 200 First Street Southwest, Rochester, MN 55905, USA
- Division of Endocrinology, Department of Medicine, Centre de Recherche du CHUS, Sherbrooke, QC J1H 5N4, Canada
- Kirill Lipatov
- Critical Care Medicine, Mayo Clinic, Eau Claire, WI, United States
- Daniel Diedrich
- Department of Anesthesiology and Perioperative Medicine, Mayo Clinic, 200 First Street Southwest, Rochester, MN 55905, USA
- Bradley J Erickson
- Department of Diagnostic Radiology, Mayo Clinic, 200 First Street Southwest, Rochester, MN 55905, USA
- Brian W Pickering
- Department of Anesthesiology and Perioperative Medicine, Mayo Clinic, 200 First Street Southwest, Rochester, MN 55905, USA
- Vitaly Herasevich
- Department of Anesthesiology and Perioperative Medicine, Mayo Clinic, 200 First Street Southwest, Rochester, MN 55905, USA
16. Pugashetti JV, Khanna D, Kazerooni EA, Oldham J. Clinically Relevant Biomarkers in Connective Tissue Disease-Associated Interstitial Lung Disease. Rheum Dis Clin North Am 2024; 50:439-461. PMID: 38942579; DOI: 10.1016/j.rdc.2024.03.007.
Abstract
Interstitial lung disease (ILD) complicates connective tissue disease (CTD) with variable incidence and is a leading cause of death in these patients. To improve CTD-ILD outcomes, early recognition and management of ILD is critical. Blood-based and radiologic biomarkers that assist in the diagnosis of CTD-ILD have long been studied. Recent studies, including -omic investigations, have also begun to identify biomarkers that may help prognosticate such patients. This review provides an overview of clinically relevant biomarkers in patients with CTD-ILD, highlighting recent advances that assist in the diagnosis and prognostication of CTD-ILD.
Affiliation(s)
- Janelle Vu Pugashetti
- Division of Pulmonary and Critical Care Medicine, Department of Internal Medicine, University of Michigan
- Dinesh Khanna
- Scleroderma Program, Division of Rheumatology, Department of Internal Medicine, University of Michigan
- Ella A Kazerooni
- Division of Pulmonary and Critical Care Medicine, Department of Internal Medicine, University of Michigan; Division of Cardiothoracic Radiology, Department of Radiology, University of Michigan
- Justin Oldham
- Division of Pulmonary and Critical Care Medicine, Department of Internal Medicine, University of Michigan; Department of Epidemiology, University of Michigan
17. Junjun R, Zhengqian Z, Ying W, Jialiang W, Yongzhuang L. A comprehensive review of deep learning-based variant calling methods. Brief Funct Genomics 2024; 23:303-313. PMID: 38366908; DOI: 10.1093/bfgp/elae003.
Abstract
Genome sequencing data have become increasingly important in the field of personalized medicine and diagnosis. However, accurately detecting genomic variations remains a challenging task. Traditional variation detection methods rely on manual inspection or predefined rules, which can be time-consuming and prone to errors. Consequently, deep learning-based approaches for variation detection have gained attention due to their ability to automatically learn genomic features that distinguish between variants. In our review, we discuss the recent advancements in deep learning-based algorithms for detecting small variations and structural variations in genomic data, as well as their advantages and limitations.
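A recurring ingredient in the deep learning callers this review covers is encoding the read pileup around a candidate site into an image-like tensor the network can consume. The sketch below shows a drastically simplified version of that encoding; the two-channel layout (base identity, match/mismatch flag) and the toy reads are illustrative assumptions, not any specific caller's format.

```python
# Sketch: encode a read pileup at one candidate site into a small numeric
# tensor, the kind of input deep learning variant callers consume.
# The channel layout and toy reads are illustrative, not a real caller's.
BASE_CODE = {"A": 1, "C": 2, "G": 3, "T": 4}

def encode_pileup(reference_base, read_bases, max_reads=5):
    """One row per read: [base code, 1 if it mismatches the reference].
    Rows are padded with zeros to a fixed height, as CNN inputs require."""
    rows = [[BASE_CODE[b], int(b != reference_base)]
            for b in read_bases[:max_reads]]
    while len(rows) < max_reads:
        rows.append([0, 0])
    return rows

tensor = encode_pileup("A", ["A", "A", "G", "A"])
mismatch_fraction = sum(row[1] for row in tensor) / 4   # 4 real reads
```

Real encoders add channels for base quality, mapping quality, and strand, and span a window of positions rather than a single site, but the idea is the same.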
Affiliation(s)
- Ren Junjun
- Harbin Institute of Technology, School of Computer Science and Technology, Harbin 150001, China
- Zhang Zhengqian
- Harbin Institute of Technology, School of Computer Science and Technology, Harbin 150001, China
- Wu Ying
- Harbin Institute of Technology, School of Computer Science and Technology, Harbin 150001, China
- Wang Jialiang
- Harbin Institute of Technology, School of Computer Science and Technology, Harbin 150001, China
- Liu Yongzhuang
- Harbin Institute of Technology, School of Computer Science and Technology, Harbin 150001, China
18. Huang X, Wang L, Jiang S, Xu L. DHAFormer: Dual-channel hybrid attention network with transformer for polyp segmentation. PLoS One 2024; 19:e0306596. PMID: 38985710; PMCID: PMC11236112; DOI: 10.1371/journal.pone.0306596.
Abstract
The accurate early diagnosis of colorectal cancer relies significantly on the precise segmentation of polyps in medical images. Current convolution-based and transformer-based segmentation methods show promise but still struggle with the varied sizes and shapes of polyps and the often low contrast between polyps and their background. To confront these challenges, this research proposes a Dual-Channel Hybrid Attention Network with Transformer (DHAFormer). The proposed framework features a multi-scale channel fusion module, which excels at recognizing polyps across a spectrum of sizes and shapes. In addition, its dual-channel hybrid attention mechanism reduces background interference and improves the foreground representation of polyp features by integrating local and global information. DHAFormer demonstrates significant improvements in polyp segmentation compared with currently established methods.
Affiliation(s)
- Xuejie Huang
- School of Computer Science and Technology, Xinjiang University, Urumqi, China
- Liejun Wang
- School of Computer Science and Technology, Xinjiang University, Urumqi, China
- Shaochen Jiang
- School of Computer Science and Technology, Xinjiang University, Urumqi, China
- Lianghui Xu
- School of Computer Science and Technology, Xinjiang University, Urumqi, China
19. Guo K, Chen T, Ren S, Li N, Hu M, Kang J. Federated Learning Empowered Real-Time Medical Data Processing Method for Smart Healthcare. IEEE/ACM Trans Comput Biol Bioinform 2024; 21:869-879. PMID: 35737631; DOI: 10.1109/tcbb.2022.3185395.
Abstract
Computer-aided diagnosis (CAD) has always been an important research topic for applying artificial intelligence in smart healthcare. Sufficient medical data are one of the most critical factors in CAD research. However, medical data are usually obtained in chronological order and cannot be collected all at once, which poses difficulties for the application of deep learning technology in the medical field. The traditional batch learning method consumes considerable time and space resources for real-time medical data, and the incremental learning method often leads to catastrophic forgetting. To solve these problems, we propose a real-time medical data processing method based on federated learning. We divide the process into the model stage and the exemplar stage. In the model stage, we use the federated learning method to fuse the old and new models to mitigate the catastrophic forgetting problem of the new model. In the exemplar stage, we use the most representative exemplars selected from the old data to help the new model review the old knowledge, which further mitigates the catastrophic forgetting problem of the new model. We use this method to conduct experiments on a simulated medical real-time data stream. The experimental results show that our method can learn a disease diagnosis model from a continuous medical real-time data stream. As the amount of data increases, the performance of the disease diagnosis model continues to improve, and the catastrophic forgetting problem has been effectively mitigated. Compared with the traditional batch learning method, our method can significantly save time and space resources.
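The model stage described above reads like a federated-averaging-style fusion of old and new parameters, and the exemplar stage like keeping the samples most representative of the old data. A toy sketch of both steps follows; the 1-D features, the fusion weight, and the nearest-to-mean selection rule are simplifying assumptions, not the paper's exact algorithm.

```python
# Sketch: (1) fuse old and new model parameters by weighted averaging, in the
# spirit of the model stage described above; (2) pick exemplars nearest the
# data mean for rehearsal, in the spirit of the exemplar stage.
# Weights, data, and the 1-D feature simplification are illustrative.
def fuse_models(old_params, new_params, old_weight=0.5):
    """Element-wise weighted average of two parameter vectors."""
    return [old_weight * o + (1 - old_weight) * n
            for o, n in zip(old_params, new_params)]

def select_exemplars(samples, k):
    """Keep the k samples closest to the mean (1-D features for brevity)."""
    mean = sum(samples) / len(samples)
    return sorted(samples, key=lambda x: abs(x - mean))[:k]

fused = fuse_models([1.0, 2.0], [3.0, 4.0], old_weight=0.25)
exemplars = select_exemplars([0.0, 1.0, 2.0, 10.0], k=2)
```

Rehearsing on the selected exemplars while training on the new stream is what mitigates the catastrophic forgetting the abstract describes.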
20. Pathan RK, Shorna IJ, Hossain MS, Khandaker MU, Almohammed HI, Hamd ZY. The efficacy of machine learning models in lung cancer risk prediction with explainability. PLoS One 2024; 19:e0305035. PMID: 38870229; PMCID: PMC11175504; DOI: 10.1371/journal.pone.0305035.
Abstract
Among many types of cancer, lung cancer remains to date one of the deadliest around the world. Many researchers, scientists, doctors, and people from other fields continuously contribute to this subject regarding early prediction and diagnosis. One of the significant problems in prediction is the black-box nature of machine learning models: although detection rates are comparatively satisfactory, it is often impossible to see how a model reached its decision, causing trust issues among patients and healthcare workers. This work applies multiple machine learning models to a numerical dataset of lung cancer-relevant parameters and compares their performance and accuracy. Each model is then explained using different methods. The main contribution of this research is to give logical explanations of why a model reached a particular decision, in order to achieve trust. This research is also compared with a previous study that worked with a similar dataset and took expert opinions regarding its proposed model. We show that our approach achieved better results than that model and the specialist opinion by using hyperparameter tuning, reaching an improved accuracy of almost 100% in all four models.
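One common way to open the black box mentioned above is permutation importance: shuffle one feature's values and measure how much accuracy drops. A self-contained sketch follows; the toy model (which deliberately depends only on the first feature) and the data are invented for illustration, not taken from the study.

```python
import random

# Sketch: permutation feature importance, a model-agnostic explanation
# method of the kind alluded to above. Model and data are toy inventions.
def accuracy(model, X, y):
    return sum(model(row) == label for row, label in zip(X, y)) / len(y)

def permutation_importance(model, X, y, feature, seed=0):
    """Accuracy drop after shuffling one feature column: big drop => important."""
    base = accuracy(model, X, y)
    rng = random.Random(seed)
    shuffled = [row[feature] for row in X]
    rng.shuffle(shuffled)
    X_perm = [row[:feature] + [v] + row[feature + 1:]
              for row, v in zip(X, shuffled)]
    return base - accuracy(model, X_perm, y)

model = lambda row: int(row[0] > 0.5)          # depends only on feature 0
X = [[0.9, 0.1], [0.8, 0.9], [0.2, 0.8], [0.1, 0.2]]
y = [1, 1, 0, 0]
drop_f0 = permutation_importance(model, X, y, feature=0)
drop_f1 = permutation_importance(model, X, y, feature=1)
```

Because the toy model ignores feature 1, its importance is exactly zero, while feature 0's importance depends on how the shuffle lands; averaging over many shuffles stabilizes the estimate.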
Affiliation(s)
- Refat Khan Pathan
- Department of Computing and Information Systems, School of Engineering and Technology, Sunway University, Selangor, Malaysia
- Md. Sayem Hossain
- School of Computing Science, Faculty of Innovation and Technology, Taylor’s University Lakeside Campus, Selangor, Malaysia
- Mayeen Uddin Khandaker
- Applied Physics and Radiation Technologies Group, CCDCU, School of Engineering and Technology, Sunway University, Selangor, Malaysia
- Faculty of Graduate Studies, Daffodil International University, Daffodil Smart City, Savar, Dhaka, Bangladesh
- Huda I. Almohammed
- Department of Radiological Sciences, College of Health and Rehabilitation Sciences, Princess Nourah bint Abdulrahman University, Riyadh, Saudi Arabia
- Zuhal Y. Hamd
- Department of Radiological Sciences, College of Health and Rehabilitation Sciences, Princess Nourah bint Abdulrahman University, Riyadh, Saudi Arabia
21. Chen JX, Shen YC, Peng SL, Chen YW, Fang HY, Lan JL, Shih CT. Pattern classification of interstitial lung diseases from computed tomography images using a ResNet-based network with a split-transform-merge strategy and split attention. Phys Eng Sci Med 2024; 47:755-767. PMID: 38436886; DOI: 10.1007/s13246-024-01404-1.
Abstract
In patients with interstitial lung disease (ILD), accurate pattern assessment from computed tomography (CT) images could help track lung abnormalities and evaluate treatment efficacy. Owing to their excellent image classification performance, convolutional neural networks (CNNs) have been extensively investigated for classifying and labeling pathological patterns in the CT images of ILD patients. However, previous studies rarely considered the three-dimensional (3D) structure of the pathological patterns of ILD and used two-dimensional network input. In addition, ResNet-based networks with high classification performance, such as SE-ResNet and ResNeXt, have not been used for pattern classification of ILD. This study proposed SE-ResNeXt-SA-18 for classifying pathological patterns of ILD. The SE-ResNeXt-SA-18 integrated the multipath design of ResNeXt and the feature weighting of the squeeze-and-excitation network with split attention. Its classification performance was compared with ResNet-18 and SE-ResNeXt-18, and the influence of input patch size on classification performance was also evaluated. Results show that classification accuracy increased with patch size. With a 32 × 32 × 16 input, SE-ResNeXt-SA-18 presented the highest performance, with average accuracy, sensitivity, and specificity of 0.991, 0.979, and 0.994. High-weight regions in its class activation maps also matched the specific pattern features. In comparison, the performance of SE-ResNeXt-SA-18 is superior to previously reported CNNs in classifying ILD patterns. We conclude that SE-ResNeXt-SA-18 could help track or monitor the progress of ILD through accurate pattern classification.
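The squeeze-and-excitation mechanism named above reweights feature channels using a globally pooled descriptor. The stripped-down numeric sketch below keeps that squeeze-then-gate structure but replaces the learned excitation MLP with a bare sigmoid, so it illustrates the idea rather than the trained network.

```python
import math

# Sketch: squeeze-and-excitation channel reweighting, the feature-weighting
# idea used by the SE-ResNeXt-SA network described above. The learned
# excitation MLP is replaced by a bare sigmoid to stay self-contained.
def squeeze(channel):
    """Global average pool over one channel's activations."""
    return sum(channel) / len(channel)

def excite(descriptor):
    """Gate in (0, 1); real SE blocks use a small two-layer MLP here."""
    return 1.0 / (1.0 + math.exp(-descriptor))

def se_block(feature_map):
    """feature_map: list of channels, each a flat list of activations.
    Each channel is rescaled by its own gate."""
    gates = [excite(squeeze(ch)) for ch in feature_map]
    return [[g * v for v in ch] for g, ch in zip(gates, feature_map)]

fm = [[2.0, 2.0], [-2.0, -2.0]]   # a strongly and a weakly activated channel
out = se_block(fm)
```

The effect is visible in the toy output: the high-activation channel passes nearly unchanged while the low one is suppressed, which is exactly the per-channel emphasis the network learns.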
Affiliation(s)
- Jian-Xun Chen: Department of Thoracic Surgery, China Medical University Hospital, Taichung, Taiwan
- Yu-Cheng Shen: Department of Thoracic Surgery, China Medical University Hospital, Taichung, Taiwan
- Shin-Lei Peng: Department of Biomedical Imaging and Radiological Science, China Medical University, Taichung, Taiwan
- Yi-Wen Chen: x-Dimension Center for Medical Research and Translation, China Medical University Hospital, Taichung, Taiwan; Department of Bioinformatics and Medical Engineering, Asia University, Taichung, Taiwan; Graduate Institute of Biomedical Sciences, China Medical University, Taichung, Taiwan
- Hsin-Yuan Fang: x-Dimension Center for Medical Research and Translation, China Medical University Hospital, Taichung, Taiwan; School of Medicine, China Medical University, Taichung, Taiwan
- Joung-Liang Lan: School of Medicine, China Medical University, Taichung, Taiwan; Rheumatology and Immunology Center, China Medical University Hospital, Taichung, Taiwan
- Cheng-Ting Shih: Department of Biomedical Imaging and Radiological Science, China Medical University, Taichung, Taiwan; x-Dimension Center for Medical Research and Translation, China Medical University Hospital, Taichung, Taiwan
22
Umemoto M, Mariya T, Nambu Y, Nagata M, Horimai T, Sugita S, Kanaseki T, Takenaka Y, Shinkai S, Matsuura M, Iwasaki M, Hirohashi Y, Hasegawa T, Torigoe T, Fujino Y, Saito T. Prediction of Mismatch Repair Status in Endometrial Cancer from Histological Slide Images Using Various Deep Learning-Based Algorithms. Cancers (Basel) 2024; 16:1810. [PMID: 38791889 PMCID: PMC11119770 DOI: 10.3390/cancers16101810] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/07/2024] [Revised: 04/22/2024] [Accepted: 05/08/2024] [Indexed: 05/26/2024] Open
Abstract
The application of deep learning algorithms to predict the molecular profiles of various cancers from digital images of hematoxylin and eosin (H&E)-stained slides has been reported in recent years, mainly for gastric and colon cancers. In this study, we investigated the potential use of H&E-stained endometrial cancer slide images to predict the associated mismatch repair (MMR) status. H&E-stained slide images were collected from 127 cases of the primary lesion of endometrial cancer. After digitization using a Nanozoomer virtual slide scanner (Hamamatsu Photonics), we segmented the scanned images into 5397 tiles of 512 × 512 pixels. The MMR proteins (PMS2, MSH6) were immunohistochemically stained, classified into MMR proficient/deficient, and annotated for each case and tile. We trained several neural networks, including convolutional and attention-based networks, using tiles annotated with the MMR status. Among the tested networks, ResNet50 exhibited the highest area under the receiver operating characteristic curve (AUROC) of 0.91 for predicting the MMR status. The constructed prediction algorithm may be applicable to other molecular profiles and useful for pre-screening before implementing other, more costly genetic profiling tests.
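The tiling step described here, segmenting scanned whole-slide images into fixed 512 × 512 tiles, reduces to enumerating tile origins over the image grid. A minimal sketch, assuming non-overlapping tiles and dropping partial edge tiles (a common convention, though the abstract does not specify it):

```python
def tile_coords(width, height, tile=512):
    """Top-left corners of non-overlapping tile x tile patches that fit
    fully inside a width x height slide image; partial edge tiles are
    dropped. Illustrative sketch, not the study's preprocessing code."""
    return [(x, y)
            for y in range(0, height - tile + 1, tile)
            for x in range(0, width - tile + 1, tile)]
```

For a 1024 × 1536 region this yields a 2 × 3 grid of six tiles; an image smaller than one tile yields none.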
Affiliation(s)
- Mina Umemoto: Department of Obstetrics and Gynecology, Sapporo Medical University of Medicine, Sapporo 060-8556, Japan
- Tasuku Mariya: Department of Obstetrics and Gynecology, Sapporo Medical University of Medicine, Sapporo 060-8556, Japan
- Yuta Nambu: Department of Media Architecture, Future University Hakodate, Hakodate 041-8655, Japan
- Mai Nagata: Department of Media Architecture, Future University Hakodate, Hakodate 041-8655, Japan
- Shintaro Sugita: Department of Surgical Pathology, Sapporo Medical University of Medicine, Sapporo 060-8556, Japan
- Takayuki Kanaseki: Department of Pathology, Sapporo Medical University of Medicine, Sapporo 060-8556, Japan
- Yuka Takenaka: Department of Obstetrics and Gynecology, Sapporo Medical University of Medicine, Sapporo 060-8556, Japan
- Shota Shinkai: Department of Obstetrics and Gynecology, Sapporo Medical University of Medicine, Sapporo 060-8556, Japan
- Motoki Matsuura: Department of Obstetrics and Gynecology, Sapporo Medical University of Medicine, Sapporo 060-8556, Japan
- Masahiro Iwasaki: Department of Obstetrics and Gynecology, Sapporo Medical University of Medicine, Sapporo 060-8556, Japan
- Yoshihiko Hirohashi: Department of Pathology, Sapporo Medical University of Medicine, Sapporo 060-8556, Japan
- Tadashi Hasegawa: Department of Surgical Pathology, Sapporo Medical University of Medicine, Sapporo 060-8556, Japan
- Toshihiko Torigoe: Department of Pathology, Sapporo Medical University of Medicine, Sapporo 060-8556, Japan
- Yuichi Fujino: Department of Media Architecture, Future University Hakodate, Hakodate 041-8655, Japan
- Tsuyoshi Saito: Department of Obstetrics and Gynecology, Sapporo Medical University of Medicine, Sapporo 060-8556, Japan
23
Zhang T, Wei D, Zhu M, Gu S, Zheng Y. Self-supervised learning for medical image data with anatomy-oriented imaging planes. Med Image Anal 2024; 94:103151. [PMID: 38527405 DOI: 10.1016/j.media.2024.103151] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/07/2023] [Revised: 12/29/2023] [Accepted: 03/20/2024] [Indexed: 03/27/2024]
Abstract
Self-supervised learning has emerged as a powerful tool for pretraining deep networks on unlabeled data, prior to transfer learning of target tasks with limited annotation. The relevance between the pretraining pretext and target tasks is crucial to the success of transfer learning. Various pretext tasks have been proposed to utilize properties of medical image data (e.g., three dimensionality), which are more relevant to medical image analysis than generic ones for natural images. However, previous work rarely paid attention to data with anatomy-oriented imaging planes, e.g., standard cardiac magnetic resonance imaging views. As these imaging planes are defined according to the anatomy of the imaged organ, pretext tasks effectively exploiting this information can pretrain the networks to gain knowledge on the organ of interest. In this work, we propose two complementary pretext tasks for this group of medical image data based on the spatial relationship of the imaging planes. The first is to learn the relative orientation between the imaging planes and is implemented as regressing their intersecting lines. The second exploits parallel imaging planes to regress their relative slice locations within a stack. Both pretext tasks are conceptually straightforward and easy to implement, and can be combined in multitask learning for better representation learning. Thorough experiments on two anatomical structures (heart and knee) and representative target tasks (semantic segmentation and classification) demonstrate that the proposed pretext tasks are effective in pretraining deep networks for remarkably boosted performance on the target tasks, and superior to other recent approaches.
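The second pretext task, regressing relative slice locations within a stack, needs only positional targets derived from the data itself, which is what makes it self-supervised. A toy sketch of such target generation; the normalisation to [0, 1] is an illustrative choice, not necessarily the authors' exact parameterisation:

```python
def slice_location_targets(n_slices):
    """Self-supervision targets for a stack of parallel imaging planes:
    each slice's relative location, normalised to [0, 1], so a network
    can be pretrained to regress position without manual labels.
    Illustrative sketch only."""
    if n_slices == 1:
        return [0.0]
    return [i / (n_slices - 1) for i in range(n_slices)]
```

Every stack in the dataset yields free regression labels this way, with no annotation cost.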
Affiliation(s)
- Tianwei Zhang: School of Computer Science and Engineering, University of Electronic Science and Technology of China, Chengdu 611731, China
- Dong Wei: Jarvis Research Center, Tencent YouTu Lab, Shenzhen 518057, China
- Mengmeng Zhu: School of Computer Science and Engineering, University of Electronic Science and Technology of China, Chengdu 611731, China
- Shi Gu: School of Computer Science and Engineering, University of Electronic Science and Technology of China, Chengdu 611731, China
- Yefeng Zheng: Jarvis Research Center, Tencent YouTu Lab, Shenzhen 518057, China
24
Chang C, Shi W, Wang Y, Zhang Z, Huang X, Jiao Y. The path from task-specific to general purpose artificial intelligence for medical diagnostics: A bibliometric analysis. Comput Biol Med 2024; 172:108258. [PMID: 38467093 DOI: 10.1016/j.compbiomed.2024.108258] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/04/2023] [Revised: 02/08/2024] [Accepted: 03/06/2024] [Indexed: 03/13/2024]
Abstract
Artificial intelligence (AI) has revolutionized many fields, and its potential in healthcare has been increasingly recognized. Based on diverse data sources such as imaging, laboratory tests, medical records, and electrophysiological data, diagnostic AI has witnessed rapid development in recent years. A comprehensive understanding of the development status, contributing factors, and their relationships in the application of AI to medical diagnostics is essential to further promote its use in clinical practice. In this study, we conducted a bibliometric analysis to explore the evolution from task-specific to general-purpose AI for medical diagnostics. We used the Web of Science database to search for relevant articles published between 2010 and 2023, and applied VOSviewer, the R package Bibliometrix, and CiteSpace to analyze collaborative networks and keywords. Our analysis revealed that the field of AI in medical diagnostics has experienced rapid growth in recent years, with a focus on tasks such as image analysis, disease prediction, and decision support. Collaborative networks were observed among researchers and institutions, indicating a trend of global cooperation in this field. Additionally, we identified several key factors contributing to the development of AI in medical diagnostics, including data quality, algorithm design, and computational power. Challenges to progress in the field include model explainability, robustness, and equality, which will require multi-stakeholder, interdisciplinary collaboration to tackle. Our study provides a holistic understanding of the path from task-specific, mono-modal AI toward general-purpose, multimodal AI for medical diagnostics. With the continuous improvement of AI technology and the accumulation of medical data, we believe that AI will play a greater role in medical diagnostics in the future.
Affiliation(s)
- Chuheng Chang: Department of General Practice (General Internal Medicine), Peking Union Medical College Hospital, Chinese Academy of Medical Sciences & Peking Union Medical College, Beijing, China; 4+4 Medical Doctor Program, Chinese Academy of Medical Sciences & Peking Union Medical College, Beijing, China
- Wen Shi: Department of Gastroenterology, Peking Union Medical College Hospital, Chinese Academy of Medical Sciences & Peking Union Medical College, Beijing, China
- Youyang Wang: Department of General Practice (General Internal Medicine), Peking Union Medical College Hospital, Chinese Academy of Medical Sciences & Peking Union Medical College, Beijing, China
- Zhan Zhang: Department of Computer Science and Technology, Tsinghua University, Beijing, China
- Xiaoming Huang: Department of General Practice (General Internal Medicine), Peking Union Medical College Hospital, Chinese Academy of Medical Sciences & Peking Union Medical College, Beijing, China
- Yang Jiao: Department of General Practice (General Internal Medicine), Peking Union Medical College Hospital, Chinese Academy of Medical Sciences & Peking Union Medical College, Beijing, China
25
Nam HK, Lea WWI, Yang Z, Noh E, Rhie YJ, Lee KH, Hong SJ. Clinical validation of a deep-learning-based bone age software in healthy Korean children. Ann Pediatr Endocrinol Metab 2024; 29:102-108. [PMID: 38271993 PMCID: PMC11076234 DOI: 10.6065/apem.2346050.025] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 02/23/2023] [Revised: 04/19/2023] [Accepted: 04/28/2023] [Indexed: 01/27/2024] Open
Abstract
PURPOSE Bone age (BA) is needed to assess developmental status and growth disorders. We evaluated the clinical performance of a deep-learning-based BA software to estimate the chronological age (CA) of healthy Korean children. METHODS This retrospective study included 371 healthy children (217 boys, 154 girls), aged between 4 and 17 years, who visited the Department of Pediatrics for health check-ups between January 2017 and December 2018. A total of 553 left-hand radiographs from 371 healthy Korean children were evaluated using a commercial deep-learning-based BA software (BoneAge, Vuno, Seoul, Korea). The clinical performance of the deep learning (DL) software was determined using the concordance rate and Bland-Altman analysis via comparison with the CA. RESULTS A 2-sample t-test (P<0.001) and Fisher exact test (P=0.011) showed a significant difference between the normal CA and the BA estimated by the DL software. There was good correlation between the 2 variables (r=0.96, P<0.001); however, the root mean square error was 15.4 months. With a 12-month cutoff, the concordance rate was 58.8%. The Bland-Altman plot showed that the DL software tended to underestimate the BA compared with the CA, especially in children under the age of 8.3 years. CONCLUSION The DL-based BA software showed a low concordance rate and a tendency to underestimate the BA in healthy Korean children.
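The Bland-Altman analysis and 12-month concordance rate reported here are standard calculations. A generic sketch follows (variable names illustrative, not the study's code): the bias is the mean bone-age minus chronological-age difference, the 95% limits of agreement are bias ± 1.96 SD, and the concordance rate is the fraction of estimates within the cutoff.

```python
import math

def bland_altman(ba_months, ca_months):
    """Bias (mean BA - CA difference) and 95% limits of agreement,
    both ages in months. Generic sketch of the standard method."""
    diffs = [b - c for b, c in zip(ba_months, ca_months)]
    bias = sum(diffs) / len(diffs)
    # Sample standard deviation of the differences.
    sd = math.sqrt(sum((d - bias) ** 2 for d in diffs) / (len(diffs) - 1))
    return bias, bias - 1.96 * sd, bias + 1.96 * sd

def concordance_rate(ba_months, ca_months, cutoff=12):
    """Fraction of estimates within +/- cutoff months of chronological age."""
    hits = sum(abs(b - c) <= cutoff for b, c in zip(ba_months, ca_months))
    return hits / len(ba_months)
```

A systematic underestimation, as the study observed in younger children, would appear as a negative bias with the point cloud drifting below zero on the Bland-Altman plot.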
Affiliation(s)
- Hyo-Kyoung Nam: Department of Pediatrics, Korea University College of Medicine, Seoul, Korea
- Winnah Wu-In Lea: Department of Radiology, Korea University College of Medicine, Seoul, Korea
- Zepa Yang: Smart Health Care Center, Korea University Guro Hospital, Seoul, Korea; Korea University Guro Hospital-Medical Image Data Center (KUGH-MIDC), Seoul, Korea
- Eunjin Noh: Smart Health Care Center, Korea University Guro Hospital, Seoul, Korea
- Young-Jun Rhie: Department of Pediatrics, Korea University College of Medicine, Seoul, Korea
- Kee-Hyoung Lee: Department of Pediatrics, Korea University College of Medicine, Seoul, Korea
- Suk-Joo Hong: Department of Radiology, Korea University College of Medicine, Seoul, Korea; Korea University Guro Hospital-Medical Image Data Center (KUGH-MIDC), Seoul, Korea
26
Zhang K, Liang W, Cao P, Liu X, Yang J, Zaiane O. Label correlation guided discriminative label feature learning for multi-label chest image classification. COMPUTER METHODS AND PROGRAMS IN BIOMEDICINE 2024; 245:108032. [PMID: 38244339 DOI: 10.1016/j.cmpb.2024.108032] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 11/12/2023] [Revised: 01/02/2024] [Accepted: 01/12/2024] [Indexed: 01/22/2024]
Abstract
BACKGROUND AND OBJECTIVE Multi-label Chest X-ray (CXR) images often contain rich label relationship information, which is beneficial for improving classification performance. However, because of the intricate relationships among labels, most existing works fail to effectively learn and make full use of the label correlations, resulting in limited classification performance. In this study, we propose a multi-label learning framework that learns and leverages the label correlations to improve multi-label CXR image classification. METHODS In this paper, we capture the global label correlations through the self-attention mechanism. Meanwhile, to better utilize label correlations for guiding feature learning, we decompose the image-level features into label-level features. Furthermore, we enhance label-level feature learning in an end-to-end manner by a consistency constraint between global and local label correlations, and a label correlation guided multi-label supervised contrastive loss. RESULTS To demonstrate the superior performance of our proposed approach, we conduct three repetitions of 5-fold cross-validation on the CheXpert dataset. Our approach obtains an average F1 score of 44.6% and an AUC of 76.5%, achieving 7.7% and 1.3% improvements over the state-of-the-art results. CONCLUSION More accurate label correlations and full utilization of the learned label correlations help learn more discriminative label-level features. Experimental results demonstrate that our approach achieves exceptionally competitive performance compared to the state-of-the-art algorithms.
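Global label correlations of the kind this framework learns are often approximated by label co-occurrence statistics computed from the training targets. The sketch below is a fixed conditional co-occurrence matrix, a toy stand-in for illustration only; the paper learns its correlations with self-attention rather than counting.

```python
def label_cooccurrence(label_matrix):
    """Conditional co-occurrence P(label j | label i) from a multi-label
    target matrix (rows = images, columns = 0/1 labels). Illustrative
    stand-in for learned global label correlations."""
    n_labels = len(label_matrix[0])
    counts = [sum(row[i] for row in label_matrix) for i in range(n_labels)]
    corr = [[0.0] * n_labels for _ in range(n_labels)]
    for i in range(n_labels):
        for j in range(n_labels):
            joint = sum(row[i] and row[j] for row in label_matrix)
            corr[i][j] = joint / counts[i] if counts[i] else 0.0
    return corr
```

Rows of this matrix express how strongly observing one finding predicts another, which is exactly the signal the label-level features are meant to exploit.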
Affiliation(s)
- Kai Zhang: Computer Science and Engineering, Northeastern University, Shenyang, China
- Wei Liang: Computer Science and Engineering, Northeastern University, Shenyang, China
- Peng Cao: Computer Science and Engineering, Northeastern University, Shenyang, China; Key Laboratory of Intelligent Computing in Medical Image of Ministry of Education, Northeastern University, Shenyang, China; National Frontiers Science Center for Industrial Intelligence and Systems Optimization, Shenyang, China
- Xiaoli Liu: DAMO Academy, Alibaba Group, Hangzhou, China
- Jinzhu Yang: Computer Science and Engineering, Northeastern University, Shenyang, China; Key Laboratory of Intelligent Computing in Medical Image of Ministry of Education, Northeastern University, Shenyang, China; National Frontiers Science Center for Industrial Intelligence and Systems Optimization, Shenyang, China
- Osmar Zaiane: Alberta Machine Intelligence Institute, University of Alberta, Edmonton, Alberta, Canada
27
Zhang X, Li Q, Li W, Guo Y, Zhang J, Guo C, Chang K, Lovell NH. FD-Net: Feature Distillation Network for Oral Squamous Cell Carcinoma Lymph Node Segmentation in Hyperspectral Imagery. IEEE J Biomed Health Inform 2024; 28:1552-1563. [PMID: 38446656 DOI: 10.1109/jbhi.2024.3350245] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 03/08/2024]
Abstract
Oral squamous cell carcinoma (OSCC) is characterized by early regional lymph node metastasis. OSCC patients often have poor prognoses and low survival rates due to cervical lymph node metastases. Therefore, it is necessary to rely on a reasonable screening method to quickly judge the cervical lymph node metastatic status of OSCC patients and develop appropriate treatment plans. In this study, the widely used pathological sections with hematoxylin-eosin (H&E) staining are taken as the target and, combined with the advantages of hyperspectral imaging technology, a novel diagnostic method for identifying OSCC lymph node metastases is proposed. The method consists of a learning stage and a decision-making stage, focusing on cancerous and non-cancerous nuclei, gradually completing the segmentation of lesions from coarse to fine, and achieving high accuracy. In the learning stage, the proposed feature distillation network (FD-Net) is developed to segment the cancerous and non-cancerous nuclei. In the decision-making stage, the segmentation results are post-processed and the lesions are effectively distinguished based on prior knowledge. Experimental results demonstrate that the proposed FD-Net is very competitive in the OSCC hyperspectral medical image segmentation task. FD-Net performs best on all seven segmentation evaluation indicators: MIoU, OA, AA, SE, CSI, GDR, and DICE. On these seven indicators, FD-Net is 1.75%, 1.27%, 0.35%, 1.9%, 0.88%, 4.45%, and 1.98% higher, respectively, than the DeepLab V3 method, which ranks second in performance. In addition, the proposed diagnostic method for OSCC lymph node metastasis can effectively assist pathologists in disease screening and reduce their workload.
28
Xu J, Wang Z. Efficient and accurate microplastics identification and segmentation in urban waters using convolutional neural networks. THE SCIENCE OF THE TOTAL ENVIRONMENT 2024; 911:168696. [PMID: 38000753 DOI: 10.1016/j.scitotenv.2023.168696] [Citation(s) in RCA: 3] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Subscribe] [Scholar Register] [Received: 09/18/2023] [Revised: 11/15/2023] [Accepted: 11/17/2023] [Indexed: 11/26/2023]
Abstract
Microplastics (MPs), measuring less than 5 mm, pose threats to ecological security and human health in urban waters. Additionally, they act as carriers, transporting pollutants from terrestrial systems into oceanic circulation, contributing to global pollution. Recognizing the significance of identifying MPs in urban waters, one potential solution to the time-consuming and labor-intensive manual identification process is the application of a convolutional neural network (CNN). Therefore, a reliable CNN model that efficiently and accurately identifies MPs is essential for extensive research on MPs pollution in urban waters. In this work, an MPs dataset with complex backgrounds was acquired from urban waters in southern China. The dataset was used to train and validate CNN models, including UNet, UNet2plus, and UNet3plus. Subsequently, the computational and inference performance of the three models was evaluated using a newly collected MPs dataset. The results showed that after 120 epochs of training, UNet, UNet2plus, and UNet3plus completed inference on 100 MPs images in under 1 s, 2 s, and 3 s, respectively. Accurate segmentation with mIoU of 91.45 ± 5.93 % and 91.08 ± 6.18 % was achieved by UNet and UNet2plus, respectively, while UNet3plus exhibited lower performance, with an mIoU of only 82.21 ± 10.33 %. This work demonstrated that UNet and UNet2plus deliver efficient and accurate identification of MPs in urban waters. Developing CNN models that efficiently and accurately identify MPs is crucial for reducing manual effort, especially in large-scale investigations of MPs pollution in urban waters.
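The mIoU figures quoted above follow the standard intersection-over-union definition for binary segmentation masks, sketched below as a generic helper (not the authors' evaluation code):

```python
def iou(pred, truth):
    """Intersection over union of two binary masks (flat 0/1 lists).
    An empty union (both masks all zero) is scored as a perfect match."""
    inter = sum(p and t for p, t in zip(pred, truth))
    union = sum(p or t for p, t in zip(pred, truth))
    return inter / union if union else 1.0

def mean_iou(pairs):
    """Mean IoU over (prediction, ground-truth) mask pairs, as commonly
    reported for segmentation models such as the UNet family."""
    scores = [iou(p, t) for p, t in pairs]
    return sum(scores) / len(scores)
```

Reporting mean ± SD over per-image IoU scores, as the abstract does, simply aggregates this per-pair statistic across the test set.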
Affiliation(s)
- Jiongji Xu: School of Civil Engineering and Transportation, State Key Laboratory of Subtropical Building and Urban Science, South China University of Technology, Guangzhou 510641, China; Pazhou Lab, Guangzhou 510335, China
- Zhaoli Wang: School of Civil Engineering and Transportation, State Key Laboratory of Subtropical Building and Urban Science, South China University of Technology, Guangzhou 510641, China; Pazhou Lab, Guangzhou 510335, China
29
Sun H, Liu M, Liu A, Deng M, Yang X, Kang H, Zhao L, Ren Y, Xie B, Zhang R, Dai H. Developing the Lung Graph-Based Machine Learning Model for Identification of Fibrotic Interstitial Lung Diseases. JOURNAL OF IMAGING INFORMATICS IN MEDICINE 2024; 37:268-279. [PMID: 38343257 PMCID: PMC10976920 DOI: 10.1007/s10278-023-00909-7] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Received: 05/02/2023] [Revised: 10/06/2023] [Accepted: 10/09/2023] [Indexed: 03/02/2024]
Abstract
Accurate detection of fibrotic interstitial lung disease (f-ILD) is conducive to early intervention. Our aim was to develop a lung graph-based machine learning model to identify f-ILD. A total of 417 HRCTs from 279 patients with confirmed ILD (156 f-ILD and 123 non-f-ILD) were included in this study. A lung graph-based machine learning model based on HRCT was developed to aid clinicians in diagnosing f-ILD. In this approach, local radiomics features were extracted from an automatically generated geometric atlas of the lung and used to build a series of specific lung graph models. Encoding these lung graphs yielded a lung descriptor characterizing the global distribution of radiomics features, which was used to diagnose f-ILD. The Weighted Ensemble model showed the best predictive performance in cross-validation. The classification accuracy of the model was significantly higher than that of the three radiologists at both the CT sequence level and the patient level. At the patient level, the diagnostic accuracy of the model versus radiologists A, B, and C was 0.986 (95% CI 0.959 to 1.000), 0.918 (95% CI 0.849 to 0.973), 0.822 (95% CI 0.726 to 0.904), and 0.904 (95% CI 0.836 to 0.973), respectively. There was a statistically significant difference in AUC values between the model and the three physicians (p < 0.05). The lung graph-based machine learning model could identify f-ILD, and its diagnostic performance exceeded that of the radiologists, which could help clinicians assess ILD objectively.
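The descriptor idea, local features computed per atlas region and assembled into a global vector, can be caricatured with mean intensity as the only "radiomics" feature. This is purely illustrative; the paper extracts a full radiomics feature set and encodes graph structure rather than simply concatenating regional means.

```python
def regional_descriptor(intensities, region_labels, n_regions):
    """Toy global descriptor from local features: mean intensity per
    atlas region, concatenated into one vector. Illustrative analogue
    of encoding per-region radiomics features over a lung atlas."""
    sums = [0.0] * n_regions
    counts = [0] * n_regions
    for value, region in zip(intensities, region_labels):
        sums[region] += value
        counts[region] += 1
    # Regions with no voxels contribute a neutral 0.0 entry.
    return [s / c if c else 0.0 for s, c in zip(sums, counts)]
```

A classifier then operates on this fixed-length vector regardless of the original scan size, which is what makes atlas-based descriptors convenient for machine learning.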
Affiliation(s)
- Haishuang Sun: National Center for Respiratory Medicine, State Key Laboratory of Respiratory Health and Multimorbidity; National Clinical Research Center for Respiratory Diseases; Institute of Respiratory Medicine, Chinese Academy of Medical Sciences; Department of Pulmonary and Critical Care Medicine, China-Japan Friendship Hospital, Beijing, 100029, China; Department of Medical Oncology, State Key Laboratory of Oncology in South China, Guangdong Key Laboratory of Nasopharyngeal Carcinoma Diagnosis and Therapy, Collaborative Innovation Center for Cancer Medicine, Sun Yat-Sen University Cancer Center, Guangzhou, Guangdong Province, 510060, China
- Min Liu: Department of Radiology, China-Japan Friendship Hospital, Beijing, 100029, China; Chinese Academy of Medical Sciences and Peking Union Medical College, Beijing, 100730, China
- Anqi Liu: Department of Radiology, China-Japan Friendship Hospital, Beijing, 100029, China; Chinese Academy of Medical Sciences and Peking Union Medical College, Beijing, 100730, China
- Mei Deng: Department of Radiology, China-Japan Friendship Hospital, Beijing, 100029, China; Chinese Academy of Medical Sciences and Peking Union Medical College, Beijing, 100730, China
- Xiaoyan Yang: National Center for Respiratory Medicine, State Key Laboratory of Respiratory Health and Multimorbidity; National Clinical Research Center for Respiratory Diseases; Institute of Respiratory Medicine, Chinese Academy of Medical Sciences; Department of Pulmonary and Critical Care Medicine, China-Japan Friendship Hospital, Beijing, 100029, China
- Han Kang: Institute of Advanced Research, Infervision Medical Technology Co., Ltd., Beijing, 100025, China
- Ling Zhao: Department of Clinical Pathology, China-Japan Friendship Hospital, Beijing, 100029, China
- Yanhong Ren: National Center for Respiratory Medicine, State Key Laboratory of Respiratory Health and Multimorbidity; National Clinical Research Center for Respiratory Diseases; Institute of Respiratory Medicine, Chinese Academy of Medical Sciences; Department of Pulmonary and Critical Care Medicine, China-Japan Friendship Hospital, Beijing, 100029, China
- Bingbing Xie: National Center for Respiratory Medicine, State Key Laboratory of Respiratory Health and Multimorbidity; National Clinical Research Center for Respiratory Diseases; Institute of Respiratory Medicine, Chinese Academy of Medical Sciences; Department of Pulmonary and Critical Care Medicine, China-Japan Friendship Hospital, Beijing, 100029, China
- Huaping Dai: National Center for Respiratory Medicine, State Key Laboratory of Respiratory Health and Multimorbidity; National Clinical Research Center for Respiratory Diseases; Institute of Respiratory Medicine, Chinese Academy of Medical Sciences; Department of Pulmonary and Critical Care Medicine, China-Japan Friendship Hospital, Beijing, 100029, China; Chinese Academy of Medical Sciences and Peking Union Medical College, Beijing, 100730, China
30
Walsh SLF, De Backer J, Prosch H, Langs G, Calandriello L, Cottin V, Brown KK, Inoue Y, Tzilas V, Estes E. Towards the adoption of quantitative computed tomography in the management of interstitial lung disease. Eur Respir Rev 2024; 33:230055. [PMID: 38537949 PMCID: PMC10966471 DOI: 10.1183/16000617.0055-2023] [Citation(s) in RCA: 9] [Impact Index Per Article: 9.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/23/2023] [Accepted: 01/31/2024] [Indexed: 03/29/2025] Open
Abstract
The shortcomings of qualitative visual assessment have led to the development of computer-based tools to characterise and quantify disease on high-resolution computed tomography (HRCT) in patients with interstitial lung diseases (ILDs). Quantitative CT (QCT) software enables quantification of patterns on HRCT with results that are objective, reproducible, sensitive to change and predictive of disease progression. Applications developed to provide a diagnosis or pattern classification are mainly based on artificial intelligence. Deep learning, which identifies patterns in high-dimensional data and maps them to segmentations or outcomes, can be used to identify the imaging patterns that most accurately predict disease progression. Optimisation of QCT software will require the implementation of protocol standards to generate data of sufficient quality for use in computerised applications and the identification of diagnostic, imaging and physiological features that are robustly associated with mortality for use as anchors in the development of algorithms. Consortia such as the Open Source Imaging Consortium have a key role to play in the collation of imaging and clinical data that can be used to identify digital imaging biomarkers that inform diagnosis, prognosis and response to therapy.
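Quantitative CT scores of the sort this review discusses often reduce to the fraction of lung voxels whose attenuation falls in a disease-associated band. The sketch below is deliberately simplified; the Hounsfield-unit thresholds are illustrative assumptions, not a validated QCT protocol, and real QCT software combines texture analysis and deep learning rather than a single threshold.

```python
def qct_extent(hu_values, lo=-700, hi=-200):
    """Toy quantitative-CT extent score: percentage of lung voxels whose
    attenuation (in HU) falls inside [lo, hi]. Thresholds are illustrative
    only, not a validated fibrosis definition."""
    inside = sum(lo <= v <= hi for v in hu_values)
    return 100.0 * inside / len(hu_values)
```

Because the score is computed the same way on every scan, it is objective and reproducible, the properties the review highlights as the motivation for adopting QCT over visual assessment.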
Affiliation(s)
- Simon L F Walsh: National Heart and Lung Institute, Imperial College, London, UK
- Georg Langs: Medical University of Vienna, Vienna, Austria; contextflow GmbH, Vienna, Austria
- Vincent Cottin: National Reference Center for Rare Pulmonary Diseases, Louis Pradel Hospital, Hospices Civils de Lyon, Claude Bernard University Lyon 1, UMR 754, Lyon, France
- Kevin K Brown: Department of Medicine, National Jewish Health, Denver, CO, USA
- Yoshikazu Inoue: Clinical Research Center, National Hospital Organization Kinki-Chuo Chest Medical Center, Sakai City, Japan
- Vasilios Tzilas: 5th Respiratory Department, Chest Diseases Hospital Sotiria, Athens, Greece
31
Ozcelik N, Kıvrak M, Kotan A, Selimoğlu İ. Lung cancer detection based on computed tomography image using convolutional neural networks. Technol Health Care 2024; 32:1795-1805. [PMID: 37955065 DOI: 10.3233/thc-230810] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/14/2023]
Abstract
BACKGROUND Lung cancer is the most common type of cancer, accounting for 12.8% of cancer cases worldwide. Because the initial symptoms are non-specific, it is difficult to diagnose in the early stages. OBJECTIVE Image processing techniques developed using machine learning methods have played a crucial role in the development of decision support systems. This study aimed to classify benign and malignant lung lesions with a deep learning approach based on convolutional neural networks (CNNs). METHODS The image dataset includes 4459 computed tomography (CT) scans (benign, 2242; malignant, 2217). The study was a retrospective case-control analysis. A method based on the GoogLeNet architecture, one of the deep learning approaches, was used to maximize automated inference from the images and minimize manual intervention. RESULTS The dataset was split into training (3567) and testing (892) sets. The model's highest accuracy rate in the training phase was estimated at 0.98. Among the accuracy, sensitivity, specificity, positive predictive value, and negative predictive value on the testing data, the highest classification performance was the positive predictive value at 0.984. CONCLUSION Deep learning methods are beneficial in the diagnosis and classification of lung cancer from computed tomography images.
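The test metrics reported here all derive from a binary confusion matrix. A generic helper follows (the counts used in the usage comment are hypothetical, not the study's results):

```python
def classification_metrics(tp, fp, tn, fn):
    """Accuracy, sensitivity, specificity, PPV, and NPV from a binary
    confusion matrix (true/false positives and negatives). Generic
    helper, not the study's evaluation code."""
    total = tp + fp + tn + fn
    return {
        "accuracy": (tp + tn) / total,
        "sensitivity": tp / (tp + fn),   # recall on malignant cases
        "specificity": tn / (tn + fp),   # recall on benign cases
        "ppv": tp / (tp + fp),           # positive predictive value
        "npv": tn / (tn + fn),           # negative predictive value
    }
```

A high PPV with lower sensitivity, for example, would mean few benign lesions are flagged as malignant at the cost of missing some cancers, a trade-off worth inspecting before clinical use.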
Affiliation(s)
- Mehmet Kıvrak: Recep Tayyip Erdogan University, Biostatistics and Medical Informatics, Rize, Turkey
- Abdurrahman Kotan: Erzurum Regional Training and Research Hospital, Chest Disease, Erzurum, Turkey
- İnci Selimoğlu: Recep Tayyip Erdogan University, Chest Disease, Rize, Turkey
32
Lai Y, Liu X, Hou F, Han Z, E L, Su N, Du D, Wang Z, Zheng W, Wu Y. Severity-stratification of interstitial lung disease by deep learning enabled assessment and quantification of lesion indicators from HRCT images. J Xray Sci Technol 2024; 32:323-338. [PMID: 38306087] [DOI: 10.3233/xst-230218]
Abstract
BACKGROUND Interstitial lung disease (ILD) represents a group of chronic heterogeneous diseases, and current clinical practice in assessing ILD severity and progression relies mainly on radiologist-based visual screening, which greatly restricts the accuracy of disease assessment due to high inter- and intra-observer variability. OBJECTIVE To solve these problems, we propose a deep learning driven framework that can assess and quantify lesion indicators and predict the severity of ILD. METHODS In detail, we first present a convolutional neural network that can segment and quantify five types of lesions, including HC, RO, GGO, CONS, and EMPH, from HRCT of ILD patients, and we then conduct quantitative analysis to select the features related to ILD based on the segmented lesions and clinical data. Finally, a multivariate prediction model based on a nomogram to predict the severity of ILD is established by combining multiple typical lesions. RESULTS Experimental results showed that three lesions, HC, RO, and GGO, could accurately predict ILD staging independently or combined with other HRCT features. Based on the HRCT, the multivariate model achieved the highest AUC value of 0.755 for HC and the lowest AUC value of 0.701 for RO in stage I, and the highest AUC value of 0.803 for HC and the lowest AUC value of 0.733 for RO in stage II. Additionally, our ILD scoring model achieved an average accuracy of 0.812 (0.736-0.888) in predicting the severity of ILD via cross-validation. CONCLUSIONS In summary, our proposed method provides effective segmentation of ILD lesions by a comprehensive deep-learning approach and confirms its potential effectiveness in improving diagnostic accuracy for clinicians.
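Stage-wise AUC values like those reported above can be computed from any set of predicted scores and binary labels; a minimal rank-based sketch (illustrative only, not the authors' pipeline):

```python
def roc_auc(labels, scores):
    """AUC as the probability that a randomly chosen positive case
    scores higher than a randomly chosen negative one (ties count
    one half). O(n_pos * n_neg) pairwise version, fine for small n."""
    pos = [s for y, s in zip(labels, scores) if y == 1]
    neg = [s for y, s in zip(labels, scores) if y == 0]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Toy example: two positives, two negatives.
auc = roc_auc([0, 0, 1, 1], [0.1, 0.4, 0.35, 0.8])   # 0.75
```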
Affiliation(s)
- Yexin Lai: College of Data Science, Taiyuan University of Technology, Taiyuan, Shanxi, China
- Xueyu Liu: College of Data Science, Taiyuan University of Technology, Taiyuan, Shanxi, China
- Fan Hou: College of Data Science, Taiyuan University of Technology, Taiyuan, Shanxi, China
- Zhiyong Han: College of Data Science, Taiyuan University of Technology, Taiyuan, Shanxi, China
- Linning E: Department of Radiology, People's Hospital of Longhua, Shenzhen, China
- Ningling Su: Department of Radiology, Shanxi Bethune Hospital, Taiyuan, Shanxi, China
- Dianrong Du: College of Data Science, Taiyuan University of Technology, Taiyuan, Shanxi, China
- Zhichong Wang: College of Data Science, Taiyuan University of Technology, Taiyuan, Shanxi, China
- Wen Zheng: College of Data Science, Taiyuan University of Technology, Taiyuan, Shanxi, China
- Yongfei Wu: College of Data Science, Taiyuan University of Technology, Taiyuan, Shanxi, China
33
Opoku M, Weyori BA, Adekoya AF, Adu K. CLAHE-CapsNet: Efficient retina optical coherence tomography classification using capsule networks with contrast limited adaptive histogram equalization. PLoS One 2023; 18:e0288663. [PMID: 38032915] [PMCID: PMC10688733] [DOI: 10.1371/journal.pone.0288663]
Abstract
Manual detection of eye diseases from retina Optical Coherence Tomography (OCT) images by ophthalmologists is time-consuming, prone to errors, and tedious. Previous researchers have developed computer-aided systems using deep learning-based convolutional neural networks (CNNs) to aid in faster detection of retina diseases. However, these methods find it difficult to achieve better classification performance due to noise in the OCT image. Moreover, the pooling operations in CNNs reduce the resolution of the image, which limits the performance of the model. The contributions of the paper are twofold. First, this paper presents a comprehensive literature review to establish current state-of-the-art methods successfully implemented in retina OCT image classification. Additionally, this paper proposes a capsule network coupled with contrast limited adaptive histogram equalization (CLAHE-CapsNet) for retina OCT image classification. CLAHE was implemented as layers to minimize the noise in the retina image for better performance of the model. A three-layer convolutional capsule network was designed with carefully chosen hyperparameters. The dataset used for this study was provided by the University of California San Diego (UCSD). It consists of 84,495 OCT images (JPEG) in four categories (NORMAL, CNV, DME, and DRUSEN). The images went through a grading system consisting of multiple layers of trained expert graders for verification and correction of image labels. Evaluation experiments were conducted and results were compared with state-of-the-art models to find the best performing model. The evaluation metrics accuracy, sensitivity, precision, specificity, and AUC were used to determine the performance of the models. The evaluation results show that the proposed model achieves the best performance, with 97.7%, 99.5%, and 99.3% on overall accuracy (OA), overall sensitivity (OS), and overall precision (OP), respectively. These results indicate that the proposed model can be adopted and implemented to help ophthalmologists detect retina diseases from OCT images.
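CLAHE builds on classical histogram equalization, adding per-tile histograms and a clip limit so that contrast is not over-amplified in homogeneous regions. As a rough sketch of the underlying idea only (global equalization on 8-bit values, without CLAHE's tiling and clipping):

```python
def equalize_hist(pixels, levels=256):
    """Global histogram equalization of 8-bit grayscale pixel values.
    CLAHE, as used in the paper, additionally operates on local tiles
    and clips each tile's histogram to limit noise amplification."""
    hist = [0] * levels
    for p in pixels:
        hist[p] += 1
    cdf, total = [], 0               # cumulative distribution function
    for h in hist:
        total += h
        cdf.append(total)
    cdf_min = next(c for c in cdf if c > 0)
    n = len(pixels)
    # Map each gray level so the output CDF is approximately linear.
    lut = [round((c - cdf_min) / (n - cdf_min) * (levels - 1)) if n > cdf_min else 0
           for c in cdf]
    return [lut[p] for p in pixels]
```

A narrow band of gray levels is stretched across the full output range, which is why equalization raises contrast in low-contrast scans.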
Affiliation(s)
- Michael Opoku: Department of Computer Science and Informatics, University of Energy and Natural Resource, Sunyani, Ghana
- Benjamin Asubam Weyori: Department of Computer Science and Informatics, University of Energy and Natural Resource, Sunyani, Ghana
- Adebayo Felix Adekoya: Department of Computer Science and Informatics, University of Energy and Natural Resource, Sunyani, Ghana
- Kwabena Adu: Department of Computer Science and Informatics, University of Energy and Natural Resource, Sunyani, Ghana
34
Chen B, Jin J, Liu H, Yang Z, Zhu H, Wang Y, Lin J, Wang S, Chen S. Trends and hotspots in research on medical images with deep learning: a bibliometric analysis from 2013 to 2023. Front Artif Intell 2023; 6:1289669. [PMID: 38028662] [PMCID: PMC10665961] [DOI: 10.3389/frai.2023.1289669]
Abstract
Background With the rapid development of the internet, the improvement of computer capabilities, and the continuous advancement of algorithms, deep learning has developed rapidly in recent years and has been widely applied in many fields. Previous studies have shown that deep learning performs excellently in image processing, and deep learning-based medical image processing may help solve the difficulties faced by traditional medical image processing. This technology has attracted the attention of many scholars in the fields of computer science and medicine. This study summarizes the knowledge structure of deep learning-based medical image processing research through bibliometric analysis and explores the research hotspots and possible development trends in this field. Methods The Web of Science Core Collection database was searched using the terms "deep learning," "medical image processing," and their synonyms. CiteSpace was used for visual analysis of authors, institutions, countries, keywords, co-cited references, co-cited authors, and co-cited journals. Results The analysis was conducted on 562 highly cited papers retrieved from the database. The trend chart of the annual publication volume shows an upward trend. Pheng-Ann Heng, Hao Chen, and Klaus Hermann Maier-Hein are among the active authors in this field. The Chinese Academy of Sciences has the highest number of publications, while the institution with the highest centrality is Stanford University. The United States has the highest number of publications, followed by China. The most frequent keyword is "Deep Learning," and the highest-centrality keyword is "Algorithm." The most cited author is Kaiming He, and the author with the highest centrality is Yoshua Bengio. Conclusion The application of deep learning in medical image processing is becoming increasingly common, and there are many active authors, institutions, and countries in this field. Current research in medical image processing mainly focuses on deep learning, convolutional neural networks, classification, diagnosis, segmentation, image, algorithm, and artificial intelligence. The research focus and trends are gradually shifting toward more complex and systematic directions, and deep learning technology will continue to play an important role.
Affiliation(s)
- Borui Chen: First School of Clinical Medicine, Fujian University of Traditional Chinese Medicine, Fuzhou, China
- Jing Jin: College of Rehabilitation Medicine, Fujian University of Traditional Chinese Medicine, Fuzhou, China
- Haichao Liu: College of Rehabilitation Medicine, Fujian University of Traditional Chinese Medicine, Fuzhou, China
- Zhengyu Yang: College of Rehabilitation Medicine, Fujian University of Traditional Chinese Medicine, Fuzhou, China
- Haoming Zhu: College of Rehabilitation Medicine, Fujian University of Traditional Chinese Medicine, Fuzhou, China
- Yu Wang: First School of Clinical Medicine, Fujian University of Traditional Chinese Medicine, Fuzhou, China
- Jianping Lin: The School of Health, Fujian Medical University, Fuzhou, China
- Shizhong Wang: The School of Health, Fujian Medical University, Fuzhou, China
- Shaoqing Chen: College of Rehabilitation Medicine, Fujian University of Traditional Chinese Medicine, Fuzhou, China
35
Waseem Sabir M, Farhan M, Almalki NS, Alnfiai MM, Sampedro GA. FibroVit: Vision transformer-based framework for detection and classification of pulmonary fibrosis from chest CT images. Front Med (Lausanne) 2023; 10:1282200. [PMID: 38020169] [PMCID: PMC10666764] [DOI: 10.3389/fmed.2023.1282200]
Abstract
Pulmonary Fibrosis (PF) is an incurable respiratory condition characterized by permanent fibrotic alterations in the pulmonary tissue. Hence, it is crucial to diagnose PF swiftly and precisely. The existing research on deep learning-based pulmonary fibrosis detection methods has limitations, including small dataset sample sizes and a lack of standardization in data preprocessing and evaluation metrics. This study presents a comparative analysis of four vision transformers regarding their efficacy in accurately detecting and classifying patients with Pulmonary Fibrosis and their ability to localize abnormalities within images obtained from Computerized Tomography (CT) scans. The dataset consisted of 13,486 samples selected out of 24,647 from the Pulmonary Fibrosis dataset, which included both PF-positive and normal CT images that underwent preprocessing. The preprocessed images were divided into three sets: the training set, which accounted for 80% of the total pictures; the validation set, which comprised 10%; and the test set, which also consisted of 10%. The vision transformer models, including ViT, MobileViT2, ViTMSN, and BEiT, were subjected to training and validation procedures, during which hyperparameters like the learning rate and batch size were fine-tuned. The overall performance of the optimized architectures has been assessed using various performance metrics to showcase the consistent performance of the fine-tuned model. Regarding performance, ViT showed superior validation and testing accuracy and loss minimization for CT images when trained for a single epoch with a tuned learning rate of 0.0001. The results were as follows: validation accuracy of 99.85%, testing accuracy of 100%, training loss of 0.0075, and validation loss of 0.0047. The experimental evaluation of the independently collected data gives empirical evidence that the optimized Vision Transformer (ViT) architecture exhibited superior performance compared to all other optimized architectures. It achieved a flawless score of 1.0 on various standard performance metrics, including sensitivity, specificity, accuracy, F1-score, precision, recall, Matthews correlation coefficient (MCC), precision-recall area under the curve (AUC-PR), and area under the receiver operating characteristic curve (ROC-AUC). Therefore, the optimized Vision Transformer (ViT) functions as a reliable diagnostic tool for the automated categorization of individuals with pulmonary fibrosis (PF) using chest computed tomography (CT) scans.
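An 80/10/10 train/validation/test split like the one described above can be reproduced with a seeded shuffle; a minimal sketch (the exact set sizes follow from integer truncation in this sketch and are not taken from the paper):

```python
import random

def split_dataset(items, train=0.8, val=0.1, seed=42):
    """Shuffle and split into train/validation/test partitions
    (e.g. 80/10/10); the test set receives the remainder."""
    items = list(items)
    random.Random(seed).shuffle(items)   # seeded for reproducibility
    n_train = int(len(items) * train)
    n_val = int(len(items) * val)
    return (items[:n_train],
            items[n_train:n_train + n_val],
            items[n_train + n_val:])

# 13,486 samples -> 10,788 train, 1,348 validation, 1,350 test.
tr, va, te = split_dataset(range(13486))
```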
Affiliation(s)
- Muhammad Farhan: Department of Computer Science, COMSATS University Islamabad, Sahiwal, Pakistan
- Nabil Sharaf Almalki: Department of Special Education, College of Education, King Saud University, Riyadh, Saudi Arabia
- Mrim M. Alnfiai: Department of Information Technology, College of Computers and Information Technology, Taif University, Taif, Saudi Arabia
- Gabriel Avelino Sampedro: Faculty of Information and Communication Studies, University of the Philippines Open University, Los Baños, Philippines; Center for Computational Imaging and Visual Innovations, De La Salle University, Manila, Philippines
36
Vijh S, Gaurav P, Pandey HM. Hybrid bio-inspired algorithm and convolutional neural network for automatic lung tumor detection. Neural Comput Appl 2023; 35:23711-23724. [DOI: 10.1007/s00521-020-05362-z]
Abstract
In this paper, we propose a hybrid bio-inspired algorithm which takes the merits of the whale optimization algorithm (WOA) and adaptive particle swarm optimization (APSO). The proposed algorithm is referred to as the hybrid WOA_APSO algorithm. We utilize a convolutional neural network (CNN) for classification purposes. Extensive experiments are performed to evaluate the performance of the proposed model. Here, pre-processing and segmentation are performed on 120 lung CT images to obtain the segmented tumored and non-tumored region nodules. Statistical, texture, geometrical, and structural features are extracted from the processed images using different techniques. Optimized feature selection plays a crucial role in determining the accuracy of the classification algorithm. The novel variant of the whale optimization algorithm and adaptive particle swarm optimization, the hybrid bio-inspired WOA_APSO, is proposed for selecting optimized features. Feature selection grouping is applied by embedding linear discriminant analysis, which helps in determining the reduced dimensions of subsets. Twofold performance comparisons are done. First, we compare the performance against different classification techniques such as the support vector machine, artificial neural network (ANN), and CNN. Second, the computational cost of the hybrid WOA_APSO is compared with the standard WOA and APSO algorithms. The experimental results reveal that the proposed algorithm is capable of automatic lung tumor detection and outperforms other state-of-the-art methods on standard quality measures such as accuracy (97.18%), sensitivity (97%), and specificity (98.66%). The results reported in this paper are encouraging and should motivate other researchers to explore more in this direction.
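For context, plain particle swarm optimization can be sketched in a few lines. This is generic PSO minimizing a toy sphere objective, not the paper's WOA_APSO hybrid or its adaptive-inertia variant:

```python
import random

def pso(objective, dim, n_particles=30, iters=200, seed=0):
    """Minimal particle swarm optimization (illustrative only; the
    paper hybridizes whale optimization with an adaptive PSO)."""
    rng = random.Random(seed)
    w, c1, c2 = 0.7, 1.5, 1.5                  # inertia, cognitive, social
    pos = [[rng.uniform(-5, 5) for _ in range(dim)] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]                # each particle's best position
    pbest_f = [objective(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_f[i])
    gbest, gbest_f = pbest[g][:], pbest_f[g]   # swarm's best so far
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                vel[i][d] = (w * vel[i][d]
                             + c1 * rng.random() * (pbest[i][d] - pos[i][d])
                             + c2 * rng.random() * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            f = objective(pos[i])
            if f < pbest_f[i]:
                pbest[i], pbest_f[i] = pos[i][:], f
                if f < gbest_f:
                    gbest, gbest_f = pos[i][:], f
    return gbest, gbest_f

# Sphere function: global minimum 0 at the origin.
best, fbest = pso(lambda x: sum(v * v for v in x), dim=3)
```

In feature-selection use, the objective would score a candidate feature subset (e.g. by classifier accuracy) rather than a sphere function.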
37
Yao H, Tian L, Liu X, Li S, Chen Y, Cao J, Zhang Z, Chen Z, Feng Z, Xu Q, Zhu J, Wang Y, Guo Y, Chen W, Li C, Li P, Wang H, Luo J. Development and external validation of the multichannel deep learning model based on unenhanced CT for differentiating fat-poor angiomyolipoma from renal cell carcinoma: a two-center retrospective study. J Cancer Res Clin Oncol 2023; 149:15827-15838. [PMID: 37672075] [PMCID: PMC10620299] [DOI: 10.1007/s00432-023-05339-0]
Abstract
PURPOSE There are undetectable levels of fat in fat-poor angiomyolipoma. Thus, it is often misdiagnosed as renal cell carcinoma. We aimed to develop and evaluate a multichannel deep learning model for differentiating fat-poor angiomyolipoma (fp-AML) from renal cell carcinoma (RCC). METHODS This two-center retrospective study included 320 patients from the First Affiliated Hospital of Sun Yat-Sen University (FAHSYSU) and 132 patients from the Sun Yat-Sen University Cancer Center (SYSUCC). Data from patients at FAHSYSU were divided into a development dataset (n = 267) and a hold-out dataset (n = 53). The development dataset was used to obtain the optimal combination of CT modality and input channel. The hold-out dataset and SYSUCC dataset were used for independent internal and external validation, respectively. RESULTS In the development phase, models trained on unenhanced CT images performed significantly better than those trained on enhanced CT images based on the fivefold cross-validation. The best patient-level performance, with an average area under the receiver operating characteristic curve (AUC) of 0.951 ± 0.026 (mean ± SD), was achieved using the "unenhanced CT and 7-channel" model, which was finally selected as the optimal model. In the independent internal and external validation, AUCs of 0.966 (95% CI 0.919-1.000) and 0.898 (95% CI 0.824-0.972), respectively, were obtained using the optimal model. In addition, the performance of this model was better on large tumors (≥ 40 mm) in both internal and external validation. CONCLUSION The promising results suggest that our multichannel deep learning classifier based on unenhanced whole-tumor CT images is a highly useful tool for differentiating fp-AML from RCC.
Affiliation(s)
- Haohua Yao: Department of Urology, The First Affiliated Hospital, Sun Yat-Sen University, Guangzhou, China; Department of Urology, Guangdong Provincial People's Hospital, Guangdong Academy of Medical Sciences, Guangzhou, China
- Li Tian: Department of Medical Imaging, State Key Laboratory of Oncology in South China, Collaborative Innovation Center for Cancer Medicine, Sun Yat-Sen University Cancer Center, Guangzhou, China
- Xi Liu: Department of Urology, The First Affiliated Hospital, Sun Yat-Sen University, Guangzhou, China
- Shurong Li: Department of Radiology, The First Affiliated Hospital, Sun Yat-Sen University, Guangzhou, China
- Yuhang Chen: Department of Urology, The First Affiliated Hospital, Sun Yat-Sen University, Guangzhou, China
- Jiazheng Cao: Department of Urology, Jiangmen Central Hospital, Jiangmen, China
- Zhiling Zhang: Department of Urology, State Key Laboratory of Oncology in South China, Collaborative Innovation Center for Cancer Medicine, Sun Yat-Sen University Cancer Center, Guangzhou, China
- Zhenhua Chen: Department of Urology, The First Affiliated Hospital, Sun Yat-Sen University, Guangzhou, China
- Zihao Feng: Department of Urology, The First Affiliated Hospital, Sun Yat-Sen University, Guangzhou, China
- Quanhui Xu: Department of Urology, The First Affiliated Hospital, Sun Yat-Sen University, Guangzhou, China
- Jiangquan Zhu: Department of Urology, The First Affiliated Hospital, Sun Yat-Sen University, Guangzhou, China
- Yinghan Wang: Department of Urology, The First Affiliated Hospital, Sun Yat-Sen University, Guangzhou, China
- Yan Guo: Department of Radiology, The First Affiliated Hospital, Sun Yat-Sen University, Guangzhou, China
- Wei Chen: Department of Urology, The First Affiliated Hospital, Sun Yat-Sen University, Guangzhou, China
- Caixia Li: School of Mathematics and Computational Science, Sun Yat-Sen University, Guangzhou, China
- Peixing Li: School of Mathematics and Computational Science, Sun Yat-Sen University, Guangzhou, China
- Huanjun Wang: Department of Radiology, The First Affiliated Hospital, Sun Yat-Sen University, Guangzhou, China
- Junhang Luo: Department of Urology, The First Affiliated Hospital, Sun Yat-Sen University, Guangzhou, China
38
Duan H, Wang H, Chen Y, Liu F, Tao L. EAMNet: an Alzheimer's disease prediction model based on representation learning. Phys Med Biol 2023; 68:215005. [PMID: 37774713] [DOI: 10.1088/1361-6560/acfec8]
Abstract
Objective. Brain 18F-FDG PET images indicate the metabolic status of brain lesions and offer predictive potential for Alzheimer's disease (AD). However, the complexity of extracting relevant lesion features and dealing with extraneous information in PET images poses challenges for accurate prediction. Approach. To address these issues, we propose an innovative solution called the efficient adaptive multiscale network (EAMNet) for predicting potential patient populations from positron emission tomography (PET) image slices, enabling effective intervention and treatment. First, we introduce an efficient convolutional strategy to enhance the receptive field of PET images during the feature learning process, avoiding excessive extraction of fine tissue features by deep-level networks while reducing the model's computational complexity. Second, we construct a channel attention module that enables the prediction model to adaptively allocate weights between different channels, compensating for the impact of spatial noise in PET images on classification. Finally, we use skip connections to merge features from different-scale lesion information. Main results. Through visualization analysis, our network aligns with regions of interest identified by clinical doctors. Experimental evaluations conducted on the ADNI (Alzheimer's Disease Neuroimaging Initiative) dataset demonstrate the outstanding classification performance of our proposed method. The accuracy rates for AD versus NC (Normal Controls), AD versus MCI (Mild Cognitive Impairment), MCI versus NC, and AD versus MCI versus NC classifications achieve 97.66%, 96.32%, 95.23%, and 95.68%, respectively. Significance. The proposed method surpasses advanced algorithms in the field, providing a hopeful advancement in accurately predicting and classifying Alzheimer's disease using 18F-FDG PET images. The source code has been uploaded to https://github.com/Haoliang-D-AHU/EAMNet/tree/master.
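The channel-attention idea can be illustrated with a squeeze-and-excitation-style gate: global-average-pool each channel, pass through a small bottleneck MLP, and rescale channels by a sigmoid weight. The randomly initialized weights below are purely illustrative, and the paper's module differs in detail:

```python
import numpy as np

def channel_attention(x, reduction=4, seed=0):
    """Squeeze-and-excitation-style channel attention sketch.
    x: feature map of shape (C, H, W)."""
    c = x.shape[0]
    rng = np.random.default_rng(seed)
    w1 = rng.standard_normal((c // reduction, c)) * 0.1   # illustrative weights
    w2 = rng.standard_normal((c, c // reduction)) * 0.1
    squeeze = x.mean(axis=(1, 2))                  # global average pool -> (C,)
    hidden = np.maximum(w1 @ squeeze, 0.0)         # bottleneck + ReLU
    scale = 1.0 / (1.0 + np.exp(-(w2 @ hidden)))   # sigmoid gate per channel
    return x * scale[:, None, None]                # reweight channels

out = channel_attention(np.ones((8, 4, 4)))
```

In a trained network, w1 and w2 are learned, so informative channels receive gates near 1 and noisy channels are suppressed.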
Affiliation(s)
- Haoliang Duan: Anhui Provincial International Joint Research Center for Advanced Technology in Medical Imaging, Anhui University, Hefei, People's Republic of China; School of Computer Science and Technology, Anhui University, Hefei, People's Republic of China
- Huabin Wang: Anhui Provincial International Joint Research Center for Advanced Technology in Medical Imaging, Anhui University, Hefei, People's Republic of China; School of Computer Science and Technology, Anhui University, Hefei, People's Republic of China
- Yonglin Chen: Anhui Provincial International Joint Research Center for Advanced Technology in Medical Imaging, Anhui University, Hefei, People's Republic of China; School of Computer Science and Technology, Anhui University, Hefei, People's Republic of China
- Fei Liu: Anhui Provincial International Joint Research Center for Advanced Technology in Medical Imaging, Anhui University, Hefei, People's Republic of China; School of Computer Science and Technology, Anhui University, Hefei, People's Republic of China
- Liang Tao: Anhui Provincial International Joint Research Center for Advanced Technology in Medical Imaging, Anhui University, Hefei, People's Republic of China; School of Computer Science and Technology, Anhui University, Hefei, People's Republic of China
39
Anjum S, Ahmed I, Asif M, Aljuaid H, Alturise F, Ghadi YY, Elhabob R. Lung Cancer Classification in Histopathology Images Using Multiresolution Efficient Nets. Comput Intell Neurosci 2023; 2023:7282944. [PMID: 37876944] [PMCID: PMC10593544] [DOI: 10.1155/2023/7282944]
Abstract
Histopathological images are very effective for investigating the status of various biological structures and diagnosing diseases like cancer. In addition, digital histopathology increases diagnostic precision and provides better image quality and more detail for the pathologist, with multiple viewing options and team annotations. As a result of these benefits, faster treatment is available, increasing therapy success rates and patient recovery and survival chances. However, the present manual examination of these images is tedious and time-consuming for pathologists. Therefore, reliable automated techniques are needed to effectively classify normal and malignant cancer images. This paper applied a deep learning approach, namely EfficientNet and its variants from B0 to B7. We used different image resolutions for each model, from 224 × 224 pixels to 600 × 600 pixels. We also applied transfer learning and parameter tuning techniques to improve the results and overcome the overfitting problem. We collected the data from the Lung and Colon Cancer Histopathological Image (LC25000) dataset, which consists of 25,000 histopathology images in five classes (lung adenocarcinoma, lung squamous cell carcinoma, benign lung tissue, colon adenocarcinoma, and colon benign tissue). We then preprocessed the dataset to remove noisy images and bring them into a standard format. The models' performance was evaluated in terms of classification accuracy and loss. We achieved good accuracy results for all variants; however, EfficientNetB2 stood out, with an accuracy of 97% for 260 × 260 pixel resolution images.
Affiliation(s)
- Sunila Anjum: Center of Excellence in Information Technology, Institute of Management Sciences, Hayatabad, Peshawar 25000, Pakistan
- Imran Ahmed: School of Computing and Information Science, Anglia Ruskin University, Cambridge, UK
- Muhammad Asif: Department of Computer Science, National Textile University, Faisalabad, Pakistan
- Hanan Aljuaid: Computer Sciences Department, College of Computer and Information Sciences, Princess Nourah bint Abdulrahman University (PNU), P.O. Box 84428, Riyadh 11671, Saudi Arabia
- Fahad Alturise: Department of Computer, College of Science and Arts in Ar Rass, Qassim University, Ar Rass, Qassim, Saudi Arabia
- Yazeed Yasin Ghadi: Department of Software Engineering/Computer Science, Al Ain University, Al Ain, UAE
- Rashad Elhabob: College of Computer Science and Information Technology, Karary University, Omdurman, Sudan
40
Zhang R, Wang L, Cheng S, Song S. MLP-based classification of COVID-19 and skin diseases. Expert Syst Appl 2023; 228:120389. [PMID: 37193247] [PMCID: PMC10170962] [DOI: 10.1016/j.eswa.2023.120389]
Abstract
Recent years have witnessed a growing interest in neural network-based medical image classification methods, which have demonstrated remarkable performance in this field. Typically, convolutional neural network (CNN) architectures have been employed to extract local features. However, the transformer, a newly emerged architecture, has gained popularity due to its ability to explore the relevance of remote elements in an image through a self-attention mechanism. Despite this, it is crucial to establish not only local connectivity but also remote relationships between lesion features, and to capture the overall image structure, to improve image classification accuracy. Therefore, to tackle the aforementioned issues, this paper proposes a network based on multilayer perceptrons (MLPs) that learns the local features of medical images on the one hand and captures the overall feature information in both spatial and channel dimensions on the other, thus utilizing image features effectively. The method was extensively validated on the COVID19-CT and ISIC 2018 datasets, and the results show that it is more competitive and achieves higher performance in medical image classification than existing methods. This suggests that using MLPs to capture image features and establish connections between lesions may provide novel ideas for medical image classification tasks in the future.
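A plain MLP forward pass (ReLU hidden layers, softmax output) can be sketched as below; this is a generic illustration with random weights, not the spatial/channel-mixing architecture proposed in the paper:

```python
import numpy as np

def mlp_forward(x, weights, biases):
    """Forward pass of a small MLP classifier: every layer but the
    last uses ReLU; the last layer's logits go through softmax."""
    h = x
    for w, b in zip(weights[:-1], biases[:-1]):
        h = np.maximum(h @ w + b, 0.0)            # hidden layer + ReLU
    logits = h @ weights[-1] + biases[-1]
    e = np.exp(logits - logits.max(axis=-1, keepdims=True))  # stable softmax
    return e / e.sum(axis=-1, keepdims=True)

rng = np.random.default_rng(0)
ws = [rng.standard_normal((16, 8)), rng.standard_normal((8, 2))]
bs = [np.zeros(8), np.zeros(2)]
probs = mlp_forward(rng.standard_normal((4, 16)), ws, bs)  # (4, 2) class probabilities
```

MLP-mixer-style models apply blocks like this alternately across image-patch (spatial) and channel dimensions, which is what lets them capture both local and global structure.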
Affiliation(s)
- Ruize Zhang: College of Information Science and Engineering, Xinjiang University, Urumqi, 830046, Xinjiang, China
- Liejun Wang: College of Information Science and Engineering, Xinjiang University, Urumqi, 830046, Xinjiang, China
- Shuli Cheng: College of Information Science and Engineering, Xinjiang University, Urumqi, 830046, Xinjiang, China
- Shiji Song: Department of Automation, Tsinghua University, Beijing, 100084, China
41
Ahoor A, Arif F, Sajid MZ, Qureshi I, Abbas F, Jabbar S, Abbas Q. MixNet-LD: An Automated Classification System for Multiple Lung Diseases Using Modified MixNet Model. Diagnostics (Basel) 2023; 13:3195. [PMID: 37892016] [PMCID: PMC10606171] [DOI: 10.3390/diagnostics13203195]
Abstract
The lungs are critical components of the respiratory system because they allow for the exchange of oxygen and carbon dioxide within our bodies. However, a variety of conditions can affect the lungs, resulting in serious health consequences. Lung disease treatment aims to control its severity, which is usually irreversible. The fundamental objective of this work is to build a consistent and automated approach for establishing the severity of lung illness. This paper describes MixNet-LD, a unique automated approach for identifying and categorizing the severity of lung illnesses using an upgraded pre-trained MixNet model. One of the first steps in developing the MixNet-LD system was to build a pre-processing strategy that uses Grad-CAM to decrease noise, highlight irregularities, and ultimately improve the classification performance for lung illnesses. Data augmentation strategies were used to rectify the dataset's unbalanced class distribution and prevent overfitting. Furthermore, dense blocks were used to improve classification outcomes across the four severity categories of lung disorders. In practice, the MixNet-LD model achieves cutting-edge performance while maintaining a manageable model size and complexity. The proposed approach was tested using a variety of datasets gathered from credible internet sources as well as a novel private dataset known as Pak-Lungs. A pre-trained model was used on the dataset to obtain important characteristics from lung disease images. The pictures were then categorized into classes such as normal, COVID-19, pneumonia, tuberculosis, and lung cancer using a linear layer of an SVM classifier with a linear activation function. The MixNet-LD system underwent four distinct tests and achieved a remarkable accuracy of 98.5% on the difficult lung disease dataset. The findings and comparisons demonstrate the MixNet-LD system's improved performance and learning capabilities. They show that the proposed approach may effectively increase the accuracy of classification models in medical image investigations, and this research helps to develop new strategies for effective medical image processing in clinical settings.
Affiliation(s)
- Ayesha Ahoor
- Department of Computer Software Engineering, MCS, National University of Science and Technology, Islamabad 44000, Pakistan
- Fahim Arif
- Department of Computer Software Engineering, MCS, National University of Science and Technology, Islamabad 44000, Pakistan
- Muhammad Zaheer Sajid
- Department of Computer Software Engineering, MCS, National University of Science and Technology, Islamabad 44000, Pakistan
- Imran Qureshi
- College of Computer and Information Sciences, Imam Mohammad Ibn Saud Islamic University (IMSIU), Riyadh 11432, Saudi Arabia
- Fakhar Abbas
- Centre for Trusted Internet and Community, National University of Singapore (NUS), Singapore 119228, Singapore
- Sohail Jabbar
- College of Computer and Information Sciences, Imam Mohammad Ibn Saud Islamic University (IMSIU), Riyadh 11432, Saudi Arabia
- Qaisar Abbas
- College of Computer and Information Sciences, Imam Mohammad Ibn Saud Islamic University (IMSIU), Riyadh 11432, Saudi Arabia

42
Küstner T, Hepp T, Seith F. Multiparametric Oncologic Hybrid Imaging: Machine Learning Challenges and Opportunities. Nuklearmedizin 2023; 62:306-313. [PMID: 37802058 DOI: 10.1055/a-2157-6670] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/08/2023]
Abstract
BACKGROUND Machine learning (ML) is considered an important technology for future data analysis in health care. METHODS The inherently technology-driven fields of diagnostic radiology and nuclear medicine will both benefit from ML in terms of image acquisition and reconstruction. Within the next few years, this will lead to accelerated image acquisition, improved image quality, a reduction of motion artifacts and - for PET imaging - reduced radiation exposure and new approaches for attenuation correction. Furthermore, ML has the potential to support decision making by a combined analysis of data derived from different modalities, especially in oncology. In this context, we see great potential for ML in multiparametric hybrid imaging and the development of imaging biomarkers. RESULTS AND CONCLUSION In this review, we describe the basics of ML, present approaches in hybrid imaging of MRI, CT, and PET, and discuss the specific challenges involved and the steps ahead to make ML a diagnostic and clinical tool in the future. KEY POINTS · ML provides a viable clinical solution for the reconstruction, processing, and analysis of hybrid imaging obtained from MRI, CT, and PET.
Affiliation(s)
- Thomas Küstner
- Medical Image and Data Analysis (MIDAS.lab), Department of Diagnostic and Interventional Radiology, University Hospitals Tubingen, Germany
- Tobias Hepp
- Medical Image and Data Analysis (MIDAS.lab), Department of Diagnostic and Interventional Radiology, University Hospitals Tubingen, Germany
- Ferdinand Seith
- Department of Diagnostic and Interventional Radiology, University Hospitals Tubingen, Germany

43
Okita Y, Hirano T, Wang B, Nakashima Y, Minoda S, Nagahara H, Kumanogoh A. Automatic evaluation of atlantoaxial subluxation in rheumatoid arthritis by a deep learning model. Arthritis Res Ther 2023; 25:181. [PMID: 37749583 PMCID: PMC10518918 DOI: 10.1186/s13075-023-03172-x] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/15/2021] [Accepted: 09/13/2023] [Indexed: 09/27/2023] Open
Abstract
BACKGROUND This work aims to develop a deep learning model for assessing atlantoaxial subluxation (AAS) in rheumatoid arthritis (RA), which can often be ambiguous in clinical practice. METHODS We collected 4691 X-ray images of the cervical spine of 906 patients with RA. Among these images, 3480 were used for training the deep learning model, 803 for validating the model during the training process, and the remaining 408 for testing the performance of the trained model. The two-dimensional keypoint detection model from Deep High-Resolution Representation Learning for Human Pose Estimation was adopted as the base convolutional neural network model. The model inferred four coordinates to calculate the atlantodental interval (ADI) and the space available for the spinal cord (SAC). Finally, these values were compared with those determined by clinicians to evaluate the performance of the model. RESULTS Among the 408 cervical images used for testing, the trained model correctly identified the four coordinates in 99.5% of the dataset. The values of ADI and SAC were positively correlated between the model and the two clinicians. The sensitivity of AAS diagnosis by the model was 0.86 with ADI and 0.97 with SAC; the corresponding specificity was 0.57 and 0.50. CONCLUSIONS We present the development of a deep learning model for the evaluation of cervical lesions in patients with RA. The model was shown to be useful for quantitative evaluation.
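The ADI and SAC measurements derived from the four inferred coordinates amount to distances between pairs of landmarks. A minimal sketch, with hypothetical pixel coordinates standing in for the model's keypoint output (the precise landmark definitions follow clinical measurement conventions not spelled out in the abstract):

```python
import math

def interval(p, q):
    """Euclidean distance (in image units) between two inferred landmarks."""
    return math.hypot(p[0] - q[0], p[1] - q[1])

# Hypothetical landmark coordinates (pixels) as a keypoint model might emit:
atlas_anterior, dens_front, dens_back, atlas_posterior = (100, 50), (112, 50), (130, 50), (180, 50)

adi = interval(atlas_anterior, dens_front)   # atlantodental interval
sac = interval(dens_back, atlas_posterior)   # space available for the spinal cord
print(adi, sac)  # 12.0 50.0
```

A pixel-to-millimeter calibration factor would be applied before comparing such values against clinical thresholds.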
Affiliation(s)
- Yasutaka Okita
- Department of Respiratory Medicine and Clinical Immunology, Osaka University Graduate School of Medicine, 2-2, Yamadaoka, Suita, Osaka, 565-0871, Japan
- Toru Hirano
- Department of Rheumatology, Nishinomiya Municipal Central Hospital, Hyogo, Japan
- Bowen Wang
- Osaka University Institute for Datability Science (IDS), Suita, Osaka, Japan
- Yuta Nakashima
- Osaka University Institute for Datability Science (IDS), Suita, Osaka, Japan
- Saki Minoda
- Department of Respiratory Medicine and Clinical Immunology, Osaka University Graduate School of Medicine, 2-2, Yamadaoka, Suita, Osaka, 565-0871, Japan
- Hajime Nagahara
- Osaka University Institute for Datability Science (IDS), Suita, Osaka, Japan
- Atsushi Kumanogoh
- Department of Respiratory Medicine and Clinical Immunology, Osaka University Graduate School of Medicine, 2-2, Yamadaoka, Suita, Osaka, 565-0871, Japan
- Laboratory of Immunopathology, World Premier International Immunology Frontier Research Center, Osaka University, Suita, Osaka, Japan
- The Institute for Open and Transdisciplinary Research Initiatives (OTRI), Osaka, Japan

44
Yan S, Li J, Wang J, Liu G, Ai A, Liu R. A Novel Strategy for Extracting Richer Semantic Information Based on Fault Detection in Power Transmission Lines. ENTROPY (BASEL, SWITZERLAND) 2023; 25:1333. [PMID: 37761632 PMCID: PMC10529342 DOI: 10.3390/e25091333] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Received: 07/26/2023] [Revised: 09/07/2023] [Accepted: 09/12/2023] [Indexed: 09/29/2023]
Abstract
With the development of the smart grid, traditional defect detection methods for transmission lines are gradually shifting to combinations of robots or drones with deep learning technology to realize automatic defect detection, avoiding the risks and costs of manual inspection. Lightweight embedded devices such as drones and robots have limited computational resources, whereas deep learning mostly relies on deep neural networks with huge computational demands. At the same time, the semantic features of deep networks are richer, which is critical for accurately classifying morphologically similar defects, helping to identify differences and classify transmission line components. Therefore, we propose a method to obtain high-level semantic features even in shallow networks. Combined with transfer learning, we alter the image features (e.g., position and edge connectivity) under self-supervised learning during pre-training. This allows the pre-trained model to learn latent semantic feature representations rather than relying on low-level features. The pre-trained model then directs a shallow network to extract rich semantic features for downstream tasks. In addition, we introduce a category semantic fusion module (CSFM) to enhance feature fusion by utilizing channel attention to capture the global and local information lost during compression and extraction. This module helps to obtain more category semantic information. Our experiments on a self-created transmission line defect dataset show the superiority of modifying low-level image information during pre-training when adjusting the number of network layers and embedding the CSFM. The strategy also generalizes to the publicly available PASCAL VOC dataset. Finally, compared with state-of-the-art methods on the synthetic fog insulator dataset (SFID), the strategy achieves comparable performance with much smaller network depth.
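The abstract does not give the internals of the CSFM, but channel attention of the kind it describes is commonly implemented in the squeeze-and-excitation style: pool each channel to a scalar, pass it through a small bottleneck, and gate the channels with a sigmoid. A NumPy sketch under that assumption (the weights here are random placeholders, not trained parameters):

```python
import numpy as np

def channel_attention(x, w1, w2):
    """Generic squeeze-and-excitation-style channel attention.
    x: feature map (C, H, W); w1: (C, C//r); w2: (C//r, C)."""
    squeeze = x.mean(axis=(1, 2))                    # global average pool -> (C,)
    hidden = np.maximum(squeeze @ w1, 0.0)           # ReLU bottleneck
    gate = 1.0 / (1.0 + np.exp(-(hidden @ w2)))      # sigmoid weight per channel
    return x * gate[:, None, None]                   # reweight channels

rng = np.random.default_rng(0)
C, r = 8, 2
x = rng.normal(size=(C, 16, 16))
out = channel_attention(x, rng.normal(size=(C, C // r)), rng.normal(size=(C // r, C)))
print(out.shape)  # (8, 16, 16)
```

Because the gate lies in (0, 1), each channel is attenuated rather than amplified; the actual CSFM may combine such global gating with local information in a way the abstract does not specify.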
Affiliation(s)
- Shuxia Yan
- School of Electronics and Information Engineering, Tiangong University, Tianjin 300387, China
- Junhuan Li
- School of Electronics and Information Engineering, Tiangong University, Tianjin 300387, China
- Jiachen Wang
- College of Mechanical and Electronic Engineering, Northwest A&F University, Xianyang 712100, China
- Gaohua Liu
- School of Electrical and Information Engineering, Tianjin University, Tianjin 300072, China
- Anhai Ai
- School of Electrical and Information Engineering, Tianjin University, Tianjin 300072, China
- Rui Liu
- School of Software, Tiangong University, Tianjin 300387, China

45
Riaz Z, Khan B, Abdullah S, Khan S, Islam MS. Lung Tumor Image Segmentation from Computer Tomography Images Using MobileNetV2 and Transfer Learning. Bioengineering (Basel) 2023; 10:981. [PMID: 37627866 PMCID: PMC10451633 DOI: 10.3390/bioengineering10080981] [Citation(s) in RCA: 5] [Impact Index Per Article: 2.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/24/2023] [Revised: 08/14/2023] [Accepted: 08/17/2023] [Indexed: 08/27/2023] Open
Abstract
BACKGROUND Lung cancer is one of the most fatal cancers worldwide; malignant lung tumors are characterized by the growth of abnormal cells in lung tissue. Symptoms of lung cancer usually do not appear until the disease is already at an advanced stage. Proper segmentation of cancerous lesions in CT images is the primary detection step towards achieving a completely automated diagnostic system. METHOD In this work, we developed an improved hybrid neural network via the fusion of two architectures, MobileNetV2 and UNET, for the semantic segmentation of malignant lung tumors from CT images. The transfer learning technique was employed: the pre-trained MobileNetV2 was utilized as the encoder of a conventional UNET model for feature extraction. The proposed network is an efficient segmentation approach that performs lightweight filtering to reduce computation and pointwise convolution for building more features. Skip connections with the ReLU activation function were established between the encoder layers of MobileNetV2 and the decoder layers of UNET, improving model convergence by allowing the concatenation of feature maps with different resolutions from encoder to decoder. Furthermore, the model was trained and fine-tuned on the training dataset acquired from the Medical Segmentation Decathlon (MSD) 2018 Challenge. RESULTS The proposed network was tested and evaluated on 25% of the dataset obtained from the MSD, achieving a Dice score of 0.8793, recall of 0.8602 and precision of 0.93. Notably, our technique outperforms currently available networks, which require several phases of training and testing.
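The Dice score reported above is a standard overlap measure between a predicted mask and the ground truth. A self-contained sketch on toy binary masks:

```python
import numpy as np

def dice_score(pred, target, eps=1e-7):
    """Dice coefficient between two binary masks: 2|A∩B| / (|A|+|B|)."""
    pred, target = pred.astype(bool), target.astype(bool)
    inter = np.logical_and(pred, target).sum()
    return (2.0 * inter + eps) / (pred.sum() + target.sum() + eps)

a = np.zeros((8, 8), dtype=int); a[2:6, 2:6] = 1   # 16-pixel predicted tumor
b = np.zeros((8, 8), dtype=int); b[3:7, 3:7] = 1   # 16-pixel ground truth
print(round(dice_score(a, b), 4))  # overlap is 3x3 = 9 pixels -> 18/32 = 0.5625
```

The small epsilon keeps the score defined when both masks are empty; in training, a differentiable soft-Dice variant over probabilities is typically used instead.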
Affiliation(s)
- Zainab Riaz
- Hong Kong Center for Cerebro-Cardiovascular Health Engineering (COCHE), Hong Kong SAR, China
- Bangul Khan
- Hong Kong Center for Cerebro-Cardiovascular Health Engineering (COCHE), Hong Kong SAR, China
- Department of Biomedical Engineering, City University Hongkong, Hong Kong SAR, China
- Saad Abdullah
- Division of Intelligent Future Technologies, School of Innovation, Design and Engineering, Mälardalen University, P.O. Box 883, 721 23 Västerås, Sweden
- Samiullah Khan
- Center for Eye & Vision Research, 17W Science Park, Hong Kong SAR, China
- Md Shohidul Islam
- Hong Kong Center for Cerebro-Cardiovascular Health Engineering (COCHE), Hong Kong SAR, China

46
Cai GW, Liu YB, Feng QJ, Liang RH, Zeng QS, Deng Y, Yang W. Semi-Supervised Segmentation of Interstitial Lung Disease Patterns from CT Images via Self-Training with Selective Re-Training. Bioengineering (Basel) 2023; 10:830. [PMID: 37508857 PMCID: PMC10375953 DOI: 10.3390/bioengineering10070830] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/25/2023] [Revised: 06/22/2023] [Accepted: 07/06/2023] [Indexed: 07/30/2023] Open
Abstract
Accurate segmentation of interstitial lung disease (ILD) patterns from computed tomography (CT) images is an essential prerequisite to treatment and follow-up. However, it is highly time-consuming for radiologists to segment ILD patterns pixel by pixel from CT scans with hundreds of slices. Consequently, it is hard to obtain large amounts of well-annotated data, which poses a huge challenge for data-driven deep learning-based methods. To alleviate this problem, we propose an end-to-end semi-supervised learning framework for the segmentation of ILD patterns (ESSegILD) from CT images via self-training with selective re-training. The proposed ESSegILD model is trained using a large CT dataset with slice-wise sparse annotations, i.e., only a few slices in each CT volume are labeled with ILD patterns. Specifically, we adopt a popular semi-supervised framework, Mean-Teacher, that consists of a teacher model and a student model and uses consistency regularization to encourage consistent outputs from the two models under different perturbations. Furthermore, we introduce the latest self-training technique with a selective re-training strategy to select reliable pseudo-labels generated by the teacher model, which are used to expand the training samples and promote the student model during iterative training. By leveraging consistency regularization and self-training with selective re-training, ESSegILD can effectively utilize unlabeled data from a partially annotated dataset to progressively improve segmentation performance. Experiments were conducted on a dataset of 67 pneumonia patients with incomplete annotations, containing over 11,000 CT images with eight different ILD lung patterns; the results indicate that our proposed method is superior to state-of-the-art methods.
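The selective re-training step — keeping only the teacher's reliable pseudo-labels — can be illustrated with a simple confidence-threshold filter. The 0.9 threshold and three-class probabilities below are illustrative assumptions, not the paper's actual selection criterion:

```python
import numpy as np

def select_pseudo_labels(teacher_probs, threshold=0.9):
    """Keep only unlabeled samples whose teacher prediction is confident.
    teacher_probs: (N, K) softmax outputs over K ILD patterns.
    Returns (indices of kept samples, their pseudo-labels)."""
    confidence = teacher_probs.max(axis=1)
    keep = confidence >= threshold
    return np.flatnonzero(keep), teacher_probs.argmax(axis=1)[keep]

probs = np.array([
    [0.97, 0.02, 0.01],   # confident -> kept as pseudo-label 0
    [0.40, 0.35, 0.25],   # ambiguous -> discarded
    [0.05, 0.92, 0.03],   # confident -> kept as pseudo-label 1
])
idx, labels = select_pseudo_labels(probs)
print(idx.tolist(), labels.tolist())  # [0, 2] [0, 1]
```

In the full framework the kept pairs would be appended to the labeled set for the student's next training round, with the teacher updated between rounds.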
Affiliation(s)
- Guang-Wei Cai
- School of Biomedical Engineering, Southern Medical University, Guangzhou 510515, China
- Yun-Bi Liu
- School of Biomedical Engineering, Southern Medical University, Guangzhou 510515, China
- Qian-Jin Feng
- School of Biomedical Engineering, Southern Medical University, Guangzhou 510515, China
- Rui-Hong Liang
- Department of Medical Imaging Center, Nanfang Hospital of Southern Medical University, Guangzhou 510515, China
- Qing-Si Zeng
- Department of Radiology, The First Affiliated Hospital of Guangzhou Medical University, Guangzhou 510120, China
- Yu Deng
- Department of Radiology, The First Affiliated Hospital of Guangzhou Medical University, Guangzhou 510120, China
- Wei Yang
- School of Biomedical Engineering, Southern Medical University, Guangzhou 510515, China

47
Li H, Yang Z. Torsional nystagmus recognition based on deep learning for vertigo diagnosis. Front Neurosci 2023; 17:1160904. [PMID: 37360163 PMCID: PMC10288185 DOI: 10.3389/fnins.2023.1160904] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/07/2023] [Accepted: 05/22/2023] [Indexed: 06/28/2023] Open
Abstract
Introduction Detection of torsional nystagmus can help identify the canal of origin in benign paroxysmal positional vertigo (BPPV). Most currently available pupil trackers do not detect torsional nystagmus. In view of this, a new deep learning network model was designed for the detection of torsional nystagmus. Methods The dataset comes from the Eye, Ear, Nose and Throat (Eye&ENT) Hospital of Fudan University. The infrared videos were obtained from an eye movement recorder. The dataset contains 24,521 nystagmus videos. All torsional nystagmus videos were annotated by an ophthalmologist at the hospital. 80% of the dataset was used to train the model, and 20% was used to test it. Results Experiments indicate that the designed method can effectively identify torsional nystagmus, with higher recognition accuracy than other methods. It realizes automatic recognition of torsional nystagmus and provides support for posterior and anterior canal BPPV diagnosis. Discussion Our present work complements existing methods of 2D nystagmus analysis and could improve the diagnostic capabilities of VNG in multiple vestibular disorders. Automatically detecting BPPV requires detection of nystagmus in all three planes and identification of a paroxysm; this is the next research work to be carried out.
48
Mathur G, Pandey A, Goyal S. A review on blockchain for DNA sequence: security issues, application in DNA classification, challenges and future trends. MULTIMEDIA TOOLS AND APPLICATIONS 2023:1-23. [PMID: 37362738 PMCID: PMC10209554 DOI: 10.1007/s11042-023-15857-1] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Subscribe] [Scholar Register] [Received: 11/14/2022] [Revised: 03/09/2023] [Accepted: 05/15/2023] [Indexed: 06/28/2023]
Abstract
In biological science, the study of DNA sequences is considered an important factor because it carries the genomic details that can be used by researchers and doctors for the early prediction of disease using DNA classification. The NCBI has the world's largest database of genetic sequences, but the security of this massive amount of data is currently the greatest issue. One of the options is to encrypt these genetic sequences using blockchain technology. As a result, this paper presents a survey on healthcare data breaches, the necessity for blockchain in healthcare, and the number of research studies done in this area. In addition, the report suggests DNA sequence classification for earlier disease identification and evaluates previous work in the field.
Affiliation(s)
- Garima Mathur
- Department of Computer Science and Engineering, UIT, RGPV, Bhopal, India
- Anjana Pandey
- Department of Information Technology, UIT, RGPV, Bhopal, India
- Sachin Goyal
- Department of Information Technology, UIT, RGPV, Bhopal, India

49
Tong Y, Jie B, Wang X, Xu Z, Ding P, He Y. Is Convolutional Neural Network Accurate for Automatic Detection of Zygomatic Fractures on Computed Tomography? J Oral Maxillofac Surg 2023:S0278-2391(23)00393-2. [PMID: 37217163 DOI: 10.1016/j.joms.2023.04.013] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/20/2022] [Revised: 03/29/2023] [Accepted: 04/23/2023] [Indexed: 05/24/2023]
Abstract
PURPOSE Zygomatic fractures involve complex anatomical structures of the mid-face, and their diagnosis can be challenging and labor-consuming. This research aimed to evaluate the performance of an automatic algorithm for the detection of zygomatic fractures based on a convolutional neural network (CNN) on spiral computed tomography (CT). MATERIALS AND METHODS We designed a cross-sectional retrospective diagnostic trial study. Clinical records and CT scans of patients with zygomatic fractures were reviewed. The sample consisted of two types of patients with different zygomatic fracture statuses (positive or negative) at Peking University School of Stomatology from 2013 to 2019. All CT samples were randomly divided into three groups at a ratio of 6:2:2 as training, validation, and test sets, respectively. All CT scans were viewed and annotated by three experienced maxillofacial surgeons, serving as the gold standard. The algorithm consisted of two modules: (1) segmentation of the zygomatic region of the CT based on U-Net, a type of CNN model; (2) detection of fractures based on Deep Residual Network 34 (ResNet34). The region segmentation model was used first to detect and extract the zygomatic region, then the detection model was used to determine the fracture status. The Dice coefficient was used to evaluate the performance of the segmentation algorithm. Sensitivity and specificity were used to assess the performance of the detection model. The covariates included age, gender, duration of injury, and the etiology of fractures. RESULTS A total of 379 patients with an average age of 35.43 ± 12.74 years were included in the study. There were 203 non-fracture patients and 176 fracture patients with 220 sites of zygomatic fractures (44 patients had bilateral fractures). The Dice coefficients between the zygomatic region detection model and the manually labeled gold standard were 0.9337 (coronal plane) and 0.9269 (sagittal plane), respectively. The sensitivity and specificity of the fracture detection model were both 100% (p > .05). CONCLUSION The performance of the CNN-based algorithm was not statistically different from the gold standard (manual diagnosis) for zygomatic fracture detection, supporting its clinical application.
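The sensitivity and specificity used to evaluate the detection model reduce to simple ratios over confusion-matrix counts. A sketch with hypothetical counts (the abstract does not report the full confusion matrix):

```python
def sensitivity_specificity(tp, fn, tn, fp):
    """Sensitivity = TP/(TP+FN); specificity = TN/(TN+FP)."""
    return tp / (tp + fn), tn / (tn + fp)

# Hypothetical test-set counts for a fracture detector:
sens, spec = sensitivity_specificity(tp=44, fn=0, tn=39, fp=2)
print(f"sensitivity={sens:.2f} specificity={spec:.2f}")
```

With zero false negatives the sensitivity is 1.00, matching the kind of perfect-sensitivity result the abstract reports; specificity here is 39/41 ≈ 0.95 under these made-up counts.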
Affiliation(s)
- Yanhang Tong
- Department of Oral and Maxillofacial Surgery, Peking University School and Hospital of Stomatology; National Engineering Laboratory for Digital and Material Technology of Stomatology; Beijing Key Laboratory of Digital Stomatology; National Clinical Research Center for Oral Diseases, Beijing, China
- Bimeng Jie
- Department of Oral and Maxillofacial Surgery, Peking University School and Hospital of Stomatology; National Engineering Laboratory for Digital and Material Technology of Stomatology; Beijing Key Laboratory of Digital Stomatology; National Clinical Research Center for Oral Diseases, Beijing, China
- Xuebing Wang
- Department of Oral and Maxillofacial Surgery, Peking University School and Hospital of Stomatology; National Engineering Laboratory for Digital and Material Technology of Stomatology; Beijing Key Laboratory of Digital Stomatology; National Clinical Research Center for Oral Diseases, Beijing, China
- Yang He
- Department of Oral and Maxillofacial Surgery, Peking University School and Hospital of Stomatology; National Engineering Laboratory for Digital and Material Technology of Stomatology; Beijing Key Laboratory of Digital Stomatology; National Clinical Research Center for Oral Diseases, Beijing, China

50
Pan X, Cong H, Wang X, Zhang H, Ge Y, Hu S. Deep learning-extracted CT imaging phenotypes predict response to total resection in colorectal cancer. Acta Radiol 2023; 64:1783-1791. [PMID: 36762417 DOI: 10.1177/02841851231152685] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 02/11/2023]
Abstract
BACKGROUND Deep learning surpasses many traditional methods for many vision tasks, transforming hierarchical features into more abstract, high-level features. PURPOSE To evaluate the prognostic value of preoperative computed tomography (CT) image texture features and deep learning self-learning high-throughput features (SHF) for postoperative overall survival in patients treated for colorectal cancer (CRC). MATERIAL AND METHODS The dataset consisted of 810 enrolled patients with CRC confirmed between 10 November 2011 and 10 February 2018. SHF, extracted by deep learning with a multi-task training mechanism, and texture features were each extracted from the CT tumor volume region of interest and combined with the Cox proportional hazards (CoxPH) model for initial validation to obtain a RAD score classifying patients into high- and low-risk groups. SHF stability was further validated in combination with the Neural Multi-Task Logistic Regression (N-MTLR) model. The overall recognition ability and accuracy of the CoxPH and N-MTLR models were evaluated by the C-index and Integrated Brier Score (IBS). RESULTS SHF showed a more significant degree of differentiation than texture features (SHF vs. texture features: C-index: 0.884 vs. 0.611; IBS: 0.025 vs. 0.073 in the CoxPH model, and C-index: 0.861 vs. 0.630; IBS: 0.024 vs. 0.065 in N-MTLR). CONCLUSION SHF is superior to texture features and has potential application for preoperative prediction in the individualized treatment of CRC.
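The C-index used to compare the models is Harrell's concordance index: among comparable patient pairs, the fraction whose predicted risk ordering agrees with the observed survival ordering. A brute-force sketch on toy data (real implementations, e.g. in survival-analysis libraries, handle censoring ties more carefully):

```python
import numpy as np

def c_index(risk, time, event):
    """Harrell's concordance index. A pair (i, j) is comparable when patient i
    has an observed event before patient j's time; it is concordant when the
    earlier-failing patient was assigned the higher risk."""
    concordant = ties = comparable = 0
    n = len(time)
    for i in range(n):
        for j in range(n):
            if event[i] and time[i] < time[j]:
                comparable += 1
                if risk[i] > risk[j]:
                    concordant += 1
                elif risk[i] == risk[j]:
                    ties += 1
    return (concordant + 0.5 * ties) / comparable

time = np.array([5.0, 8.0, 12.0, 3.0])   # follow-up times
event = np.array([1, 1, 0, 1])            # 1 = death observed, 0 = censored
risk = np.array([0.9, 0.6, 0.2, 0.95])    # higher risk should mean earlier failure
print(c_index(risk, time, event))  # 1.0: risk ordering matches survival ordering
```

A C-index of 0.5 corresponds to random ranking and 1.0 to perfect ranking, which is why the reported 0.884 vs. 0.611 gap between SHF and texture features is substantial.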
Affiliation(s)
- Xiang Pan
- The School of Artificial Intelligence and Computer Science, Jiangnan University, Wuxi, PR China
- Faculty of Health Sciences, University of Macau, Macau, PR China
- He Cong
- The School of Artificial Intelligence and Computer Science, Jiangnan University, Wuxi, PR China
- Xiaolei Wang
- The School of Artificial Intelligence and Computer Science, Jiangnan University, Wuxi, PR China
- Heng Zhang
- Department of Radiology, Affiliated Hospital of Jiangnan University, Wuxi, Jiangsu, PR China
- Yuxi Ge
- Department of Radiology, Affiliated Hospital of Jiangnan University, Wuxi, Jiangsu, PR China
- Shudong Hu
- Department of Radiology, Affiliated Hospital of Jiangnan University, Wuxi, Jiangsu, PR China