1
Dai Q, Tao Y, Liu D, Zhao C, Sui D, Xu J, Shi T, Leng X, Lu M. Ultrasound radiomics models based on multimodal imaging feature fusion of papillary thyroid carcinoma for predicting central lymph node metastasis. Front Oncol 2023; 13:1261080. PMID: 38023240; PMCID: PMC10643192; DOI: 10.3389/fonc.2023.1261080. Received: 07/18/2023; Accepted: 10/09/2023.
Abstract
Objective: This retrospective study aimed to establish ultrasound radiomics models that predict central lymph node metastasis (CLNM) based on preoperative fusion of multimodal ultrasound imaging features of primary papillary thyroid carcinoma (PTC). Methods: In total, 498 cases of unifocal PTC were randomly divided into a training set of 348 cases and a validation set of 150 cases; a testing set comprised a further 120 cases of PTC collected at different times. Post-operative histopathology served as the gold standard for CLNM. The models were built as follows: regions of interest were segmented in the PTC ultrasound images; multimodal ultrasound image features were extracted with a 50-layer deep residual neural network (ResNet-50); features were then selected and fused; and classification was performed with three classical classifiers: adaptive boosting (AB), linear discriminant analysis (LDA), and support vector machine (SVM). The performances of the unimodal models (Unimodal-AB, Unimodal-LDA, and Unimodal-SVM) and the multimodal models (Multimodal-AB, Multimodal-LDA, and Multimodal-SVM) were evaluated and compared. Results: The Multimodal-SVM model achieved the best predictive performance of all the models (P < 0.05). On the validation and testing sets, its areas under the receiver operating characteristic curve (AUCs) were 0.910 (95% CI, 0.894-0.926) and 0.851 (95% CI, 0.833-0.869), respectively. Its AUCs were 0.920 (95% CI, 0.881-0.959) in the cN0 subgroup-1 cases and 0.828 (95% CI, 0.769-0.887) in the cN0 subgroup-2 cases. Conclusion: An ultrasound radiomics model based only on multimodal ultrasound images of the primary PTC has high clinical value in predicting CLNM and can inform treatment decisions.
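The fusion-then-classify pipeline this abstract describes can be sketched as follows. The deep features are simulated here in place of real ResNet-50 embeddings, and the feature dimensions, the univariate selection step, and all hyperparameters are illustrative assumptions rather than the paper's settings.

```python
import numpy as np
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)
n = 498                                    # cohort size from the abstract
feats_bmode = rng.normal(size=(n, 64))     # simulated deep features, modality 1
feats_doppler = rng.normal(size=(n, 64))   # simulated deep features, modality 2
y = rng.integers(0, 2, size=n)             # 1 = CLNM positive (synthetic labels)

# Feature-level fusion of the per-modality embeddings by concatenation.
fused = np.hstack([feats_bmode, feats_doppler])
X_tr, X_te, y_tr, y_te = train_test_split(fused, y, test_size=0.3, random_state=0)

model = make_pipeline(
    StandardScaler(),
    SelectKBest(f_classif, k=32),          # keep the most informative features
    SVC(kernel="rbf", probability=True, random_state=0),
)
model.fit(X_tr, y_tr)

# AUC was the paper's headline metric; on synthetic noise it stays near chance.
auc = roc_auc_score(y_te, model.predict_proba(X_te)[:, 1])
```

With real ResNet-50 embeddings per modality, only the two feature matrices and the labels would change; the fusion and classification stages stay the same.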
Affiliation(s)
- Quan Dai
- Department of Ultrasound, Sichuan Clinical Research Center for Cancer, Sichuan Cancer Hospital & Institute, Sichuan Cancer Center, Affiliated Cancer Hospital of University of Electronic Science and Technology of China, Medicine & Laboratory of Translational Research in Ultrasound Theranostics, Chengdu, China
- Yi Tao
- Department of Ultrasound, West China Hospital of Sichuan University, Chengdu, China
- Dongmei Liu
- Department of Ultrasound, The Second Affiliated Hospital of Harbin Medical University, Harbin, Heilongjiang, China
- Chen Zhao
- Department of Ultrasound, The Second Affiliated Hospital of Harbin Medical University, Harbin, Heilongjiang, China
- Dong Sui
- State Key Laboratory of Virtual Reality Technology and Systems, Beihang University, Beijing, China
- School of Electrical and Information Engineering, Beijing University of Civil Engineering and Architecture, Beijing, China
- Jinshun Xu
- Department of Ultrasound, Sichuan Clinical Research Center for Cancer, Sichuan Cancer Hospital & Institute, Sichuan Cancer Center, Affiliated Cancer Hospital of University of Electronic Science and Technology of China, Medicine & Laboratory of Translational Research in Ultrasound Theranostics, Chengdu, China
- Tiefeng Shi
- Department of General Surgery, The Second Affiliated Hospital of Harbin Medical University, Harbin, Heilongjiang, China
- Xiaoping Leng
- Department of Ultrasound, The Second Affiliated Hospital of Harbin Medical University, Harbin, Heilongjiang, China
- Man Lu
- Department of Ultrasound, Sichuan Clinical Research Center for Cancer, Sichuan Cancer Hospital & Institute, Sichuan Cancer Center, Affiliated Cancer Hospital of University of Electronic Science and Technology of China, Medicine & Laboratory of Translational Research in Ultrasound Theranostics, Chengdu, China
2
Arslan M, Haider A, Khurshid M, Abu Bakar SSU, Jani R, Masood F, Tahir T, Mitchell K, Panchagnula S, Mandair S. From Pixels to Pathology: Employing Computer Vision to Decode Chest Diseases in Medical Images. Cureus 2023; 15:e45587. PMID: 37868395; PMCID: PMC10587792; DOI: 10.7759/cureus.45587. Accepted: 09/19/2023.
Abstract
Radiology has been a pioneer in the healthcare industry's digital transformation, incorporating digital imaging systems such as the picture archiving and communication system (PACS) and teleradiology over the past thirty years. This shift has reshaped radiology services, positioning the field at a crucial juncture for potential evolution into an integrated diagnostic service through artificial intelligence and machine learning, which offer advanced tools for this transformation. The radiology community has developed computer-aided diagnosis (CAD) tools using machine learning techniques, notably deep convolutional neural networks (CNNs), for medical image pattern recognition. However, although development dates back to the 1990s, the integration of CAD tools into clinical practice has been hindered by challenges in workflow integration, unclear business models, and limited clinical benefits. This comprehensive review focuses on the detection of chest-related diseases through techniques such as chest X-rays (CXRs), magnetic resonance imaging (MRI), nuclear medicine, and computed tomography (CT) scans. It examines researchers' use of computer-aided programs for disease detection, addressing four key areas: the role of computer-aided programs in advancing disease detection; recent developments in MRI, CXR, radioactive tracers, and CT scans for chest disease identification; research gaps that must be closed for more effective development; and the incorporation of machine learning programs into diagnostic tools.
Affiliation(s)
- Muhammad Arslan
- Department of Emergency Medicine, Royal Infirmary of Edinburgh, National Health Service (NHS) Lothian, Edinburgh, GBR
- Ali Haider
- Department of Allied Health Sciences, The University of Lahore, Gujrat Campus, Gujrat, PAK
- Mohsin Khurshid
- Department of Microbiology, Government College University Faisalabad, Faisalabad, PAK
- Rutva Jani
- Department of Internal Medicine, C. U. Shah Medical College and Hospital, Gujarat, IND
- Fatima Masood
- Department of Internal Medicine, Gulf Medical University, Ajman, ARE
- Tuba Tahir
- Department of Business Administration, Iqra University, Karachi, PAK
- Kyle Mitchell
- Department of Internal Medicine, University of Science, Arts and Technology, Olveston, MSR
- Smruthi Panchagnula
- Department of Internal Medicine, Ganni Subbalakshmi Lakshmi (GSL) Medical College, Hyderabad, IND
- Satpreet Mandair
- Department of Internal Medicine, Medical University of the Americas, Charlestown, KNA
3
Wali A, Ali S, Naseer A, Karim S, Alamgir Z. Computer-aided COVID-19 diagnosis: a possibility? J Exp Theor Artif Intell 2023. DOI: 10.1080/0952813x.2023.2165722.
Affiliation(s)
- Aamir Wali
- FAST School of Computing, National University of Computer and Emerging Sciences, Faisal Town, Lahore, Pakistan
- Shahroze Ali
- FAST School of Computing, National University of Computer and Emerging Sciences, Faisal Town, Lahore, Pakistan
- Asma Naseer
- FAST School of Computing, National University of Computer and Emerging Sciences, Faisal Town, Lahore, Pakistan
- Saira Karim
- FAST School of Computing, National University of Computer and Emerging Sciences, Faisal Town, Lahore, Pakistan
- Zareen Alamgir
- FAST School of Computing, National University of Computer and Emerging Sciences, Faisal Town, Lahore, Pakistan
4
COVID-19 Chest X-ray Classification and Severity Assessment Using Convolutional and Transformer Neural Networks. Appl Sci (Basel) 2022. DOI: 10.3390/app12104861.
Abstract
The coronavirus pandemic started in Wuhan, China, in December 2019 and put millions of people in a difficult situation. This fatal virus spread to over 227 countries, and the number of infected patients rose to over 400 million cases, causing over 6 million deaths worldwide. Given the serious consequences of this virus, it is necessary to develop a detection method that can respond quickly to prevent the spread of COVID-19. Using chest X-ray images to detect COVID-19 is one promising technique; however, with a large number of new COVID-19 infections every day, the number of radiologists available to read chest X-ray images is insufficient. A computer-aided system that helps doctors determine COVID-19 cases instantly and automatically is therefore needed. With the recent emergence of deep learning methods for medical and biomedical applications, convolutional neural network and transformer models applied to chest X-ray images can supplement COVID-19 testing. In this paper, we classify three types of chest X-ray, normal, pneumonia, and COVID-19, using deep learning methods on a customized dataset, and we also conduct an experiment on the COVID-19 severity assessment task using a tailored dataset. Five deep learning models were used in our experiments: DenseNet121, ResNet50, InceptionNet, Swin Transformer, and a hybrid EfficientNet-DOLG neural network. The results indicate that chest X-rays and deep learning can reliably support doctors in COVID-19 identification and severity assessment tasks.
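The three-way evaluation described above (normal vs. pneumonia vs. COVID-19) can be scored with a confusion matrix and per-class metrics; the predictions below are synthetic stand-ins for any of the five networks' outputs, and the ~80% accuracy rate is an arbitrary assumption for the sketch.

```python
import numpy as np
from sklearn.metrics import confusion_matrix, f1_score

classes = ["normal", "pneumonia", "covid19"]
rng = np.random.default_rng(1)
y_true = rng.integers(0, 3, size=300)

# Simulate a classifier that is right roughly 80% of the time.
y_pred = np.where(rng.random(300) < 0.8, y_true, rng.integers(0, 3, size=300))

cm = confusion_matrix(y_true, y_pred, labels=[0, 1, 2])  # rows: truth, cols: prediction
correct_per_class = dict(zip(classes, cm.diagonal()))    # correctly classified counts
macro_f1 = f1_score(y_true, y_pred, average="macro")     # balances the three classes
```

Macro-averaged F1 is worth reporting alongside accuracy here because the three classes are rarely balanced in chest X-ray datasets.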
5
Subhalakshmi RT, Balamurugan SAA, Sasikala S. Deep learning based fusion model for COVID-19 diagnosis and classification using computed tomography images. Concurrent Eng Res Appl 2022; 30:116-127. PMID: 35382156; PMCID: PMC8968394; DOI: 10.1177/1063293x211021435.
Abstract
Recently, the COVID-19 pandemic has grown drastically, while only a limited quantity of rapid testing kits is available. Automated COVID-19 diagnosis models are therefore essential to identify the disease from radiological images. Earlier studies focused on developing Artificial Intelligence (AI) techniques for COVID-19 diagnosis using X-ray images. This paper aims to develop a Deep Learning based MultiModal Fusion technique, called DLMMF, for COVID-19 diagnosis and classification from Computed Tomography (CT) images. The proposed DLMMF model operates in three main stages: Wiener Filtering (WF) based pre-processing, feature extraction, and classification. The model fuses deep features from the VGG16 and Inception v4 models. Finally, a Gaussian Naïve Bayes (GNB) classifier identifies and classifies the test CT images into distinct class labels. The DLMMF model was experimentally validated on the open-source COVID-CT dataset, which comprises a total of 760 CT images. The results showed superior performance, with a maximum sensitivity of 96.53%, specificity of 95.81%, accuracy of 96.81%, and F-score of 96.73%.
Affiliation(s)
- RT Subhalakshmi
- Department of Information Technology, Sethu Institute of Technology, Virudhunagar, Tamil Nadu, India
- S Appavu alias Balamurugan
- Department of Computer Science, Central University of Tamil Nadu, Thiruvarur, Tamil Nadu, India
- S Sasikala
- Department of Computer Science and Engineering, Velammal College of Engineering and Technology, Madurai, Tamil Nadu, India
6
Ha YJ, Lee G, Yoo M, Jung S, Yoo S, Kim J. Feasibility study of multi-site split learning for privacy-preserving medical systems under data imbalance constraints in COVID-19, X-ray, and cholesterol dataset. Sci Rep 2022; 12:1534. PMID: 35087165; PMCID: PMC8795162; DOI: 10.1038/s41598-022-05615-y. Received: 09/09/2021; Accepted: 01/11/2022.
Abstract
It seems as though progressively more people are racing to upload content, data, and information online, and hospitals have not neglected this trend either. Hospitals are now at the forefront of multi-site medical data sharing, providing ground-breaking advancements in the way health records are shared and patients are diagnosed. Sharing medical data is essential in modern medical research; yet, as with all data-sharing technology, the challenge is to balance improved treatment with the protection of patients' personal information. This paper provides a novel split learning algorithm, termed "multi-site split learning", which enables the secure transfer of medical data between multiple hospitals without fear of exposing the personal data contained in patient records. It also explores how the number of end-systems and the ratio of data imbalance affect deep learning performance. A guideline is given empirically for the optimal configuration of split learning that ensures the privacy of patient data while achieving performance. We argue the benefits of our multi-site split learning algorithm, especially its privacy-preserving properties, using CT scans of COVID-19 patients, X-ray bone scans, and cholesterol-level medical data.
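The core split-learning idea described above can be sketched in a few lines of numpy: each hospital (client) runs only the front segment of the network on its private records and ships the intermediate ("smashed") activations, never the raw data, to a server that runs the remaining layers. The layer sizes, the single shared cut layer, and the two-hospital imbalance ratio are illustrative assumptions, and training of the weights is omitted.

```python
import numpy as np

rng = np.random.default_rng(3)

def relu(x):
    return np.maximum(x, 0.0)

W_client = rng.normal(size=(30, 8))   # front segment, held at each hospital
W_server = rng.normal(size=(8, 2))    # back segment, held at the server

def client_forward(private_records):
    """Runs on hospital hardware; only activations cross the network."""
    return relu(private_records @ W_client)

def server_forward(smashed):
    """The server finishes the forward pass without seeing raw records."""
    logits = smashed @ W_server
    return logits.argmax(axis=1)

# Two hospitals with imbalanced record counts, echoing the paper's setting.
hospital_a = rng.normal(size=(100, 30))
hospital_b = rng.normal(size=(10, 30))
smashed = np.vstack([client_forward(hospital_a), client_forward(hospital_b)])
preds = server_forward(smashed)
```

In full split learning, gradients flow back across the same cut: the server back-propagates to the smashed layer and returns those gradients so each client can update its front segment locally.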
Affiliation(s)
- Yoo Jeong Ha
- Korea University, School of Electrical Engineering, Seoul, 02841, Republic of Korea
- Gusang Lee
- Korea University, School of Electrical Engineering, Seoul, 02841, Republic of Korea
- Minjae Yoo
- Korea University, School of Electrical Engineering, Seoul, 02841, Republic of Korea
- Soyi Jung
- Hallym University, School of Software, Chuncheon, 24252, Republic of Korea
- Seehwan Yoo
- Department of Mobile Systems Engineering, Dankook University, Yongin, 16890, Republic of Korea
- Joongheon Kim
- Korea University, School of Electrical Engineering, Seoul, 02841, Republic of Korea
7
Rajesh Kannan S, Sivakumar J, Ezhilarasi P. Automatic detection of COVID-19 in chest radiographs using serially concatenated deep and handcrafted features. J Xray Sci Technol 2022; 30:231-244. PMID: 34924434; DOI: 10.3233/xst-211050.
Abstract
Since the occurrence rate of infectious disease in the human community is gradually rising for varied reasons, appropriate diagnosis and treatment are essential to control its spread. The recently discovered COVID-19 is one such contagious disease, which has infected numerous people globally, and its spread is being arrested through several diagnostic and handling actions. Medical image-supported diagnosis of COVID-19 infection is an approved clinical practice. This research aims to develop a new Deep Learning Method (DLM) to detect COVID-19 infection using chest X-rays. The proposed work implements two detection methods: (i) deep features optimized with a Firefly Algorithm (FA), and (ii) combined deep and machine features optimized with FA. A 5-fold cross-validation scheme is used to train and test the detection methods. Analysing each approach individually confirms that the deep-feature-based technique achieves a detection accuracy of > 92% with an SVM-RBF classifier, while combining deep and machine features achieves > 96% accuracy with a Fine KNN classifier. In the future, this technique may play a vital role in testing and validating X-ray images collected from patients suffering from infectious diseases.
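The serial concatenation described above (deep features joined end-to-end with handcrafted ones, evaluated by 5-fold cross-validation with an RBF SVM) can be sketched as below. Both feature matrices are synthetic stand-ins, the feature widths are arbitrary, and the paper's Firefly Algorithm selection step is omitted from this sketch.

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(4)
n = 200
deep = rng.normal(size=(n, 40))          # stand-in CNN (deep) features
handcrafted = rng.normal(size=(n, 12))   # e.g. texture / shape descriptors
y = rng.integers(0, 2, size=n)           # 1 = COVID-positive (synthetic labels)

# Serial concatenation: deep features followed by handcrafted ones.
X = np.hstack([deep, handcrafted])

# 5-fold cross-validation with an RBF-kernel SVM, as in the paper.
clf = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
scores = cross_val_score(clf, X, y, cv=5)
```

Standardizing before the SVM matters here because deep and handcrafted features typically live on very different scales, and an RBF kernel is sensitive to that.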
Affiliation(s)
- J Sivakumar
- St. Joseph's College of Engineering, OMR, Chennai, India
- P Ezhilarasi
- St. Joseph's College of Engineering, OMR, Chennai, India