51
Lin CL, Wu MH, Ho YH, Lin FY, Lu YH, Hsueh YY, Chen CC. Multispectral Imaging-Based System for Detecting Tissue Oxygen Saturation With Wound Segmentation for Monitoring Wound Healing. IEEE J Transl Eng Health Med 2024; 12:468-479. [PMID: 38899145; PMCID: PMC11186648; DOI: 10.1109/jtehm.2024.3399232] [Received: 01/15/2024; Revised: 04/13/2024; Accepted: 05/07/2024]
Abstract
OBJECTIVE Blood circulation is an important indicator of wound healing. In this study, a tissue oxygen saturation detecting (TOSD) system based on multispectral imaging (MSI) is proposed to quantify the degree of tissue oxygen saturation (StO2) in cutaneous tissues. METHODS A wound segmentation algorithm automatically segments wound and skin areas, eliminating the need for manual labeling and enabling adaptive tissue optics. Animal experiments were conducted on six mice, each observed seven times, once every two days. The TOSD system illuminated cutaneous tissues with two wavelengths of light: red ([Formula: see text] nm) and near-infrared ([Formula: see text] nm), and StO2 levels were calculated from images captured by a monochrome camera. The wound segmentation algorithm, a ResNet34-based U-Net, was integrated with computer vision techniques to improve its performance. RESULTS Animal experiments revealed that the wound segmentation algorithm achieved a Dice score of 93.49%. The StO2 levels determined by the TOSD system varied significantly among the phases of wound healing, and changes in StO2 levels were detected before laser speckle contrast imaging (LSCI) detected changes in blood flux. Moreover, statistical features extracted from the TOSD system and LSCI were used in principal component analysis (PCA) to visualize the different wound healing phases. The average silhouette coefficients of the TOSD system with segmentation (ResNet34-based U-Net) and of LSCI were 0.2890 and 0.0194, respectively. CONCLUSION By detecting the StO2 levels of cutaneous tissues with the TOSD system and segmentation, the phases of wound healing were accurately distinguished. This method can support medical personnel in conducting precise wound assessments.
Clinical and Translational Impact Statement: This study supports efforts in monitoring StO2 levels, wound segmentation, and wound healing phase classification to improve the efficiency and accuracy of preclinical research in the field.
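The Dice score reported above (93.49% for the wound masks) is the standard overlap measure between a predicted and a reference binary mask. A minimal sketch with toy numpy masks, assuming nothing about the paper's own implementation:

```python
import numpy as np

def dice_score(pred: np.ndarray, truth: np.ndarray) -> float:
    """Dice similarity coefficient between two binary masks."""
    pred = pred.astype(bool)
    truth = truth.astype(bool)
    intersection = np.logical_and(pred, truth).sum()
    denom = pred.sum() + truth.sum()
    if denom == 0:
        return 1.0  # both masks empty: perfect agreement by convention
    return 2.0 * intersection / denom

# Toy 2x3 masks: two overlapping pixels, one disagreement on each side.
pred = np.array([[0, 1, 1], [0, 1, 0]])
truth = np.array([[0, 1, 0], [0, 1, 1]])
print(round(dice_score(pred, truth), 4))  # 0.6667
```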
Affiliation(s)
- Chih-Lung Lin: Department of Electrical Engineering, National Cheng Kung University, Tainan 70101, Taiwan
- Meng-Hsuan Wu: Department of Electrical Engineering, National Cheng Kung University, Tainan 70101, Taiwan
- Yuan-Hao Ho: Department of Electrical Engineering, National Cheng Kung University, Tainan 70101, Taiwan
- Fang-Yi Lin: Department of Electrical Engineering, National Cheng Kung University, Tainan 70101, Taiwan
- Yu-Hsien Lu: Department of Electrical Engineering, National Cheng Kung University, Tainan 70101, Taiwan
- Yuan-Yu Hsueh: Division of Plastic and Reconstructive Surgery, National Cheng Kung University Hospital, Tainan 70428, Taiwan; Department of Surgery, National Cheng Kung University Hospital, Tainan 70428, Taiwan
- Chia-Chen Chen: Department of Electrical Engineering, National Cheng Kung University, Tainan 70101, Taiwan
52
Khodadadi Shoushtari F, Dehkordi ANV, Sina S. Quantitative and Visual Analysis of Data Augmentation and Hyperparameter Optimization in Deep Learning-Based Segmentation of Low-Grade Glioma Tumors Using Grad-CAM. Ann Biomed Eng 2024; 52:1359-1377. [PMID: 38409433; DOI: 10.1007/s10439-024-03461-9] [Received: 10/25/2023; Accepted: 01/29/2024]
Abstract
This study presents a quantitative and visual investigation of the effectiveness of data augmentation and hyperparameter optimization on the accuracy of deep learning-based segmentation of LGG tumors. The study employed the MobileNetV2 and ResNet backbones with atrous convolution in the DeepLabV3+ structure. The Grad-CAM tool was also used to interpret the effect of augmentation and network optimization on segmentation performance. A wide-ranging investigation was performed to optimize the network hyperparameters, and 35 different models were examined to evaluate different data augmentation techniques. The results indicated that incorporating data augmentation techniques and optimization can improve the performance of segmenting brain LGG tumors by up to 10%. Our extensive investigation of the data augmentation techniques indicated that enlarging the dataset with 90°- and 225°-rotated data and with up-down and left-right flipping are the most effective techniques. MobileNetV2 as the backbone, "Focal Loss" as the loss function, and "Adam" as the optimizer showed the best results. The optimal model (DLG-Net) achieved an overall accuracy of 96.1% with a loss value of 0.006. Specifically, the segmentation performance for Whole Tumor (WT), Tumor Core (TC), and Enhanced Tumor (ET) reached a Dice Similarity Coefficient (DSC) of 89.4%, 70.1%, and 49.9%, respectively. Simultaneous visual and quantitative assessment of data augmentation and network optimization can lead to an optimal model with reasonable performance in segmenting LGG tumors.
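The flips and right-angle rotations singled out as most effective are lossless one-line array operations; a 225° rotation would additionally need interpolation (e.g. `scipy.ndimage.rotate`). A minimal numpy sketch of the lossless subset:

```python
import numpy as np

def augment(image: np.ndarray) -> list[np.ndarray]:
    """Return the original image plus simple geometric augmentations."""
    return [
        image,
        np.rot90(image, k=1),  # 90-degree rotation (counter-clockwise)
        np.flipud(image),      # up-down flip
        np.fliplr(image),      # left-right flip
    ]

img = np.arange(4).reshape(2, 2)
variants = augment(img)
print(len(variants))  # 4
```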
Affiliation(s)
- Azimeh N V Dehkordi: Department of Physics, Faculty of Computer Engineering, Najafabad Branch, Islamic Azad University, Najafabad, Iran; Najafabad Branch, Islamic Azad University, Najafabad 8514143131, Iran
- Sedigheh Sina: Nuclear Engineering Department, Shiraz University, Shiraz, Iran; Radiation Research Center, Shiraz University, Shiraz, Iran
53
Turk O, Ozhan D, Acar E, Akinci TC, Yilmaz M. Automatic detection of brain tumors with the aid of ensemble deep learning architectures and class activation map indicators by employing magnetic resonance images. Z Med Phys 2024; 34:278-290. [PMID: 36593139; PMCID: PMC11156777; DOI: 10.1016/j.zemedi.2022.11.010] [Received: 06/15/2022; Accepted: 11/25/2022]
Abstract
As with every life-threatening disease, early diagnosis of brain tumors plays a life-saving role. A brain tumor forms when brain cells transform from their normal structures into abnormal ones, and these abnormal cells accumulate into masses within brain regions. Many different techniques are employed to detect these tumor masses, the most common being Magnetic Resonance Imaging (MRI). This study aims to automatically detect brain tumors with the help of ensemble deep learning architectures (ResNet50, VGG19, InceptionV3 and MobileNet) and Class Activation Map (CAM) indicators applied to MRI images. The proposed system was implemented in three stages. In the first stage, it was determined whether there was a tumor in the MR images (binary approach). In the second stage, different tumor types (Normal, Glioma Tumor, Meningioma Tumor, Pituitary Tumor) were detected from MR images (multi-class approach). In the last stage, CAMs of each tumor group were created as an alternative tool to facilitate the work of specialists in tumor detection. The results showed that the overall accuracy of the binary approach was 100% on the ResNet50, InceptionV3 and MobileNet architectures, and 99.71% on the VGG19 architecture. Moreover, in the multi-class approach, accuracies of 96.45% with ResNet50, 93.40% with VGG19, 85.03% with InceptionV3 and 89.34% with MobileNet were obtained.
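A common way to combine such architectures into an ensemble is to average their per-class softmax outputs and take the argmax; the abstract does not specify this paper's fusion rule, so the probabilities and the averaging below are purely illustrative:

```python
import numpy as np

# Hypothetical softmax outputs from three backbones for one MRI slice,
# over the four classes used in the multi-class approach.
p_resnet    = np.array([0.05, 0.80, 0.10, 0.05])
p_inception = np.array([0.10, 0.70, 0.15, 0.05])
p_mobilenet = np.array([0.05, 0.75, 0.10, 0.10])

# Unweighted soft-voting ensemble: mean of the class distributions.
ensemble = np.mean([p_resnet, p_inception, p_mobilenet], axis=0)
classes = ["normal", "glioma", "meningioma", "pituitary"]
print(classes[int(np.argmax(ensemble))])  # glioma
```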
Affiliation(s)
- Omer Turk: Department of Computer Programming, Vocational School, Mardin Artuklu University, 47500 Mardin, Turkey
- Davut Ozhan: Department of Electronics, Vocational School, Mardin Artuklu University, 47500 Mardin, Turkey
- Emrullah Acar: Department of Electrical-Electronics Engineering, Architecture and Engineering Faculty, Batman University, Batman, Turkey
- Tahir Cetin Akinci: WCGEC, University of California Riverside, Riverside, CA, USA; Department of Electrical Engineering, Istanbul Technical University, Istanbul, Turkey
- Musa Yilmaz: Department of Electrical-Electronics Engineering, Architecture and Engineering Faculty, Batman University, Batman, Turkey
54
Zhang H, Lin F, Zheng T, Gao J, Wang Z, Zhang K, Zhang X, Xu C, Zhao F, Xie H, Li Q, Cao K, Gu Y, Mao N. Artificial intelligence-based classification of breast lesion from contrast enhanced mammography: a multicenter study. Int J Surg 2024; 110:2593-2603. [PMID: 38748500; PMCID: PMC11093474; DOI: 10.1097/js9.0000000000001076] [Received: 10/23/2023; Accepted: 12/24/2023]
Abstract
PURPOSE The authors aimed to establish an artificial intelligence (AI)-based method for preoperative diagnosis of breast lesions from contrast enhanced mammography (CEM) and to explore its biological mechanism. MATERIALS AND METHODS This retrospective study includes 1430 eligible patients who underwent CEM examination from June 2017 to July 2022, divided into a construction set (n=1101), an internal test set (n=196), and a pooled external test set (n=133). The AI model adopted RefineNet as a backbone network, and an attention sub-network, the convolutional block attention module (CBAM), was built upon the backbone for adaptive feature refinement. An XGBoost classifier was used to integrate the refined deep learning features with clinical characteristics to differentiate benign and malignant breast lesions. The authors further retrained the AI model to distinguish in situ from invasive carcinoma among breast cancer candidates. RNA-sequencing data from 12 patients were used to explore the underlying biological basis of the AI prediction. RESULTS The AI model achieved an area under the curve of 0.932 in diagnosing benign and malignant breast lesions in the pooled external test set, better than the best-performing deep learning model, the radiomics model, and the radiologists. The AI model also achieved satisfactory results (area under the curve from 0.788 to 0.824) for the diagnosis of in situ versus invasive carcinoma in the test sets. Further, the biological exploration revealed that the high-risk group was associated with pathways such as extracellular matrix organization. CONCLUSIONS The AI model based on CEM and clinical characteristics had good predictive performance in the diagnosis of breast lesions.
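Feeding refined deep features together with clinical characteristics into a classifier, as described here, amounts to standardising each block and concatenating them into one feature matrix. A sketch with hypothetical feature dimensions and made-up clinical columns (the real pipeline hands the fused vector to XGBoost):

```python
import numpy as np

# Hypothetical refined deep-learning features for 5 lesions (128-dim each)
# and two made-up clinical columns (e.g. age, lesion size in cm).
rng = np.random.default_rng(0)
deep_features = rng.normal(size=(5, 128))
clinical = np.array([[54, 1.2], [61, 0.8], [47, 2.1], [58, 1.5], [66, 0.9]])

def zscore(x: np.ndarray) -> np.ndarray:
    """Standardise columns so neither block dominates by scale."""
    return (x - x.mean(axis=0)) / (x.std(axis=0) + 1e-8)

fused = np.concatenate([zscore(deep_features), zscore(clinical)], axis=1)
print(fused.shape)  # (5, 130)
```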
Affiliation(s)
- Haicheng Zhang: Big Data and Artificial Intelligence Laboratory; Department of Radiology
- Xiang Zhang: Department of Radiology, Sun Yat-Sen Memorial Hospital, Sun Yat-Sen University, Guangzhou, Guangdong
- Cong Xu: Physical Examination Center, Yantai Yuhuangding Hospital, Qingdao University
- Feng Zhao: School of Computer Science and Technology, Shandong Technology and Business University, Yantai
- Qin Li: Department of Radiology, Weifang Hospital of Traditional Chinese Medicine, Weifang, Shandong; Department of Radiology, Fudan University Shanghai Cancer Center, Shanghai
- Kun Cao: Department of Radiology, Beijing Cancer Hospital, Beijing, P. R. China
- Yajia Gu: Department of Radiology, Fudan University Shanghai Cancer Center, Shanghai
- Ning Mao: Big Data and Artificial Intelligence Laboratory; Department of Radiology; Shandong Provincial Key Medical and Health Laboratory of Intelligent Diagnosis and Treatment for Women's Diseases (Yantai Yuhuangding Hospital), Yantai, Shandong, P. R. China
55
Abd-Ellah MK, Awad AI, Khalaf AAM, Ibraheem AM. Automatic brain-tumor diagnosis using cascaded deep convolutional neural networks with symmetric U-Net and asymmetric residual-blocks. Sci Rep 2024; 14:9501. [PMID: 38664436; PMCID: PMC11045751; DOI: 10.1038/s41598-024-59566-7] [Received: 09/09/2023; Accepted: 04/12/2024]
Abstract
The use of various kinds of magnetic resonance imaging (MRI) techniques for examining brain tissue has increased significantly in recent years, and manual investigation of each of the resulting images can be a time-consuming task. This paper presents an automatic brain-tumor diagnosis system that uses a CNN for detection, classification, and segmentation of glioblastomas; the latter stage seeks to segment tumors inside glioma MRI images. The structure of the developed multi-unit system consists of two stages. The first stage is responsible for tumor detection and classification by categorizing brain MRI images into normal, high-grade glioma (glioblastoma), and low-grade glioma. The uniqueness of the proposed network lies in its use of different levels of features, including local and global paths. The second stage is responsible for tumor segmentation, and skip connections and residual units are used during this step. Using 1800 images extracted from the BraTS 2017 dataset, the detection and classification stage was found to achieve a maximum accuracy of 99%. The segmentation stage was then evaluated using the Dice score, specificity, and sensitivity. The results showed that the suggested deep-learning-based system ranks highest among a variety of different strategies reported in the literature.
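The sensitivity and specificity used to evaluate the segmentation stage derive directly from voxel-level confusion counts; a minimal sketch with hypothetical counts:

```python
def sensitivity_specificity(tp: int, fn: int, tn: int, fp: int):
    """Sensitivity (recall on positives) and specificity from a confusion matrix."""
    sens = tp / (tp + fn) if tp + fn else 0.0
    spec = tn / (tn + fp) if tn + fp else 0.0
    return sens, spec

# Hypothetical voxel counts for one segmented tumor region.
sens, spec = sensitivity_specificity(tp=90, fn=10, tn=880, fp=20)
print(sens, round(spec, 3))  # 0.9 0.978
```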
Affiliation(s)
- Ali Ismail Awad: College of Information Technology, United Arab Emirates University, P.O. Box 15551, Al Ain, United Arab Emirates; Faculty of Engineering, Al-Azhar University, P.O. Box 83513, Qena, Egypt
- Ashraf A M Khalaf: Department of Electrical Engineering, Faculty of Engineering, Minia University, Minia 61519, Egypt
- Amira Mofreh Ibraheem: Faculty of Artificial Intelligence, Egyptian Russian University, Cairo 11829, Egypt
56
Dehghani F, Karimian A, Arabi H. Joint Brain Tumor Segmentation from Multi-magnetic Resonance Sequences through a Deep Convolutional Neural Network. J Med Signals Sens 2024; 14:9. [PMID: 38993203; PMCID: PMC11111160; DOI: 10.4103/jmss.jmss_13_23] [Received: 04/21/2023; Revised: 07/07/2023; Accepted: 07/31/2023]
Abstract
Background Brain tumor segmentation contributes greatly to diagnosis and treatment planning. Manual brain tumor delineation is a time-consuming and tedious task whose quality varies with the radiologist's skill. Automated brain tumor segmentation is therefore of high importance and is not subject to inter- or intra-observer variability. The objective of this study is to automate the delineation of brain tumors from the fluid-attenuated inversion recovery (FLAIR), T1-weighted (T1W), T2-weighted (T2W), and T1W contrast-enhanced (T1ce) magnetic resonance (MR) sequences through a deep learning approach, with a focus on determining which MR sequence alone, or which combination thereof, leads to the highest accuracy. Methods The BraTS-2020 challenge dataset, containing 370 subjects with four MR sequences and manually delineated tumor masks, is used to train a residual neural network. This network is trained and assessed separately for each MR sequence (single-channel input) and for combinations thereof (dual- or multi-channel input). Results The quantitative assessment of the single-channel models reveals that the FLAIR sequence yields higher segmentation accuracy than its counterparts, with a 0.77 ± 0.10 Dice index. Among the dual-channel models, the model with FLAIR and T2W inputs exhibits higher performance, with a 0.80 ± 0.10 Dice index. Joint tumor segmentation on all four MR sequences yields the highest overall segmentation accuracy, with a 0.82 ± 0.09 Dice index. Conclusion The FLAIR MR sequence is the best choice for tumor segmentation on a single MR sequence, while joint segmentation on all four MR sequences yields higher tumor delineation accuracy.
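The single-, dual-, and multi-channel inputs compared here are different stackings of the MR sequences along a channel axis. A sketch with blank slices at the BraTS in-plane size (240×240), purely to show the shapes:

```python
import numpy as np

# Placeholder 2-D slices for the four MR sequences.
flair = np.zeros((240, 240), dtype=np.float32)
t1w   = np.zeros((240, 240), dtype=np.float32)
t2w   = np.zeros((240, 240), dtype=np.float32)
t1ce  = np.zeros((240, 240), dtype=np.float32)

single_channel = flair[..., None]                            # FLAIR only
dual_channel   = np.stack([flair, t2w], axis=-1)             # FLAIR + T2W
multi_channel  = np.stack([flair, t1w, t2w, t1ce], axis=-1)  # all four

print(single_channel.shape, dual_channel.shape, multi_channel.shape)
# (240, 240, 1) (240, 240, 2) (240, 240, 4)
```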
Affiliation(s)
- Farzaneh Dehghani: Department of Biomedical Engineering, Faculty of Engineering, University of Isfahan, Isfahan, Iran
- Alireza Karimian: Department of Biomedical Engineering, Faculty of Engineering, University of Isfahan, Isfahan, Iran
- Hossein Arabi: Department of Medical Imaging, Division of Nuclear Medicine and Molecular Imaging, Geneva University Hospital, CH-1211 Geneva 4, Switzerland
57
Asiri AA, Shaf A, Ali T, Aamir M, Irfan M, Alqahtani S. Enhancing brain tumor diagnosis: an optimized CNN hyperparameter model for improved accuracy and reliability. PeerJ Comput Sci 2024; 10:e1878. [PMID: 38660148; PMCID: PMC11041936; DOI: 10.7717/peerj-cs.1878] [Received: 11/06/2023; Accepted: 01/24/2024]
Abstract
Hyperparameter tuning plays a pivotal role in the accuracy and reliability of convolutional neural network (CNN) models used in brain tumor diagnosis. These hyperparameters exert control over various aspects of the neural network, encompassing feature extraction, spatial resolution, non-linear mapping, convergence speed, and model complexity. We propose a meticulously refined CNN hyperparameter model designed to optimize critical parameters, including filter number and size, stride, padding, pooling techniques, activation functions, learning rate, batch size, and the number of layers. Our approach leverages two publicly available brain tumor MRI datasets for research purposes. The first dataset comprises a total of 7,023 human brain images, categorized into four classes: glioma, meningioma, no tumor, and pituitary. The second dataset contains 253 images classified as "yes" and "no." Our approach delivers exceptional results, demonstrating an average 94.25% precision, recall, and F1-score with 96% accuracy for dataset 1, and an average 87.5% precision, recall, and F1-score with 88% accuracy for dataset 2. To affirm the robustness of our findings, we perform a comprehensive comparison with existing techniques, revealing that our method consistently outperforms these approaches. By systematically fine-tuning these critical hyperparameters, our model not only enhances its performance but also bolsters its generalization capabilities. This optimized CNN model provides medical experts with a more precise and efficient tool for supporting their decision-making processes in brain tumor diagnosis.
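Tuning the listed hyperparameters typically means enumerating configurations from a search space; the values below are a hypothetical slice of such a space, not the grid this paper searched:

```python
from itertools import product

# A hypothetical slice of the hyperparameter search space.
grid = {
    "filters":       [16, 32, 64],
    "kernel_size":   [3, 5],
    "learning_rate": [1e-2, 1e-3, 1e-4],
    "batch_size":    [16, 32],
}

# Cartesian product of all value lists -> one dict per configuration.
configs = [dict(zip(grid, values)) for values in product(*grid.values())]
print(len(configs))  # 36
```

Each configuration would then be trained and scored, keeping the one with the best validation metric.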
Affiliation(s)
- Abdullah A. Asiri: Radiological Sciences Department, College of Applied Medical Sciences, Najran University, Najran, Saudi Arabia
- Ahmad Shaf: Department of Computer Science, COMSATS University Islamabad, Sahiwal, Punjab, Pakistan
- Tariq Ali: Department of Computer Science, COMSATS University Islamabad, Sahiwal, Punjab, Pakistan
- Muhammad Aamir: Department of Computer Science, COMSATS University Islamabad, Sahiwal, Punjab, Pakistan
- Muhammad Irfan: Electrical Engineering Department, College of Engineering, Najran University, Najran, Saudi Arabia
- Saeed Alqahtani: Radiological Sciences Department, College of Applied Medical Sciences, Najran University, Najran, Saudi Arabia
58
Fan H, Luo Y, Gu F, Tian B, Xiong Y, Wu G, Nie X, Yu J, Tong J, Liao X. Artificial intelligence-based MRI radiomics and radiogenomics in glioma. Cancer Imaging 2024; 24:36. [PMID: 38486342; PMCID: PMC10938723; DOI: 10.1186/s40644-024-00682-y] [Received: 10/10/2022; Accepted: 03/03/2024]
Abstract
The specific genetic subtypes that gliomas exhibit result in variable clinical courses and the need to involve multidisciplinary teams of neurologists, epileptologists, neuro-oncologists and neurosurgeons. Currently, the diagnosis of gliomas pivots mainly on preliminary radiological findings and the subsequent definitive surgical diagnosis (via surgical sampling). Radiomics and radiogenomics offer the potential to precisely diagnose and predict survival and treatment responses via morphological, textural, and functional features derived from MRI data, as well as genomic data. Despite these advantages, standardized feature-extraction and analysis processes are still lacking across research groups, which has made external validation infeasible. Radiomics and radiogenomics can be used to better understand the genomic basis of gliomas, such as tumor spatial heterogeneity, treatment response, molecular classifications and tumor microenvironment immune infiltration. These novel techniques have also been used to predict histological features, grade or even overall survival in gliomas. In this review, workflows of radiomics and radiogenomics are elucidated, alongside recent research on machine learning and artificial intelligence in glioma.
Affiliation(s)
- Haiqing Fan, Yilin Luo, Fang Gu, Bin Tian, Yongqin Xiong, Guipeng Wu, Xin Nie, Jing Yu, Juan Tong, Xin Liao: Department of Medical Imaging, The Affiliated Hospital of Guizhou Medical University, Guiyang 550000, Guizhou, China
59
Hamghalam M, Simpson AL. Medical image synthesis via conditional GANs: Application to segmenting brain tumours. Comput Biol Med 2024; 170:107982. [PMID: 38266466; DOI: 10.1016/j.compbiomed.2024.107982] [Received: 06/03/2022; Revised: 12/30/2023; Accepted: 01/13/2024]
Abstract
Accurate brain tumour segmentation is critical for tasks such as surgical planning, diagnosis, and analysis, with magnetic resonance imaging (MRI) being the preferred modality due to its excellent visualisation of brain tissues. However, the wide intensity range of voxel values in MR scans often results in significant overlap between the density distributions of different tumour tissues, leading to reduced contrast and segmentation accuracy. This paper introduces a novel framework based on conditional generative adversarial networks (cGANs) aimed at enhancing the contrast of tumour subregions for both voxel-wise and region-wise segmentation approaches. We present two models: Enhancement and Segmentation GAN (ESGAN), which combines classifier loss with adversarial loss to predict central labels of input patches, and Enhancement GAN (EnhGAN), which generates high-contrast synthetic images with reduced inter-class overlap. These synthetic images are then fused with corresponding modalities to emphasise meaningful tissues while suppressing weaker ones. We also introduce a novel generator that adaptively calibrates voxel values within input patches, leveraging fully convolutional networks. Both models employ a multi-scale Markovian network as a GAN discriminator to capture local patch statistics and estimate the distribution of MR images in complex contexts. Experimental results on publicly available MR brain tumour datasets demonstrate the competitive accuracy of our models compared to current brain tumour segmentation techniques.
Affiliation(s)
- Mohammad Hamghalam: School of Computing, Queen's University, Kingston, ON, Canada; Department of Electrical Engineering, Qazvin Branch, Islamic Azad University, Qazvin, Iran
- Amber L Simpson: School of Computing, Queen's University, Kingston, ON, Canada; Department of Biomedical and Molecular Sciences, Queen's University, Kingston, ON, Canada
60
Sierra-Jerez F, Martinez F. A non-aligned translation with a neoplastic classifier regularization to include vascular NBI patterns in standard colonoscopies. Comput Biol Med 2024; 170:108008. [PMID: 38277922; DOI: 10.1016/j.compbiomed.2024.108008] [Received: 08/25/2023; Revised: 12/21/2023; Accepted: 01/13/2024]
Abstract
Polyp vascular patterns are key to categorizing colorectal cancer malignancy. These patterns are typically observed in situ from specialized narrow-band images (NBI). Nonetheless, such vascular characterization is lost in standard colonoscopies (the primary attention mechanism). Moreover, even for NBI observations, the categorization remains biased by expert observation, with reported classification errors from 59.5% to 84.2%. This work introduces an end-to-end computational strategy to enhance in situ standard colonoscopy observations with the vascular patterns typically observed through NBI. These synthetic enhanced images are achieved by adjusting a deep representation under a non-aligned translation task from optical colonoscopy (OC) to NBI. The introduced scheme includes an architecture to discriminate enhanced neoplastic patterns, achieving a remarkable separation in the embedding representation. The proposed approach was validated on a public dataset with a total of 76 sequences, including standard optical sequences and the respective NBI observations. The enhanced optical sequences were automatically classified into adenoma and hyperplastic samples, achieving an F1-score of 0.86. To measure the sensitivity of the proposed approach, serrated samples were projected through the trained architecture. In this experiment, statistically significant differences among the three classes (p-value < 0.05, Mann-Whitney U test) were reported. This work showed remarkable polyp discrimination results in enhancing OC sequences with typical NBI patterns. The method also learns polyp class distributions under the unpaired criterion (close to real practice), with the capability to separate serrated samples from adenoma and hyperplastic ones.
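The Mann-Whitney U statistic behind the reported test can be computed by direct pair counting; the embedding distances below are hypothetical, chosen only to illustrate the counting rule:

```python
def mann_whitney_u(xs, ys):
    """U statistic for sample xs vs ys by direct pair counting (ties count 0.5)."""
    u = 0.0
    for x in xs:
        for y in ys:
            if x > y:
                u += 1.0
            elif x == y:
                u += 0.5
    return u

# Hypothetical embedding distances for two polyp classes.
adenoma      = [0.9, 1.1, 1.4, 1.7]
hyperplastic = [0.3, 0.5, 0.8, 1.0]
print(mann_whitney_u(adenoma, hyperplastic))  # 15.0
```

In practice `scipy.stats.mannwhitneyu` also returns the p-value; the hand-rolled version above only shows where U comes from.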
Affiliation(s)
- Franklin Sierra-Jerez: Biomedical Imaging, Vision and Learning Laboratory (BIVL(2)ab), Universidad Industrial de Santander (UIS), Colombia
- Fabio Martinez: Biomedical Imaging, Vision and Learning Laboratory (BIVL(2)ab), Universidad Industrial de Santander (UIS), Colombia
61
Tandon R, Agrawal S, Rathore NPS, Mishra AK, Jain SK. A systematic review on deep learning-based automated cancer diagnosis models. J Cell Mol Med 2024; 28:e18144. [PMID: 38426930; PMCID: PMC10906380; DOI: 10.1111/jcmm.18144] [Received: 06/28/2023; Revised: 12/08/2023; Accepted: 01/16/2024]
Abstract
Deep learning (DL) is gaining importance due to its wide range of applications, and many researchers have utilized DL models for the automated diagnosis of cancer patients. This paper provides a systematic review of such models. Initially, various DL models for cancer diagnosis are presented. Five major categories of cancer are considered: breast, lung, liver, brain and cervical, as these categories have very high incidence and mortality rates. A comparative analysis of different types of DL models for diagnosing cancer at early stages is drawn from the latest research articles, spanning 2016 to 2022. After comprehensive comparative analysis, it is found that most researchers achieved appreciable accuracy with convolutional neural network models, typically utilizing pretrained models for automated diagnosis. Various shortcomings of the existing DL-based automated cancer diagnosis models are also presented. Finally, future directions are discussed to facilitate further research on automated cancer diagnosis.
Affiliation(s)
| | | | | | - Abhinava K. Mishra
- Molecular, Cellular and Developmental Biology DepartmentUniversity of California Santa BarbaraSanta BarbaraCaliforniaUSA
| | | |
62
Dheepak G, Anita Christaline J, Vaishali D. MEHW-SVM multi-kernel approach for improved brain tumour classification. IET Image Process 2024; 18:856-874. [DOI: 10.1049/ipr2.12990] [Received: 06/29/2023; Accepted: 11/06/2023]
Abstract
The human brain, the primary constituent of the nervous system, exhibits distinctive complexities that present considerable difficulties for healthcare practitioners, specifically in categorizing brain tumours. Magnetic resonance imaging is a widely favoured imaging modality for detecting brain tumours due to its extensive range of image characteristics and its use of non-ionizing radiation. The primary objective of the current investigation is to differentiate between three distinct classifications of brain tumours by introducing a novel methodology. A combined feature-extraction technique that integrates a novel global grey-level co-occurrence matrix and local binary patterns is employed, offering a comprehensive representation of the structural and textural information contained within the images. Principal component analysis is used for feature selection and dimensionality reduction, improving the model's efficiency. This study presents a novel framework incorporating four separate kernel functions, Minkowski-Gaussian, exponential support vector machine (SVM), histogram intersection SVM, and wavelet kernels, into an SVM classifier. The ensemble kernel is specifically designed to classify glioma, meningioma, and pituitary tumours; its implementation enhances the model's robustness and adaptability, surpassing the performance of conventional single-kernel SVM approaches. This study contributes substantially to medical image classification by utilizing innovative kernel functions and advanced machine-learning techniques, and the findings demonstrate the potential for enhanced diagnostic accuracy in brain tumour cases. The presented approach shows promise in effectively addressing the intricate challenges associated with classifying brain tumours.
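A weighted sum of valid kernels is itself a valid kernel, which is what makes ensemble kernels like this one possible. A sketch combining an RBF kernel (as a stand-in for the exponential kernel; the paper's exact form and weights are not given here) with the histogram intersection kernel named in the abstract:

```python
import numpy as np

def rbf_kernel(X, Y, gamma=0.5):
    """Gaussian RBF kernel matrix between row vectors of X and Y."""
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def histogram_intersection_kernel(X, Y):
    """Histogram intersection: sum of element-wise minima per pair."""
    return np.minimum(X[:, None, :], Y[None, :, :]).sum(-1)

# Three toy 2-bin feature histograms (each summing to 1).
X = np.array([[0.2, 0.8], [0.5, 0.5], [0.9, 0.1]])

# Hypothetical equal-weight ensemble of the two kernels.
K = 0.5 * rbf_kernel(X, X) + 0.5 * histogram_intersection_kernel(X, X)
print(K.shape)  # (3, 3)
```

A precomputed matrix like `K` can be passed to an SVM that accepts custom kernels (e.g. `sklearn.svm.SVC(kernel="precomputed")`).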
Affiliation(s)
- G. Dheepak
- Department of Electronics and Communication Engineering, Faculty of Engineering and Technology, SRM Institute of Science and Technology, Vadapalani Campus, Chennai, Tamil Nadu, India
- J. Anita Christaline
- Department of Electronics and Communication Engineering, Faculty of Engineering and Technology, SRM Institute of Science and Technology, Vadapalani Campus, Chennai, Tamil Nadu, India
- D. Vaishali
- Department of Electronics and Communication Engineering, Faculty of Engineering and Technology, SRM Institute of Science and Technology, Vadapalani Campus, Chennai, Tamil Nadu, India

63
Liu H, Huang J, Li Q, Guan X, Tseng M. A deep convolutional neural network for the automatic segmentation of glioblastoma brain tumor: Joint spatial pyramid module and attention mechanism network. Artif Intell Med 2024; 148:102776. [PMID: 38325925 DOI: 10.1016/j.artmed.2024.102776] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/10/2023] [Revised: 12/20/2023] [Accepted: 01/14/2024] [Indexed: 02/09/2024]
Abstract
This study proposes a deep convolutional neural network for the automatic segmentation of glioblastoma brain tumors, aiming at replacing the manual segmentation method that is both time-consuming and labor-intensive. Automatic fine-grained segmentation of sub-regions from multi-sequence magnetic resonance images faces many challenges because of the complexity and variability of glioblastomas, such as the loss of boundary information, misclassified regions, and sub-region size. To overcome these challenges, this study introduces a spatial pyramid module and an attention mechanism into the automatic segmentation algorithm, which focuses on multi-scale spatial details and context information. The proposed method has been tested on the public benchmark BraTS 2018, BraTS 2019, BraTS 2020, and BraTS 2021 datasets. The Dice scores on the enhanced tumor, whole tumor, and tumor core were 79.90%, 89.63%, and 85.89% on the BraTS 2018 dataset; 77.14%, 89.58%, and 83.33% on the BraTS 2019 dataset; 77.80%, 90.04%, and 83.18% on the BraTS 2020 dataset; and 83.48%, 90.70%, and 88.94% on the BraTS 2021 dataset, offering performance on par with that of state-of-the-art methods with only 1.90 M parameters. In addition, our approach significantly reduced the requirements for experimental equipment, and the average time taken to segment one case was only 1.48 s; these two benefits render the proposed network highly competitive for clinical practice.
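The Dice scores reported above measure overlap between a predicted and a reference segmentation mask. A minimal sketch on flat binary masks (1 = tumor, 0 = background); the masks are made up for illustration:

```python
def dice_score(pred, target):
    """Dice = 2|A ∩ B| / (|A| + |B|), in [0, 1]; higher is better."""
    intersection = sum(p * t for p, t in zip(pred, target))
    total = sum(pred) + sum(target)
    if total == 0:          # both masks empty: treat as perfect agreement
        return 1.0
    return 2.0 * intersection / total

pred   = [1, 1, 0, 1, 0, 0]
target = [1, 0, 0, 1, 1, 0]
score = dice_score(pred, target)   # 2*2 / (3+3) ≈ 0.667
```

For multi-class benchmarks such as BraTS, this is computed per region (enhanced tumor, whole tumor, tumor core) on the flattened voxel masks.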
Affiliation(s)
- Hengxin Liu
- School of Microelectronics, Tianjin University, Tianjin, China
- Jingteng Huang
- School of Microelectronics, Tianjin University, Tianjin, China
- Qiang Li
- School of Microelectronics, Tianjin University, Tianjin, China
- Xin Guan
- School of Microelectronics, Tianjin University, Tianjin, China
- Minglang Tseng
- Institute of Innovation and Circular Economy, Asia University, Taichung, Taiwan; Department of Medical Research, China Medical University Hospital, China Medical University, Taichung, Taiwan; UKM-Graduate School of Business, Universiti Kebangsaan Malaysia, 43000 Bangi, Selangor, Malaysia; Department of Industrial Engineering, Khon Kaen University, 40002, Thailand

64
Cao Y, Feng J, Wang C, Yang F, Wang X, Xu J, Huang C, Zhang S, Li Z, Mao L, Zhang T, Jia B, Li T, Li H, Zhang B, Shi H, Li D, Zhang N, Yu Y, Meng X, Zhang Z. LNAS: a clinically applicable deep-learning system for mediastinal enlarged lymph nodes segmentation and station mapping without regard to the pathogenesis using unenhanced CT images. LA RADIOLOGIA MEDICA 2024; 129:229-238. [PMID: 38108979 DOI: 10.1007/s11547-023-01747-x] [Citation(s) in RCA: 1] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Received: 05/10/2023] [Accepted: 10/20/2023] [Indexed: 12/19/2023]
Abstract
BACKGROUND The accurate identification and evaluation of lymph nodes on CT images is of great significance for disease diagnosis, treatment, and prognosis. PURPOSE To assess lymph node segmentation, size, and station by artificial intelligence (AI) on unenhanced chest CT images and evaluate its value in clinical scenarios. MATERIAL AND METHODS This retrospective study proposed an end-to-end Lymph Nodes Analysis System (LNAS) consisting of three models: the Lymph Node Segmentation model (LNS), the Mediastinal Organ Segmentation model (MOS), and the Lymph Node Station Registration model (LNR). We selected a healthy chest CT image as the template image and annotated 14 lymph node station masks according to the IASLC to build the lymph node station mapping template. The exact contours and stations of the lymph nodes were annotated by two junior radiologists and reviewed by a senior radiologist. Patients aged 18 and above who had undergone unenhanced chest CT and had at least one suspicious enlarged mediastinal lymph node in imaging reports were included. Patients who had undergone thoracic surgery in the past 2 weeks, or whose CT images had artifacts affecting lymph node observation by radiologists, were excluded. The system was trained on 6725 consecutive chest CTs from Tianjin Medical University General Hospital, among which 6249 patients had suspicious enlarged mediastinal lymph nodes. A total of 519 consecutive chest CTs from Qilu Hospital of Shandong University (Qingdao) were used for external validation. The gold standard for each CT was determined by two radiologists and reviewed by one senior radiologist. RESULTS The patient-level sensitivity of the LNAS system reached 93.94% and 92.89% in the internal and external test datasets, respectively, and the lesion-level sensitivity (recall) reached 89.48% and 85.97%. In the man-machine comparison, AI significantly shortened the average reading time (p < 0.001) and had better lesion-level and patient-level sensitivities. CONCLUSION AI improved the sensitivity of lymph node segmentation by radiologists, with an advantage in reading time.
Affiliation(s)
- Yang Cao
- Department of Radiology, Tianjin Medical University General Hospital, Tianjin, 300052, China
- Jintang Feng
- Department of Radiology, Tianjin Medical University General Hospital, Tianjin, 300052, China
- Department of Radiology, Tianjin Chest Hospital, Tianjin, China
- Fan Yang
- Department of Radiology, Tianjin Medical University General Hospital, Tianjin, 300052, China
- Xiaomeng Wang
- Department of Radiology, Tianjin Medical University General Hospital, Tianjin, 300052, China
- Li Mao
- Deepwise AI Lab, Beijing, China
- Tianzhu Zhang
- Department of Radiology, Tianjin Medical University General Hospital, Tianjin, 300052, China
- Bingzhen Jia
- Department of Radiology, Tianjin Medical University General Hospital, Tianjin, 300052, China
- Tongli Li
- Department of Radiology, Tianjin Medical University General Hospital, Tianjin, 300052, China
- Hui Li
- Department of Radiology, Qilu Hospital (Qingdao), Cheeloo College of Medicine, Shandong University, Jinan, Shandong, China
- Bingjin Zhang
- Department of Radiology, Qilu Hospital (Qingdao), Cheeloo College of Medicine, Shandong University, Jinan, Shandong, China
- Hongmei Shi
- Department of Radiology, Qilu Hospital (Qingdao), Cheeloo College of Medicine, Shandong University, Jinan, Shandong, China
- Dong Li
- Department of Radiology, Tianjin Medical University General Hospital, Tianjin, 300052, China
- Ningnannan Zhang
- Department of Radiology, Tianjin Medical University General Hospital, Tianjin, 300052, China
- Yizhou Yu
- Deepwise AI Lab, Beijing, China
- Department of Computer Science, The University of Hong Kong, Hong Kong, China
- Xiangshui Meng
- Department of Radiology, Qilu Hospital (Qingdao), Cheeloo College of Medicine, Shandong University, Jinan, Shandong, China
- Zhang Zhang
- Department of Radiology, Tianjin Medical University General Hospital, Tianjin, 300052, China

65
Wu X, Yang X, Li Z, Liu L, Xia Y. Multimodal brain tumor image segmentation based on DenseNet. PLoS One 2024; 19:e0286125. [PMID: 38236898 PMCID: PMC10796062 DOI: 10.1371/journal.pone.0286125] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/14/2022] [Accepted: 05/09/2023] [Indexed: 01/22/2024] Open
Abstract
A brain tumor magnetic resonance image processing algorithm can help doctors diagnose and treat a patient's condition, which has important application significance in clinical medicine. This paper proposes a network model based on the combination of U-Net and DenseNet to solve the problems of class imbalance in multi-modal brain tumor image segmentation and the loss of effective information features caused by the integration of features in the traditional U-Net network. The standard convolution blocks of the encoding and decoding paths of the original network are upgraded to dense blocks, which enhances the transmission of features. A mixed loss function composed of the binary cross-entropy loss and the Tversky coefficient replaces the original single cross-entropy loss, restraining the influence of irrelevant features on segmentation accuracy. Compared with U-Net, U-Net++, and PA-Net, the proposed algorithm significantly improves segmentation accuracy, reaching Dice coefficients of 0.846, 0.861, and 0.782 on WT, TC, and ET, respectively, and PPV coefficients of 0.849, 0.883, and 0.786, respectively. Compared with the traditional U-Net, the Dice coefficients of the proposed algorithm are higher by 0.8%, 4.0%, and 1.4%, respectively, and the PPV coefficients in the tumor core and tumor enhancement areas increase by 3% and 1.2%, respectively. The proposed algorithm performs best in tumor core segmentation, where its sensitivity reaches 0.924, giving it good research significance and application value.
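The mixed loss described above can be sketched as a weighted sum of binary cross-entropy and a Tversky term; the Tversky index generalizes Dice with separate penalties for false positives (alpha) and false negatives (beta). The alpha/beta values and the equal weighting below are illustrative assumptions, not the paper's settings:

```python
import math

def bce_loss(probs, targets, eps=1e-7):
    """Mean binary cross-entropy over predicted probabilities."""
    n = len(probs)
    return -sum(t * math.log(p + eps) + (1 - t) * math.log(1 - p + eps)
                for p, t in zip(probs, targets)) / n

def tversky_loss(probs, targets, alpha=0.3, beta=0.7, eps=1e-7):
    """1 - Tversky index; alpha weights false positives, beta false negatives."""
    tp = sum(p * t for p, t in zip(probs, targets))
    fp = sum(p * (1 - t) for p, t in zip(probs, targets))
    fn = sum((1 - p) * t for p, t in zip(probs, targets))
    return 1.0 - (tp + eps) / (tp + alpha * fp + beta * fn + eps)

def mixed_loss(probs, targets, lam=0.5):
    """Convex combination of the two terms."""
    return lam * bce_loss(probs, targets) + (1 - lam) * tversky_loss(probs, targets)
```

Setting beta > alpha, as here, penalizes missed tumor voxels more than false alarms, which is one common way such a loss counteracts class imbalance.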
Affiliation(s)
- Xiaoqin Wu
- School of Medical Technology and Engineering, Henan University of Science and Technology, Luoyang, P. R. China
- Xiaoli Yang
- School of Medical Technology and Engineering, Henan University of Science and Technology, Luoyang, P. R. China
- Zhenwei Li
- School of Medical Technology and Engineering, Henan University of Science and Technology, Luoyang, P. R. China
- Lipei Liu
- School of Medical Technology and Engineering, Henan University of Science and Technology, Luoyang, P. R. China
- Yuxin Xia
- School of Medical Technology and Engineering, Henan University of Science and Technology, Luoyang, P. R. China

66
Sulaiman A, Anand V, Gupta S, Al Reshan MS, Alshahrani H, Shaikh A, Elmagzoub MA. An intelligent LinkNet-34 model with EfficientNetB7 encoder for semantic segmentation of brain tumor. Sci Rep 2024; 14:1345. [PMID: 38228639 PMCID: PMC10792164 DOI: 10.1038/s41598-024-51472-2] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/14/2023] [Accepted: 01/05/2024] [Indexed: 01/18/2024] Open
Abstract
A brain tumor is an uncontrollable, unnatural expansion of brain cells, making it one of the deadliest diseases of the nervous system. Brain tumor segmentation for early diagnosis is a difficult task in the field of medical image analysis. Segmenting brain tumors was previously done manually by radiologists, which requires a lot of time and effort and, because of human intervention, leaves room for mistakes. It has been shown that deep learning models can outperform human experts in diagnosing brain tumors in MRI images. These algorithms employ a huge number of MRI scans to learn the difficult patterns of brain tumors and segment them automatically and accurately. Here, an encoder-decoder architecture with a deep convolutional neural network is proposed for semantic segmentation of brain tumors in MRI images. The proposed method focuses on image downsampling in the encoder part. For this, an intelligent LinkNet-34 semantic segmentation model with an EfficientNetB7 encoder is proposed. The performance of the LinkNet-34 model is compared with three other models, namely FPN, U-Net, and PSPNet. Further, the performance of EfficientNetB7 as the encoder in the LinkNet-34 model is compared with three other encoders, namely ResNet34, MobileNet_V2, and ResNet50. The proposed model is then optimized using three different optimizers: RMSProp, Adamax, and Adam. The LinkNet-34 model with the EfficientNetB7 encoder and the Adamax optimizer performed best, with a Jaccard index of 0.89 and a Dice coefficient of 0.915.
Affiliation(s)
- Adel Sulaiman
- Department of Computer Science, College of Computer Science and Information Systems, Najran University, 61441, Najran, Saudi Arabia
- Vatsala Anand
- Chitkara University Institute of Engineering and Technology, Chitkara University, Rajpura, Punjab, 140401, India
- Sheifali Gupta
- Chitkara University Institute of Engineering and Technology, Chitkara University, Rajpura, Punjab, 140401, India
- Mana Saleh Al Reshan
- Department of Information Systems, College of Computer Science and Information Systems, Najran University, 61441, Najran, Saudi Arabia
- Hani Alshahrani
- Department of Computer Science, College of Computer Science and Information Systems, Najran University, 61441, Najran, Saudi Arabia
- Asadullah Shaikh
- Department of Information Systems, College of Computer Science and Information Systems, Najran University, 61441, Najran, Saudi Arabia
- M A Elmagzoub
- Department of Network and Communication Engineering, College of Computer Science and Information Systems, Najran University, 61441, Najran, Saudi Arabia

67
Wang P, Liu Y, Zhou Z. Supraspinatus extraction from MRI based on attention-dense spatial pyramid UNet network. J Orthop Surg Res 2024; 19:60. [PMID: 38216968 PMCID: PMC10787409 DOI: 10.1186/s13018-023-04509-7] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 09/07/2023] [Accepted: 12/23/2023] [Indexed: 01/14/2024] Open
Abstract
BACKGROUND With the potential of deep learning in musculoskeletal image interpretation being explored, this paper focuses on the supraspinatus, the most common site of rotator cuff tears. It aims to propose and validate a deep learning model that automatically extracts the supraspinatus, verifying its superiority through comparison with several classical image segmentation models. METHOD Imaging data were retrospectively collected from 60 patients who underwent inpatient treatment for rotator cuff tears at a hospital between March 2021 and May 2023. A dataset of the supraspinatus on MRI was constructed after collecting, filtering, and manually annotating the images at the pixel level. This paper proposes a novel A-DAsppUnet network that can automatically extract the supraspinatus after training and optimization. Model performance is analyzed using three evaluation metrics: precision, intersection over union, and Dice coefficient. RESULTS The experimental results demonstrate that the precision, intersection over union, and Dice coefficient of the proposed model are 99.20%, 83.38%, and 90.94%, respectively. Furthermore, the proposed model exhibits significant advantages over the compared models. CONCLUSION The designed model accurately extracts the supraspinatus from MRI, and the extraction results are complete and continuous with clear boundaries. The feasibility of using deep learning methods for musculoskeletal extraction and for assisting clinical decision-making was verified. This research holds practical significance and application value in the use of artificial intelligence to assist medical decision-making.
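The three metrics reported above can all be computed from pixel-level confusion counts of a binary mask. A minimal sketch, where tp, fp, and fn are pixel totals (the example counts are made up):

```python
def precision(tp, fp):
    """Fraction of predicted foreground pixels that are correct."""
    return tp / (tp + fp)

def iou(tp, fp, fn):
    """Intersection over union (Jaccard index)."""
    return tp / (tp + fp + fn)

def dice(tp, fp, fn):
    """Dice coefficient; related to IoU by dice = 2*iou / (1 + iou)."""
    return 2 * tp / (2 * tp + fp + fn)

# Example: 90 true-positive, 5 false-positive, 10 false-negative pixels
p = precision(90, 5)   # ≈ 0.947
j = iou(90, 5, 10)     # ≈ 0.857
d = dice(90, 5, 10)    # ≈ 0.923
```

Because Dice and IoU are monotonically related, they rank models identically; precision adds the complementary view of over-segmentation.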
Affiliation(s)
- Peng Wang
- Third Clinical Medical School, Nanjing University of Chinese Medicine, Nanjing, 210023, People's Republic of China
- Affiliated Hospital of Integrated Traditional Chinese and Western Medicine, Nanjing University of Chinese Medicine, No. 100 Maigaoqiao Cross Street, Qixia District, Nanjing City, 210028, Jiangsu Province, People's Republic of China
- Yang Liu
- School of Remote Sensing and Geomatics Engineering, Nanjing University of Information Science & Technology, Nanjing, 210044, People's Republic of China
- Zhong Zhou
- Affiliated Hospital of Integrated Traditional Chinese and Western Medicine, Nanjing University of Chinese Medicine, No. 100 Maigaoqiao Cross Street, Qixia District, Nanjing City, 210028, Jiangsu Province, People's Republic of China

68
Sharif M, Tanvir U, Munir EU, Khan MA, Yasmin M. Brain tumor segmentation and classification by improved binomial thresholding and multi-features selection. JOURNAL OF AMBIENT INTELLIGENCE AND HUMANIZED COMPUTING 2024; 15:1063-1082. [DOI: 10.1007/s12652-018-1075-x] [Citation(s) in RCA: 21] [Impact Index Per Article: 21.0] [Reference Citation Analysis] [Track Full Text] [Subscribe] [Scholar Register] [Received: 03/10/2018] [Accepted: 09/27/2018] [Indexed: 08/25/2024]
69
Herr J, Stoyanova R, Mellon EA. Convolutional Neural Networks for Glioma Segmentation and Prognosis: A Systematic Review. Crit Rev Oncog 2024; 29:33-65. [PMID: 38683153 DOI: 10.1615/critrevoncog.2023050852] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 05/01/2024]
Abstract
Deep learning (DL) is poised to redefine the way medical images are processed and analyzed. Convolutional neural networks (CNNs), a specific type of DL architecture, are exceptional for high-throughput processing, allowing for the effective extraction of relevant diagnostic patterns from large volumes of complex visual data. This technology has garnered substantial interest in the field of neuro-oncology as a promising tool to enhance medical imaging throughput and analysis. A multitude of methods harnessing MRI-based CNNs have been proposed for brain tumor segmentation, classification, and prognosis prediction. They are often applied to gliomas, the most common primary brain cancer, to classify subtypes with the goal of guiding therapy decisions. Additionally, the difficulty of repeating brain biopsies to evaluate treatment response in the setting of often confusing imaging findings provides a unique niche for CNNs to help assess the treatment response of gliomas. For example, glioblastoma, the most aggressive type of brain cancer, can grow due to poor treatment response, can appear to grow acutely due to treatment-related inflammation as the tumor dies (pseudo-progression), or can falsely appear to be regrowing after treatment as a result of brain damage from radiation (radiation necrosis). CNNs are being applied to resolve this diagnostic dilemma. This review provides a detailed synthesis of recent DL methods and applications for intratumor segmentation, glioma classification, and prognosis prediction. Furthermore, this review discusses the future direction of MRI-based CNNs in the field of neuro-oncology and challenges in model interpretability, data availability, and computational efficiency.
Affiliation(s)
- Radka Stoyanova
- Department of Radiation Oncology, University of Miami Miller School of Medicine, Sylvester Comprehensive Cancer Center, Miami, FL 33136, USA
- Eric Albert Mellon
- Department of Radiation Oncology, University of Miami Miller School of Medicine, Sylvester Comprehensive Cancer Center, Miami, FL 33136, USA

70
Sabater-Gárriz Á, Gaya-Morey FX, Buades-Rubio JM, Manresa-Yee C, Montoya P, Riquelme I. Automated facial recognition system using deep learning for pain assessment in adults with cerebral palsy. Digit Health 2024; 10:20552076241259664. [PMID: 38846372 PMCID: PMC11155325 DOI: 10.1177/20552076241259664] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/09/2024] [Accepted: 05/07/2024] [Indexed: 06/09/2024] Open
Abstract
Objective Assessing pain in individuals with neurological conditions like cerebral palsy is challenging due to limited self-reporting and expression abilities. Current methods lack sensitivity and specificity, underlining the need for a reliable evaluation protocol. An automated facial recognition system could revolutionize pain assessment for such patients. The research focuses on two primary goals: developing a dataset of facial pain expressions for individuals with cerebral palsy and creating a deep learning-based automated system for pain assessment tailored to this group. Methods The study trained ten neural networks using three pain image databases and a newly curated CP-PAIN dataset of 109 images from cerebral palsy patients, classified by experts using the Facial Action Coding System. Results The InceptionV3 model demonstrated promising results, achieving 62.67% accuracy and a 61.12% F1 score on the CP-PAIN dataset. Explainable AI techniques confirmed the consistency of crucial features for pain identification across models. Conclusion The study underscores the potential of deep learning in developing reliable pain detection systems using facial recognition for individuals with communication impairments due to neurological conditions. A more extensive and diverse dataset could further enhance the models' sensitivity to subtle pain expressions in cerebral palsy patients and possibly extend to other complex neurological disorders. This research marks a significant step toward more empathetic and accurate pain management for vulnerable populations.
Affiliation(s)
- Álvaro Sabater-Gárriz
- Department of Research and Training, Balearic ASPACE Foundation, Marratxí, Spain
- Department of Nursing and Physiotherapy, University of the Balearic Islands, Palma de Mallorca, Spain
- Research Institute on Health Sciences (IUNICS), University of the Balearic Islands, Palma de Mallorca, Spain
- Health Research Institute of the Balearic Islands (IdISBa), Palma de Mallorca, Spain
- F Xavier Gaya-Morey
- Department of Mathematics and Computer Science, University of the Balearic Islands, Palma de Mallorca, Spain
- José María Buades-Rubio
- Research Institute on Health Sciences (IUNICS), University of the Balearic Islands, Palma de Mallorca, Spain
- Department of Mathematics and Computer Science, University of the Balearic Islands, Palma de Mallorca, Spain
- Cristina Manresa-Yee
- Research Institute on Health Sciences (IUNICS), University of the Balearic Islands, Palma de Mallorca, Spain
- Department of Mathematics and Computer Science, University of the Balearic Islands, Palma de Mallorca, Spain
- Pedro Montoya
- Research Institute on Health Sciences (IUNICS), University of the Balearic Islands, Palma de Mallorca, Spain
- Health Research Institute of the Balearic Islands (IdISBa), Palma de Mallorca, Spain
- Center for Mathematics, Computation and Cognition, Federal University of ABC, São Bernardo do Campo, Brazil
- Inmaculada Riquelme
- Department of Nursing and Physiotherapy, University of the Balearic Islands, Palma de Mallorca, Spain
- Research Institute on Health Sciences (IUNICS), University of the Balearic Islands, Palma de Mallorca, Spain
- Health Research Institute of the Balearic Islands (IdISBa), Palma de Mallorca, Spain

71
Chen X, Liu X, Wu Y, Wang Z, Wang SH. Research related to the diagnosis of prostate cancer based on machine learning medical images: A review. Int J Med Inform 2024; 181:105279. [PMID: 37977054 DOI: 10.1016/j.ijmedinf.2023.105279] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/21/2023] [Revised: 09/06/2023] [Accepted: 10/29/2023] [Indexed: 11/19/2023]
Abstract
BACKGROUND Prostate cancer is currently the second most prevalent cancer among men. Accurate diagnosis of prostate cancer can enable effective treatment for patients and greatly reduce mortality. The current medical imaging tools for screening prostate cancer are mainly MRI, CT, and ultrasound. In the past 20 years, these medical imaging methods have made great progress with machine learning; in particular, the rise of deep learning has led to wider application of artificial intelligence in image-assisted diagnosis of prostate cancer. METHOD Through search engines such as Web of Science, PubMed, and Google Scholar, this review collected studies of medical image processing methods for the prostate and prostate cancer on MR, CT, and ultrasound images, including image pre-processing methods, segmentation of the prostate gland on medical images, registration of the prostate gland between images of different modalities, and detection of prostate cancer lesions in the prostate. CONCLUSION These collated papers show that current research on the diagnosis and staging of prostate cancer using machine learning and deep learning is in its infancy. Most existing studies address the diagnosis of prostate cancer and classification of lesions, and accuracy remains low, with the best results achieving an accuracy of less than 0.95. There are fewer studies on staging. The research is mainly focused on MR images and much less on CT and ultrasound images. DISCUSSION Machine learning and deep learning combined with medical imaging have broad application prospects for the diagnosis and staging of prostate cancer, but research in this area still has room for development.
Affiliation(s)
- Xinyi Chen
- School of Electronic and Electrical Engineering, Shanghai University of Engineering Science, Shanghai 201620, China
- Xiang Liu
- School of Electronic and Electrical Engineering, Shanghai University of Engineering Science, Shanghai 201620, China
- Yuke Wu
- School of Electronic and Electrical Engineering, Shanghai University of Engineering Science, Shanghai 201620, China
- Zhenglei Wang
- Department of Medical Imaging, Shanghai Electric Power Hospital, Shanghai 201620, China
- Shuo Hong Wang
- Department of Molecular and Cellular Biology and Center for Brain Science, Harvard University, Cambridge, MA 02138, USA

72
Dai J, Liu T, Torigian DA, Tong Y, Han S, Nie P, Zhang J, Li R, Xie F, Udupa JK. GA-Net: A geographical attention neural network for the segmentation of body torso tissue composition. Med Image Anal 2024; 91:102987. [PMID: 37837691 PMCID: PMC10841506 DOI: 10.1016/j.media.2023.102987] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/25/2022] [Revised: 07/27/2023] [Accepted: 09/28/2023] [Indexed: 10/16/2023]
Abstract
PURPOSE Body composition analysis (BCA) of the body torso plays a vital role in the study of physical health and pathology and provides biomarkers that facilitate the diagnosis and treatment of many diseases, such as type 2 diabetes mellitus, cardiovascular disease, obstructive sleep apnea, and osteoarthritis. In this work, we propose a body composition tissue segmentation method that can automatically delineate those key tissues, including subcutaneous adipose tissue, skeleton, skeletal muscle tissue, and visceral adipose tissue, on positron emission tomography/computed tomography scans of the body torso. METHODS To provide appropriate and precise semantic and spatial information that is strongly related to body composition tissues for the deep neural network, first we introduce a new concept of the body area and integrate it into our proposed segmentation network called Geographical Attention Network (GA-Net). The body areas are defined following anatomical principles such that the whole body torso region is partitioned into three non-overlapping body areas. Each body composition tissue of interest is fully contained in exactly one specific minimal body area. Secondly, the proposed GA-Net has a novel dual-decoder schema that is composed of a tissue decoder and an area decoder. The tissue decoder segments the body composition tissues, while the area decoder segments the body areas as an auxiliary task. The features of body areas and body composition tissues are fused through a soft attention mechanism to gain geographical attention relevant to the body tissues. Thirdly, we propose a body composition tissue annotation approach that takes the body area labels as the region of interest, which significantly improves the reproducibility, precision, and efficiency of delineating body composition tissues. RESULTS Our evaluations on 50 low-dose unenhanced CT images indicate that GA-Net outperforms other architectures statistically significantly based on the Dice metric. 
GA-Net also shows improvements for the 95% Hausdorff Distance metric in most comparisons. Notably, GA-Net exhibits more sensitivity to subtle boundary information and produces more reliable and robust predictions for such structures, which are the most challenging parts to manually mend in practice, with potentially significant time-savings in the post hoc correction of these subtle boundary placement errors. Due to the prior knowledge provided from body areas, GA-Net achieves competitive performance with less training data. Our extension of the dual-decoder schema to TransUNet and 3D U-Net demonstrates that the new schema significantly improves the performance of these classical neural networks as well. Heatmaps obtained from attention gate layers further illustrate the geographical guidance function of body areas for identifying body tissues. CONCLUSIONS (i) Prior anatomic knowledge supplied in the form of appropriately designed anatomic container objects significantly improves the segmentation of bodily tissues. (ii) Of particular note are the improvements achieved in the delineation of subtle boundary features which otherwise would take much effort for manual correction. (iii) The method can be easily extended to existing networks to improve their accuracy for this application.
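The soft-attention fusion of area-decoder and tissue-decoder features described above can be illustrated with a 1D toy. GA-Net operates on full 3D feature maps; this gating sketch is an assumption made for illustration, not the paper's actual implementation:

```python
import math

def sigmoid(x):
    """Squash a logit into (0, 1) to act as an attention gate."""
    return 1.0 / (1.0 + math.exp(-x))

def soft_attention_fuse(tissue_feats, area_feats):
    """Element-wise gating: area features modulate tissue features."""
    return [t * sigmoid(a) for t, a in zip(tissue_feats, area_feats)]

# Large positive area logits pass tissue features through almost unchanged;
# large negative logits suppress them (geographical guidance).
fused = soft_attention_fuse([0.8, -0.2, 1.5], [4.0, -4.0, 0.0])
```

The intended intuition: the area branch tells the tissue branch which body area each location belongs to, so tissue features outside their minimal body area are attenuated.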
Collapse
Affiliation(s)
- Jian Dai
- School of Information Science and Engineering, Yanshan University, Qinhuangdao 066004, Hebei, China; The Key Laboratory for Computer Virtual Technology and System Integration of Hebei Province, Yanshan University, Qinhuangdao 066004, Hebei, China.
- Tiange Liu
- School of Information Science and Engineering, Yanshan University, Qinhuangdao 066004, Hebei, China; The Key Laboratory for Computer Virtual Technology and System Integration of Hebei Province, Yanshan University, Qinhuangdao 066004, Hebei, China.
- Drew A Torigian
- Medical Image Processing Group, Department of Radiology, University of Pennsylvania, Philadelphia 19104, PA, United States of America.
- Yubing Tong
- Medical Image Processing Group, Department of Radiology, University of Pennsylvania, Philadelphia 19104, PA, United States of America.
- Shiwei Han
- School of Information Science and Engineering, Yanshan University, Qinhuangdao 066004, Hebei, China; The Key Laboratory for Computer Virtual Technology and System Integration of Hebei Province, Yanshan University, Qinhuangdao 066004, Hebei, China.
- Pengju Nie
- School of Information Science and Engineering, Yanshan University, Qinhuangdao 066004, Hebei, China; The Key Laboratory for Computer Virtual Technology and System Integration of Hebei Province, Yanshan University, Qinhuangdao 066004, Hebei, China.
- Jing Zhang
- School of Information Science and Engineering, Yanshan University, Qinhuangdao 066004, Hebei, China; The Key Laboratory for Computer Virtual Technology and System Integration of Hebei Province, Yanshan University, Qinhuangdao 066004, Hebei, China.
- Ran Li
- School of Information Science and Engineering, Yanshan University, Qinhuangdao 066004, Hebei, China; The Key Laboratory for Computer Virtual Technology and System Integration of Hebei Province, Yanshan University, Qinhuangdao 066004, Hebei, China.
- Fei Xie
- School of AOAIR, Xidian University, Xi'an 710071, Shaanxi, China.
- Jayaram K Udupa
- Medical Image Processing Group, Department of Radiology, University of Pennsylvania, Philadelphia 19104, PA, United States of America.
73
Ma T, Wang K, Hu F. LMU-Net: lightweight U-shaped network for medical image segmentation. Med Biol Eng Comput 2024; 62:61-70. [PMID: 37615845 DOI: 10.1007/s11517-023-02908-w] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/05/2023] [Accepted: 08/08/2023] [Indexed: 08/25/2023]
Abstract
Deep learning technology has been employed for precise medical image segmentation in recent years. However, limited available datasets and real-time processing requirements, together with the inherently complicated structure of deep learning models, restrict their application in the field of medical image processing. In this work, we present a novel lightweight LMU-Net network with improved accuracy for medical image segmentation. Multilayer perceptrons (MLP) and depth-wise separable convolutions are adopted in both the encoder and decoder of LMU-Net to reduce feature loss and the number of training parameters. In addition, a lightweight channel attention mechanism and convolution operations with larger kernels are introduced in the proposed architecture to further improve segmentation performance. Furthermore, we employ batch normalization (BN) and group normalization (GN) interchangeably in our module to minimize estimation shift in the network. Finally, the proposed network is evaluated against other architectures on the publicly accessible ISIC and BUSI datasets through robust experiments with sufficient ablation studies. The experimental results show that the proposed LMU-Net achieves better overall performance than existing techniques while adopting fewer parameters.
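The parameter savings from depth-wise separable convolutions, the key ingredient of lightweight designs like LMU-Net, can be checked with simple arithmetic. The layer sizes below are hypothetical, not taken from the paper:

```python
def conv_params(k, c_in, c_out):
    # standard k x k 2-D convolution (bias terms ignored)
    return k * k * c_in * c_out

def dw_separable_params(k, c_in, c_out):
    # depthwise: one k x k filter per input channel,
    # plus a pointwise 1x1 projection to c_out channels
    return k * k * c_in + c_in * c_out

std = conv_params(3, 64, 128)          # 73728 parameters
sep = dw_separable_params(3, 64, 128)  # 576 + 8192 = 8768 parameters
print(std, sep, round(std / sep, 1))   # roughly an 8x reduction
```

The reduction factor approaches k² as the output channel count grows, which is why separable convolutions dominate mobile and real-time segmentation architectures.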
Affiliation(s)
- Ting Ma
- Southwest Petroleum University, Chengdu, China
- Ke Wang
- Southwest Petroleum University, Chengdu, China
- Feng Hu
- Jiangsu Citron Biotech Company Limited, Nantong, China.
74
Guo K, Cheng J, Li K, Wang L, Lv Y, Cao D. Diagnosis and detection of pneumonia using weak-label based on X-ray images: a multi-center study. BMC Med Imaging 2023; 23:209. [PMID: 38087255 PMCID: PMC10717871 DOI: 10.1186/s12880-023-01174-4] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/13/2023] [Accepted: 12/05/2023] [Indexed: 12/18/2023] Open
Abstract
PURPOSE Development and assessment of a weakly supervised deep learning algorithm for the classification and detection of pneumonia on X-ray images. METHODS This retrospective study analyzed two publicly available datasets containing X-ray images of pneumonia cases and normal cases. The first dataset, from Guangzhou Women and Children's Medical Center, contains a total of 5,856 X-ray images, which were divided into training, validation, and test sets in an 8:1:1 ratio for algorithm training and testing. The deep learning algorithm ResNet34 was employed to build the diagnostic model. The second public dataset was collated by researchers from Qatar University and the University of Dhaka, along with collaborators from Pakistan and Malaysia and several medical doctors; a total of 1,300 images of COVID-19 positive cases, 1,300 normal images and 1,300 images of viral pneumonia were used for external validation. Class activation maps (CAM) were used to localize the pneumonia lesions. RESULTS The ResNet34 model for pneumonia detection achieved an AUC of 0.9949 [0.9910-0.9981] (with an accuracy of 98.29%, a sensitivity of 99.29% and a specificity of 95.57%) on the test dataset. On the external validation dataset, the model obtained an AUC of 0.9835 [0.9806-0.9864] (with an accuracy of 94.62%, a sensitivity of 92.35% and a specificity of 99.15%). Moreover, the CAM could accurately locate the pneumonia area. CONCLUSION The deep learning algorithm can accurately detect pneumonia and locate the pneumonia area based on weak supervision information, which may help radiologists improve their accuracy in detecting pneumonia on X-ray images.
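The class activation map used here for weakly supervised localization is just a classifier-weight-weighted sum of the final convolutional feature maps. A minimal NumPy sketch (random feature maps stand in for a real ResNet34 forward pass):

```python
import numpy as np

def class_activation_map(features, weights):
    """features: (C, H, W) final conv feature maps; weights: (C,)
    classifier weights for the target class. Returns an (H, W)
    heatmap, min-max normalised to [0, 1]."""
    cam = np.tensordot(weights, features, axes=1)  # weighted sum over C
    cam -= cam.min()
    if cam.max() > 0:
        cam /= cam.max()
    return cam

rng = np.random.default_rng(0)
feats = rng.random((4, 7, 7))   # stand-in for real feature maps
w = rng.random(4)               # stand-in for real classifier weights
heat = class_activation_map(feats, w)
print(heat.shape)  # (7, 7)
```

In practice the low-resolution heatmap is upsampled to the input image size and thresholded to produce the lesion bounding region, which is how image-level labels yield localization without pixel annotations.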
Affiliation(s)
- Kairou Guo
- Department of Biomedical Engineering, Chinese PLA General Hospital, Beijing, 100853, P.R. China
- Jiangbo Cheng
- Department of Biomedical Engineering, Chinese PLA General Hospital, Beijing, 100853, P.R. China
- Kaiyuan Li
- Department of Biomedical Engineering, Chinese PLA General Hospital, Beijing, 100853, P.R. China
- Lanhui Wang
- Department of Biomedical Engineering, Chinese PLA General Hospital, Beijing, 100853, P.R. China
- Yadong Lv
- Department of Biomedical Engineering, Chinese PLA General Hospital, Beijing, 100853, P.R. China
- Desen Cao
- Department of Biomedical Engineering, Chinese PLA General Hospital, Beijing, 100853, P.R. China.
75
O’Sullivan NJ, Temperley HC, Horan MT, Corr A, Mehigan BJ, Larkin JO, McCormick PH, Kavanagh DO, Meaney JFM, Kelly ME. Radiogenomics: Contemporary Applications in the Management of Rectal Cancer. Cancers (Basel) 2023; 15:5816. [PMID: 38136361 PMCID: PMC10741704 DOI: 10.3390/cancers15245816] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/08/2023] [Revised: 12/05/2023] [Accepted: 12/11/2023] [Indexed: 12/24/2023] Open
Abstract
Radiogenomics, a sub-domain of radiomics, refers to the prediction of underlying tumour biology using non-invasive imaging markers. This novel technology aims to reduce the high costs, workload and invasiveness associated with traditional genetic testing by developing 'imaging biomarkers' that could serve as an alternative to 'liquid biopsy' in determining tumour biological characteristics. Radiogenomics also has the potential to unlock aspects of tumour biology that cannot be assessed by conventional biopsy-based methods, such as full tumour burden, intra-/inter-lesion heterogeneity and the possibility of characterizing tumour biology longitudinally. Several studies have shown the feasibility of developing radiogenomic-based signatures to predict treatment outcomes and tumour characteristics; however, many lack prospective, external validation. We performed a systematic review of the current literature on the use of radiogenomics in rectal cancer to predict underlying tumour biology.
Affiliation(s)
- Niall J. O’Sullivan
- Department of Radiology, St. James’s Hospital, D08 NHY1 Dublin, Ireland
- School of Medicine, Trinity College Dublin, D02 PN40 Dublin, Ireland
- The National Centre for Advanced Medical Imaging (CAMI), St. James’s Hospital, D08 NHY1 Dublin, Ireland
- Hugo C. Temperley
- Department of Surgery, St. James’s Hospital, D08 NHY1 Dublin, Ireland
- Michelle T. Horan
- Department of Radiology, St. James’s Hospital, D08 NHY1 Dublin, Ireland
- School of Medicine, Trinity College Dublin, D02 PN40 Dublin, Ireland
- The National Centre for Advanced Medical Imaging (CAMI), St. James’s Hospital, D08 NHY1 Dublin, Ireland
- Alison Corr
- Department of Radiology, St. James’s Hospital, D08 NHY1 Dublin, Ireland
- Brian J. Mehigan
- School of Medicine, Trinity College Dublin, D02 PN40 Dublin, Ireland
- Department of Surgery, St. James’s Hospital, D08 NHY1 Dublin, Ireland
- John O. Larkin
- School of Medicine, Trinity College Dublin, D02 PN40 Dublin, Ireland
- Department of Surgery, St. James’s Hospital, D08 NHY1 Dublin, Ireland
- Paul H. McCormick
- School of Medicine, Trinity College Dublin, D02 PN40 Dublin, Ireland
- Department of Surgery, St. James’s Hospital, D08 NHY1 Dublin, Ireland
- Dara O. Kavanagh
- Department of Surgery, Tallaght University Hospital, D24 NR0A Dublin, Ireland
- Department of Surgery, Royal College of Surgeons, D02 YN77 Dublin, Ireland
- James F. M. Meaney
- Department of Radiology, St. James’s Hospital, D08 NHY1 Dublin, Ireland
- The National Centre for Advanced Medical Imaging (CAMI), St. James’s Hospital, D08 NHY1 Dublin, Ireland
- Michael E. Kelly
- School of Medicine, Trinity College Dublin, D02 PN40 Dublin, Ireland
- Department of Surgery, St. James’s Hospital, D08 NHY1 Dublin, Ireland
- Trinity St. James’s Cancer Institute (TSJCI), D08 NHY1 Dublin, Ireland
76
Ahamed MF, Hossain MM, Nahiduzzaman M, Islam MR, Islam MR, Ahsan M, Haider J. A review on brain tumor segmentation based on deep learning methods with federated learning techniques. Comput Med Imaging Graph 2023; 110:102313. [PMID: 38011781 DOI: 10.1016/j.compmedimag.2023.102313] [Citation(s) in RCA: 9] [Impact Index Per Article: 4.5] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/13/2023] [Revised: 11/13/2023] [Accepted: 11/13/2023] [Indexed: 11/29/2023]
Abstract
Brain tumors have become a severe medical complication in recent years due to their high fatality rate. Radiologists segment tumors manually, which is time-consuming, error-prone, and expensive. In recent years, automated segmentation based on deep learning has demonstrated promising results in solving computer vision problems such as image classification and segmentation. Brain tumor segmentation has recently become a prevalent task in medical imaging, aiming to determine tumor location, size, and shape using automated methods. Many researchers have worked on various machine and deep learning approaches to find the most optimal solution using convolutional methodologies. In this review paper, we discuss the most effective segmentation techniques based on datasets that are widely used and publicly available. We also survey federated learning methodologies that enhance global segmentation performance while ensuring privacy. A comprehensive literature review of more than 100 papers is provided to generalize the most recent techniques in segmentation and multi-modality information. Finally, we concentrate on unsolved problems in brain tumor segmentation and on client-based federated model training strategies. Based on this review, future researchers will understand the most promising paths to solving these issues.
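The core aggregation step in the federated training strategies surveyed here is federated averaging: each client trains locally, and the server averages parameters weighted by local dataset size. A minimal sketch, assuming FedAvg-style aggregation (this is an illustration, not any specific system from the review):

```python
import numpy as np

def fedavg(client_weights, client_sizes):
    """Aggregate per-client model parameters (lists of arrays, one per
    layer), weighted by each client's local dataset size."""
    total = sum(client_sizes)
    layers = len(client_weights[0])
    return [
        sum(w[i] * (n / total) for w, n in zip(client_weights, client_sizes))
        for i in range(layers)
    ]

# two clients with one parameter "layer"; client 1 holds 3x more data
w1 = [np.array([0.0, 0.0])]
w2 = [np.array([4.0, 8.0])]
global_w = fedavg([w1, w2], [3, 1])
print(global_w[0])  # [1. 2.]
```

Only parameters leave each site, never images, which is the privacy property that makes this attractive for multi-institution brain tumor segmentation.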
Affiliation(s)
- Md Faysal Ahamed
- Department of Computer Science & Engineering, Rajshahi University of Engineering & Technology, Rajshahi 6204, Bangladesh
- Md Munawar Hossain
- Department of Electrical & Computer Engineering, Rajshahi University of Engineering & Technology, Rajshahi 6204, Bangladesh
- Md Nahiduzzaman
- Department of Electrical & Computer Engineering, Rajshahi University of Engineering & Technology, Rajshahi 6204, Bangladesh
- Md Rabiul Islam
- Department of Computer Science & Engineering, Rajshahi University of Engineering & Technology, Rajshahi 6204, Bangladesh
- Md Robiul Islam
- Department of Electrical & Computer Engineering, Rajshahi University of Engineering & Technology, Rajshahi 6204, Bangladesh
- Mominul Ahsan
- Department of Computer Science, University of York, Deramore Lane, Heslington, York YO10 5GH, UK
- Julfikar Haider
- Department of Engineering, Manchester Metropolitan University, Chester St, Manchester M1 5GD, UK.
77
Wu S, Cao Y, Li X, Liu Q, Ye Y, Liu X, Zeng L, Tian M. Attention-guided multi-scale context aggregation network for multi-modal brain glioma segmentation. Med Phys 2023; 50:7629-7640. [PMID: 37151131 DOI: 10.1002/mp.16452] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/01/2022] [Revised: 03/17/2023] [Accepted: 03/20/2023] [Indexed: 05/09/2023] Open
Abstract
BACKGROUND Accurate segmentation of brain glioma is a critical prerequisite for clinical diagnosis, surgical planning and treatment evaluation. In the current clinical workflow, physicians typically delineate brain tumor subregions slice by slice, which is susceptible to inter-rater variability and also time-consuming. Besides, even though convolutional neural networks (CNNs) are driving progress, the performance of standard models still has room for further improvement. PURPOSE To deal with these issues, this paper proposes an attention-guided multi-scale context aggregation network (AMCA-Net) for the accurate segmentation of brain glioma in multi-modality magnetic resonance imaging (MRI) images. METHODS AMCA-Net extracts multi-scale features from the MRI images and fuses the extracted discriminative features via a self-attention mechanism for brain glioma segmentation. The extraction is performed via a series of down-sampling and convolution layers, and global context information guidance (GCIG) modules are developed to fuse the extracted features with contextual features. At the end of the down-sampling path, a multi-scale fusion (MSF) module is designed to exploit and combine all the extracted multi-scale features. Each of the GCIG and MSF modules contains a channel attention (CA) module that can adaptively calibrate feature responses and emphasize the most relevant features. Finally, multiple predictions at different resolutions are fused through weightings given by a multi-resolution adaptation (MRA) module, instead of averaging or max-pooling, to improve the final segmentation results. RESULTS The datasets used in this paper are publicly accessible, namely the Multimodal Brain Tumor Segmentation Challenges 2018 (BraTS2018) and 2019 (BraTS2019). BraTS2018 contains 285 patient cases and BraTS2019 contains 335 cases.
Simulations show that AMCA-Net has better or comparable performance relative to other state-of-the-art models. In terms of the Dice score and Hausdorff 95, the model achieved 90.4% and 10.2 mm for the whole tumor region (WT), 83.9% and 7.4 mm for the tumor core region (TC), and 80.2% and 4.3 mm for the enhancing tumor region (ET) on the BraTS2018 dataset, and 91.0% and 10.7 mm for the WT, 84.2% and 8.4 mm for the TC, and 80.1% and 4.8 mm for the ET on the BraTS2019 dataset. CONCLUSIONS The proposed AMCA-Net performs comparably well against several state-of-the-art neural network models in identifying the areas involving the peritumoral edema, enhancing tumor, and necrotic and non-enhancing tumor core of brain glioma, which has great potential for clinical practice. In future research, we will further explore the feasibility of applying AMCA-Net to other similar segmentation tasks.
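The channel attention (CA) module that recalibrates feature responses in the GCIG and MSF modules is typically a squeeze-and-excitation-style gate. A minimal NumPy sketch of that idea, with randomly initialised weights standing in for learned ones (not the authors' exact module):

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def channel_attention(x, w1, w2):
    """Squeeze-and-excitation-style channel attention on a (C, H, W)
    feature map: global average pool -> two FC layers -> sigmoid gates
    that rescale each channel."""
    s = x.mean(axis=(1, 2))        # squeeze: per-channel statistic, (C,)
    z = np.maximum(w1 @ s, 0.0)    # excitation with ReLU, (C // r,)
    g = sigmoid(w2 @ z)            # per-channel gates in (0, 1)
    return x * g[:, None, None]    # recalibrated feature map

rng = np.random.default_rng(1)
x = rng.random((8, 4, 4))
w1 = rng.standard_normal((2, 8))   # reduction ratio r = 4
w2 = rng.standard_normal((8, 2))
y = channel_attention(x, w1, w2)
print(y.shape)  # (8, 4, 4)
```

The gates let the network amplify informative channels and suppress redundant ones at negligible parameter cost, which is why such modules appear in both GCIG and MSF.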
Affiliation(s)
- Shaozhi Wu
- School of Information and Communication Engineering, University of Electronic Science and Technology of China, Chengdu, China
- Yunjian Cao
- Yangtze Delta Region Institute (Quzhou), University of Electronic Science and Technology of China, Quzhou, China
- Xinke Li
- West China School of Medicine, Sichuan University, Chengdu, China
- Qiyu Liu
- Radiology Department, Mianyang Central Hospital, Mianyang, China
- Yuyun Ye
- Department of Electrical and Computer Engineering, University of Tulsa, Tulsa, Oklahoma, USA
- Xingang Liu
- School of Information and Communication Engineering, University of Electronic Science and Technology of China, Chengdu, China
- Liaoyuan Zeng
- School of Information and Communication Engineering, University of Electronic Science and Technology of China, Chengdu, China
- Miao Tian
- School of Information and Communication Engineering, University of Electronic Science and Technology of China, Chengdu, China
78
Hou Q, Peng Y, Wang Z, Wang J, Jiang J. MFD-Net: Modality Fusion Diffractive Network for Segmentation of Multimodal Brain Tumor Image. IEEE J Biomed Health Inform 2023; 27:5958-5969. [PMID: 37747864 DOI: 10.1109/jbhi.2023.3318640] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 09/27/2023]
Abstract
Automatic brain tumor segmentation using multi-parametric magnetic resonance imaging (mpMRI) holds substantial importance for brain diagnosis, monitoring, and therapeutic strategy planning. Given the constraints inherent to manual segmentation, adopting deep learning networks to accomplish accurate and automated segmentation emerges as an essential advancement. In this article, we propose a modality fusion diffractive network (MFD-Net) composed of diffractive blocks and modality feature extractors for the automatic and accurate segmentation of brain tumors. The diffractive block, designed based on Fraunhofer's single-slit diffraction principle, emphasizes neighboring high-confidence feature points and suppresses low-quality or isolated feature points, enhancing the interrelation of features. Adopting a global passive reception mode overcomes the issue of fixed receptive fields. Through a self-supervised approach, the modality feature extractor effectively utilizes the inherent generalization information of each modality, enabling the main segmentation branch to focus more on multimodal fusion features. Applying the diffractive block to nn-UNet in the MICCAI BraTS 2022 challenge, we ranked first on the pediatric population data and third on the BraTS continuous evaluation data, demonstrating the superior generalizability of our network. We also trained separately on the BraTS 2018, 2019, and 2021 datasets. Experiments demonstrate that the proposed network outperforms state-of-the-art methods.
79
Zhang Y, Han Y, Zhang J. MAU-Net: Mixed attention U-Net for MRI brain tumor segmentation. MATHEMATICAL BIOSCIENCES AND ENGINEERING : MBE 2023; 20:20510-20527. [PMID: 38124563 DOI: 10.3934/mbe.2023907] [Citation(s) in RCA: 2] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 12/23/2023]
Abstract
Computer-aided brain tumor segmentation using magnetic resonance imaging (MRI) is of great significance for the clinical diagnosis and treatment of patients. Recently, U-Net has received widespread attention as a milestone in automatic brain tumor segmentation. Following its merits and motivated by the success of the attention mechanism, this work proposes a novel mixed attention U-Net model, i.e., MAU-Net, which integrates spatial-channel attention and self-attention into a single U-Net architecture for MRI brain tumor segmentation. Specifically, MAU-Net embeds Shuffle Attention, a spatial-channel attention module, after each convolutional block in the encoder stage to enhance local details of brain tumor images. Meanwhile, considering the superior capability of self-attention in modeling long-distance dependencies, an enhanced Transformer module is introduced at the bottleneck to improve the interactive learning of global information in brain tumor images. MAU-Net achieves enhancing tumor, whole tumor and tumor core segmentation Dice values of 77.88/77.47%, 90.15/90.00% and 81.09/81.63% on the brain tumor segmentation (BraTS) 2019/2020 validation datasets, outperforming the baseline by 1.15% and 0.93% on average, respectively. MAU-Net also demonstrates good competitiveness compared with representative methods.
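The Transformer bottleneck described above rests on scaled dot-product self-attention over flattened spatial positions. A single-head NumPy sketch with random projection weights (illustrative only, not MAU-Net's enhanced module):

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def self_attention(x, wq, wk, wv):
    """Single-head scaled dot-product self-attention over a sequence of
    flattened feature vectors x with shape (N, d)."""
    q, k, v = x @ wq, x @ wk, x @ wv
    scores = q @ k.T / np.sqrt(k.shape[-1])   # (N, N) pairwise affinities
    return softmax(scores, axis=-1) @ v       # attend over all positions

rng = np.random.default_rng(2)
tokens = rng.random((16, 8))                  # 16 spatial positions, dim 8
wq, wk, wv = (rng.standard_normal((8, 8)) for _ in range(3))
out = self_attention(tokens, wq, wk, wv)
print(out.shape)  # (16, 8)
```

Because every position attends to every other, the bottleneck captures the long-distance dependencies that convolutional encoder blocks, with their local receptive fields, cannot.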
Affiliation(s)
- Yuqing Zhang
- School of Computer Science and Engineering, Dalian Minzu University, Dalian 116600, China
- Institute of Machine Intelligence and Biocomputing, Dalian Minzu University, Dalian 116600, China
- Yutong Han
- School of Computer Science and Engineering, Dalian Minzu University, Dalian 116600, China
- Institute of Machine Intelligence and Biocomputing, Dalian Minzu University, Dalian 116600, China
- Jianxin Zhang
- School of Computer Science and Engineering, Dalian Minzu University, Dalian 116600, China
- Institute of Machine Intelligence and Biocomputing, Dalian Minzu University, Dalian 116600, China
- SEAC Key Laboratory of Big Data Applied Technology, Dalian Minzu University, Dalian 116600, China
80
Wang X, Liu S, Yang N, Chen F, Ma L, Ning G, Zhang H, Qiu X, Liao H. A Segmentation Framework With Unsupervised Learning-Based Label Mapper for the Ventricular Target of Intracranial Germ Cell Tumor. IEEE J Biomed Health Inform 2023; 27:5381-5392. [PMID: 37651479 DOI: 10.1109/jbhi.2023.3310492] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 09/02/2023]
Abstract
Intracranial germ cell tumors are rare tumors that mainly affect children and adolescents. Radiotherapy is the cornerstone of interdisciplinary treatment. Irradiating the whole ventricular system and the local tumor can reduce late-stage radiotherapy complications while ensuring the curative effect. However, manually delineating the ventricular system is labor-intensive and time-consuming for physicians. The diverse ventricle shapes and hydrocephalus-induced ventricle dilation increase the difficulty for automatic segmentation algorithms. Therefore, this study proposed a fully automatic segmentation framework. Firstly, we designed a novel unsupervised learning-based label mapper, which handles ventricle shape variations and obtains the preliminary segmentation result. Then, to boost the segmentation performance of the framework, we improved the region growing algorithm and combined it with a fully connected conditional random field to optimize the preliminary results at both regional and voxel scales. With only one set of annotated data required, the average time cost is 153.01 s, and the average target segmentation accuracy reaches 84.69%. Furthermore, we verified the algorithm in practical clinical applications. The results demonstrate that our proposed method helps physicians delineate radiotherapy targets, is feasible and clinically practical, and may fill the gap in automatic delineation methods for the ventricular target of intracranial germ cell tumors.
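Classic region growing, the algorithm refined in this framework, floods outward from a seed point to connected pixels whose intensities stay within a tolerance of the seed. A minimal 2-D sketch (the paper's improved 3-D variant and CRF refinement are not reproduced here):

```python
import numpy as np
from collections import deque

def region_grow(img, seed, tol):
    """Grow a region from `seed` over 4-connected pixels whose intensity
    differs from the seed intensity by at most `tol`."""
    h, w = img.shape
    ref = img[seed]
    mask = np.zeros((h, w), dtype=bool)
    queue = deque([seed])
    mask[seed] = True
    while queue:
        r, c = queue.popleft()
        for nr, nc in ((r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)):
            if 0 <= nr < h and 0 <= nc < w and not mask[nr, nc] \
                    and abs(img[nr, nc] - ref) <= tol:
                mask[nr, nc] = True
                queue.append((nr, nc))
    return mask

img = np.array([[10, 11, 50],
                [12, 10, 52],
                [49, 51, 50]], dtype=float)
grown = region_grow(img, (0, 0), tol=3)
print(grown.sum())  # 4 connected low-intensity pixels
```

Pairing a region-scale method like this with a voxel-scale conditional random field is a common way to clean up boundaries that the coarse pass misses.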
81
Khan MKH, Guo W, Liu J, Dong F, Li Z, Patterson TA, Hong H. Machine learning and deep learning for brain tumor MRI image segmentation. Exp Biol Med (Maywood) 2023; 248:1974-1992. [PMID: 38102956 PMCID: PMC10798183 DOI: 10.1177/15353702231214259] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/17/2023] Open
Abstract
Brain tumors are often fatal. Therefore, accurate brain tumor image segmentation is critical for the diagnosis, treatment, and monitoring of patients with these tumors. Magnetic resonance imaging (MRI) is a commonly used imaging technique for capturing brain images. Both machine learning and deep learning techniques are popular in analyzing MRI images. This article reviews some commonly used machine learning and deep learning techniques for brain tumor MRI image segmentation. The limitations and advantages of the reviewed machine learning and deep learning methods are discussed. Even though each of these methods has a well-established status in their individual domains, the combination of two or more techniques is currently an emerging trend.
Affiliation(s)
- Md Kamrul Hasan Khan
- National Center for Toxicological Research, U.S. Food & Drug Administration, Jefferson, AR 72079, USA
- Wenjing Guo
- National Center for Toxicological Research, U.S. Food & Drug Administration, Jefferson, AR 72079, USA
- Jie Liu
- National Center for Toxicological Research, U.S. Food & Drug Administration, Jefferson, AR 72079, USA
- Fan Dong
- National Center for Toxicological Research, U.S. Food & Drug Administration, Jefferson, AR 72079, USA
- Zoe Li
- National Center for Toxicological Research, U.S. Food & Drug Administration, Jefferson, AR 72079, USA
- Tucker A Patterson
- National Center for Toxicological Research, U.S. Food & Drug Administration, Jefferson, AR 72079, USA
- Huixiao Hong
- National Center for Toxicological Research, U.S. Food & Drug Administration, Jefferson, AR 72079, USA
82
Chen J, Meng L, Bu C, Zhang C, Wu P. Feature pyramid network-based computer-aided detection and monitoring treatment response of brain metastases on contrast-enhanced MRI. Clin Radiol 2023; 78:e808-e814. [PMID: 37573242 DOI: 10.1016/j.crad.2023.07.009] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/03/2023] [Revised: 07/06/2023] [Accepted: 07/12/2023] [Indexed: 08/14/2023]
Abstract
AIM To investigate the value of feature pyramid network (FPN)-based computer-aided detection (CAD) of brain metastases (BMs) before and after non-surgical treatment, and to evaluate its performance in monitoring treatment response of BM on contrast-enhanced (CE) magnetic resonance imaging (MRI). MATERIAL AND METHODS Eighty-five cancer patients newly diagnosed with BM who had undergone initial and follow-up three-dimensional (3D) CE MRI at Liaocheng People's Hospital were included retrospectively in this study. Manual detection (MD) was performed by reviewer 1. Computer-aided detection (CAD) was performed by reviewer 2 using uAI Discover-BMs software. The treatment response was assessed by the two reviewers for each patient separately. A paired chi-square test was used to compare the differences in BM detection between MD and CAD. Agreement between MD and CAD in monitoring treatment response was assessed with the kappa test. RESULTS The sensitivities of MD and CAD on initial 3D CE MRI were 78.65% and 99.13%, respectively. The sensitivities of MD and CAD on follow-up 3D CE MRI were 76.32% and 98.24%, respectively. There was very good agreement between reviewer 1 and reviewer 2 in evaluating the treatment response of BM. CONCLUSION FPN-based CAD has a sensitivity close to 100% and fewer false negatives (FNs) for BM detection compared with MD. Although CAD had some shortcomings in reflecting changes of BMs after treatment, it performed well in monitoring treatment response of BM on CE MRI.
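The kappa test used to quantify inter-reviewer agreement is Cohen's kappa: observed agreement corrected for the agreement expected by chance. A small self-contained sketch with made-up response labels (not the study's data):

```python
def cohens_kappa(ratings_a, ratings_b):
    """Cohen's kappa for two raters scoring the same items."""
    assert len(ratings_a) == len(ratings_b)
    n = len(ratings_a)
    cats = sorted(set(ratings_a) | set(ratings_b))
    # observed agreement
    po = sum(a == b for a, b in zip(ratings_a, ratings_b)) / n
    # chance agreement from each rater's marginal category frequencies
    pe = sum((ratings_a.count(c) / n) * (ratings_b.count(c) / n)
             for c in cats)
    return (po - pe) / (1 - pe)

# hypothetical treatment-response calls from two reviewers
a = ["response", "response", "stable", "progress", "response", "stable"]
b = ["response", "response", "stable", "progress", "stable", "stable"]
print(round(cohens_kappa(a, b), 3))  # 0.739
```

By convention, kappa above roughly 0.8 is labelled "very good" agreement, which is the scale the study's conclusion invokes.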
Affiliation(s)
- J Chen
- Department of MR, Liaocheng People's Hospital, Liaocheng, Shandong Province, 252000, China.
- L Meng
- Department of Radiotherapy, Liaocheng People's Hospital, Liaocheng, Shandong Province, 252000, China
- C Bu
- Department of MR, Liaocheng People's Hospital, Liaocheng, Shandong Province, 252000, China
- C Zhang
- Department of MR, Liaocheng People's Hospital, Liaocheng, Shandong Province, 252000, China
- P Wu
- Philips Healthcare, Shanghai, 200072, China
83
Boussioux L, Ma Y, Thomas NK, Bertsimas D, Shusharina N, Pursley J, Chen YL, DeLaney TF, Qian J, Bortfeld T. Automated Segmentation of Sacral Chordoma and Surrounding Muscles Using Deep Learning Ensemble. Int J Radiat Oncol Biol Phys 2023; 117:738-749. [PMID: 37451472 PMCID: PMC10665084 DOI: 10.1016/j.ijrobp.2023.03.078] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/16/2022] [Revised: 03/18/2023] [Accepted: 03/30/2023] [Indexed: 07/18/2023]
Abstract
PURPOSE The manual segmentation of organ structures in radiation oncology treatment planning is a time-consuming and highly skilled task, particularly when treating rare tumors like sacral chordomas. This study evaluates the performance of automated deep learning (DL) models in accurately segmenting the gross tumor volume (GTV) and surrounding muscle structures of sacral chordomas. METHODS AND MATERIALS An expert radiation oncologist contoured 5 muscle structures (gluteus maximus, gluteus medius, gluteus minimus, paraspinal, piriformis) and sacral chordoma GTV on computed tomography images from 48 patients. We trained 6 DL auto-segmentation models based on 3-dimensional U-Net and residual 3-dimensional U-Net architectures. We then implemented an average and an optimally weighted average ensemble to improve prediction performance. We evaluated algorithms with the average and standard deviation of the volumetric Dice similarity coefficient, surface Dice similarity coefficient with 2- and 3-mm thresholds, and average symmetric surface distance. One independent expert radiation oncologist assessed the clinical viability of the DL contours and determined the necessary amount of editing before they could be used in clinical practice. RESULTS Quantitatively, the ensembles performed the best across all structures. The optimal ensemble (volumetric Dice similarity coefficient, average symmetric surface distance) was (85.5 ± 6.4, 2.6 ± 0.8; GTV), (94.4 ± 1.5, 1.0 ± 0.4; gluteus maximus), (92.6 ± 0.9, 0.9 ± 0.1; gluteus medius), (85.0 ± 2.7, 1.1 ± 0.3; gluteus minimus), (92.1 ± 1.5, 0.8 ± 0.2; paraspinal), and (78.3 ± 5.7, 1.5 ± 0.6; piriformis). The qualitative evaluation suggested that the best model could reduce the total muscle and tumor delineation time to a 19-minute average. CONCLUSIONS Our methodology produces expert-level muscle and sacral chordoma tumor segmentation using DL and ensemble modeling. 
It can substantially streamline treatment planning and improve its accuracy, and it represents a critical step toward automated delineation of the clinical target volume in sarcoma and other disease sites.
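The optimally weighted average ensemble described above fuses per-model probability maps with learned convex weights before thresholding. A minimal NumPy sketch of that fusion step, with arbitrary example weights (not the study's optimized values):

```python
import numpy as np

def weighted_ensemble(prob_maps, weights):
    """Combine per-model probability maps with convex weights, then
    threshold at 0.5 to obtain the ensemble segmentation."""
    w = np.asarray(weights, dtype=float)
    w /= w.sum()                    # normalise to a convex combination
    stacked = np.stack(prob_maps)   # (n_models, H, W)
    fused = np.tensordot(w, stacked, axes=1)
    return fused >= 0.5

# two hypothetical model outputs on a 2x2 region
p1 = np.array([[0.9, 0.2], [0.6, 0.1]])
p2 = np.array([[0.8, 0.6], [0.4, 0.2]])
seg = weighted_ensemble([p1, p2], [2.0, 1.0])
print(seg.astype(int))  # [[1 0] [1 0]]
```

In practice the weights would be tuned on a validation set (e.g., to maximize mean Dice), which is what distinguishes an "optimally weighted" ensemble from plain averaging.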
Affiliation(s)
- Leonard Boussioux
- Operations Research Center, Massachusetts Institute of Technology, Cambridge, Massachusetts; Sloan School of Management, Massachusetts Institute of Technology, Cambridge, Massachusetts; University of Washington, Michael G. Foster School of Business, Department of Information Systems and Operations Management, Seattle, Washington.
- Yu Ma
- Operations Research Center, Massachusetts Institute of Technology, Cambridge, Massachusetts; Sloan School of Management, Massachusetts Institute of Technology, Cambridge, Massachusetts
- Nancy Knight Thomas
- Operations Research Center, Massachusetts Institute of Technology, Cambridge, Massachusetts; Sloan School of Management, Massachusetts Institute of Technology, Cambridge, Massachusetts
- Dimitris Bertsimas
- Operations Research Center, Massachusetts Institute of Technology, Cambridge, Massachusetts; Sloan School of Management, Massachusetts Institute of Technology, Cambridge, Massachusetts
- Nadya Shusharina
- Department of Radiation Oncology, Massachusetts General Hospital, Boston, Massachusetts; Harvard Medical School, Boston, Massachusetts
- Jennifer Pursley
- Department of Radiation Oncology, Massachusetts General Hospital, Boston, Massachusetts; Harvard Medical School, Boston, Massachusetts
- Yen-Lin Chen
- Department of Radiation Oncology, Massachusetts General Hospital, Boston, Massachusetts; Harvard Medical School, Boston, Massachusetts
- Thomas F DeLaney
- Department of Radiation Oncology, Massachusetts General Hospital, Boston, Massachusetts; Harvard Medical School, Boston, Massachusetts
- Jack Qian
- Department of Radiation Oncology, Massachusetts General Hospital, Boston, Massachusetts; Harvard Medical School, Boston, Massachusetts
- Thomas Bortfeld
- Department of Radiation Oncology, Massachusetts General Hospital, Boston, Massachusetts; Harvard Medical School, Boston, Massachusetts
Collapse
|
84
|
Sun H, Yang S, Chen L, Liao P, Liu X, Liu Y, Wang N. Brain tumor image segmentation based on improved FPN. BMC Med Imaging 2023; 23:172. [PMID: 37904116 PMCID: PMC10617057 DOI: 10.1186/s12880-023-01131-1] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Received: 11/22/2022] [Accepted: 10/19/2023] [Indexed: 11/01/2023]
Abstract
PURPOSE Automatic segmentation of brain tumors by deep learning algorithms is an active research topic in medical image segmentation. An improved FPN network is proposed to improve brain tumor segmentation. MATERIALS AND METHODS The traditional fully convolutional network (FCN) has limited capacity to capture fine detail, which leads to the loss of detail in tumor segmentation. To address this, this paper proposes a brain tumor image segmentation method based on an improved feature pyramid network (FPN). The FPN structure is introduced into the U-Net architecture so that the multi-scale information in the U-Net model and the multi-receptive-field high-level features of the FPN jointly capture multi-scale context, improving the model's adaptability to features at different scales. RESULTS The proposed improved FPN model achieves 99.1% accuracy, a 92% Dice score, and an 86% Jaccard index, outperforming the other segmentation models compared on every metric. In addition, qualitative results show that our algorithm's segmentations are closer to the ground truth and preserve more brain tumor detail, whereas the segmentations of the other algorithms are smoother. CONCLUSIONS The experimental results show that this method effectively segments brain tumor regions, generalizes reasonably well, and outperforms the other networks evaluated. It has positive significance for the clinical diagnosis of brain tumors.
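The accuracy, Dice, and Jaccard figures reported above are all computable from the same confusion counts, and Dice and Jaccard are related by J = D / (2 - D). A minimal sketch on flattened binary masks, with illustrative values:

```python
def confusion(pred, truth):
    """True/false positive and negative counts for flat binary masks."""
    tp = sum(p and t for p, t in zip(pred, truth))
    fp = sum(p and not t for p, t in zip(pred, truth))
    fn = sum(not p and t for p, t in zip(pred, truth))
    tn = len(pred) - tp - fp - fn
    return tp, fp, fn, tn

def dice(pred, truth):
    tp, fp, fn, _ = confusion(pred, truth)
    return 2 * tp / (2 * tp + fp + fn)

def jaccard(pred, truth):
    tp, fp, fn, _ = confusion(pred, truth)
    return tp / (tp + fp + fn)

pred  = [1, 1, 1, 0, 0, 0]   # illustrative flattened masks
truth = [1, 1, 0, 0, 0, 1]
# Dice = 2*2/(2*2+1+1) ≈ 0.667, Jaccard = 2/4 = 0.5
```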
Affiliation(s)
- Haitao Sun
- Department of Radiotherapy Room, Zhongshan Hospital of Traditional Chinese Medicine, Zhongshan, Guangdong Province, 528400, China
- Shuai Yang
- Department of Radiotherapy and Minimally Invasive Surgery, The Cancer Center of The Fifth Affiliated Hospital of Sun Yat-Sen University, Zhuhai, 519020, China
- Lijuan Chen
- Department of Radiotherapy Room, Zhongshan Hospital of Traditional Chinese Medicine, Zhongshan, Guangdong Province, 528400, China
- Pingyan Liao
- Department of Radiotherapy Room, Zhongshan Hospital of Traditional Chinese Medicine, Zhongshan, Guangdong Province, 528400, China
- Xiangping Liu
- Department of Radiotherapy Room, Zhongshan Hospital of Traditional Chinese Medicine, Zhongshan, Guangdong Province, 528400, China
- Ying Liu
- Department of Radiotherapy, The Fifth Affiliated Hospital of Guangzhou Medical University, Guangzhou, 510060, China
- Ning Wang
- Department of Radiotherapy Room, Zhongshan Hospital of Traditional Chinese Medicine, Zhongshan, Guangdong Province, 528400, China.
|
85
|
Lin L, Song Y, Guo W, Yu T, Fan M, Su Win NS, Li G. A multispectral transmission image cluster analysis method based on "Terrace compression Method" and window function. SPECTROCHIMICA ACTA. PART A, MOLECULAR AND BIOMOLECULAR SPECTROSCOPY 2023; 306:123547. [PMID: 39492380 DOI: 10.1016/j.saa.2023.123547] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Received: 03/25/2023] [Revised: 10/04/2023] [Accepted: 10/14/2023] [Indexed: 11/05/2024]
Abstract
Multispectral transmission imaging has great potential for the early screening of breast cancer owing to its low cost, safety, and ease of operation. Accurate detection of heterogeneity is important for the diagnosis of breast disease, but the low contrast and unclear heterogeneity boundaries of transmission images make recognition and segmentation difficult. We therefore propose a clustering segmentation method for multispectral transmission images based on a "terrace compression method" and window transformation. The images are first preprocessed by frame accumulation to improve the signal-to-noise ratio. The "terrace compression method" then compresses the images nonlinearly to reduce data redundancy and enhance the edge information of heterogeneities. Afterwards, a window function eliminates redundant background information and reduces the influence of background noise on clustering. Finally, the processed images at each wavelength are transformed into multidimensional data for cluster analysis. Multispectral transmission images of a breast phantom were acquired for experimental validation, and the method was compared with common clustering segmentation methods (K-means, K-means++, Mean-shift, and Gaussian mixture models). The results showed that the proposed method segments and classifies the three types of heterogeneities in the breast phantom most effectively among these methods. The Dice coefficients of all heterogeneity segmentations reached more than 0.84, improving on the common clustering methods by a factor of up to 1.08. The terrace compression method and the grayscale window transformation together improved the image clustering segmentation.
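The final clustering step, turning the processed per-wavelength images into multidimensional data and grouping pixels, can be illustrated with plain Lloyd's K-means on per-pixel spectral vectors. The pixel values below are invented; the paper's pipeline additionally applies frame accumulation, terrace compression, and the window function before clustering.

```python
import random

def dist2(a, b):
    return sum((x - y) ** 2 for x, y in zip(a, b))

def mean(pts):
    return tuple(sum(c) / len(pts) for c in zip(*pts))

def kmeans(pixels, k, iters=10, seed=0):
    """Plain Lloyd's algorithm: assign each spectral vector to the nearest
    center, then move each center to the mean of its group."""
    centers = random.Random(seed).sample(pixels, k)
    for _ in range(iters):
        groups = [[] for _ in range(k)]
        for p in pixels:
            groups[min(range(k), key=lambda j: dist2(p, centers[j]))].append(p)
        centers = [mean(g) if g else centers[j] for j, g in enumerate(groups)]
    return centers

# Each "pixel" is its intensity at two wavelengths (made-up values):
pixels = [(0.1, 0.2), (0.12, 0.18), (0.9, 0.8), (0.88, 0.82)]
centers = sorted(kmeans(pixels, k=2))
print(centers)  # two centers, one per spectral cluster
```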
Affiliation(s)
- Ling Lin
- State Key Laboratory of Precision Measuring Technology and Instruments, Tianjin University, Tianjin 300072, China; Tianjin Key Laboratory of Biomedical Detecting Techniques and Instruments, Tianjin University, Tianjin 300072, China.
- Yue Song
- State Key Laboratory of Precision Measuring Technology and Instruments, Tianjin University, Tianjin 300072, China; Tianjin Key Laboratory of Biomedical Detecting Techniques and Instruments, Tianjin University, Tianjin 300072, China
- Wenli Guo
- Shengjing Hospital of China Medical University, China
- Tao Yu
- Shengjing Hospital of China Medical University, China
- Meilin Fan
- State Key Laboratory of Precision Measuring Technology and Instruments, Tianjin University, Tianjin 300072, China; Tianjin Key Laboratory of Biomedical Detecting Techniques and Instruments, Tianjin University, Tianjin 300072, China
- Nan Su Su Win
- State Key Laboratory of Precision Measuring Technology and Instruments, Tianjin University, Tianjin 300072, China; Tianjin Key Laboratory of Biomedical Detecting Techniques and Instruments, Tianjin University, Tianjin 300072, China
- Gang Li
- State Key Laboratory of Precision Measuring Technology and Instruments, Tianjin University, Tianjin 300072, China; Tianjin Key Laboratory of Biomedical Detecting Techniques and Instruments, Tianjin University, Tianjin 300072, China
|
86
|
Bal A, Banerjee M, Chaki R, Sharma P. A robust ischemic stroke lesion segmentation technique using two-pathway 3D deep neural network in MR images. MULTIMEDIA TOOLS AND APPLICATIONS 2023; 83:41485-41524. [DOI: 10.1007/s11042-023-16689-9] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Received: 11/19/2022] [Revised: 06/29/2023] [Accepted: 08/27/2023] [Indexed: 04/01/2025]
|
87
|
Wu Z, Zhang X, Li F, Wang S, Li J. TransRender: a transformer-based boundary rendering segmentation network for stroke lesions. Front Neurosci 2023; 17:1259677. [PMID: 37901438 PMCID: PMC10601640 DOI: 10.3389/fnins.2023.1259677] [Citation(s) in RCA: 4] [Impact Index Per Article: 2.0] [Received: 07/17/2023] [Accepted: 09/26/2023] [Indexed: 10/31/2023]
Abstract
Vision transformer architectures attract widespread interest due to their robust representation of global features. As encoders, transformer-based methods achieve superior performance to convolutional neural networks and other popular networks in many medical image segmentation tasks. Because of the complex structure of the brain and the similar gray levels of healthy tissue and lesions, lesion segmentation suffers from over-smoothed boundaries or inaccurate results. Existing methods, including transformers, use stacked convolutional layers as the decoder and treat each pixel uniformly as a grid cell, which is convenient for feature computation but often neglects the high-frequency features of the boundary while focusing excessively on region features. We propose an effective lesion boundary rendering method called TransRender, which adaptively selects a series of important points and computes boundary features in a point-based rendering manner. A transformer-based encoder captures global information during the encoding stage. Several render modules efficiently map the encoded features of different levels back to the original spatial resolution by combining global and local features. Furthermore, a point-based loss function supervises the points generated by the render modules, so that TransRender can continuously refine the uncertain region. We conducted substantial experiments on different stroke lesion segmentation datasets to demonstrate the effectiveness of TransRender. Several evaluation metrics show that our method automatically segments stroke lesions with relatively high accuracy and low computational complexity.
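The core idea of point-based rendering, refining only the ambiguous boundary pixels, can be sketched by ranking pixels by how close their foreground probability is to 0.5. This is an illustrative selection step only; in TransRender the render modules then recompute features at such points, which this sketch does not attempt.

```python
def uncertain_points(prob_map, n):
    """Return the n pixel coordinates whose foreground probability is
    closest to 0.5, i.e. the ambiguous boundary pixels worth refining."""
    coords = [(r, c)
              for r in range(len(prob_map))
              for c in range(len(prob_map[0]))]
    return sorted(coords, key=lambda rc: abs(prob_map[rc[0]][rc[1]] - 0.5))[:n]

# Hypothetical 3x3 probability map: confident interior, fuzzy boundary.
prob = [[0.95, 0.90, 0.55],
        [0.92, 0.48, 0.10],
        [0.60, 0.12, 0.05]]
print(uncertain_points(prob, 3))  # → [(1, 1), (0, 2), (2, 0)]
```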
Affiliation(s)
- Zelin Wu
- College of Electronic Information and Optical Engineering, Taiyuan University of Technology, Taiyuan, China
- Xueying Zhang
- College of Electronic Information and Optical Engineering, Taiyuan University of Technology, Taiyuan, China
- Fenglian Li
- College of Electronic Information and Optical Engineering, Taiyuan University of Technology, Taiyuan, China
- Suzhe Wang
- College of Electronic Information and Optical Engineering, Taiyuan University of Technology, Taiyuan, China
- Jiaying Li
- The First Clinical Medical College, Shanxi Medical University, Taiyuan, China
|
88
|
Park JH, Moon HS, Jung HI, Hwang J, Choi YH, Kim JE. Deep learning and clustering approaches for dental implant size classification based on periapical radiographs. Sci Rep 2023; 13:16856. [PMID: 37803022 PMCID: PMC10558577 DOI: 10.1038/s41598-023-42385-7] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.5] [Received: 06/14/2023] [Accepted: 09/09/2023] [Indexed: 10/08/2023]
Abstract
This study investigated two artificial intelligence (AI) methods for automatically classifying dental implant diameter and length from periapical radiographs. The first method, deep learning (DL), used the pre-trained VGG16 model with varying degrees of fine-tuning to analyze image data from periapical radiographs. The second, clustering analysis, applied the k-means++ algorithm to an implant-specific feature vector derived from the coordinates of three key points of the dental implant, with adjustable feature-vector weights. Both the DL and clustering models classified dental implant size into nine groups. The performance metrics were accuracy, sensitivity, specificity, F1-score, positive predictive value, negative predictive value, and area under the receiver operating characteristic curve (AUC-ROC). On these metrics, the final DL model yielded performances above 0.994, 0.950, 0.994, 0.974, 0.952, 0.994, and 0.975, respectively, and the final clustering model above 0.983, 0.900, 0.988, 0.923, 0.909, 0.988, and 0.947. Comparing the AI models before tuning with the final models, statistically significant improvements in AUC-ROC were observed in six of nine groups for the DL model and four of nine groups for the clustering model. Both AI models showed reliable classification performance. For clinical application, the models require validation on multicenter data.
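The clustering branch relies on k-means++ seeding, which spreads the initial centers by sampling each new center with probability proportional to its squared distance from the nearest already-chosen center. A minimal sketch; the four 2-D "feature vectors" are hypothetical stand-ins, not implant key-point data.

```python
import random

def kmeans_pp_init(points, k, seed=0):
    """k-means++ seeding: draw each new center with probability
    proportional to its squared distance from the nearest chosen center."""
    rng = random.Random(seed)
    centers = [rng.choice(points)]
    while len(centers) < k:
        # squared distance of every point to its nearest current center
        d2 = [min(sum((x - y) ** 2 for x, y in zip(p, c)) for c in centers)
              for p in points]
        r = rng.random() * sum(d2)
        acc = 0.0
        for p, w in zip(points, d2):
            acc += w
            if acc > r:          # weighted sampling; zero-weight points skipped
                centers.append(p)
                break
    return centers

pts = [(0.0, 0.0), (0.0, 1.0), (10.0, 10.0), (10.0, 11.0)]
print(kmeans_pp_init(pts, 2))
```

Because already-chosen centers have zero distance to themselves, duplicates cannot be drawn, and far-away clusters are strongly favored for the next seed.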
Affiliation(s)
- Ji-Hyun Park
- Department of Prosthodontics, Yonsei University College of Dentistry, Yonsei-ro 50-1, Seodaemun-gu, Seoul, 03722, Korea
- Hong Seok Moon
- Department of Prosthodontics, Yonsei University College of Dentistry, Yonsei-ro 50-1, Seodaemun-gu, Seoul, 03722, Korea
- Hoi-In Jung
- Department of Preventive Dentistry and Public Oral Health, Yonsei University College of Dentistry, Seoul, 03722, Korea
- JaeJoon Hwang
- Department of Oral and Maxillofacial Radiology, School of Dentistry, Dental Research Institute, Pusan National University, Busan, 50612, Korea
- Yoon-Ho Choi
- School of Computer Science and Engineering, Pusan National University, Busan, 46241, Korea
- Jong-Eun Kim
- Department of Prosthodontics, Yonsei University College of Dentistry, Yonsei-ro 50-1, Seodaemun-gu, Seoul, 03722, Korea.
|
89
|
Zheng Y, Huang D, Hao X, Wei J, Lu H, Liu Y. UniVisNet: A Unified Visualization and Classification Network for accurate grading of gliomas from MRI. Comput Biol Med 2023; 165:107332. [PMID: 37598632 DOI: 10.1016/j.compbiomed.2023.107332] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Received: 03/30/2023] [Revised: 07/30/2023] [Accepted: 08/07/2023] [Indexed: 08/22/2023]
Abstract
Accurate grading of brain tumors plays a crucial role in the diagnosis and treatment of glioma. While convolutional neural networks (CNNs) have shown promising performance in this task, their clinical applicability is still constrained by the interpretability and robustness of the models. In the conventional framework, the classification model is trained first, and then visual explanations are generated. However, this approach often leads to models that prioritize classification performance or complexity, making it difficult to achieve a precise visual explanation. Motivated by these challenges, we propose the Unified Visualization and Classification Network (UniVisNet), a novel framework that aims to improve both the classification performance and the generation of high-resolution visual explanations. UniVisNet addresses attention misalignment by introducing a subregion-based attention mechanism, which replaces traditional down-sampling operations. Additionally, multiscale feature maps are fused to achieve higher resolution, enabling the generation of detailed visual explanations. To streamline the process, we introduce the Unified Visualization and Classification head (UniVisHead), which directly generates visual explanations without the need for additional separation steps. Through extensive experiments, our proposed UniVisNet consistently outperforms strong baseline classification models and prevalent visualization methods. Notably, UniVisNet achieves remarkable results on the glioma grading task, including an AUC of 94.7%, an accuracy of 89.3%, a sensitivity of 90.4%, and a specificity of 85.3%. Moreover, UniVisNet provides visually interpretable explanations that surpass existing approaches. In conclusion, UniVisNet innovatively generates visual explanations in brain tumor grading by simultaneously improving the classification performance and generating high-resolution visual explanations. 
This work contributes to the clinical application of deep learning, empowering clinicians with comprehensive insights into the spatial heterogeneity of glioma.
Affiliation(s)
- Yao Zheng
- Air Force Medical University, No. 169 Changle West Road, Xi'an, 710032, Shaanxi, China
- Dong Huang
- Air Force Medical University, No. 169 Changle West Road, Xi'an, 710032, Shaanxi, China; Shaanxi Provincial Key Laboratory of Bioelectromagnetic Detection and Intelligent Perception, No. 169 Changle West Road, Xi'an, 710032, Shaanxi, China
- Xiaoshuo Hao
- Air Force Medical University, No. 169 Changle West Road, Xi'an, 710032, Shaanxi, China
- Jie Wei
- Air Force Medical University, No. 169 Changle West Road, Xi'an, 710032, Shaanxi, China
- Hongbing Lu
- Air Force Medical University, No. 169 Changle West Road, Xi'an, 710032, Shaanxi, China; Shaanxi Provincial Key Laboratory of Bioelectromagnetic Detection and Intelligent Perception, No. 169 Changle West Road, Xi'an, 710032, Shaanxi, China.
- Yang Liu
- Air Force Medical University, No. 169 Changle West Road, Xi'an, 710032, Shaanxi, China; Shaanxi Provincial Key Laboratory of Bioelectromagnetic Detection and Intelligent Perception, No. 169 Changle West Road, Xi'an, 710032, Shaanxi, China.
|
90
|
Zia MS, Baig UA, Rehman ZU, Yaqub M, Ahmed S, Zhang Y, Wang S, Khan R. Contextual information extraction in brain tumour segmentation. IET IMAGE PROCESSING 2023; 17:3371-3391. [DOI: 10.1049/ipr2.12869] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Received: 04/24/2023] [Accepted: 06/30/2023] [Indexed: 09/22/2024]
Abstract
Automatic brain tumour segmentation in MRI scans aims to separate the brain tumour's necrotic core, edema, non-enhancing tumour core, peritumoral edema, and enhancing tumour core from three-dimensional MR voxels. Due to the wide range of brain tumour intensities, shapes, locations, and sizes, segmenting these regions automatically is challenging. UNet is the prime source of three-dimensional CNN performance for medical imaging applications like brain tumour segmentation. This research proposes a context-aware 3D ARDUNet (Attentional Residual Dropout UNet), a modified UNet that takes advantage of ResNet and soft attention. A novel residual dropout block (RDB) replaces the traditional UNet convolutional blocks in the encoder path to extract more contextual information. A unique attentional residual dropout block (ARDB) in the decoder path utilizes skip connections and attention gates to retrieve local and global contextual information; the attention gate enables the network to focus on the relevant part of the input image and suppress irrelevant details. Finally, the proposed network was evaluated on BRATS2018, BRATS2019, and BRATS2020 against several best-in-class segmentation approaches. On BRATS2018, BRATS2019, and BRATS2020 respectively, it achieved Dice scores of 0.90, 0.92, and 0.93 for the whole tumour, 0.90, 0.92, and 0.93 for the tumour core, and 0.92, 0.93, and 0.94 for the enhancing tumour.
Affiliation(s)
- Muhammad Sultan Zia
- Department of Computer Science, NFC Institute of Engineering and Fertilizer Research, Faisalabad, Pakistan
- Department of Computer Science, The University of Chenab, Gujrat, Pakistan
- Usman Ali Baig
- Department of Computer Science, The University of Chenab, Gujrat, Pakistan
- Zaka Ur Rehman
- Department of Computer Science, The University of Chenab, Gujrat, Pakistan
- Muhammad Yaqub
- Faculty of Information Technology, Beijing University of Technology, Beijing, China
- Shahzad Ahmed
- Faculty of Information Technology, Beijing University of Technology, Beijing, China
- Yudong Zhang
- School of Computing and Mathematical Sciences, University of Leicester, Leicester, UK
- Department of Information Systems, Faculty of Computing and Information Technology, King Abdulaziz University, Jeddah, Saudi Arabia
- Shuihua Wang
- School of Computing and Mathematical Sciences, University of Leicester, Leicester, UK
- Rizwan Khan
- Department of Computer Science and Technology, Zhejiang Normal University, Jinhua, Zhejiang, China
|
91
|
Wu S, Bai X, Cai L, Wang L, Zhang X, Ke Q, Huang J. Bone tumor examination based on FCNN-4s and CRF fine segmentation fusion algorithm. J Bone Oncol 2023; 42:100502. [PMID: 37736418 PMCID: PMC10509716 DOI: 10.1016/j.jbo.2023.100502] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Received: 05/08/2023] [Revised: 08/24/2023] [Accepted: 09/03/2023] [Indexed: 09/23/2023]
Abstract
Background and objective Bone tumors are harmful orthopedic diseases that may be benign or malignant. Because the segmentation accuracy of existing machine learning algorithms for bone tumor images is limited, a bone tumor image segmentation algorithm is proposed that combines an improved fully convolutional neural network (FCNN-4s) with a conditional random field (CRF). Methodology The improved fully convolutional neural network (FCNN-4s) performs coarse segmentation on preprocessed images. Batch normalization layers are added after each convolutional layer to accelerate the convergence of network training and improve the accuracy of the trained model. A fully connected conditional random field (CRF) is then fused to refine the bone tumor boundary in the coarse segmentation results, achieving fine segmentation. Results The experimental results show that, compared with a traditional convolutional neural network segmentation algorithm, the proposed algorithm greatly improves segmentation accuracy and stability: the average Dice reaches 91.56%, with better real-time performance. Conclusion Compared with the traditional convolutional neural network segmentation algorithm, the algorithm in this paper has a more refined structure that effectively reduces over-segmentation and under-segmentation of bone tumors. The segmentation prediction has better real-time performance, strong stability, and higher segmentation accuracy.
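The boundary-refinement idea can be illustrated, in a deliberately crude form, by a local majority-vote pass over the coarse mask. This stand-in captures only the smoothing aspect; the paper's fully connected CRF additionally weighs appearance similarity and long-range pairwise terms, which this sketch omits.

```python
def smooth_labels(mask, iters=1):
    """Each pixel takes the majority label of its 3x3 neighborhood.
    A crude stand-in for CRF refinement of a coarse binary mask."""
    h, w = len(mask), len(mask[0])
    for _ in range(iters):
        out = [row[:] for row in mask]
        for r in range(h):
            for c in range(w):
                votes = [mask[rr][cc]
                         for rr in range(max(0, r - 1), min(h, r + 2))
                         for cc in range(max(0, c - 1), min(w, c + 2))]
                out[r][c] = 1 if sum(votes) * 2 > len(votes) else 0
        mask = out
    return mask

# Hypothetical coarse mask with a spurious hole in the tumor region:
noisy = [[1, 1, 1],
         [1, 0, 1],
         [1, 1, 1]]
print(smooth_labels(noisy))  # the hole at the centre is filled
```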
Affiliation(s)
- Shiqiang Wu
- Department of Orthopedics, The Second Affiliated Hospital of Fujian Medical University, Quanzhou, Fujian 362000, China
- Department of Orthopedics, The Second Clinical College of Fujian Medical University, Quanzhou, Fujian 362000, China
- Xiaoming Bai
- Department of Orthopedics, The Second Clinical College of Fujian Medical University, Quanzhou, Fujian 362000, China
- Liquan Cai
- Department of Orthopedics, The Second Affiliated Hospital of Fujian Medical University, Quanzhou, Fujian 362000, China
- Liangming Wang
- Department of Orthopedics, The Second Affiliated Hospital of Fujian Medical University, Quanzhou, Fujian 362000, China
- XiaoLu Zhang
- Department of Orthopedics, The Second Affiliated Hospital of Fujian Medical University, Quanzhou, Fujian 362000, China
- Qingfeng Ke
- Department of Orthopedics, The Second Affiliated Hospital of Fujian Medical University, Quanzhou, Fujian 362000, China
- Jianlong Huang
- Faculty of Mathematics and Computer Science, Quanzhou Normal University, Quanzhou 362000, China
- Fujian Provincial Key Laboratory of Data Intensive Computing, Quanzhou 362000, China
- Key Laboratory of Intelligent Computing and Information Processing, Fujian Province University, Quanzhou 362000, China
|
92
|
Choi Y, Al-Masni MA, Jung KJ, Yoo RE, Lee SY, Kim DH. A single stage knowledge distillation network for brain tumor segmentation on limited MR image modalities. COMPUTER METHODS AND PROGRAMS IN BIOMEDICINE 2023; 240:107644. [PMID: 37307766 DOI: 10.1016/j.cmpb.2023.107644] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Received: 11/21/2022] [Revised: 05/14/2023] [Accepted: 06/03/2023] [Indexed: 06/14/2023]
Abstract
BACKGROUND AND OBJECTIVE Precisely segmenting brain tumors using multimodal Magnetic Resonance Imaging (MRI) is an essential task for early diagnosis, disease monitoring, and surgical planning. Unfortunately, the complete set of four image modalities in the well-known BraTS benchmark dataset (T1, T2, Fluid-Attenuated Inversion Recovery (FLAIR), and T1 Contrast-Enhanced (T1CE)) is not regularly acquired in clinical practice due to the high cost and long acquisition time; rather, it is common to work with limited image modalities for brain tumor segmentation. METHODS In this paper, we propose a single-stage knowledge distillation algorithm that derives information from the missing modalities for better segmentation of brain tumors. Unlike previous works that adopted a two-stage framework to distill knowledge from a pre-trained network into a student network trained on limited image modalities, we train both models simultaneously. We transfer information from a teacher network trained on the full image modalities to the student network by reducing redundancy at the latent-space level with a Barlow Twins loss. To distill knowledge at the pixel level, we further employ deep supervision, training the backbone networks of both the teacher and student paths with a Cross-Entropy loss. RESULTS We demonstrate that the proposed single-stage knowledge distillation approach improves the performance of the student network in each tumor category, with overall Dice scores of 91.11% for Tumor Core, 89.70% for Enhancing Tumor, and 92.20% for Whole Tumor when using only the FLAIR and T1CE images, outperforming state-of-the-art segmentation methods. CONCLUSIONS These outcomes prove the feasibility of exploiting knowledge distillation for segmenting brain tumors from limited image modalities, bringing it closer to clinical practice.
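The Barlow Twins objective used for latent-space distillation drives the cross-correlation matrix between batch-normalized teacher and student embeddings toward the identity: diagonal terms toward 1 (alignment), off-diagonal terms toward 0 (redundancy reduction). A toy re-implementation on plain lists, not the paper's code; `lam` mirrors the usual trade-off weight λ.

```python
def barlow_twins_loss(za, zb, lam=5e-3):
    """Barlow Twins loss for two batches of embeddings (lists of
    equal-length vectors), e.g. teacher and student latent features."""
    n, d = len(za), len(za[0])

    def normalize(z):  # per-dimension batch normalization, returns d x n
        out = []
        for col in zip(*z):
            mu = sum(col) / n
            std = (sum((v - mu) ** 2 for v in col) / n) ** 0.5 or 1.0
            out.append([(v - mu) / std for v in col])
        return out

    a, b = normalize(za), normalize(zb)
    # cross-correlation matrix C[i][j], averaged over the batch dimension
    c = [[sum(a[i][k] * b[j][k] for k in range(n)) / n for j in range(d)]
         for i in range(d)]
    on_diag = sum((1 - c[i][i]) ** 2 for i in range(d))
    off_diag = sum(c[i][j] ** 2 for i in range(d) for j in range(d) if i != j)
    return on_diag + lam * off_diag

# Identical, perfectly anti-correlated 2-D embeddings: diagonal of C is 1,
# so only the off-diagonal redundancy term contributes.
print(barlow_twins_loss([[1, -1], [-1, 1]], [[1, -1], [-1, 1]]))  # → 0.01
```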
Affiliation(s)
- Yoonseok Choi
- Department of Electrical and Electronic Engineering, College of Engineering, Yonsei University, Seoul 03722, Republic of Korea
- Mohammed A Al-Masni
- Department of Artificial Intelligence, College of Software & Convergence Technology, Daeyang AI Center, Sejong University, Seoul 05006, Republic of Korea
- Kyu-Jin Jung
- Department of Electrical and Electronic Engineering, College of Engineering, Yonsei University, Seoul 03722, Republic of Korea
- Roh-Eul Yoo
- Department of Radiology, Seoul National University Hospital, 101 Daehak-ro Jongno-gu, Seoul 03080, Republic of Korea; Department of Radiology, Seoul National University College of Medicine, 103 Daehak-ro Jongno-gu, Seoul 03080, Republic of Korea
- Seong-Yeong Lee
- Department of Radiology, Seoul National University Hospital, 101 Daehak-ro Jongno-gu, Seoul 03080, Republic of Korea
- Dong-Hyun Kim
- Department of Electrical and Electronic Engineering, College of Engineering, Yonsei University, Seoul 03722, Republic of Korea.
|
93
|
Kaifi R. A Review of Recent Advances in Brain Tumor Diagnosis Based on AI-Based Classification. Diagnostics (Basel) 2023; 13:3007. [PMID: 37761373 PMCID: PMC10527911 DOI: 10.3390/diagnostics13183007] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Received: 06/23/2023] [Revised: 09/14/2023] [Accepted: 09/19/2023] [Indexed: 09/29/2023]
Abstract
Uncontrolled and rapid cell proliferation is the cause of brain tumors, and early detection is vitally important to save lives. Brain tumors can be divided into several categories depending on their kind, place of origin, pace of development, and stage of progression; as a result, tumor classification is crucial for targeted therapy. Brain tumor segmentation aims to delineate the areas of brain tumors accurately. Manually identifying the proper type of brain tumor requires a specialist with a thorough understanding of brain illnesses, and processing many images is time-consuming and tiresome. Therefore, automatic segmentation and classification techniques are required to speed up and enhance the diagnosis of brain tumors. Tumors can be detected quickly and safely by brain scans using imaging modalities including computed tomography (CT), magnetic resonance imaging (MRI), and others. Machine learning (ML) and artificial intelligence (AI) have shown promise in developing algorithms that aid automatic classification and segmentation across these imaging modalities. The right segmentation method must be used to classify patients with brain tumors precisely and so enhance diagnosis and treatment. This review describes multiple types of brain tumors, publicly accessible datasets, enhancement methods, segmentation, feature extraction, classification, machine learning techniques, deep learning, and transfer learning for the study of brain tumors. We attempt to synthesize brain cancer imaging modalities with automatic computer-assisted methodologies for brain cancer characterization in ML and DL frameworks. Identifying the current problems with the engineering methodologies in use and predicting a future paradigm are further goals of this article.
Affiliation(s)
- Reham Kaifi
- Department of Radiological Sciences, College of Applied Medical Sciences, King Saud bin Abdulaziz University for Health Sciences, Jeddah City 22384, Saudi Arabia
- King Abdullah International Medical Research Center, Jeddah City 22384, Saudi Arabia
- Medical Imaging Department, Ministry of the National Guard—Health Affairs, Jeddah City 11426, Saudi Arabia
|
94
|
Farzana W, Basree MM, Diawara N, Shboul ZA, Dubey S, Lockhart MM, Hamza M, Palmer JD, Iftekharuddin KM. Prediction of Rapid Early Progression and Survival Risk with Pre-Radiation MRI in WHO Grade 4 Glioma Patients. Cancers (Basel) 2023; 15:4636. [PMID: 37760604 PMCID: PMC10526762 DOI: 10.3390/cancers15184636] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Received: 06/28/2023] [Revised: 09/09/2023] [Accepted: 09/14/2023] [Indexed: 09/29/2023]
Abstract
Recent clinical research describes a subset of glioblastoma patients who exhibit rapid early progression (REP) prior to the start of radiation therapy. The literature has thus far described this population using clinicopathologic features. To our knowledge, this study is the first to investigate the potential of conventional radiomics, sophisticated multi-resolution fractal texture features, and molecular features (MGMT, IDH mutations) as diagnostic and prognostic tools for separating REP from non-REP cases using computational and statistical modeling methods. The radiation-planning T1 post-contrast (T1C) MRI sequences of 70 patients were analyzed. An ensemble method with 5-fold cross-validation over 1000 iterations offers an AUC of 0.793 ± 0.082 for REP versus non-REP classification. In addition, copula-based modeling under dependent censoring (where a subset of patients may not be followed up until death) identifies features significant (p-value < 0.05) for survival probability and prognostic grouping of patient cases. Survival prediction for the patient cohort achieves a precision of 0.881 ± 0.056. The prognostic index (PI) calculated using the fused features shows that 84.62% of REP cases fall in the poor-prognosis group, suggesting the potential of fused features for identifying a higher percentage of REP cases. The experimental results further show that multi-resolution fractal texture features outperform conventional radiomics features for prediction of REP and survival outcomes.
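The reported AUC for REP versus non-REP classification can be read through the Mann-Whitney formulation: the probability that a randomly chosen positive case scores higher than a randomly chosen negative one, with ties counted as half. A sketch with invented scores and labels, not the study's data:

```python
def auc(scores, labels):
    """ROC AUC via the Mann-Whitney statistic: fraction of (positive,
    negative) pairs in which the positive case is scored higher."""
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = sum((p > q) + 0.5 * (p == q) for p in pos for q in neg)
    return wins / (len(pos) * len(neg))

scores = [0.9, 0.8, 0.4, 0.3, 0.2]   # hypothetical classifier outputs
labels = [1, 1, 0, 1, 0]             # 1 = REP, 0 = non-REP (invented)
print(auc(scores, labels))  # → 0.8333...
```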
Affiliation(s)
- Walia Farzana: Vision Lab, Department of Electrical & Computer Engineering, Old Dominion University, Norfolk, VA 23529, USA
- Mustafa M. Basree: Department of Internal Medicine, OhioHealth Riverside Methodist Hospital, Columbus, OH 43214, USA
- Norou Diawara: Department of Mathematics & Statistics, Old Dominion University, Norfolk, VA 23529, USA
- Zeina A. Shboul: Vision Lab, Department of Electrical & Computer Engineering, Old Dominion University, Norfolk, VA 23529, USA
- Sagel Dubey: Department of Internal Medicine, OhioHealth Riverside Methodist Hospital, Columbus, OH 43214, USA
- Mohamed Hamza: Department of Neurology, OhioHealth, Columbus, OH 43214, USA
- Joshua D. Palmer: Department of Radiation Oncology, The James Cancer Hospital and Solove Research Institute, Ohio State University Wexner Medical Center, Columbus, OH 43210, USA
- Khan M. Iftekharuddin: Vision Lab, Department of Electrical & Computer Engineering, Old Dominion University, Norfolk, VA 23529, USA
95
Chen C, Du X, Yang L, Liu H, Li Z, Gou Z, Qi J. Research on application of radiomics in glioma: a bibliometric and visual analysis. Front Oncol 2023; 13:1083080. [PMID: 37771434] [PMCID: PMC10523166] [DOI: 10.3389/fonc.2023.1083080] [Received: 10/28/2022] [Accepted: 08/16/2023] [Indexed: 09/30/2023]
Abstract
Background: With the continuous development of medical imaging informatics technology, radiomics has become a new and evolving field in medical applications. Radiomics aims to support clinical decision making by extracting quantitative features from medical images and has a very wide range of applications. The purpose of this study was to perform a bibliometric and visual analysis of scientific results and research trends in the application of radiomics to glioma. Methods: We searched the Web of Science Core Collection (WOScc) for publications related to glioma radiomics. A bibliometric and visual analysis of publications in this field, covering countries/regions, authors, journals, references, and keywords, was performed using CiteSpace and R software. Results: A total of 587 relevant publications from 2012 to September 2022 were retrieved in the WOScc, and 484 publications were finally obtained according to the filtering criteria, comprising 393 (81.20%) articles and 91 (18.80%) reviews. The number of relevant publications increased year by year. The most publications came from the USA (171 articles, 35.33%) and China (170 articles, 35.12%). The research institution with the most publications was Chinese Acad Sci (24), followed by Univ Penn (22) and Fudan Univ (21). WANG Y (27) had the most publications, followed by LI Y (22) and WANG J (20). Among the 555 co-cited authors, LOUIS DN (207) and KICKINGEREDER P (207) were the most cited. FRONTIERS IN ONCOLOGY (42) was the most published journal, and NEURO-ONCOLOGY (412) was the most co-cited journal. The most frequent keywords across all publications included glioblastoma (187), survival (136), classification (131), magnetic resonance imaging (113), machine learning (100), tumor (82), feature (79), central nervous system (66), IDH (57), and radiomics (55).
Cluster analysis was performed on the basis of keyword co-occurrence, forming a total of 16 clusters; these directions are the current hotspots of radiomics research in glioma and may be directions of continued development. Conclusion: Over the past decade, radiomics has received much attention in the medical field and has been widely used in clinical research. Cooperation and communication between countries/regions need to be enhanced in future research to promote the development of radiomics in medicine. In addition, the application of radiomics has improved the accuracy of pre-treatment diagnosis, efficacy prediction, and prognosis assessment of glioma and has helped promote the development of precision medicine, although the field still faces many challenges.
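The keyword co-occurrence analysis described in the Results can be illustrated in miniature: count how often keyword pairs appear together in the same record. The toy keyword lists below are hypothetical; the study itself used CiteSpace and R on WOScc records:

```python
from collections import Counter
from itertools import combinations

# Hypothetical per-publication keyword lists standing in for WOScc records.
papers = [
    ["glioblastoma", "survival", "radiomics"],
    ["glioblastoma", "machine learning", "radiomics"],
    ["survival", "machine learning"],
]

cooccur = Counter()
for kws in papers:
    # Each unordered pair in a record counts once toward the co-occurrence matrix.
    for a, b in combinations(sorted(set(kws)), 2):
        cooccur[(a, b)] += 1

top_pair, top_count = cooccur.most_common(1)[0]
```

Clustering tools then group keywords whose co-occurrence counts are high, which is what produces the 16 clusters reported above.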
Affiliation(s)
- Chunbao Chen: Department of Neurosurgery, Affiliated Hospital of North Sichuan Medical College, Nanchong, China
- Xue Du: Department of Oncology, The People's Hospital of Hechuan, Chongqing, China; Department of Oncology, North Sichuan Medical College, Nanchong, China
- Lu Yang: Department of Oncology, Suining Central Hospital, Suining, China
- Hongjun Liu: Department of Neurosurgery, Affiliated Hospital of North Sichuan Medical College, Nanchong, China
- Zhou Li: Department of Neurosurgery, Nanchong Central Hospital, The Affiliated Nanchong Central Hospital of North Sichuan Medical College, Nanchong, China
- Zhangyang Gou: Department of Neurosurgery, Affiliated Hospital of North Sichuan Medical College, Nanchong, China
- Jian Qi: Department of Neurosurgery, Affiliated Hospital of North Sichuan Medical College, Nanchong, China
96
Dalal S, Lilhore UK, Manoharan P, Rani U, Dahan F, Hajjej F, Keshta I, Sharma A, Simaiya S, Raahemifar K. An Efficient Brain Tumor Segmentation Method Based on Adaptive Moving Self-Organizing Map and Fuzzy K-Mean Clustering. Sensors (Basel) 2023; 23:7816. [PMID: 37765873] [PMCID: PMC10537273] [DOI: 10.3390/s23187816] [Received: 02/01/2023] [Revised: 04/26/2023] [Accepted: 05/02/2023] [Indexed: 09/29/2023]
Abstract
Brain tumor segmentation in magnetic resonance images is a challenging research problem. With the advent of a new era and research into machine learning, tumor detection and segmentation have generated significant interest in the research world. This research presents an efficient tumor detection and segmentation technique using an adaptive moving self-organizing map and fuzzy k-means clustering (AMSOM-FKM). The proposed method focuses mainly on tumor segmentation via extraction of the tumor region. AMSOM is an artificial neural technique whose training is unsupervised. This research utilized the online Kaggle BraTS-18 brain tumor dataset, which consists of 1691 images, partitioned into 70% training, 20% testing, and 10% validation. The proposed model was based on several phases: (a) removal of noise, (b) selection of feature attributes, (c) image classification, and (d) tumor segmentation. First, the MR images were normalized using the Wiener filtering method, and the gray-level co-occurrence matrix (GLCM) was used to extract the relevant feature attributes. The tumor images were separated from non-tumor images using the AMSOM classification approach. Finally, FKM was used to distinguish the tumor region from the surrounding tissue. The proposed AMSOM-FKM technique and existing methods, i.e., fuzzy C-means with k-means (FMFCM) and hybrid self-organizing map-FKM, were implemented in MATLAB and compared on sensitivity, precision, accuracy, and similarity index values. The proposed technique achieved more than 10% better results than the existing methods.
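The FKM step of AMSOM-FKM is standard fuzzy clustering. A minimal fuzzy k-means (fuzzy c-means) sketch in NumPy, assuming the usual membership and centroid updates with fuzzifier m (this illustrates only the clustering step, not the paper's AMSOM component or MATLAB implementation):

```python
import numpy as np

def fuzzy_kmeans(X, k=2, m=2.0, iters=50, seed=0):
    """Fuzzy k-means: returns soft memberships U (n x k) and centroids C (k x d)."""
    rng = np.random.default_rng(seed)
    U = rng.random((len(X), k))
    U /= U.sum(axis=1, keepdims=True)            # memberships sum to 1 per point
    for _ in range(iters):
        W = U ** m
        C = (W.T @ X) / W.sum(axis=0)[:, None]   # membership-weighted centroids
        d = np.linalg.norm(X[:, None, :] - C[None, :, :], axis=2) + 1e-12
        U = 1.0 / (d ** (2.0 / (m - 1.0)))       # closer centroid -> larger weight
        U /= U.sum(axis=1, keepdims=True)        # renormalize memberships
    return U, C

# Two well-separated point pairs should land in different clusters.
X = np.array([[0.0, 0.0], [0.1, 0.0], [5.0, 5.0], [5.1, 5.0]])
U, C = fuzzy_kmeans(X)
```

In a segmentation setting the rows of X would be per-pixel feature vectors, and thresholding the tumor-cluster membership yields the tumor mask.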
Affiliation(s)
- Surjeet Dalal: Department of Computer Science and Engineering, Amity University Gurugram, Gurugram 122412, Haryana, India
- Umesh Kumar Lilhore: Department of Computer Science and Engineering, Chandigarh University, Mohali 140413, Punjab, India
- Poongodi Manoharan: College of Science and Engineering, Hamad Bin Khalifa University, Qatar Foundation, Doha P.O. Box 5825, Qatar
- Uma Rani: Department of Computer Science and Engineering, World College of Technology & Management, Gurugram 122413, Haryana, India
- Fadl Dahan: Department of Management Information Systems, College of Business Administration Hawtat Bani Tamim, Prince Sattam Bin Abdulaziz University, Al-Kharj 11942, Saudi Arabia
- Fahima Hajjej: Department of Information Systems, College of Computer and Information Sciences, Princess Nourah Bint Abdulrahman University, Riyadh 11671, Saudi Arabia
- Ismail Keshta: Computer Science and Information Systems Department, College of Applied Sciences, AlMaarefa University, Riyadh 13713, Saudi Arabia
- Ashish Sharma: Department of Computer Engineering and Applications, GLA University, Mathura 281406, Uttar Pradesh, India
- Sarita Simaiya: Apex Institute of Technology (CSE), Chandigarh University, Gharuan, Mohali 140413, Punjab, India
- Kaamran Raahemifar: Data Science and Artificial Intelligence Program, College of Information Sciences and Technology, Penn State University, State College, PA 16801, USA; School of Optometry and Vision Science, Faculty of Science, University of Waterloo, 200 University Ave. W., Waterloo, ON N2L 3G1, Canada; Faculty of Engineering, University of Waterloo, 200 University Ave. W., Waterloo, ON N2L 3G1, Canada
97
Zhao J, Xing Z, Chen Z, Wan L, Han T, Fu H, Zhu L. Uncertainty-Aware Multi-Dimensional Mutual Learning for Brain and Brain Tumor Segmentation. IEEE J Biomed Health Inform 2023; 27:4362-4372. [PMID: 37155398] [DOI: 10.1109/jbhi.2023.3274255] [Indexed: 05/10/2023]
Abstract
Existing segmentation methods for brain MRI data usually leverage 3D CNNs on 3D volumes or employ 2D CNNs on 2D image slices. We discovered that while volume-based approaches well respect spatial relationships across slices, slice-based methods typically excel at capturing fine local features, and there is a wealth of complementary information between their segmentation predictions. Inspired by this observation, we develop an Uncertainty-aware Multi-dimensional Mutual learning framework that trains networks of different dimensionalities simultaneously, each of which provides useful soft labels as supervision to the others, thus effectively improving generalization ability. Specifically, our framework builds upon a 2D-CNN, a 2.5D-CNN, and a 3D-CNN, while an uncertainty gating mechanism is leveraged to facilitate the selection of qualified soft labels, so as to ensure the reliability of the shared information. The proposed method is a general framework and can be applied to various backbones. The experimental results on three datasets demonstrate that our method can enhance the performance of the backbone network by notable margins, achieving a Dice metric improvement of 2.8% on MeniSeg, 1.4% on IBSR, and 1.3% on BraTS2020.
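The uncertainty gating idea can be sketched as entropy-thresholded soft labels: a peer network's prediction is used as supervision only where its predictive entropy is low. The toy below is a hedged illustration; the threshold tau and the fallback to the student's own prediction are assumptions, not the paper's exact mechanism:

```python
import numpy as np

def entropy(p, axis=-1, eps=1e-12):
    """Shannon entropy of a probability vector along the class axis."""
    return -(p * np.log(p + eps)).sum(axis=axis)

def gated_soft_labels(p_student, p_peer, tau=0.5):
    """Keep the peer's soft label only where its predictive entropy is below tau."""
    h = entropy(p_peer)
    gate = (h < tau).astype(float)            # 1 = peer confident enough to teach
    # Where the peer is uncertain, fall back to the student's own prediction.
    targets = gate[..., None] * p_peer + (1.0 - gate[..., None]) * p_student
    return targets, gate

# Two "pixels": the peer (e.g. 3D-CNN) is confident on the first, unsure on the second.
p_2d = np.array([[0.6, 0.4], [0.5, 0.5]])
p_3d = np.array([[0.95, 0.05], [0.55, 0.45]])
targets, gate = gated_soft_labels(p_2d, p_3d)
```

The mutual-learning loss would then pull the student's prediction toward `targets`, so unreliable peer labels never propagate.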
98
Liu Y, Wu M. Deep learning in precision medicine and focus on glioma. Bioeng Transl Med 2023; 8:e10553. [PMID: 37693051] [PMCID: PMC10486341] [DOI: 10.1002/btm2.10553] [Received: 10/04/2022] [Revised: 04/13/2023] [Accepted: 05/08/2023] [Indexed: 09/12/2023]
Abstract
Deep learning (DL) has been successfully applied to a range of tasks in different fields. In medicine, DL methods have also been used to improve the efficiency of disease diagnosis. In this review, we first summarize the history of the development of artificial intelligence models, describe the features of the subtypes of machine learning and of different DL networks, and then explore their application in different fields of precision medicine, such as cardiology, gastroenterology, ophthalmology, dermatology, and oncology. By mining more information and extracting multilevel features from medical data, DL helps doctors assess diseases automatically and monitor patients' physical health. In gliomas, research on the application prospects of DL has mainly involved magnetic resonance imaging, followed by pathological slides; however, multi-omics data, such as whole-exome sequencing, RNA sequencing, proteomics, and epigenomics, have not been covered thus far. In general, the quality and quantity of DL datasets still need further improvement, and richer multi-omics characteristics will bring more comprehensive and accurate diagnosis in precision medicine and glioma.
Affiliation(s)
- Yihao Liu: Hunan Key Laboratory of Cancer Metabolism, Hunan Cancer Hospital and the Affiliated Cancer Hospital of Xiangya School of Medicine, Central South University, Changsha, Hunan, China; NHC Key Laboratory of Carcinogenesis, Xiangya Hospital, Central South University, Changsha, Hunan, China; Key Laboratory of Carcinogenesis and Cancer Invasion of the Chinese Ministry of Education, Cancer Research Institute, Central South University, Changsha, Hunan, China
- Minghua Wu: Hunan Key Laboratory of Cancer Metabolism, Hunan Cancer Hospital and the Affiliated Cancer Hospital of Xiangya School of Medicine, Central South University, Changsha, Hunan, China; NHC Key Laboratory of Carcinogenesis, Xiangya Hospital, Central South University, Changsha, Hunan, China; Key Laboratory of Carcinogenesis and Cancer Invasion of the Chinese Ministry of Education, Cancer Research Institute, Central South University, Changsha, Hunan, China
99
Zhang X, Xie W, Huang C, Zhang Y, Chen X, Tian Q, Wang Y. Self-Supervised Tumor Segmentation With Sim2Real Adaptation. IEEE J Biomed Health Inform 2023; 27:4373-4384. [PMID: 37022235] [DOI: 10.1109/jbhi.2023.3240844] [Indexed: 02/04/2023]
Abstract
This paper targets self-supervised tumor segmentation. We make the following contributions: (i) inspired by the observation that tumors are often characterised independently of their contexts, we propose a novel proxy task, "layer decomposition", that closely matches the goal of the downstream task, and design a scalable pipeline for generating synthetic tumor data for pre-training; (ii) we propose a two-stage Sim2Real training regime for unsupervised tumor segmentation, where we first pre-train a model with simulated tumors and then adopt a self-training strategy for downstream data adaptation; (iii) when evaluated on different tumor segmentation benchmarks, e.g. BraTS2018 for brain tumor segmentation and LiTS2017 for liver tumor segmentation, our approach achieves state-of-the-art segmentation performance under the unsupervised setting. When transferring the model for tumor segmentation under a low-annotation regime, the proposed approach also outperforms all existing self-supervised approaches; (iv) we conduct extensive ablation studies to analyse the critical components of the data simulation and validate the necessity of the different proxy tasks. We demonstrate that, with sufficient texture randomization in simulation, a model trained on synthetic data can effortlessly generalise to datasets with real tumors.
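The synthetic-tumor generation behind the "layer decomposition" proxy task can be caricatured as alpha-compositing a random blob onto a background slice, which yields a free segmentation label. A toy sketch (the blob shape, threshold, and intensity range are all assumptions, not the paper's simulation pipeline):

```python
import numpy as np

def composite_tumor(background, rng=None):
    """Paste a random Gaussian blob onto a 2D background; return image and mask."""
    if rng is None:
        rng = np.random.default_rng(0)
    h, w = background.shape
    cy, cx = rng.integers(8, h - 8), rng.integers(8, w - 8)
    yy, xx = np.mgrid[0:h, 0:w]
    blob = np.exp(-((yy - cy) ** 2 + (xx - cx) ** 2) / (2 * 4.0 ** 2))
    alpha = (blob > 0.1).astype(float)        # binary "tumor" mask (free label)
    intensity = rng.uniform(0.7, 1.0)
    image = (1.0 - alpha) * background + alpha * intensity
    return image, alpha

bg = np.zeros((32, 32))
img, mask = composite_tumor(bg)
```

Pre-training a segmenter on such (image, mask) pairs, then self-training on real data, is the gist of the two-stage Sim2Real regime described above.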
100
Sun Q, Yang J, Ma S, Huang Y, Yuan Y, Hou Y. 3D vessel extraction using a scale-adaptive hybrid parametric tracker. Med Biol Eng Comput 2023; 61:2467-2480. [PMID: 37184591] [DOI: 10.1007/s11517-023-02815-0] [Received: 08/11/2022] [Accepted: 02/28/2023] [Indexed: 05/16/2023]
Abstract
3D vessel extraction has great significance for the diagnosis of vascular diseases. However, accurate extraction of vessels from computed tomography angiography (CTA) data is challenging. For one thing, vessels in different body parts have a wide range of scales and large curvatures; for another, the intensity distributions of vessels in different CTA data vary considerably. Besides, surrounding interfering tissue, such as bones or veins with similar intensity, also seriously affects vessel extraction. Considering all of the above imaging and structural features of vessels, we propose a new scale-adaptive hybrid parametric tracker (SAHPT) to extract arbitrary vessels in different body parts. First, a geometry-intensity parametric model is constructed to calculate the geometry-intensity response. While geometry parameters are calculated to adapt to variation in scale, intensity parameters can also be estimated to accommodate non-uniform intensity distributions. Then, a gradient parametric model is proposed to calculate the gradient response based on a multiscale symmetric normalized gradient filter, which can effectively separate the target vessel from surrounding interfering tissue. Finally, a hybrid parametric model that combines the geometry-intensity and gradient parametric models is constructed to evaluate how well it fits a local image patch. In the extraction process, a multipath spherical sampling strategy is used to address anatomical complexity. We have conducted extensive quantitative experiments on synthetic and clinical CTA data, demonstrating superior performance compared to traditional and deep learning-based baselines.
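A multipath spherical sampling strategy needs a set of well-spread candidate directions on the unit sphere. One common construction is the Fibonacci lattice; the sketch below is an illustration under that assumption, as the abstract does not specify the paper's sampling scheme:

```python
import numpy as np

def fibonacci_sphere(n):
    """Return n near-uniform unit direction vectors via the Fibonacci lattice."""
    i = np.arange(n)
    phi = np.pi * (3.0 - np.sqrt(5.0)) * i     # golden-angle azimuth increments
    z = 1.0 - 2.0 * (i + 0.5) / n              # heights uniform in [-1, 1]
    r = np.sqrt(1.0 - z ** 2)                  # radius of each horizontal circle
    return np.stack([r * np.cos(phi), r * np.sin(phi), z], axis=1)

# 100 candidate tracking directions around the current vessel point.
dirs = fibonacci_sphere(100)
```

A tracker would evaluate its hybrid parametric response along each direction and continue along the best-scoring paths, which is what makes the sampling "multipath".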
Affiliation(s)
- Qi Sun: Key Laboratory of Intelligent Computing in Medical Image, Ministry of Education, Northeastern University, Shenyang, Liaoning, China; School of Computer Science and Engineering, Northeastern University, Shenyang, Liaoning, China
- Jinzhu Yang: Key Laboratory of Intelligent Computing in Medical Image, Ministry of Education, Northeastern University, Shenyang, Liaoning, China; School of Computer Science and Engineering, Northeastern University, Shenyang, Liaoning, China
- Shuang Ma: Key Laboratory of Intelligent Computing in Medical Image, Ministry of Education, Northeastern University, Shenyang, Liaoning, China; School of Computer Science and Engineering, Northeastern University, Shenyang, Liaoning, China
- Yan Huang: Key Laboratory of Intelligent Computing in Medical Image, Ministry of Education, Northeastern University, Shenyang, Liaoning, China; School of Computer Science and Engineering, Northeastern University, Shenyang, Liaoning, China
- Yuliang Yuan: Key Laboratory of Intelligent Computing in Medical Image, Ministry of Education, Northeastern University, Shenyang, Liaoning, China; School of Computer Science and Engineering, Northeastern University, Shenyang, Liaoning, China
- Yang Hou: Department of Radiology, ShengJing Hospital of China Medical University, Shenyang, Liaoning, China