1. Belwal P, Singh S. Deep Learning techniques to detect and analysis of multiple sclerosis through MRI: A systematic literature review. Comput Biol Med 2025;185:109530. [PMID: 39693692] [DOI: 10.1016/j.compbiomed.2024.109530] [Received: 11/17/2023] [Revised: 10/30/2024] [Accepted: 12/03/2024] [Indexed: 12/20/2024]
Abstract
Deep learning (DL) techniques represent a rapidly advancing field within artificial intelligence and have gained significant prominence in the detection and analysis of various medical conditions from medical data. This study presents a systematic literature review (SLR) of deep learning methods for the detection and analysis of multiple sclerosis (MS) using magnetic resonance imaging (MRI). The initial search identified 401 articles, which were rigorously screened to yield a selection of 82 highly relevant studies. These selected studies primarily concentrate on key areas such as multiple sclerosis, deep learning, convolutional neural networks (CNN), lesion segmentation, and classification, reflecting their alignment with the current state of the art. This review comprehensively examines diverse deep-learning approaches for MS detection and analysis, offering a valuable resource for researchers. Additionally, it summarizes these DL techniques for MS detection and analysis using MRI in a structured tabular format.
Affiliation(s)
- Priyanka Belwal
- Department of Computer Science and Engineering, NIT Uttarakhand, India.
- Surendra Singh
- Department of Computer Science and Engineering, NIT Uttarakhand, India.
2. Rai HM, Yoo J, Agarwal S, Agarwal N. LightweightUNet: Multimodal Deep Learning with GAN-Augmented Imaging Data for Efficient Breast Cancer Detection. Bioengineering (Basel) 2025;12:73. [PMID: 39851348] [PMCID: PMC11761908] [DOI: 10.3390/bioengineering12010073] [Received: 11/19/2024] [Revised: 01/06/2025] [Accepted: 01/08/2025] [Indexed: 01/26/2025]
Abstract
Breast cancer ranks as the second most prevalent cancer globally and is the most frequently diagnosed cancer among women; therefore, early, automated, and precise detection is essential. Most AI-based techniques for breast cancer detection are complex and have high computational costs. Hence, to overcome this challenge, we have presented the innovative LightweightUNet hybrid deep learning (DL) classifier for the accurate classification of breast cancer. The proposed model boasts a low computational cost due to its smaller number of layers in its architecture, and its adaptive nature stems from its use of depth-wise separable convolution. We have employed a multimodal approach to validate the model's performance, using 13,000 images from two distinct modalities: mammogram imaging (MGI) and ultrasound imaging (USI). We collected the multimodal imaging datasets from seven different sources, including the benchmark datasets DDSM, MIAS, INbreast, BrEaST, BUSI, Thammasat, and HMSS. Since the datasets are from various sources, we have resized them to the uniform size of 256 × 256 pixels and normalized them using the Box-Cox transformation technique. Since the USI dataset is smaller, we have applied the StyleGAN3 model to generate 10,000 synthetic ultrasound images. In this work, we have performed two separate experiments: the first on a real dataset without augmentation and the second on a real + GAN-augmented dataset using our proposed method. During the experiments, we used a 5-fold cross-validation method, and our proposed model obtained good results on the real dataset (87.16% precision, 86.87% recall, 86.84% F1-score, and 86.87% accuracy) without adding any extra data. Similarly, the second experiment provides better performance on the real + GAN-augmented dataset (96.36% precision, 96.35% recall, 96.35% F1-score, and 96.35% accuracy). 
This multimodal approach with LightweightUNet improves performance on the combined dataset by 9.20% in precision, 9.48% in recall, 9.51% in F1-score, and 9.48% in accuracy. The strong results of the proposed LightweightUNet model stem from its compact network design, GAN-based augmentation of the training data, and the multimodal training strategy. These results indicate the model's considerable potential for use in clinical settings.
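The Box-Cox normalization step mentioned in this abstract can be sketched as follows. This is an illustrative reconstruction, not the authors' code: the fixed λ = 0.5 and the final min-max rescaling to [0, 1] are assumptions (in practice λ is usually estimated from the data by maximum likelihood).

```python
import numpy as np

def boxcox_transform(x, lam):
    """Box-Cox transform for strictly positive x:
    (x**lam - 1) / lam when lam != 0, log(x) when lam == 0."""
    x = np.asarray(x, dtype=np.float64)
    if lam == 0:
        return np.log(x)
    return (x ** lam - 1.0) / lam

def normalize_image(image, lam=0.5):
    """Shift intensities to be strictly positive, apply Box-Cox,
    then min-max rescale to [0, 1] for network input."""
    shifted = image.astype(np.float64) + 1.0  # Box-Cox needs x > 0
    t = boxcox_transform(shifted, lam)
    return (t - t.min()) / (t.max() - t.min())

# Example: normalize a synthetic 256 x 256 "image"
img = (np.arange(256 * 256).reshape(256, 256) % 256).astype(np.float64)
out = normalize_image(img)
print(out.shape, round(float(out.min()), 3), round(float(out.max()), 3))
```

The transform compresses heavy-tailed intensity distributions toward symmetry before the values are rescaled for the network.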
Affiliation(s)
- Hari Mohan Rai
- School of Computing, Gachon University, Seongnam 13120, Republic of Korea
- Joon Yoo
- School of Computing, Gachon University, Seongnam 13120, Republic of Korea
- Saurabh Agarwal
- Department of Information and Communication Engineering, Yeungnam University, Gyeongsan 38541, Republic of Korea
- Neha Agarwal
- School of Chemical Engineering, Yeungnam University, Gyeongsan 38541, Republic of Korea
3. Szekely-Kohn AC, Castellani M, Espino DM, Baronti L, Ahmed Z, Manifold WGK, Douglas M. Machine learning for refining interpretation of magnetic resonance imaging scans in the management of multiple sclerosis: a narrative review. R Soc Open Sci 2025;12:241052. [PMID: 39845718] [PMCID: PMC11750376] [DOI: 10.1098/rsos.241052] [Received: 06/25/2024] [Revised: 10/23/2024] [Accepted: 11/17/2024] [Indexed: 01/24/2025]
Abstract
Multiple sclerosis (MS) is an autoimmune disease of the brain and spinal cord with both inflammatory and neurodegenerative features. Although advances in imaging techniques, particularly magnetic resonance imaging (MRI), have improved the process of diagnosis, its cause is unknown, a cure remains elusive and the evidence base to guide treatment is lacking. Computational techniques like machine learning (ML) have started to be used to understand MS. Published MS MRI-based computational studies can be divided into five categories: automated diagnosis; differentiation between lesion types and/or MS stages; differential diagnosis; monitoring and predicting disease progression; and synthetic MRI dataset generation. Collectively, these approaches show promise in assisting with MS diagnosis, monitoring of disease activity and prediction of future progression, all potentially contributing to disease management. Analysis quality using ML is highly dependent on the size and variability of the dataset used for training. Wider public access would mean larger datasets for experimentation, resulting in higher-quality analysis and permitting more conclusive research. This narrative review outlines the fundamentals of MS pathology and pathogenesis, diagnostic techniques and data types in computational analysis, and collates the literature on applying computational techniques to MRI towards developing a better understanding of MS.
Affiliation(s)
- Adam C. Szekely-Kohn
- School of Engineering, University of Birmingham, Edgbaston, Birmingham B15 2TT, UK
- Marco Castellani
- School of Engineering, University of Birmingham, Edgbaston, Birmingham B15 2TT, UK
- Daniel M. Espino
- School of Engineering, University of Birmingham, Edgbaston, Birmingham B15 2TT, UK
- Luca Baronti
- School of Computer Science, University of Birmingham, Edgbaston, Birmingham B15 2TT, UK
- Zubair Ahmed
- University Hospitals Birmingham NHS Foundation Trust, Edgbaston, Birmingham B15 2GW, UK
- Institute of Inflammation and Ageing, University of Birmingham, Edgbaston, Birmingham B15 2TT, UK
- Michael Douglas
- University Hospitals Birmingham NHS Foundation Trust, Edgbaston, Birmingham B15 2GW, UK
- Institute of Inflammation and Ageing, University of Birmingham, Edgbaston, Birmingham B15 2TT, UK
- Department of Neurology, Dudley Group NHS Foundation Trust, Russells Hall Hospital, Birmingham DY1 2HQ, UK
- School of Life and Health Sciences, Aston University, Birmingham, UK
4. Wang T, Wen Y, Wang Z. nnU-Net based segmentation and 3D reconstruction of uterine fibroids with MRI images for HIFU surgery planning. BMC Med Imaging 2024;24:233. [PMID: 39243001] [PMCID: PMC11380377] [DOI: 10.1186/s12880-024-01385-3] [Received: 03/18/2024] [Accepted: 08/01/2024] [Indexed: 09/09/2024]
Abstract
High-Intensity Focused Ultrasound (HIFU) ablation is a rapidly advancing non-invasive treatment modality that has achieved considerable success in treating uterine fibroids, which constitute over 50% of benign gynecological tumors. Preoperative Magnetic Resonance Imaging (MRI) plays a pivotal role in the planning and guidance of HIFU surgery for uterine fibroids, in which segmentation of the tumors is of critical significance. Segmentation was previously performed manually by medical experts, a time-consuming and labor-intensive procedure heavily reliant on clinical expertise. This study introduces deep-learning-based nnU-Net models, offering a cost-effective approach for segmenting uterine fibroids from preoperative MRI images. Furthermore, 3D reconstruction of the segmented targets was implemented to guide HIFU surgery. Segmentation and 3D reconstruction performance was evaluated with a focus on enhancing the safety and effectiveness of HIFU surgery. Results demonstrated the nnU-Net's commendable performance in segmenting uterine fibroids and their surrounding organs. Specifically, 3D nnU-Net achieved Dice Similarity Coefficients (DSC) of 92.55% for the uterus, 95.63% for fibroids, 92.69% for the spine, 89.63% for the endometrium, 97.75% for the bladder, and 90.45% for the urethral orifice. Compared to other state-of-the-art methods such as HIFUNet, U-Net, R2U-Net, ConvUNeXt and 2D nnU-Net, 3D nnU-Net achieved significantly higher DSC values, highlighting its superior accuracy and robustness. In conclusion, the efficacy of the 3D nnU-Net model for automated segmentation of the uterus and its surrounding organs was robustly validated. When integrated with intra-operative ultrasound imaging, this segmentation method and 3D reconstruction hold substantial potential to enhance the safety and efficiency of HIFU surgery in the clinical treatment of uterine fibroids.
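The Dice Similarity Coefficient reported in this entry (and in several others in this list) measures voxel overlap between a predicted mask and a reference mask. A minimal NumPy sketch of the metric itself, illustrative only and not the authors' implementation:

```python
import numpy as np

def dice_coefficient(pred, target, eps=1e-8):
    """Dice Similarity Coefficient between two binary masks:
    DSC = 2 * |A intersect B| / (|A| + |B|)."""
    pred = np.asarray(pred, dtype=bool)
    target = np.asarray(target, dtype=bool)
    intersection = np.logical_and(pred, target).sum()
    return 2.0 * intersection / (pred.sum() + target.sum() + eps)

# Two toy 1D "masks": one of the two positive voxels overlaps
pred = np.array([1, 1, 0, 0])
target = np.array([1, 0, 1, 0])
print(round(dice_coefficient(pred, target), 4))  # 2*1 / (2 + 2)
```

A DSC of 1 indicates perfect overlap and 0 indicates disjoint masks; the small `eps` guards against division by zero when both masks are empty.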
Affiliation(s)
- Ting Wang
- State Key Laboratory of Ultrasound in Medicine and Engineering, College of Biomedical Engineering, Chongqing Medical University, Chongqing, 400016, China
- Yingang Wen
- National Engineering Research Center of Ultrasonic Medicine, Chongqing, 401121, China
- Zhibiao Wang
- State Key Laboratory of Ultrasound in Medicine and Engineering, College of Biomedical Engineering, Chongqing Medical University, Chongqing, 400016, China.
5. Musall BC, Gabr RE, Yang Y, Kamali A, Lincoln JA, Jacobs MA, Ly V, Luo X, Wolinsky JS, Narayana PA, Hasan KM. Detection of diffusely abnormal white matter in multiple sclerosis on multiparametric brain MRI using semi-supervised deep learning. Sci Rep 2024;14:17157. [PMID: 39060426] [PMCID: PMC11282266] [DOI: 10.1038/s41598-024-67722-2] [Received: 01/22/2024] [Accepted: 07/15/2024] [Indexed: 07/28/2024]
Abstract
In addition to focal lesions, diffusely abnormal white matter (DAWM) is seen on brain MRI of multiple sclerosis (MS) patients and may represent early or distinct disease processes. The role of MRI-observed DAWM is understudied due to a lack of automated assessment methods. Supervised deep learning (DL) methods are highly capable in this domain, but require large sets of labeled data. To overcome this challenge, a DL-based network (DAWM-Net) was trained using semi-supervised learning on a limited set of labeled data for segmentation of DAWM, focal lesions, and normal-appearing brain tissues on multiparametric MRI. DAWM-Net segmentation performance was compared to a previous intensity thresholding-based method on an independent test set from expert consensus (N = 25). Segmentation overlap by Dice Similarity Coefficient (DSC) and Spearman correlation of DAWM volumes were assessed. DAWM-Net showed DSC > 0.93 for normal-appearing brain tissues and DSC > 0.81 for focal lesions. For DAWM-Net, the DAWM DSC was 0.49 ± 0.12 with a moderate volume correlation (ρ = 0.52, p < 0.01). The previous method showed lower DAWM DSC of 0.26 ± 0.08 and lacked a significant volume correlation (ρ = 0.23, p = 0.27). These results demonstrate the feasibility of DL-based DAWM auto-segmentation with semi-supervised learning. This tool may facilitate future investigation of the role of DAWM in MS.
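The Spearman correlation used here to compare DAWM volumes is simply the Pearson correlation of the ranks. A small NumPy sketch (illustrative; it omits tie handling, which a library routine such as `scipy.stats.spearmanr` would include):

```python
import numpy as np

def spearman_rho(x, y):
    """Spearman rank correlation: Pearson correlation of the ranks.
    Assumes distinct values for simplicity (no tie correction)."""
    rx = np.argsort(np.argsort(x)).astype(np.float64)
    ry = np.argsort(np.argsort(y)).astype(np.float64)
    rx -= rx.mean()
    ry -= ry.mean()
    return float((rx * ry).sum() / np.sqrt((rx ** 2).sum() * (ry ** 2).sum()))

# A perfectly monotonic (but non-linear) relationship gives rho = 1,
# which is why Spearman suits volume agreement across methods.
vols_auto = np.array([1.0, 2.5, 3.0, 7.0, 9.5])
vols_manual = vols_auto ** 3
print(spearman_rho(vols_auto, vols_manual))
```

Because only ranks matter, the metric rewards consistent ordering of volumes even when the two methods differ in scale.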
Affiliation(s)
- Benjamin C Musall
- Department of Diagnostic and Interventional Imaging, University of Texas McGovern Medical School, 6431 Fannin St., MSE 168, Houston, TX, 77030, USA
- Refaat E Gabr
- Department of Diagnostic and Interventional Imaging, University of Texas McGovern Medical School, 6431 Fannin St., MSE 168, Houston, TX, 77030, USA
- Yanyu Yang
- Department of Biostatistics and Data Science, University of Texas School of Public Health, Houston, TX, USA
- Arash Kamali
- Department of Diagnostic and Interventional Imaging, University of Texas McGovern Medical School, 6431 Fannin St., MSE 168, Houston, TX, 77030, USA
- John A Lincoln
- Department of Neurology, University of Texas McGovern Medical School, Houston, TX, USA
- Michael A Jacobs
- Department of Diagnostic and Interventional Imaging, University of Texas McGovern Medical School, 6431 Fannin St., MSE 168, Houston, TX, 77030, USA
- The Russell H. Morgan Department of Radiology and Radiological Science and Sidney Kimmel Comprehensive Cancer Center, The Johns Hopkins School of Medicine, Baltimore, MD, USA
- Department of Computer Science, Rice University, Houston, TX, USA
- Vi Ly
- Department of Biostatistics and Data Science, University of Texas School of Public Health, Houston, TX, USA
- Xi Luo
- Department of Biostatistics and Data Science, University of Texas School of Public Health, Houston, TX, USA
- Jerry S Wolinsky
- Department of Neurology, University of Texas McGovern Medical School, Houston, TX, USA
- Ponnada A Narayana
- Department of Diagnostic and Interventional Imaging, University of Texas McGovern Medical School, 6431 Fannin St., MSE 168, Houston, TX, 77030, USA
- Khader M Hasan
- Department of Diagnostic and Interventional Imaging, University of Texas McGovern Medical School, 6431 Fannin St., MSE 168, Houston, TX, 77030, USA.
6. Zheng H, Liu X, Huang Z, Ren Y, Fu B, Shi T, Liu L, Guo Q, Tian C, Liang D, Wang R, Chen J, Hu Z. Deep learning for intracranial aneurysm segmentation using CT angiography. Phys Med Biol 2024;69:155024. [PMID: 39008990] [DOI: 10.1088/1361-6560/ad6372] [Received: 01/15/2024] [Accepted: 07/15/2024] [Indexed: 07/17/2024]
Abstract
Objective. This study aimed to employ a two-stage deep learning method to accurately detect small aneurysms (4-10 mm in size) in computed tomography angiography images. Approach. This study included 956 patients from 6 hospitals and a public dataset obtained with 6 CT scanners from different manufacturers. The proposed method consists of two components: a lightweight and fast head region selection (HRS) algorithm and an adaptive 3D nnU-Net network, which is used as the main architecture for segmenting aneurysms. Segments generated by the deep neural network were compared with expert-generated manual segmentation results and assessed using Dice scores. Main results. The area under the curve (AUC) exceeded 79% across all datasets. In particular, the precision and AUC reached 85.2% and 87.6%, respectively, on certain datasets. The experimental results demonstrated the promising performance of this approach, which reduced the inference time by more than 50% compared to direct inference without HRS. Significance. Compared with a model without HRS, the deep learning approach we developed can accurately segment aneurysms by automatically localizing brain regions and can accelerate aneurysm inference by more than 50%.
Affiliation(s)
- Huizhong Zheng
- Lauterbur Research Center for Biomedical Imaging, Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen 518055, People's Republic of China
- Xinfeng Liu
- Department of Radiology, Guizhou Provincial People's Hospital, Guiyang 550002, People's Republic of China
- Zhenxing Huang
- Lauterbur Research Center for Biomedical Imaging, Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen 518055, People's Republic of China
- Yan Ren
- AI for Science (AI4S)-Preferred Program, Peking University Shenzhen Graduate School, Shenzhen 518055, People's Republic of China
- School of Electronic and Computer Engineering, Peking University Shenzhen Graduate School, Shenzhen 518005, People's Republic of China
- Bin Fu
- AI for Science (AI4S)-Preferred Program, Peking University Shenzhen Graduate School, Shenzhen 518055, People's Republic of China
- School of Electronic and Computer Engineering, Peking University Shenzhen Graduate School, Shenzhen 518005, People's Republic of China
- Tianliang Shi
- Department of Radiology, Tongren Municipal People's Hospital, Tongren, Guizhou 554300, People's Republic of China
- Lu Liu
- Department of Radiology, The Second People's Hospital of Guiyang, Guiyang, Guizhou 550002, People's Republic of China
- Qiping Guo
- Department of Radiology, Xingyi Municipal People's Hospital, Xingyi, Guizhou 562400, People's Republic of China
- Chong Tian
- Department of Radiology, Guizhou Provincial People's Hospital, Guiyang 550002, People's Republic of China
- Dong Liang
- Lauterbur Research Center for Biomedical Imaging, Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen 518055, People's Republic of China
- Rongpin Wang
- Department of Radiology, Guizhou Provincial People's Hospital, Guiyang 550002, People's Republic of China
- Jie Chen
- AI for Science (AI4S)-Preferred Program, Peking University Shenzhen Graduate School, Shenzhen 518055, People's Republic of China
- School of Electronic and Computer Engineering, Peking University Shenzhen Graduate School, Shenzhen 518005, People's Republic of China
- Peng Cheng Laboratory, Shenzhen 518005, People's Republic of China
- Zhanli Hu
- Lauterbur Research Center for Biomedical Imaging, Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen 518055, People's Republic of China
7. Valverde S, Coll L, Valencia L, Clèrigues A, Oliver A, Vilanova JC, Ramió-Torrentà L, Rovira À, Lladó X. Assessing the Accuracy and Reproducibility of PARIETAL: A Deep Learning Brain Extraction Algorithm. J Magn Reson Imaging 2024;59:1991-2000. [PMID: 34137113] [DOI: 10.1002/jmri.27776] [Received: 11/10/2020] [Revised: 05/31/2021] [Accepted: 06/01/2021] [Indexed: 01/18/2023]
Abstract
BACKGROUND: Manual brain extraction from magnetic resonance (MR) images is time-consuming and prone to intra- and inter-rater variability. Several automated approaches, including deep learning pipelines, have been developed to alleviate these constraints. However, these methods tend to perform worse on unseen MRI scanner vendors and different imaging protocols. PURPOSE: To present and evaluate for clinical use PARIETAL, a pre-trained deep learning brain extraction method, comparing its reproducibility in a scan/rescan analysis and its robustness across scanners of different manufacturers. STUDY TYPE: Retrospective. POPULATION: Twenty-one subjects (12 women), age range 22-48 years, each scanned on three different MRI machines, including scan/rescan acquisitions on each. FIELD STRENGTH/SEQUENCE: T1-weighted images acquired on a 3-T Siemens scanner with a magnetization-prepared rapid gradient-echo sequence and on two 1.5-T scanners, Philips and GE, with spin-echo and spoiled gradient-recalled (SPGR) sequences, respectively. ASSESSMENT: Analysis of the intracranial cavity volumes obtained for each subject on the three different scanners and the scan/rescan acquisitions. STATISTICAL TESTS: Parametric permutation tests of the differences in volumes to rank and statistically evaluate the performance of PARIETAL against state-of-the-art methods. RESULTS: The mean absolute intracranial volume differences obtained by PARIETAL in the scan/rescan analysis were 1.88 mL, 3.91 mL, and 4.71 mL for the Siemens, GE, and Philips scanners, respectively. PARIETAL was the best-ranked method on the Siemens and GE scanners, dropping to rank 2 on the Philips images. Intracranial volume differences for the same subject between scanners were 5.46 mL, 27.16 mL, and 30.44 mL for the GE/Philips, Siemens/Philips, and Siemens/GE comparisons, respectively. The permutation tests revealed that PARIETAL was always ranked first, obtaining the most similar volumetric results between scanners. DATA CONCLUSION: PARIETAL accurately segments the brain and generalizes to images acquired at different sites without retraining or fine-tuning. PARIETAL is publicly available. LEVEL OF EVIDENCE: 2. TECHNICAL EFFICACY: Stage 2.
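The abstract ranks methods using permutation tests on volume differences. A generic sign-flip permutation test for paired measurements might look like the sketch below; the two-sided test on the mean, the 10,000 sign flips, and the toy volumes are all assumptions for illustration, not details from the paper.

```python
import numpy as np

def paired_permutation_test(a, b, n_perm=10_000, seed=0):
    """Two-sided permutation test for paired measurements.

    Under the null hypothesis the sign of each paired difference is
    arbitrary, so we randomly flip signs and count how often the
    permuted |mean difference| reaches the observed one.
    """
    rng = np.random.default_rng(seed)
    d = np.asarray(a, dtype=np.float64) - np.asarray(b, dtype=np.float64)
    observed = abs(d.mean())
    signs = rng.choice([-1.0, 1.0], size=(n_perm, d.size))
    perm = np.abs((signs * d).mean(axis=1))
    # add-one smoothing keeps the p-value strictly positive
    return (np.sum(perm >= observed) + 1) / (n_perm + 1)

# Toy example: intracranial volumes (mL) from two hypothetical methods
rng = np.random.default_rng(1)
method_a = 1400 + rng.normal(0, 2, size=20)
method_b = method_a + 5.0  # systematic 5 mL offset between methods
print(paired_permutation_test(method_a, method_b) < 0.05)
```

Permutation tests make no normality assumption, which suits small scan/rescan cohorts like the 21 subjects here.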
Affiliation(s)
- Sergi Valverde
- Research Institute of Computer Vision and Robotics, University of Girona, Girona, Spain
- Llucia Coll
- Research Institute of Computer Vision and Robotics, University of Girona, Girona, Spain
- Liliana Valencia
- Research Institute of Computer Vision and Robotics, University of Girona, Girona, Spain
- Albert Clèrigues
- Research Institute of Computer Vision and Robotics, University of Girona, Girona, Spain
- Arnau Oliver
- Research Institute of Computer Vision and Robotics, University of Girona, Girona, Spain
- REEM, Red Española de Esclerosis Múltiple
- Lluís Ramió-Torrentà
- REEM, Red Española de Esclerosis Múltiple
- Multiple Sclerosis and Neuroimmunology Unit, Neurology Department, Dr. Josep Trueta University Hospital, Institut d'Investigació Biomèdica, Girona, Spain
- Medical Sciences Department, University of Girona, Girona, Spain
- Àlex Rovira
- Magnetic Resonance Unit, Department of Radiology, Vall d'Hebron University Hospital, Barcelona, Spain
- Xavier Lladó
- Research Institute of Computer Vision and Robotics, University of Girona, Girona, Spain
- REEM, Red Española de Esclerosis Múltiple
8. Bai L, Wang D, Wang H, Barnett M, Cabezas M, Cai W, Calamante F, Kyle K, Liu D, Ly L, Nguyen A, Shieh CC, Sullivan R, Zhan G, Ouyang W, Wang C. Improving multiple sclerosis lesion segmentation across clinical sites: A federated learning approach with noise-resilient training. Artif Intell Med 2024;152:102872. [PMID: 38701636] [DOI: 10.1016/j.artmed.2024.102872] [Received: 08/26/2023] [Revised: 03/28/2024] [Accepted: 04/15/2024] [Indexed: 05/05/2024]
Abstract
Accurately measuring the evolution of Multiple Sclerosis (MS) with magnetic resonance imaging (MRI) critically informs understanding of disease progression and helps to direct therapeutic strategy. Deep learning models have shown promise for automatically segmenting MS lesions, but the scarcity of accurately annotated data hinders progress in this area. Obtaining sufficient data from a single clinical site is challenging and does not address the heterogeneous need for model robustness. Conversely, the collection of data from multiple sites introduces data privacy concerns and potential label noise due to varying annotation standards. To address this dilemma, we explore the use of the federated learning framework while considering label noise. Our approach enables collaboration among multiple clinical sites without compromising data privacy under a federated learning paradigm that incorporates a noise-robust training strategy based on label correction. Specifically, we introduce a Decoupled Hard Label Correction (DHLC) strategy that considers the imbalanced distribution and fuzzy boundaries of MS lesions, enabling the correction of false annotations based on prediction confidence. We also introduce a Centrally Enhanced Label Correction (CELC) strategy, which leverages the aggregated central model as a correction teacher for all sites, enhancing the reliability of the correction process. Extensive experiments conducted on two multi-site datasets demonstrate the effectiveness and robustness of our proposed methods, indicating their potential for clinical applications in multi-site collaborations to train better deep learning models with lower cost in data collection and annotation.
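The core idea of confidence-based hard label correction can be illustrated with a simple thresholding sketch. This is a generic reconstruction, not the paper's DHLC or CELC algorithms, and the 0.9/0.1 thresholds are arbitrary assumptions chosen for the example.

```python
import numpy as np

def confidence_label_correction(prob_map, labels, high=0.9, low=0.1):
    """Generic confidence-based hard label correction.

    Voxels the model predicts as lesion with probability >= `high`
    are relabeled 1, voxels with probability <= `low` are relabeled 0,
    and voxels in the fuzzy band in between keep the site's original
    (possibly noisy) annotation.
    """
    prob_map = np.asarray(prob_map, dtype=np.float64)
    corrected = np.asarray(labels).copy()
    corrected[prob_map >= high] = 1
    corrected[prob_map <= low] = 0
    return corrected

# Toy voxels: confident lesion, confident background, ambiguous boundary
probs = np.array([0.95, 0.05, 0.50])
noisy = np.array([0, 1, 1])  # first two annotations disagree with the model
print(confidence_label_correction(probs, noisy))
```

Only confident disagreements are overwritten; ambiguous boundary voxels defer to the human annotation, which matches the abstract's emphasis on the fuzzy boundaries of MS lesions.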
Affiliation(s)
- Lei Bai
- Brain and Mind Centre, The University of Sydney, NSW 2050, Australia; School of Electrical and Information Engineering, The University of Sydney, NSW 2006, Australia
- Dongang Wang
- Brain and Mind Centre, The University of Sydney, NSW 2050, Australia; Sydney Neuroimaging Analysis Centre, 94 Mallett Street, NSW 2050, Australia.
- Hengrui Wang
- Sydney Neuroimaging Analysis Centre, 94 Mallett Street, NSW 2050, Australia
- Michael Barnett
- Brain and Mind Centre, The University of Sydney, NSW 2050, Australia; Sydney Neuroimaging Analysis Centre, 94 Mallett Street, NSW 2050, Australia; Royal Prince Alfred Hospital, NSW, 2050, Australia
- Mariano Cabezas
- Brain and Mind Centre, The University of Sydney, NSW 2050, Australia
- Weidong Cai
- Brain and Mind Centre, The University of Sydney, NSW 2050, Australia; School of Computer Science, The University of Sydney, NSW 2006, Australia
- Fernando Calamante
- Brain and Mind Centre, The University of Sydney, NSW 2050, Australia; School of Biomedical Engineering, The University of Sydney, NSW 2006, Australia; Sydney Imaging, The University of Sydney, NSW 2006, Australia
- Kain Kyle
- Brain and Mind Centre, The University of Sydney, NSW 2050, Australia; Sydney Neuroimaging Analysis Centre, 94 Mallett Street, NSW 2050, Australia
- Dongnan Liu
- Brain and Mind Centre, The University of Sydney, NSW 2050, Australia; School of Computer Science, The University of Sydney, NSW 2006, Australia
- Linda Ly
- Sydney Neuroimaging Analysis Centre, 94 Mallett Street, NSW 2050, Australia
- Aria Nguyen
- Sydney Neuroimaging Analysis Centre, 94 Mallett Street, NSW 2050, Australia
- Chun-Chien Shieh
- Sydney Neuroimaging Analysis Centre, 94 Mallett Street, NSW 2050, Australia
- Ryan Sullivan
- School of Biomedical Engineering, The University of Sydney, NSW 2006, Australia; Australian Imaging Service, NSW 2006, Australia
- Geng Zhan
- Brain and Mind Centre, The University of Sydney, NSW 2050, Australia; Sydney Neuroimaging Analysis Centre, 94 Mallett Street, NSW 2050, Australia
- Wanli Ouyang
- School of Electrical and Information Engineering, The University of Sydney, NSW 2006, Australia
- Chenyu Wang
- Brain and Mind Centre, The University of Sydney, NSW 2050, Australia; Sydney Neuroimaging Analysis Centre, 94 Mallett Street, NSW 2050, Australia.
9. Gangrade S, Sharma PC, Sharma AK, Singh YP. Modified DeeplabV3+ with multi-level context attention mechanism for colonoscopy polyp segmentation. Comput Biol Med 2024;170:108096. [PMID: 38320340] [DOI: 10.1016/j.compbiomed.2024.108096] [Received: 09/11/2023] [Revised: 01/31/2024] [Accepted: 02/01/2024] [Indexed: 02/08/2024]
Abstract
The development of automated methods for analyzing medical images of colon cancer is a major research field. Colonoscopy is a medical procedure that enables a doctor to look for abnormalities such as polyps, cancer, or inflamed tissue inside the colon and rectum. Colorectal cancer is a gastrointestinal illness that claims the lives of almost two million people worldwide. Video endoscopy is an advanced medical imaging approach for diagnosing gastrointestinal disorders such as inflammatory bowel disease, ulcerative colitis, esophagitis, and polyps. Medical video endoscopy generates many images, which must be reviewed by specialists. The difficulty of manual diagnosis has sparked research into computer-aided techniques that can quickly and reliably diagnose all generated images. The proposed methodology establishes a framework for diagnosing diseases found during colonoscopy. Endoscopists can lower the risk of polyps turning into cancer during colonoscopies by using more accurate computer-assisted polyp detection and segmentation. With the aim of creating a model that can automatically distinguish polyps in images, we present a modified DeeplabV3+ model in this study to carry out segmentation tasks successfully and efficiently. The framework's encoder uses a pre-trained dilated convolutional residual network for optimal feature map resolution. The robustness of the modified model is tested against state-of-the-art segmentation approaches. In this work, we employed two publicly available datasets, CVC-ClinicDB and Kvasir-SEG, and obtained Dice similarity coefficients of 0.97 and 0.95, respectively. The results show that the improved DeeplabV3+ model improves segmentation efficiency and effectiveness in both software and hardware with only minor changes.
Affiliation(s)
- Shweta Gangrade
- School of Information Technology, Manipal University Jaipur, Jaipur, Rajasthan, India; School of Computer Science and Engineering, Manipal University Jaipur, Jaipur, Rajasthan, India
- Prakash Chandra Sharma
- School of Information Technology, Manipal University Jaipur, Jaipur, Rajasthan, India; School of Computer Science and Engineering, Manipal University Jaipur, Jaipur, Rajasthan, India
- Akhilesh Kumar Sharma
- School of Information Technology, Manipal University Jaipur, Jaipur, Rajasthan, India; School of Computer Science and Engineering, Manipal University Jaipur, Jaipur, Rajasthan, India
- Yadvendra Pratap Singh
- School of Information Technology, Manipal University Jaipur, Jaipur, Rajasthan, India; School of Computer Science and Engineering, Manipal University Jaipur, Jaipur, Rajasthan, India.
10. Silveira A, Greving I, Longo E, Scheel M, Weitkamp T, Fleck C, Shahar R, Zaslansky P. Deep learning to overcome Zernike phase-contrast nanoCT artifacts for automated micro-nano porosity segmentation in bone. J Synchrotron Radiat 2024;31:136-149. [PMID: 38095668] [PMCID: PMC10833422] [DOI: 10.1107/s1600577523009852] [Received: 05/15/2023] [Accepted: 11/13/2023] [Indexed: 01/09/2024]
Abstract
Bone material contains a hierarchical network of micro- and nano-cavities and channels, known as the lacuna-canalicular network (LCN), that is thought to play an important role in mechanobiology and turnover. The LCN comprises micrometer-sized lacunae, voids that house osteocytes, and submicrometer-sized canaliculi that connect bone cells. Characterization of this network in three dimensions is crucial for many bone studies. To quantify X-ray Zernike phase-contrast nanotomography data, deep learning is used to isolate and assess porosity in artifact-laden tomographies of zebrafish bones. A technical solution is proposed to overcome the halo and shade-off domains in order to reliably obtain the distribution and morphology of the LCN in the tomographic data. Convolutional neural network (CNN) models are utilized with increasing numbers of images, repeatedly validated by 'error loss' and 'accuracy' metrics. U-Net and Sensor3D CNN models were trained on data obtained from two different synchrotron Zernike phase-contrast transmission X-ray microscopes, the ANATOMIX beamline at SOLEIL (Paris, France) and the P05 beamline at PETRA III (Hamburg, Germany). The Sensor3D CNN model with a smaller batch size of 32 and a training data size of 70 images showed the best performance (accuracy 0.983 and error loss 0.032). The analysis procedures, validated by comparison with human-identified ground-truth images, correctly identified the voids within the bone matrix. This proposed approach may have further application to classify structures in volumetric images that contain non-linear artifacts that degrade image quality and hinder feature identification.
Affiliation(s)
- Andreia Silveira
  Department for Restorative, Preventive and Pediatric Dentistry, Charité-Universitaetsmedizin, Berlin, Germany
- Imke Greving
  Institute of Materials Physics, Helmholtz-Zentrum Hereon, Geesthacht, Germany
- Elena Longo
  Elettra – Sincrotrone Trieste SCpA, Basovizza, Trieste, Italy
- Claudia Fleck
  Fachgebiet Werkstofftechnik / Chair of Materials Science and Engineering, Institute of Materials Science and Technology, Faculty III Process Sciences, Technische Universität Berlin, Berlin, Germany
- Ron Shahar
  Koret School of Veterinary Medicine, The Robert H. Smith Faculty of Agriculture, Food and Environmental Sciences, Hebrew University of Jerusalem, Rehovot, Israel
- Paul Zaslansky
  Department for Restorative, Preventive and Pediatric Dentistry, Charité-Universitaetsmedizin, Berlin, Germany

11
Ma J, Kong D, Wu F, Bao L, Yuan J, Liu Y. Densely connected convolutional networks for ultrasound image based lesion segmentation. Comput Biol Med 2024; 168:107725. [PMID: 38006827] [DOI: 10.1016/j.compbiomed.2023.107725]
Abstract
Delineating lesion boundaries plays a central role in diagnosing thyroid and breast cancers, making related therapy plans, and evaluating therapeutic effects. However, manually annotating low-quality ultrasound (US) images is often time-consuming and error-prone, with limited reproducibility, given high speckle noise, heterogeneous appearances, ambiguous boundaries, etc., especially for nodular lesions with large intra-class variance. Accurate lesion segmentation from US images is hence desirable but challenging in clinical practice. In this study, we propose a new densely connected convolutional network (called MDenseNet) architecture to automatically segment nodular lesions from 2D US images, which is first pre-trained over the ImageNet database (called PMDenseNet) and then retrained on the given US image datasets. Moreover, we also designed a deep MDenseNet with a pre-training strategy (PDMDenseNet) for segmentation of thyroid and breast nodules by adding a dense block to increase the depth of our MDenseNet. Extensive experiments demonstrate that the proposed MDenseNet-based method can accurately extract multiple nodular lesions, even those with complex shapes, from input thyroid and breast US images. Additional experiments show that the introduced MDenseNet-based method also outperforms three state-of-the-art convolutional neural networks in terms of accuracy and reproducibility. Meanwhile, promising results in nodular lesion segmentation from thyroid and breast US images illustrate its great potential in many other clinical segmentation tasks.
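The defining feature of a densely connected network is that each layer receives the concatenation of all preceding feature maps. A schematic NumPy illustration of that connectivity pattern, with layer internals reduced to a random 1x1 projection plus ReLU (purely illustrative, not the MDenseNet architecture itself):

```python
import numpy as np

rng = np.random.default_rng(0)

def dense_block(x: np.ndarray, n_layers: int, growth: int) -> np.ndarray:
    """x: (H, W, C). Each 'layer' sees all previous outputs concatenated
    along the channel axis and appends `growth` new channels."""
    features = [x]
    for _ in range(n_layers):
        inp = np.concatenate(features, axis=-1)           # dense connectivity
        w = rng.standard_normal((inp.shape[-1], growth))  # stand-in for conv weights
        new = np.maximum(inp @ w, 0.0)                    # 1x1 'conv' + ReLU
        features.append(new)
    return np.concatenate(features, axis=-1)

out = dense_block(np.ones((8, 8, 16)), n_layers=4, growth=12)
print(out.shape)  # (8, 8, 64): 16 input channels + 4 layers * 12 new channels
```

Because every layer sees all earlier feature maps, low-level details and gradients propagate directly to deeper layers, which is the property the 'densely connected' design above exploits.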
Affiliation(s)
- Jinlian Ma
  School of Integrated Circuits, Shandong University, Jinan 250101, China; Shenzhen Research Institute of Shandong University, A301 Virtual University Park in South District of Shenzhen, China; State Key Lab of CAD&CG, College of Computer Science and Technology, Zhejiang University, Hangzhou 310027, China
- Dexing Kong
  School of Mathematical Sciences, Zhejiang University, Hangzhou 310027, China
- Fa Wu
  School of Mathematical Sciences, Zhejiang University, Hangzhou 310027, China
- Lingyun Bao
  Department of Ultrasound, Hangzhou First People's Hospital, Zhejiang University, Hangzhou, China
- Jing Yuan
  School of Mathematics and Statistics, Xidian University, China
- Yusheng Liu
  State Key Lab of CAD&CG, College of Computer Science and Technology, Zhejiang University, Hangzhou 310027, China

12
Zhou D, Xu L, Wang T, Wei S, Gao F, Lai X, Cao J. M-DDC: MRI based demyelinative diseases classification with U-Net segmentation and convolutional network. Neural Netw 2024; 169:108-119. [PMID: 37890361] [DOI: 10.1016/j.neunet.2023.10.010]
Abstract
Classification of childhood demyelinative diseases (DDC) from brain magnetic resonance imaging (MRI) is crucial to clinical diagnosis, yet little attention has been paid to it in the past. Accurately differentiating pediatric-onset neuromyelitis optica spectrum disorder (NMOSD) from acute disseminated encephalomyelitis (ADEM) based on MRI is particularly challenging. In this paper, a novel architecture, M-DDC, based on a joint U-Net segmentation network and a deep convolutional network is developed. The U-Net segmentation provides pixel-level structure information that helps locate lesion areas and estimate their size. The classification branch detects the regions of interest inside MRIs, including the white matter regions where lesions appear. The performance of the proposed method is evaluated on MRIs of 201 subjects recorded at the Children's Hospital of Zhejiang University School of Medicine. The comparisons show that the proposed method achieves the highest accuracy of 99.19% for ADEM/NMOSD classification and a Dice coefficient of 71.1% for segmentation.
Affiliation(s)
- Deyang Zhou
  Machine Learning and I-health International Cooperation Base of Zhejiang Province, Hangzhou Dianzi University, 310018, China; Artificial Intelligence Institute, Hangzhou Dianzi University, Zhejiang, 310018, China; HDU-ITMO Joint Institute, Hangzhou Dianzi University, Zhejiang, 310018, China
- Lu Xu
  Department of Neurology, Children's Hospital, Zhejiang University School of Medicine, 310018, China
- Tianlei Wang
  Machine Learning and I-health International Cooperation Base of Zhejiang Province, Hangzhou Dianzi University, 310018, China; Artificial Intelligence Institute, Hangzhou Dianzi University, Zhejiang, 310018, China
- Shaonong Wei
  Machine Learning and I-health International Cooperation Base of Zhejiang Province, Hangzhou Dianzi University, 310018, China; Artificial Intelligence Institute, Hangzhou Dianzi University, Zhejiang, 310018, China; HDU-ITMO Joint Institute, Hangzhou Dianzi University, Zhejiang, 310018, China
- Feng Gao
  Department of Neurology, Children's Hospital, Zhejiang University School of Medicine, 310018, China
- Xiaoping Lai
  Machine Learning and I-health International Cooperation Base of Zhejiang Province, Hangzhou Dianzi University, 310018, China; Artificial Intelligence Institute, Hangzhou Dianzi University, Zhejiang, 310018, China
- Jiuwen Cao
  Machine Learning and I-health International Cooperation Base of Zhejiang Province, Hangzhou Dianzi University, 310018, China; Artificial Intelligence Institute, Hangzhou Dianzi University, Zhejiang, 310018, China

13
Liu Y, Chen Z, Chen J, Shi Z, Fang G. Pathologic complete response prediction in breast cancer lesion segmentation and neoadjuvant therapy. Front Med (Lausanne) 2023; 10:1188207. [PMID: 38143443] [PMCID: PMC10740372] [DOI: 10.3389/fmed.2023.1188207]
Abstract
Objectives Predicting whether axillary lymph nodes will achieve pathologic complete response (pCR) after breast cancer patients receive neoadjuvant chemotherapy helps clinicians quickly formulate a follow-up treatment plan. This paper presents a novel method to make this prediction from dynamic contrast-enhanced magnetic resonance imaging (DCE-MRI), an effective medical imaging modality for this task. Methods To obtain an accurate prediction, we first propose a two-step lesion segmentation method to extract the breast cancer lesion region from DCE-MRI images. With the segmented lesion region, we then use a multi-modal fusion model to predict the probability of axillary lymph nodes achieving pCR. Results We collected 361 breast cancer samples from two hospitals to train and test the proposed segmentation model and the multi-modal fusion model. Both the segmentation and prediction models obtained high accuracy. Conclusion The results show that our method is effective in both the segmentation task and the pCR prediction task. It suggests that the presented methods, especially the multi-modal fusion model, can be used to predict treatment response in breast cancer from noninvasive data alone.
Affiliation(s)
- Yue Liu
  Institute of Computing Science and Technology, Guangzhou University, Guangzhou, China; School of Information Engineering, Jiangxi College of Applied Technology, Ganzhou, China
- Zhihong Chen
  Institute of Computing Science and Technology, Guangzhou University, Guangzhou, China; Guangdong Provincial Key Laboratory of Artificial Intelligence in Medical Image Analysis and Application, Guangdong Provincial People's Hospital, Guangdong Academy of Medical Sciences, Guangzhou, China
- Junhao Chen
  Institute of Computing Science and Technology, Guangzhou University, Guangzhou, China
- Zhenwei Shi
  Guangdong Provincial Key Laboratory of Artificial Intelligence in Medical Image Analysis and Application, Guangdong Provincial People's Hospital, Guangdong Academy of Medical Sciences, Guangzhou, China; Department of Radiology, Guangdong Provincial People's Hospital, Guangdong Academy of Medical Sciences, Guangzhou, China; Guangdong Cardiovascular Institute, Guangdong Provincial People's Hospital, Guangdong Academy of Medical Sciences, Guangzhou, China
- Gang Fang
  Institute of Computing Science and Technology, Guangzhou University, Guangzhou, China

14
Raab F, Malloni W, Wein S, Greenlee MW, Lang EW. Investigation of an efficient multi-modal convolutional neural network for multiple sclerosis lesion detection. Sci Rep 2023; 13:21154. [PMID: 38036638] [PMCID: PMC10689724] [DOI: 10.1038/s41598-023-48578-4]
Abstract
In this study, an automated 2D machine learning approach for fast and precise segmentation of MS lesions from multi-modal magnetic resonance images (mmMRI) is presented. The method is based on a U-Net-like convolutional neural network (CNN) for automated 2D slice-based segmentation of brain MRI volumes. The individual modalities are encoded in separate downsampling branches without weight sharing to leverage modality-specific features. Skip connections feed feature maps into multi-scale feature fusion (MSFF) blocks at every decoder stage of the network, which are followed by multi-scale feature upsampling (MSFU) blocks that use information about lesion shape and location. The CNN is evaluated on two publicly available datasets: the ISBI 2015 longitudinal MS lesion segmentation challenge dataset, containing 19 subjects, and the MICCAI 2016 MSSEG challenge dataset, containing 15 subjects scanned on various scanners. The proposed multi-input 2D architecture is among the top-performing approaches in the ISBI challenge for which open-access papers are available, outperforms state-of-the-art 3D approaches without additional post-processing, can be adapted to other scanners quickly, is robust against scanner variability, and can be deployed for inference even on a standard laptop without a dedicated GPU.
Affiliation(s)
- Florian Raab
  Computational Intelligence and Machine Learning Group, University of Regensburg, 93051 Regensburg, Germany
- Wilhelm Malloni
  Experimental Psychology, University of Regensburg, 93051 Regensburg, Germany
- Simon Wein
  Computational Intelligence and Machine Learning Group, University of Regensburg, 93051 Regensburg, Germany; Experimental Psychology, University of Regensburg, 93051 Regensburg, Germany
- Mark W Greenlee
  Experimental Psychology, University of Regensburg, 93051 Regensburg, Germany
- Elmar W Lang
  Computational Intelligence and Machine Learning Group, University of Regensburg, 93051 Regensburg, Germany

15
Chen Y, Yu L, Wang JY, Panjwani N, Obeid JP, Liu W, Liu L, Kovalchuk N, Gensheimer MF, Vitzthum LK, Beadle BM, Chang DT, Le QT, Han B, Xing L. Adaptive Region-Specific Loss for Improved Medical Image Segmentation. IEEE Trans Pattern Anal Mach Intell 2023; 45:13408-13421. [PMID: 37363838] [PMCID: PMC11346301] [DOI: 10.1109/tpami.2023.3289667]
Abstract
Defining the loss function is an important part of neural network design and critically determines the success of deep learning modeling. A significant shortcoming of the conventional loss functions is that they weight all regions in the input image volume equally, despite the fact that the system is known to be heterogeneous (i.e., some regions can achieve high prediction performance more easily than others). Here, we introduce a region-specific loss to lift the implicit assumption of homogeneous weighting for better learning. We divide the entire volume into multiple sub-regions, each with an individualized loss constructed for optimal local performance. Effectively, this scheme imposes higher weightings on the sub-regions that are more difficult to segment, and vice versa. Furthermore, the regional false positive and false negative errors are computed for each input image during a training step and the regional penalty is adjusted accordingly to enhance the overall accuracy of the prediction. Using different public and in-house medical image datasets, we demonstrate that the proposed regionally adaptive loss paradigm outperforms conventional methods in multi-organ segmentation, without any modification to the neural network architecture or additional data preparation.
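The scheme described above can be sketched as follows: split the volume into sub-regions, compute a loss per region, and up-weight the harder regions. The grid size and the normalized-loss weighting below are simplified placeholders, not the paper's exact formulation:

```python
import numpy as np

def soft_dice_loss(p, g, eps=1e-7):
    """1 - soft Dice between a probability map p and a binary mask g."""
    inter = (p * g).sum()
    return 1.0 - (2 * inter + eps) / (p.sum() + g.sum() + eps)

def region_specific_loss(pred, gt, grid=(2, 2)):
    """pred, gt: (H, W) probability map and binary mask.
    Per-region Dice losses, weighted by relative difficulty."""
    H, W = pred.shape
    h, w = H // grid[0], W // grid[1]
    losses = []
    for i in range(grid[0]):
        for j in range(grid[1]):
            sl = (slice(i * h, (i + 1) * h), slice(j * w, (j + 1) * w))
            losses.append(soft_dice_loss(pred[sl], gt[sl]))
    losses = np.array(losses)
    weights = losses / (losses.sum() + 1e-7)   # harder regions weigh more
    return float((weights * losses).sum())

gt = np.kron(np.eye(2), np.ones((2, 2)))       # 4x4 mask, lesions in two quadrants
print(region_specific_loss(gt, gt))            # 0.0: perfect prediction
print(region_specific_loss(1.0 - gt, gt))      # close to 1.0: every region wrong
```

In the paper the penalty is further adjusted per region from the regional false positive and false negative errors during training; here the weighting is recomputed from the regional losses alone.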
16
Wahlig SG, Nedelec P, Weiss DA, Rudie JD, Sugrue LP, Rauschecker AM. 3D U-Net for automated detection of multiple sclerosis lesions: utility of transfer learning from other pathologies. Front Neurosci 2023; 17:1188336. [PMID: 37965219] [PMCID: PMC10641790] [DOI: 10.3389/fnins.2023.1188336]
Abstract
Background and purpose Deep learning algorithms for segmentation of multiple sclerosis (MS) plaques generally require training on large datasets. This manuscript evaluates the effect of transfer learning from segmentation of another pathology to facilitate use of smaller MS-specific training datasets; that is, a model trained to detect one type of pathology was re-trained to identify MS lesions and active demyelination. Materials and methods In this retrospective study using MRI exams from 149 patients spanning 4/18/2014 to 7/8/2021, 3D convolutional neural networks were trained with a variable number of manually segmented MS studies. Models were trained for FLAIR lesion segmentation at a single timepoint, new FLAIR lesion segmentation comparing two timepoints, and enhancing (actively demyelinating) lesion segmentation on T1 post-contrast imaging. Models were either trained de novo or fine-tuned with transfer learning applied to a pre-existing model initially trained on non-MS data. Performance was evaluated with lesionwise sensitivity and positive predictive value (PPV). Results For single-timepoint FLAIR lesion segmentation with 10 training studies, a fine-tuned model demonstrated improved performance [lesionwise sensitivity 0.55 ± 0.02 (mean ± standard error), PPV 0.66 ± 0.02] compared to a de novo model (sensitivity 0.49 ± 0.02, p = 0.001; PPV 0.32 ± 0.02, p < 0.001). For new lesion segmentation with 30 training studies and their prior comparisons, a fine-tuned model demonstrated similar sensitivity (0.49 ± 0.05) and significantly improved PPV (0.60 ± 0.05) compared to a de novo model (sensitivity 0.51 ± 0.04, p = 0.437; PPV 0.43 ± 0.04, p = 0.002). For enhancement segmentation with 20 training studies, a fine-tuned model demonstrated significantly improved overall performance (sensitivity 0.74 ± 0.06, PPV 0.69 ± 0.05) compared to a de novo model (sensitivity 0.44 ± 0.09, p = 0.001; PPV 0.37 ± 0.05, p = 0.001).
Conclusion By fine-tuning models trained on other disease pathologies with MS-specific data, competitive models identifying existing MS plaques, new MS plaques, and active demyelination can be built with substantially smaller datasets than would otherwise be required to train new models.
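Lesionwise sensitivity and PPV, as reported above, are computed from lesion-level rather than voxel-level counts. A schematic version, assuming the common any-overlap matching convention (the study's exact matching rule may differ):

```python
def lesionwise_metrics(pred_lesions, true_lesions):
    """Each lesion is a set of voxel coordinates.
    A true lesion counts as detected if any predicted lesion overlaps it;
    a predicted lesion is a true positive if it overlaps any true lesion."""
    detected = sum(1 for t in true_lesions
                   if any(t & p for p in pred_lesions))
    tp_pred = sum(1 for p in pred_lesions
                  if any(p & t for t in true_lesions))
    sensitivity = detected / len(true_lesions) if true_lesions else 0.0
    ppv = tp_pred / len(pred_lesions) if pred_lesions else 0.0
    return sensitivity, ppv

true_lesions = [{(1, 1), (1, 2)}, {(5, 5)}, {(9, 9)}]
pred_lesions = [{(1, 2), (1, 3)}, {(7, 7)}]   # hits lesion 1, plus one false positive
sens, ppv = lesionwise_metrics(pred_lesions, true_lesions)
print(round(sens, 3), ppv)  # 0.333 0.5
```

Sensitivity penalizes missed lesions while PPV penalizes spurious detections, which is why the abstract reports both.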
Affiliation(s)
- Stephen G. Wahlig
  Center for Intelligent Imaging (ci), Department of Radiology and Biomedical Imaging, University of California, San Francisco, San Francisco, CA, United States
- Pierre Nedelec
  Center for Intelligent Imaging (ci), Department of Radiology and Biomedical Imaging, University of California, San Francisco, San Francisco, CA, United States
- David A. Weiss
  Center for Intelligent Imaging (ci), Department of Radiology and Biomedical Imaging, University of California, San Francisco, San Francisco, CA, United States
- Jeffrey D. Rudie
  Center for Intelligent Imaging (ci), Department of Radiology and Biomedical Imaging, University of California, San Francisco, San Francisco, CA, United States; Department of Radiology, University of California, San Diego, San Diego, CA, United States
- Leo P. Sugrue
  Center for Intelligent Imaging (ci), Department of Radiology and Biomedical Imaging, University of California, San Francisco, San Francisco, CA, United States
- Andreas M. Rauschecker
  Center for Intelligent Imaging (ci), Department of Radiology and Biomedical Imaging, University of California, San Francisco, San Francisco, CA, United States

17
Küstner T, Hepp T, Seith F. Multiparametric Oncologic Hybrid Imaging: Machine Learning Challenges and Opportunities. Nuklearmedizin 2023; 62:306-313. [PMID: 37802058] [DOI: 10.1055/a-2157-6670]
Abstract
BACKGROUND Machine learning (ML) is considered an important technology for future data analysis in health care. METHODS The inherently technology-driven fields of diagnostic radiology and nuclear medicine will both benefit from ML in terms of image acquisition and reconstruction. Within the next few years, this will lead to accelerated image acquisition, improved image quality, a reduction of motion artifacts and, for PET imaging, reduced radiation exposure and new approaches for attenuation correction. Furthermore, ML has the potential to support decision making through a combined analysis of data derived from different modalities, especially in oncology. In this context, we see great potential for ML in multiparametric hybrid imaging and the development of imaging biomarkers. RESULTS AND CONCLUSION In this review, we describe the basics of ML, present approaches in hybrid imaging with MRI, CT, and PET, and discuss the specific challenges involved and the steps ahead to make ML a diagnostic and clinical tool in the future. KEY POINTS · ML provides a viable clinical solution for the reconstruction, processing, and analysis of hybrid imaging obtained from MRI, CT, and PET.
Affiliation(s)
- Thomas Küstner
  Medical Image and Data Analysis (MIDAS.lab), Department of Diagnostic and Interventional Radiology, University Hospitals Tübingen, Germany
- Tobias Hepp
  Medical Image and Data Analysis (MIDAS.lab), Department of Diagnostic and Interventional Radiology, University Hospitals Tübingen, Germany
- Ferdinand Seith
  Department of Diagnostic and Interventional Radiology, University Hospitals Tübingen, Germany

18
|
Vázquez-Marrufo M, Sarrias-Arrabal E, García-Torres M, Martín-Clemente R, Izquierdo G. A systematic review of the application of machine-learning algorithms in multiple sclerosis. Neurologia 2023; 38:577-590. [PMID: 35843587] [DOI: 10.1016/j.nrleng.2020.10.013]
Abstract
INTRODUCTION The applications of artificial intelligence, and in particular automatic learning or "machine learning" (ML), constitute both a challenge and a great opportunity in numerous scientific, technical, and clinical disciplines. Specific applications in the study of multiple sclerosis (MS) have been no exception, and constitute an area of increasing interest in recent years. OBJECTIVE We present a systematic review of the application of ML algorithms in MS. MATERIALS AND METHODS We used the PubMed search engine, which allows free access to the MEDLINE medical database, to identify studies including the keywords "machine learning" and "multiple sclerosis." We excluded review articles, studies written in languages other than English or Spanish, and studies that were mainly technical and did not specifically apply to MS. The final selection included 76 articles, and 38 were rejected. CONCLUSIONS After the review process, we established 4 main applications of ML in MS: 1) classifying MS subtypes; 2) distinguishing patients with MS from healthy controls and individuals with other diseases; 3) predicting progression and response to therapeutic interventions; and 4) other applications. Results found to date have shown that ML algorithms may offer great support for health professionals both in clinical settings and in research into MS.
Affiliation(s)
- M Vázquez-Marrufo
  Departamento de Psicología Experimental, Facultad de Psicología, Universidad de Sevilla, Sevilla, Spain
- E Sarrias-Arrabal
  Departamento de Psicología Experimental, Facultad de Psicología, Universidad de Sevilla, Sevilla, Spain
- M García-Torres
  Escuela Politécnica Superior, Universidad Pablo de Olavide, Sevilla, Spain
- R Martín-Clemente
  Departamento de Teoría de la Señal y Comunicaciones, Escuela Técnica Superior de Ingeniería, Universidad de Sevilla, Sevilla, Spain
- G Izquierdo
  Unidad de Esclerosis Múltiple, Hospital VITHAS, Sevilla, Spain

19
Riaz Z, Khan B, Abdullah S, Khan S, Islam MS. Lung Tumor Image Segmentation from Computer Tomography Images Using MobileNetV2 and Transfer Learning. Bioengineering (Basel) 2023; 10:981. [PMID: 37627866] [PMCID: PMC10451633] [DOI: 10.3390/bioengineering10080981]
Abstract
BACKGROUND Lung cancer is one of the most fatal cancers worldwide; malignant tumors are characterized by the growth of abnormal cells in the tissues of the lungs. Symptoms of lung cancer usually do not appear until it is already at an advanced stage, so proper segmentation of cancerous lesions in CT images is a primary step towards a completely automated diagnostic system. METHOD In this work, we developed an improved hybrid neural network via the fusion of two architectures, MobileNetV2 and U-Net, for the semantic segmentation of malignant lung tumors from CT images. Transfer learning was employed: a pre-trained MobileNetV2 was utilized as the encoder of a conventional U-Net model for feature extraction. The proposed network is an efficient segmentation approach that performs lightweight filtering to reduce computation and pointwise convolution to build more features. Skip connections with ReLU activation were established to connect the encoder layers of MobileNetV2 to the decoder layers of U-Net, allowing the concatenation of feature maps of different resolutions from encoder to decoder. Furthermore, the model was trained and fine-tuned on the training dataset acquired from the Medical Segmentation Decathlon (MSD) 2018 Challenge. RESULTS The proposed network was tested and evaluated on 25% of the MSD dataset, achieving a Dice score of 0.8793, recall of 0.8602, and precision of 0.93. Notably, our technique outperforms currently available networks, which require several phases of training and testing.
Affiliation(s)
- Zainab Riaz
  Hong Kong Center for Cerebro-Cardiovascular Health Engineering (COCHE), Hong Kong SAR, China
- Bangul Khan
  Hong Kong Center for Cerebro-Cardiovascular Health Engineering (COCHE), Hong Kong SAR, China; Department of Biomedical Engineering, City University of Hong Kong, Hong Kong SAR, China
- Saad Abdullah
  Division of Intelligent Future Technologies, School of Innovation, Design and Engineering, Mälardalen University, P.O. Box 883, 721 23 Västerås, Sweden
- Samiullah Khan
  Center for Eye & Vision Research, 17W Science Park, Hong Kong SAR, China
- Md Shohidul Islam
  Hong Kong Center for Cerebro-Cardiovascular Health Engineering (COCHE), Hong Kong SAR, China

20
|
Spagnolo F, Depeursinge A, Schädelin S, Akbulut A, Müller H, Barakovic M, Melie-Garcia L, Bach Cuadra M, Granziera C. How far MS lesion detection and segmentation are integrated into the clinical workflow? A systematic review. Neuroimage Clin 2023; 39:103491. [PMID: 37659189] [PMCID: PMC10480555] [DOI: 10.1016/j.nicl.2023.103491]
Abstract
INTRODUCTION Over the past few years, the deep learning community has developed and validated a plethora of tools for lesion detection and segmentation in multiple sclerosis (MS). However, there is an important gap between validating models technically and clinically. To this end, a six-step framework for the development, validation, and integration of quantitative tools in the clinic was recently proposed under the name of the Quantitative Neuroradiology Initiative (QNI). AIMS To investigate to what extent automatic tools in MS fulfill the QNI framework necessary to integrate automated detection and segmentation into the clinical neuroradiology workflow. METHODS Adopting the systematic Cochrane literature review methodology, we screened and summarised published scientific articles that perform automatic MS lesion detection and segmentation. We categorised the retrieved studies by their degree of fulfillment of the QNI's six steps, which cover a tool's technical assessment, clinical validation, and integration. RESULTS We found 156 studies; 146/156 (94%) fulfilled the first QNI step, 155/156 (99%) the second, 8/156 (5%) the third, 3/156 (2%) the fourth, 5/156 (3%) the fifth, and only one the sixth. CONCLUSIONS To date, little has been done to evaluate the clinical performance and clinical-workflow integration of the available methods for MS lesion detection/segmentation. In addition, the socio-economic effects and the impact on patient management of such tools remain almost unexplored.
Affiliation(s)
- Federico Spagnolo
  Translational Imaging in Neurology (ThINK) Basel, Department of Biomedical Engineering, Faculty of Medicine, University Hospital Basel and University of Basel, Basel, Switzerland; Department of Neurology, University Hospital Basel, Basel, Switzerland; Research Center for Clinical Neuroimmunology and Neuroscience Basel (RC2NB), University Hospital Basel and University of Basel, Basel, Switzerland; MedGIFT, Institute of Informatics, School of Management, HES-SO Valais-Wallis University of Applied Sciences and Arts Western Switzerland, Sierre, Switzerland
- Adrien Depeursinge
  MedGIFT, Institute of Informatics, School of Management, HES-SO Valais-Wallis University of Applied Sciences and Arts Western Switzerland, Sierre, Switzerland; Nuclear Medicine and Molecular Imaging Department, Lausanne University Hospital (CHUV) and University of Lausanne, Lausanne, Switzerland
- Sabine Schädelin
  Translational Imaging in Neurology (ThINK) Basel, Department of Biomedical Engineering, Faculty of Medicine, University Hospital Basel and University of Basel, Basel, Switzerland; Clinical Trial Unit, Department of Clinical Research, University Hospital Basel, University of Basel, Basel, Switzerland
- Aysenur Akbulut
  Translational Imaging in Neurology (ThINK) Basel, Department of Biomedical Engineering, Faculty of Medicine, University Hospital Basel and University of Basel, Basel, Switzerland; Ankara University School of Medicine, Ankara, Turkey
- Henning Müller
  MedGIFT, Institute of Informatics, School of Management, HES-SO Valais-Wallis University of Applied Sciences and Arts Western Switzerland, Sierre, Switzerland; The Sense Research and Innovation Center, Lausanne and Sion, Switzerland
- Muhamed Barakovic
  Translational Imaging in Neurology (ThINK) Basel, Department of Biomedical Engineering, Faculty of Medicine, University Hospital Basel and University of Basel, Basel, Switzerland; Department of Neurology, University Hospital Basel, Basel, Switzerland; Research Center for Clinical Neuroimmunology and Neuroscience Basel (RC2NB), University Hospital Basel and University of Basel, Basel, Switzerland
- Lester Melie-Garcia
  Translational Imaging in Neurology (ThINK) Basel, Department of Biomedical Engineering, Faculty of Medicine, University Hospital Basel and University of Basel, Basel, Switzerland; Department of Neurology, University Hospital Basel, Basel, Switzerland; Research Center for Clinical Neuroimmunology and Neuroscience Basel (RC2NB), University Hospital Basel and University of Basel, Basel, Switzerland
- Meritxell Bach Cuadra
  CIBM Center for Biomedical Imaging, Lausanne, Switzerland; Radiology Department, Lausanne University Hospital (CHUV) and University of Lausanne, Lausanne, Switzerland
- Cristina Granziera
  Translational Imaging in Neurology (ThINK) Basel, Department of Biomedical Engineering, Faculty of Medicine, University Hospital Basel and University of Basel, Basel, Switzerland; Department of Neurology, University Hospital Basel, Basel, Switzerland; Research Center for Clinical Neuroimmunology and Neuroscience Basel (RC2NB), University Hospital Basel and University of Basel, Basel, Switzerland

21
Zargari A, Mashhadi N, Shariati SA. Enhanced cell segmentation with limited annotated data using generative adversarial networks. BIORXIV : THE PREPRINT SERVER FOR BIOLOGY 2023:2023.07.26.550715. [PMID: 37546774 PMCID: PMC10402092 DOI: 10.1101/2023.07.26.550715] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 08/08/2023]
Abstract
The application of deep learning is rapidly transforming the field of bioimage analysis. While deep learning has shown great promise in complex microscopy tasks such as single-cell segmentation, the development of generalizable foundation segmentation models is hampered by the scarcity of large, diverse annotated datasets of cell images for training. Generative Adversarial Networks (GANs) can generate realistic images that can be used to train deep learning models without requiring large, manually annotated collections of microscopy images. Here, we propose a customized CycleGAN architecture to train an enhanced cell segmentation model with limited annotated cell images, effectively addressing the paucity of annotated data in microscopy imaging. Our customized CycleGAN model can generate realistic synthetic images of cells whose morphological details and nuances closely match those of real images. This method not only increases the variability seen during training but also improves the authenticity of synthetic samples, thereby enhancing the overall predictive accuracy and robustness of the cell segmentation model. Our experimental results show that our CycleGAN-based method significantly improves the performance of the segmentation model compared to conventional training techniques. Interestingly, we demonstrate that our model can extrapolate its knowledge by synthesizing imaging scenarios that were not seen during the training process. Our proposed customized CycleGAN method will accelerate the development of foundation models for cell segmentation in microscopy images.
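The core constraint behind CycleGAN-style training is cycle consistency: mapping a mask to a synthetic image and back should recover the original mask. A minimal sketch of that loss term, with fixed linear maps standing in for the learned generators (the real architecture trains convolutional networks adversarially):

```python
import numpy as np

# Toy illustration of the cycle-consistency idea behind CycleGAN-style
# augmentation: two mappings G (mask -> image) and F (image -> mask) are
# trained so that F(G(x)) ~ x. Here G and F are fixed linear maps, not
# learned networks; this only demonstrates how the loss is computed.

def cycle_consistency_loss(x, G, F):
    """L1 cycle loss ||F(G(x)) - x||_1, averaged over pixels."""
    reconstructed = F(G(x))
    return np.mean(np.abs(reconstructed - x))

# Hypothetical "generators": scale intensities up and back down.
G = lambda x: 2.0 * x + 1.0      # mask -> synthetic image
F = lambda y: (y - 1.0) / 2.0    # image -> recovered mask

mask = np.random.rand(8, 8)
loss = cycle_consistency_loss(mask, G, F)
print(f"cycle loss: {loss:.6f}")  # close to 0 because F inverts G
```

In the full method this term is combined with adversarial losses on both domains, so the generators produce realistic images while staying invertible.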
Collapse
Affiliation(s)
- Abolfazl Zargari
- Department of Electrical and Computer Engineering, University of California, Santa Cruz, CA, USA
- Lead contact
| | - Najmeh Mashhadi
- Department of Computer Science and Engineering, University of California, Santa Cruz, CA, USA
| | - S. Ali Shariati
- Department of Biomolecular Engineering, University of California, Santa Cruz, CA, USA
- Institute for The Biology of Stem Cells, University of California, Santa Cruz, CA, USA
- Genomics Institute, University of California, Santa Cruz, CA, USA
- Lead contact
| |
Collapse
|
22
|
Chen QQ, Sun ZH, Wei CF, Wu EQ, Ming D. Semi-Supervised 3D Medical Image Segmentation Based on Dual-Task Consistent Joint Learning and Task-Level Regularization. IEEE/ACM TRANSACTIONS ON COMPUTATIONAL BIOLOGY AND BIOINFORMATICS 2023; 20:2457-2467. [PMID: 35061590 DOI: 10.1109/tcbb.2022.3144428] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/14/2023]
Abstract
Semi-supervised learning has attracted wide attention owing to its ability to learn from a small amount of labeled data together with relatively more unlabeled data. Some existing semi-supervised methods for medical image segmentation regularize training by implicitly perturbing the data or the networks to enforce consistency. Most consistency-regularization methods operate at the data level or the network-structure level, and few of them operate at the task level, which may not directly lead to an improvement in task accuracy. To overcome this problem, this work proposes a semi-supervised dual-task consistent joint learning framework with task-level regularization for 3D medical image segmentation. Two branches are utilized to simultaneously predict the segmentation map and the signed distance map, and they learn useful information from each other through a consistency loss between the two tasks. The segmentation branch learns rich information from both labeled and unlabeled data to strengthen the constraints on the geometric structure of the target. Experimental results on two benchmark datasets show that the proposed method achieves better performance than other state-of-the-art works, illustrating that it improves segmentation performance by utilizing unlabeled data and consistency regularization.
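The task-level consistency above pairs a segmentation map with a signed distance map (SDM). A brute-force sketch of how the two representations are linked and how a consistency penalty can be computed (illustrative only; the paper uses network predictions and a differentiable formulation):

```python
import numpy as np

# Sketch of the dual-task consistency idea: one branch predicts a binary
# segmentation, the other a signed distance map (SDM); a consistency loss
# penalizes disagreement between the SDM derived from the predicted mask
# and the directly predicted SDM. Brute-force SDM, tiny masks only.

def signed_distance_map(mask):
    """Negative inside the object, positive outside."""
    fg = np.argwhere(mask > 0)
    bg = np.argwhere(mask == 0)
    sdm = np.zeros(mask.shape, dtype=float)
    for (i, j) in np.argwhere(np.ones_like(mask, dtype=bool)):
        if mask[i, j] > 0:  # inside: -distance to nearest background pixel
            sdm[i, j] = -np.min(np.linalg.norm(bg - (i, j), axis=1)) if len(bg) else 0.0
        else:               # outside: +distance to nearest foreground pixel
            sdm[i, j] = np.min(np.linalg.norm(fg - (i, j), axis=1)) if len(fg) else 0.0
    return sdm

def dual_task_consistency(pred_mask, pred_sdm):
    """Mean squared difference between the SDM implied by the mask and the predicted SDM."""
    return np.mean((signed_distance_map(pred_mask) - pred_sdm) ** 2)

mask = np.zeros((5, 5)); mask[2, 2] = 1
sdm = signed_distance_map(mask)
print(dual_task_consistency(mask, sdm))  # 0.0 when the two tasks agree
```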
Collapse
|
23
|
Evaluation of prostate multi parameter bone structures for martial arts practitioners based on magnetic resonance imaging. JOURNAL OF RADIATION RESEARCH AND APPLIED SCIENCES 2023. [DOI: 10.1016/j.jrras.2023.100549] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 02/17/2023]
|
24
|
Liu D, Cabezas M, Wang D, Tang Z, Bai L, Zhan G, Luo Y, Kyle K, Ly L, Yu J, Shieh CC, Nguyen A, Kandasamy Karuppiah E, Sullivan R, Calamante F, Barnett M, Ouyang W, Cai W, Wang C. Multiple sclerosis lesion segmentation: revisiting weighting mechanisms for federated learning. Front Neurosci 2023; 17:1167612. [PMID: 37274196 PMCID: PMC10232857 DOI: 10.3389/fnins.2023.1167612] [Citation(s) in RCA: 5] [Impact Index Per Article: 2.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/16/2023] [Accepted: 04/24/2023] [Indexed: 06/06/2023] Open
Abstract
Background and introduction: Federated learning (FL) has been widely employed for medical image analysis to facilitate multi-client collaborative learning without sharing raw data. Despite great success, FL's applications remain suboptimal in neuroimage analysis tasks such as lesion segmentation in multiple sclerosis (MS), due to variance in lesion characteristics imparted by different scanners and acquisition parameters. Methods: In this work, we propose the first FL MS lesion segmentation framework, built on two effective re-weighting mechanisms. Specifically, a learnable weight is assigned to each local node during the aggregation process, based on its segmentation performance. In addition, the segmentation loss function in each client is re-weighted according to the lesion volume of the data during training. Results: The proposed method has been validated on two FL MS segmentation scenarios using public and clinical datasets. On the first, public dataset, the case-wise and voxel-wise Dice scores of the proposed method are 65.20 and 74.30, respectively; on the second, in-house dataset, they are 53.66 and 62.31. Discussion and conclusions: Comparison experiments on the two scenarios demonstrate the effectiveness of the proposed method, which significantly outperforms other FL methods. Furthermore, FL incorporating our proposed aggregation mechanism can achieve segmentation performance comparable to centralized training with all the raw data.
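The aggregation mechanism boils down to averaging client parameters with weights driven by local segmentation performance rather than by sample counts alone. A minimal sketch with a softmax over Dice scores as the (illustrative, not the paper's exact) re-weighting rule:

```python
import numpy as np

# Minimal sketch of performance-weighted federated averaging: client model
# parameters are aggregated with weights derived from each client's local
# Dice score (softmax-normalized), instead of plain sample-count weighting.
# The re-weighting scheme here is illustrative, not the paper's exact rule.

def aggregate(client_params, dice_scores, temperature=1.0):
    scores = np.asarray(dice_scores, dtype=float) / temperature
    weights = np.exp(scores - scores.max())
    weights /= weights.sum()                  # softmax over clients
    stacked = np.stack(client_params)         # (n_clients, n_params)
    return weights @ stacked, weights         # weighted parameter average

params = [np.array([1.0, 2.0]), np.array([3.0, 4.0]), np.array([5.0, 6.0])]
merged, w = aggregate(params, dice_scores=[0.65, 0.74, 0.54])
print(merged, w)  # better-performing clients contribute more
```

The temperature parameter controls how sharply the aggregation favors the best-performing nodes; at high temperature it degrades to plain averaging.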
Collapse
Affiliation(s)
- Dongnan Liu
- School of Computer Science, The University of Sydney, Sydney, NSW, Australia
- Brain and Mind Centre, The University of Sydney, Sydney, NSW, Australia
| | - Mariano Cabezas
- Brain and Mind Centre, The University of Sydney, Sydney, NSW, Australia
| | - Dongang Wang
- Brain and Mind Centre, The University of Sydney, Sydney, NSW, Australia
- Sydney Neuroimaging Analysis Centre, Camperdown, NSW, Australia
| | - Zihao Tang
- School of Computer Science, The University of Sydney, Sydney, NSW, Australia
- Brain and Mind Centre, The University of Sydney, Sydney, NSW, Australia
| | - Lei Bai
- Brain and Mind Centre, The University of Sydney, Sydney, NSW, Australia
- School of Electrical and Information Engineering, The University of Sydney, Sydney, NSW, Australia
| | - Geng Zhan
- Brain and Mind Centre, The University of Sydney, Sydney, NSW, Australia
- Sydney Neuroimaging Analysis Centre, Camperdown, NSW, Australia
| | - Yuling Luo
- Brain and Mind Centre, The University of Sydney, Sydney, NSW, Australia
- Sydney Neuroimaging Analysis Centre, Camperdown, NSW, Australia
| | - Kain Kyle
- Brain and Mind Centre, The University of Sydney, Sydney, NSW, Australia
- Sydney Neuroimaging Analysis Centre, Camperdown, NSW, Australia
| | - Linda Ly
- Brain and Mind Centre, The University of Sydney, Sydney, NSW, Australia
- Sydney Neuroimaging Analysis Centre, Camperdown, NSW, Australia
| | - James Yu
- Brain and Mind Centre, The University of Sydney, Sydney, NSW, Australia
- Sydney Neuroimaging Analysis Centre, Camperdown, NSW, Australia
| | - Chun-Chien Shieh
- Brain and Mind Centre, The University of Sydney, Sydney, NSW, Australia
- Sydney Neuroimaging Analysis Centre, Camperdown, NSW, Australia
| | - Aria Nguyen
- Brain and Mind Centre, The University of Sydney, Sydney, NSW, Australia
- Sydney Neuroimaging Analysis Centre, Camperdown, NSW, Australia
| | | | - Ryan Sullivan
- School of Biomedical Engineering, The University of Sydney, Sydney, NSW, Australia
| | - Fernando Calamante
- Brain and Mind Centre, The University of Sydney, Sydney, NSW, Australia
- School of Biomedical Engineering, The University of Sydney, Sydney, NSW, Australia
- Sydney Imaging, The University of Sydney, Sydney, NSW, Australia
| | - Michael Barnett
- Brain and Mind Centre, The University of Sydney, Sydney, NSW, Australia
- Sydney Neuroimaging Analysis Centre, Camperdown, NSW, Australia
| | - Wanli Ouyang
- School of Electrical and Information Engineering, The University of Sydney, Sydney, NSW, Australia
| | - Weidong Cai
- School of Computer Science, The University of Sydney, Sydney, NSW, Australia
| | - Chenyu Wang
- Brain and Mind Centre, The University of Sydney, Sydney, NSW, Australia
- Sydney Neuroimaging Analysis Centre, Camperdown, NSW, Australia
| |
Collapse
|
25
|
An EffcientNet-encoder U-Net Joint Residual Refinement Module with Tversky–Kahneman Baroni–Urbani–Buser loss for biomedical image Segmentation. Biomed Signal Process Control 2023. [DOI: 10.1016/j.bspc.2023.104631] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 02/07/2023]
|
26
|
Montazerolghaem M, Sun Y, Sasso G, Haworth A. U-Net Architecture for Prostate Segmentation: The Impact of Loss Function on System Performance. Bioengineering (Basel) 2023; 10:bioengineering10040412. [PMID: 37106600 PMCID: PMC10135670 DOI: 10.3390/bioengineering10040412] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/06/2023] [Revised: 03/19/2023] [Accepted: 03/23/2023] [Indexed: 03/29/2023] Open
Abstract
Segmentation of the prostate gland from magnetic resonance images is rapidly becoming a standard of care in prostate cancer radiotherapy treatment planning. Automating this process has the potential to improve accuracy and efficiency. However, the performance and accuracy of deep learning models vary depending on the design and the optimal tuning of the hyper-parameters. In this study, we examine the effect of loss functions on the performance of deep-learning-based prostate segmentation models. A U-Net model for prostate segmentation using T2-weighted images from a local dataset was trained, and its performance compared across nine different loss functions: Binary Cross-Entropy (BCE), Intersection over Union (IoU), Dice, BCE and Dice (BCE + Dice), weighted BCE and Dice (W (BCE + Dice)), Focal, Tversky, Focal Tversky, and Surface loss. Model outputs were compared using several metrics on a five-fold cross-validation set. The ranking of model performance depended on the metric used, but in general W (BCE + Dice) and Focal Tversky performed well across all metrics (whole-gland Dice similarity coefficient (DSC): 0.71 and 0.74; 95HD: 6.66 and 7.42; Ravid: 0.05 and 0.18, respectively) and Surface loss generally ranked lowest (DSC: 0.40; 95HD: 13.64; Ravid: −0.09). When comparing performance on the mid-gland, apex, and base parts of the prostate gland, the models performed worse on the apex and base than on the mid-gland. In conclusion, we have demonstrated that the performance of a deep learning model for prostate segmentation can be affected by the choice of loss function. For prostate segmentation, compound loss functions appear generally to outperform single loss functions such as Surface loss.
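Several of the losses compared above can be written in a few lines. These are the standard soft formulations on probability maps (with an epsilon for numerical stability), not the authors' exact implementation:

```python
import numpy as np

# Standard soft Dice, Tversky, and Focal Tversky losses on flat
# probability maps p (prediction) and g (ground truth), both in [0, 1].

EPS = 1e-7

def dice_loss(p, g):
    inter = np.sum(p * g)
    return 1 - (2 * inter + EPS) / (np.sum(p) + np.sum(g) + EPS)

def tversky_loss(p, g, alpha=0.7, beta=0.3):
    tp = np.sum(p * g)                 # soft true positives
    fp = np.sum(p * (1 - g))           # soft false positives
    fn = np.sum((1 - p) * g)           # soft false negatives
    return 1 - (tp + EPS) / (tp + alpha * fn + beta * fp + EPS)

def focal_tversky_loss(p, g, gamma=0.75, **kw):
    # gamma < 1 amplifies the gradient for hard, poorly-segmented examples
    return tversky_loss(p, g, **kw) ** gamma

g = np.array([0, 1, 1, 0], float)
print(dice_loss(g, g), tversky_loss(g, g), focal_tversky_loss(g, g))
```

With alpha > beta, the Tversky loss penalizes false negatives more than false positives, which is one reason it tends to help on small structures such as the prostate apex and base.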
Collapse
|
27
|
Huang F, Xia P, Vardhanabhuti V, Hui S, Lau K, Ka‐Fung Mak H, Cao P. Semisupervised white matter hyperintensities segmentation on MRI. Hum Brain Mapp 2023; 44:1344-1358. [PMID: 36214210 PMCID: PMC9921214 DOI: 10.1002/hbm.26109] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/03/2021] [Revised: 08/25/2022] [Accepted: 09/07/2022] [Indexed: 11/10/2022] Open
Abstract
This study proposed a semisupervised loss function named level-set loss (LSLoss) for the segmentation of cerebral white matter hyperintensities (WMHs) on fluid-attenuated inversion recovery images. The training procedure did not require manually labeled WMH masks. Our image preprocessing steps included bias field correction, skull stripping, and white matter segmentation. With the proposed LSLoss, we trained a V-Net using MRI images from both local and public databases. The local databases were the small vessel disease cohort (HKU-SVD, n = 360) and the multiple sclerosis cohort (HKU-MS, n = 20) from our institutional imaging center. The public databases were the Medical Image Computing and Computer-Assisted Intervention WMH challenge database (MICCAI-WMH, n = 60) and the normal control cohort of the Alzheimer's Disease Neuroimaging Initiative database (ADNI-CN, n = 15). We achieved an overall Dice similarity coefficient (DSC) of 0.81 on the HKU-SVD testing set (n = 20), DSC = 0.77 on the HKU-MS testing set (n = 5), and DSC = 0.78 on the MICCAI-WMH testing set (n = 30). The segmentation results obtained by our semisupervised V-Net were comparable with those of supervised methods and outperformed the unsupervised methods in the literature.
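A level-set loss belongs to the family of region-based energies that need no manual labels: the intensity statistics of the two regions are computed from the predicted soft mask itself. A minimal Chan-Vese-style version in that spirit (not the paper's exact LSLoss):

```python
import numpy as np

# Chan-Vese-style region energy: the foreground/background means c1 and c2
# are derived from the soft mask, so no manually labeled masks are needed.
# Minimizing the energy drives the mask toward an intensity-homogeneous
# partition of the image. Illustrative sketch, not the paper's LSLoss.

def level_set_region_loss(image, soft_mask, eps=1e-7):
    c1 = np.sum(soft_mask * image) / (np.sum(soft_mask) + eps)            # fg mean
    c2 = np.sum((1 - soft_mask) * image) / (np.sum(1 - soft_mask) + eps)  # bg mean
    inside = soft_mask * (image - c1) ** 2
    outside = (1 - soft_mask) * (image - c2) ** 2
    return np.mean(inside + outside)

img = np.array([[0.1, 0.1], [0.9, 0.9]])
good = level_set_region_loss(img, np.array([[0, 0], [1, 1]], float))
bad = level_set_region_loss(img, np.array([[1, 0], [0, 1]], float))
print(good, bad)  # the correct partition yields the lower energy
```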
Collapse
Affiliation(s)
- Fan Huang
- Department of Diagnostic Radiology, LKS Faculty of MedicineThe University of Hong KongHong KongChina
| | - Peng Xia
- Department of Diagnostic Radiology, LKS Faculty of MedicineThe University of Hong KongHong KongChina
| | - Varut Vardhanabhuti
- Department of Diagnostic Radiology, LKS Faculty of MedicineThe University of Hong KongHong KongChina
| | - Sai‐Kam Hui
- Department of Rehabilitation ScienceThe Hong Kong Polytechnic UniversityHong KongChina
| | - Kui‐Kai Lau
- Department of Medicine, LKS Faculty of MedicineThe University of Hong KongHong KongChina
- The State Key Laboratory of Brain and Cognitive SciencesThe University of Hong KongHong KongChina
| | - Henry Ka‐Fung Mak
- Department of Diagnostic Radiology, LKS Faculty of MedicineThe University of Hong KongHong KongChina
| | - Peng Cao
- Department of Diagnostic Radiology, LKS Faculty of MedicineThe University of Hong KongHong KongChina
| |
Collapse
|
28
|
Mahmud BU, Hong GY, Mamun AA, Ping EP, Wu Q. Deep Learning-Based Segmentation of 3D Volumetric Image and Microstructural Analysis. SENSORS (BASEL, SWITZERLAND) 2023; 23:2640. [PMID: 36904845 PMCID: PMC10007404 DOI: 10.3390/s23052640] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Subscribe] [Scholar Register] [Received: 02/02/2023] [Revised: 02/16/2023] [Accepted: 02/18/2023] [Indexed: 06/18/2023]
Abstract
As a fundamental but difficult topic in computer vision, 3D object segmentation has various applications in medical image analysis, autonomous vehicles, robotics, virtual reality, lithium battery image analysis, etc. In the past, 3D segmentation was performed using hand-crafted features and bespoke design techniques, but these could not generalize to vast amounts of data or reach acceptable accuracy. Deep learning techniques have lately emerged as the preferred method for 3D segmentation as a result of their extraordinary performance in 2D computer vision. Our proposed method uses a CNN-based architecture called 3D U-Net, inspired by the well-known 2D U-Net, to segment volumetric image data. To observe the internal changes of composite materials, for instance in a lithium battery image, it is necessary to trace the flow of the different materials and analyze their internal properties. In this paper, a combination of 3D U-Net and VGG19 is used to conduct multiclass segmentation of a publicly available sandstone dataset, analyzing its microstructures based on four different objects in the samples of volumetric data. Our image sample comprises a total of 448 2D images, which are aggregated into one 3D volume for examination. The solution involves segmenting each object in the volume and further analyzing each object to find its average size, area percentage, total area, etc. The open-source image processing package ImageJ is used for further analysis of individual particles. We demonstrate that convolutional neural networks can be trained to recognize sandstone microstructure traits with an accuracy of 96.78% and an IoU of 91.12%. To our knowledge, many prior works have applied 3D U-Net for segmentation, but very few extend it to show the details of the particles in the sample. The proposed solution offers computational insight for real-time implementation and is found to be superior to current state-of-the-art methods. The result is important for creating approximately similar models for the microstructural analysis of volumetric data.
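Once a volume is segmented, the per-object analysis described above reduces to simple label statistics. A sketch of computing voxel counts and area percentages per class in a labeled 3D array (the class ids and stand-in volume are hypothetical):

```python
import numpy as np

# Per-class statistics on a labeled 3D volume, as a stand-in for the
# post-segmentation microstructural analysis (average size, area
# percentage, total area) done per object in the sandstone samples.

def class_statistics(labels):
    total = labels.size
    stats = {}
    for cls in np.unique(labels):
        count = int(np.sum(labels == cls))
        stats[int(cls)] = {"voxels": count, "percent": 100.0 * count / total}
    return stats

volume = np.zeros((4, 4, 4), dtype=int)   # stand-in for a segmented stack
volume[:2] = 1                            # "class 1" fills half the volume
stats = class_statistics(volume)
print(stats)
```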
Collapse
Affiliation(s)
- Bahar Uddin Mahmud
- Department of Computer Science, Western Michigan University, Kalamazoo, MI 49008, USA
| | - Guan Yue Hong
- Department of Computer Science, Western Michigan University, Kalamazoo, MI 49008, USA
| | - Abdullah Al Mamun
- Faculty Engineering and Technology, Multimedia University, Melaka 75450, Malaysia
| | - Em Poh Ping
- Faculty Engineering and Technology, Multimedia University, Melaka 75450, Malaysia
| | - Qingliu Wu
- Department of Chemical and Paper Engineering, Western Michigan University, Kalamazoo, MI 49008, USA
| |
Collapse
|
29
|
Sarica B, Seker DZ, Bayram B. A dense residual U-net for multiple sclerosis lesions segmentation from multi-sequence 3D MR images. Int J Med Inform 2023; 170:104965. [PMID: 36580821 DOI: 10.1016/j.ijmedinf.2022.104965] [Citation(s) in RCA: 9] [Impact Index Per Article: 4.5] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/06/2022] [Accepted: 12/08/2022] [Indexed: 12/28/2022]
Abstract
Multiple Sclerosis (MS) is an autoimmune disease that causes brain and spinal cord lesions, which magnetic resonance imaging (MRI) can detect and characterize. Recently, deep learning methods have achieved remarkable results in the automated segmentation of MS lesions from MRI data. Hence, this study proposes a novel dense residual U-Net model that combines attention gates (AG), efficient channel attention (ECA), and Atrous Spatial Pyramid Pooling (ASPP) to enhance the performance of automatic MS lesion segmentation using 3D MRI sequences. First, the convolution layers in each block of the U-Net architecture are replaced by residual blocks and connected densely. Then, AGs are exploited to capture salient features passed through the skip connections. The ECA module is appended at the end of each residual block and each downsampling block of the U-Net. Later, the bottleneck of the U-Net is replaced with the ASPP module to extract multi-scale contextual information. Furthermore, 3D MR images of Fluid-Attenuated Inversion Recovery (FLAIR), T1-weighted (T1-w), and T2-weighted (T2-w) sequences are exploited jointly to achieve better MS lesion segmentation. The proposed model is validated on the publicly available ISBI2015 and MSSEG2016 challenge datasets. It produced an ISBI score of 92.75, a mean Dice score of 66.88%, a mean positive predictive value (PPV) of 86.50%, and a mean lesion-wise true positive rate (LTPR) of 60.64% on the ISBI2015 testing set. It also achieved a mean Dice score of 67.27%, a mean PPV of 65.19%, and a mean sensitivity of 74.40% on the MSSEG2016 testing set. The results show that the proposed model performs better than some human experts and some other state-of-the-art methods on this task. In particular, the proposed model obtains the best Dice score and the best LTPR on the ISBI2015 testing set.
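Of the three modules combined above, ECA is the simplest to sketch: global-average-pool each channel, run a small 1D convolution across the channel descriptor, and gate the feature map with a sigmoid. The kernel weights below are fixed for illustration; in the network they are learned:

```python
import numpy as np

# Sketch of the Efficient Channel Attention (ECA) idea appended to each
# residual block: per-channel global average pooling, a small 1D
# convolution across channels, then sigmoid gating of the channels.

def eca(feature_map, kernel):
    """feature_map: (C, H, W); kernel: odd-length 1D weights over channels."""
    gap = feature_map.mean(axis=(1, 2))                  # (C,) descriptor
    pad = len(kernel) // 2
    padded = np.pad(gap, pad, mode="edge")
    conv = np.array([padded[i:i + len(kernel)] @ kernel
                     for i in range(len(gap))])          # 1D cross-channel conv
    gate = 1.0 / (1.0 + np.exp(-conv))                   # sigmoid in (0, 1)
    return feature_map * gate[:, None, None]             # channel re-weighting

x = np.ones((4, 3, 3))
out = eca(x, kernel=np.array([0.25, 0.5, 0.25]))
print(out.shape)  # (4, 3, 3), each channel scaled by its attention gate
```

Unlike squeeze-and-excitation blocks, ECA avoids the channel-reduction bottleneck, which keeps the added parameter count negligible.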
Collapse
Affiliation(s)
- Beytullah Sarica
- Istanbul Technical University, Graduate School, Department of Applied Informatics, Istanbul, 34469, Turkey.
| | - Dursun Zafer Seker
- Istanbul Technical University, Civil Engineering Faculty, Department of Geomatics Engineering, Istanbul, 34469, Turkey.
| | - Bulent Bayram
- Yildiz Technical University, Civil Engineering Faculty, Department of Geomatics Engineering, Istanbul, 34220, Turkey.
| |
Collapse
|
30
|
Diakou I, Papakonstantinou E, Papageorgiou L, Pierouli K, Dragoumani K, Spandidos DA, Bacopoulou F, Chrousos GP, Goulielmos GΝ, Eliopoulos E, Vlachakis D. Multiple sclerosis and computational biology (Review). Biomed Rep 2022; 17:96. [PMID: 36382258 PMCID: PMC9634047 DOI: 10.3892/br.2022.1579] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/14/2022] [Accepted: 09/27/2022] [Indexed: 12/02/2022] Open
Abstract
Multiple sclerosis (MS) is an autoimmune neurodegenerative disease whose prevalence has increased worldwide. The resultant symptoms may be debilitating and can substantially reduce the quality of life of patients. Computational biology, which involves the use of computational tools to answer biomedical questions, may provide the basis for novel healthcare approaches in the context of MS. The rapid accumulation of health data, ever-increasing computational power and evolving technology have helped to modernize and refine MS research. From the discovery of novel biomarkers to the optimization of treatment and a number of quality-of-life enhancements for patients, computational biology methods and tools are shaping the field of MS diagnosis, management and treatment. The final goal in such a complex disease would be personalized medicine, i.e., providing healthcare services that are tailored to the individual patient, in accordance with the particular biology of their disease and the environmental factors to which they are subjected. The present review article summarizes the current knowledge on MS and modern computational biology, and the impact of modern computational approaches on MS.
Collapse
Affiliation(s)
- Io Diakou
- Laboratory of Genetics, Department of Biotechnology, School of Applied Biology and Biotechnology, Agricultural University of Athens, 11855 Athens, Greece
| | - Eleni Papakonstantinou
- Laboratory of Genetics, Department of Biotechnology, School of Applied Biology and Biotechnology, Agricultural University of Athens, 11855 Athens, Greece
| | - Louis Papageorgiou
- Laboratory of Genetics, Department of Biotechnology, School of Applied Biology and Biotechnology, Agricultural University of Athens, 11855 Athens, Greece
| | - Katerina Pierouli
- Laboratory of Genetics, Department of Biotechnology, School of Applied Biology and Biotechnology, Agricultural University of Athens, 11855 Athens, Greece
| | - Konstantina Dragoumani
- Laboratory of Genetics, Department of Biotechnology, School of Applied Biology and Biotechnology, Agricultural University of Athens, 11855 Athens, Greece
| | - Demetrios A. Spandidos
- Laboratory of Clinical Virology, School of Medicine, University of Crete, 71003 Heraklion, Greece
| | - Flora Bacopoulou
- University Research Institute of Maternal and Child Health and Precision Medicine, and UNESCO Chair on Adolescent Health Care, National and Kapodistrian University of Athens, ‘Aghia Sophia’ Children's Hospital, 11527 Athens, Greece
| | - George P. Chrousos
- University Research Institute of Maternal and Child Health and Precision Medicine, and UNESCO Chair on Adolescent Health Care, National and Kapodistrian University of Athens, ‘Aghia Sophia’ Children's Hospital, 11527 Athens, Greece
| | - Georges Ν. Goulielmos
- Section of Molecular Pathology and Human Genetics, Department of Internal Medicine, School of Medicine, University of Crete, 71003 Heraklion, Greece
| | - Elias Eliopoulos
- Laboratory of Genetics, Department of Biotechnology, School of Applied Biology and Biotechnology, Agricultural University of Athens, 11855 Athens, Greece
| | - Dimitrios Vlachakis
- Laboratory of Genetics, Department of Biotechnology, School of Applied Biology and Biotechnology, Agricultural University of Athens, 11855 Athens, Greece
- University Research Institute of Maternal and Child Health and Precision Medicine, and UNESCO Chair on Adolescent Health Care, National and Kapodistrian University of Athens, ‘Aghia Sophia’ Children's Hospital, 11527 Athens, Greece
- Division of Endocrinology and Metabolism, Center of Clinical, Experimental Surgery and Translational Research, Biomedical Research Foundation of The Academy of Athens, 11527 Athens, Greece
| |
Collapse
|
31
|
Gao H, Miao Q, Ma D, Liu R. Deep Mutual Learning for Brain Tumor Segmentation with the Fusion Network. Neurocomputing 2022. [DOI: 10.1016/j.neucom.2022.11.038] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/27/2022]
|
32
|
Medley DO, Santiago C, Nascimento JC. CyCoSeg: A Cyclic Collaborative Framework for Automated Medical Image Segmentation. IEEE TRANSACTIONS ON PATTERN ANALYSIS AND MACHINE INTELLIGENCE 2022; 44:8167-8182. [PMID: 34529562 DOI: 10.1109/tpami.2021.3113077] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/13/2023]
Abstract
Deep neural networks have been tremendously successful at segmenting objects in images. However, it has been shown that they still have limitations on challenging problems such as the segmentation of medical images, mainly because of the reduced size of the object in the image. In this paper we overcome this limitation through a cyclic collaborative framework, CyCoSeg. The proposed framework is based on a deep active shape model (D-ASM), which provides prior information about the shape of the object, and a semantic segmentation network (SSN). These two models collaborate to reach the desired segmentation by influencing each other: the SSN helps the D-ASM identify relevant keypoints in the image through an Expectation-Maximization formulation, while the D-ASM provides a segmentation proposal that guides the SSN. This cycle is repeated until both models converge. Extensive experimental evaluation shows CyCoSeg boosts the performance of the baseline models, including several popular SSNs, while avoiding major architectural modifications. The effectiveness of our method is demonstrated on left ventricle segmentation on two benchmark datasets, where our approach achieves one of the most competitive results in segmentation accuracy. Furthermore, its generalization is demonstrated for lung and kidney segmentation in CT scans.
Collapse
|
33
|
Scatigno C, Festa G. Neutron Imaging and Learning Algorithms: New Perspectives in Cultural Heritage Applications. J Imaging 2022; 8:jimaging8100284. [PMID: 36286378 PMCID: PMC9605401 DOI: 10.3390/jimaging8100284] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/11/2022] [Revised: 10/04/2022] [Accepted: 10/12/2022] [Indexed: 11/06/2022] Open
Abstract
Recently, learning algorithms such as Convolutional Neural Networks have been successfully applied at different stages of imaging data processing, from acquisition to analysis. The aims of these algorithms are to reduce data dimensionality and computational effort, to find benchmarks and extract features, and to improve the resolution and reproducibility of the imaging data. To date, Neutron Imaging combined with learning algorithms has not been applied in the cultural heritage domain, but future applications could help to solve challenges of this research field. Here, we review pioneering works that exploit Machine Learning and Deep Learning models for X-ray imaging and Neutron Imaging data processing, spanning biomedicine, microbiology, and materials science, to give new perspectives on future cultural heritage applications.
Collapse
|
34
|
Khezrpour S, Seyedarabi H, Razavi SN, Farhoudi M. Automatic segmentation of the brain stroke lesions from MR flair scans using improved U-net framework. Biomed Signal Process Control 2022. [DOI: 10.1016/j.bspc.2022.103978] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/25/2022]
|
35
|
Küstner T, Vogel J, Hepp T, Forschner A, Pfannenberg C, Schmidt H, Schwenzer NF, Nikolaou K, la Fougère C, Seith F. Development of a Hybrid-Imaging-Based Prognostic Index for Metastasized-Melanoma Patients in Whole-Body 18F-FDG PET/CT and PET/MRI Data. Diagnostics (Basel) 2022; 12:2102. [PMID: 36140504 PMCID: PMC9498091 DOI: 10.3390/diagnostics12092102] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/14/2022] [Revised: 08/19/2022] [Accepted: 08/25/2022] [Indexed: 11/17/2022] Open
Abstract
Alongside tremendous treatment success in advanced melanoma patients, the rapid development of oncologic treatment options comes with increasingly high costs and can cause severe life-threatening side effects. For this reason, predictive baseline biomarkers are becoming increasingly important for risk stratification and personalized treatment planning. The aim of this pilot study was therefore to develop a prognostic tool for risk stratification of treatment response and mortality based on PET/MRI and PET/CT, including a convolutional neural network (CNN), for metastasized-melanoma patients before systemic-treatment initiation. The evaluation was based on 37 patients (19 f, 62 ± 13 y/o) with unresectable metastasized melanomas who underwent whole-body 18F-FDG PET/MRI and PET/CT scans on the same day before the initiation of therapy with checkpoint inhibitors and/or BRAF/MEK inhibitors. The overall survival (OS), therapy response, metastatically involved organs, number of lesions, total lesion glycolysis, total metabolic tumor volume (TMTV), peak standardized uptake value (SULpeak), diameter (Dmlesion) and mean apparent diffusion coefficient (ADCmean) were assessed. For each marker, a Kaplan−Meier analysis and the statistical significance (Wilcoxon test, paired t-test and Bonferroni correction) were assessed. Patients were divided into high- and low-risk groups depending on the OS and treatment response. The CNN segmentation and prediction utilized multimodality imaging data for a complementary in-depth risk analysis per patient. The following parameters correlated with longer OS: a TMTV < 50 mL; no metastases in the brain, bone, liver, spleen or pleura; ≤4 affected organ regions; no metastases; a Dmlesion > 37 mm or SULpeak < 1.3; a range of the ADCmean < 600 mm2/s. However, none of the parameters correlated significantly with the stratification of the patients into the high- or low-risk groups. For the CNN, the sensitivity, specificity, PPV and accuracy were 92%, 96%, 92% and 95%, respectively. Imaging biomarkers such as the metastatic involvement of specific organs, a high tumor burden, the presence of at least one large lesion or a high range of intermetastatic diffusivity were negative predictors for the OS, but the identification of high-risk patients was not feasible with the handcrafted parameters. In contrast, the proposed CNN supplied risk stratification with high specificity and sensitivity.
Collapse
Affiliation(s)
- Thomas Küstner
- MIDAS.Lab, Department of Radiology, University Hospital of Tübingen, 72076 Tubingen, Germany
| | - Jonas Vogel
- Nuclear Medicine and Clinical Molecular Imaging, Department of Radiology, University Hospital Tübingen, 72076 Tubingen, Germany
| | - Tobias Hepp
- MIDAS.Lab, Department of Radiology, University Hospital of Tübingen, 72076 Tubingen, Germany
| | - Andrea Forschner
- Department of Dermatology, University Hospital of Tübingen, 72070 Tubingen, Germany
| | - Christina Pfannenberg
- Department of Radiology, Diagnostic and Interventional Radiology, University Hospital of Tübingen, 72076 Tubingen, Germany
| | - Holger Schmidt
- Faculty of Medicine, Eberhard-Karls-University Tübingen, 72076 Tubingen, Germany
- Siemens Healthineers, 91052 Erlangen, Germany
| | - Nina F. Schwenzer
- Faculty of Medicine, Eberhard-Karls-University Tübingen, 72076 Tubingen, Germany
| | - Konstantin Nikolaou
- Department of Radiology, Diagnostic and Interventional Radiology, University Hospital of Tübingen, 72076 Tubingen, Germany
- Cluster of Excellence iFIT (EXC 2180) Image-Guided and Functionally Instructed Tumor Therapies, Eberhard Karls University, 72076 Tubingen, Germany
- German Cancer Consortium (DKTK), German Cancer Research Center (DKFZ), Partner Site Tübingen, 72076 Tubingen, Germany
| | - Christian la Fougère
- Nuclear Medicine and Clinical Molecular Imaging, Department of Radiology, University Hospital Tübingen, 72076 Tubingen, Germany
- Cluster of Excellence iFIT (EXC 2180) Image-Guided and Functionally Instructed Tumor Therapies, Eberhard Karls University, 72076 Tubingen, Germany
- German Cancer Consortium (DKTK), German Cancer Research Center (DKFZ), Partner Site Tübingen, 72076 Tubingen, Germany
| | - Ferdinand Seith
- Department of Radiology, Diagnostic and Interventional Radiology, University Hospital of Tübingen, 72076 Tubingen, Germany
| |
Collapse
|
36
|
Karsaz A. A modified convolutional neural network architecture for diabetic retinopathy screening using SVDD. Appl Soft Comput 2022. [DOI: 10.1016/j.asoc.2022.109102] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/26/2022]
|
37
|
MobileUNetV3—A Combined UNet and MobileNetV3 Architecture for Spinal Cord Gray Matter Segmentation. ELECTRONICS 2022. [DOI: 10.3390/electronics11152388] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 02/01/2023]
Abstract
The inspection of gray matter (GM) tissue of the human spinal cord is a valuable tool for the diagnosis of a wide range of neurological disorders. Thus, the detection and segmentation of GM regions in magnetic resonance images (MRIs) is an important task when studying the spinal cord and its related medical conditions. This work proposes a new method for the segmentation of GM tissue in spinal cord MRIs based on deep convolutional neural network (CNN) techniques. Our proposed method, called MobileUNetV3, has a UNet-like architecture, with the MobileNetV3 model being used as a pre-trained encoder. MobileNetV3 is lightweight and yields high accuracy compared with many other CNN architectures of similar size. It is composed of a series of blocks, which produce feature maps optimized using residual connections and squeeze-and-excitation modules. We carefully added a set of upsampling layers and skip connections to MobileNetV3 in order to build an effective UNet-like model for image segmentation. To illustrate the capabilities of the proposed method, we tested it on the spinal cord gray matter segmentation challenge dataset and compared it to a number of recent state-of-the-art methods. We obtained results that outperformed seven methods with respect to five evaluation metrics comprising the Dice similarity coefficient (0.87), Jaccard index (0.78), sensitivity (87.20%), specificity (99.90%), and precision (87.96%). Based on these highly competitive results, MobileUNetV3 is an effective deep-learning model for the segmentation of GM MRIs in the spinal cord.
Collapse
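The five metrics reported above are all derived from the confusion counts of a predicted and a ground-truth binary mask. As an illustrative sketch (not the paper's code; the function name and toy masks are hypothetical), they can be computed with NumPy as follows:

```python
import numpy as np

def segmentation_metrics(pred, truth):
    """Compute overlap metrics for binary segmentation masks.

    pred, truth: boolean numpy arrays of the same shape.
    Returns a dict with the five metrics reported in the paper.
    """
    pred = pred.astype(bool)
    truth = truth.astype(bool)
    tp = np.sum(pred & truth)      # true positives
    fp = np.sum(pred & ~truth)     # false positives
    fn = np.sum(~pred & truth)     # false negatives
    tn = np.sum(~pred & ~truth)    # true negatives
    return {
        "dice": 2 * tp / (2 * tp + fp + fn),
        "jaccard": tp / (tp + fp + fn),
        "sensitivity": tp / (tp + fn),
        "specificity": tn / (tn + fp),
        "precision": tp / (tp + fp),
    }

# Toy example: a 4x4 ground-truth mask and an imperfect prediction.
truth = np.zeros((4, 4), dtype=bool)
truth[1:3, 1:3] = True          # 4 positive pixels
pred = np.zeros((4, 4), dtype=bool)
pred[1:3, 1:4] = True           # overlaps all 4, adds 2 false positives

m = segmentation_metrics(pred, truth)
```

Note that Dice = 2J/(1+J) for Jaccard index J, so Dice always exceeds Jaccard for imperfect overlap, consistent with the 0.87 vs. 0.78 values reported above.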
|
38
|
Zhu W, Huang H, Zhou Y, Shi F, Shen H, Chen R, Hua R, Wang W, Xu S, Luo X. Automatic segmentation of white matter hyperintensities in routine clinical brain MRI by 2D VB-Net: A large-scale study. Front Aging Neurosci 2022; 14:915009. [PMID: 35966772 PMCID: PMC9372352 DOI: 10.3389/fnagi.2022.915009] [Citation(s) in RCA: 13] [Impact Index Per Article: 4.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/07/2022] [Accepted: 07/14/2022] [Indexed: 11/13/2022] Open
Abstract
White matter hyperintensities (WMH) are imaging manifestations frequently observed in various neurological disorders, yet the clinical application of WMH quantification is limited. In this study, we designed a series of dedicated WMH labeling protocols and proposed a convolutional neural network named 2D VB-Net for the segmentation of WMH and other coexisting intracranial lesions, based on a large dataset of 1,045 subjects across various demographics and multiple scanners using the 2D thick-slice protocols that are more commonly applied in clinical practice. Using our labeling pipeline, the Dice consistency of the WMH regions manually depicted by two observers was 0.878, which formed a solid basis for the development and evaluation of the automatic segmentation system. The proposed algorithm outperformed other state-of-the-art methods (uResNet, 3D V-Net and the Visual Geometry Group network) in the segmentation of WMH and other coexisting intracranial lesions and was well validated on datasets with thick-slice magnetic resonance (MR) images and the Medical Image Computing and Computer Assisted Intervention (MICCAI) 2017 WMH Segmentation Challenge dataset (with thin-slice MR images), all showing excellent effectiveness. Furthermore, our method can subclassify WMH to display the WMH distributions and is very lightweight. Additionally, in terms of correlation to visual rating scores, our algorithm showed excellent consistency with the manual delineations and was overall better than the competing methods. In conclusion, we developed an automatic WMH quantification framework for multiple application scenarios, exhibiting a promising future in clinical practice.
Collapse
Affiliation(s)
- Wenhao Zhu
- Department of Neurology, Tongji Hospital, Tongji Medical College, Huazhong University of Science and Technology, Wuhan, China
| | - Hao Huang
- Department of Neurology, Tongji Hospital, Tongji Medical College, Huazhong University of Science and Technology, Wuhan, China
| | - Yaqi Zhou
- Shanghai United Imaging Intelligence, Wuhan, China
| | - Feng Shi
- Shanghai United Imaging Intelligence, Shanghai, China
| | - Hong Shen
- Shanghai United Imaging Intelligence, Wuhan, China
| | - Ran Chen
- Shanghai United Imaging Intelligence, Wuhan, China
| | - Rui Hua
- Shanghai United Imaging Intelligence, Shanghai, China
| | - Wei Wang
- Department of Neurology, Tongji Hospital, Tongji Medical College, Huazhong University of Science and Technology, Wuhan, China
| | - Shabei Xu
- Department of Neurology, Tongji Hospital, Tongji Medical College, Huazhong University of Science and Technology, Wuhan, China
| | - Xiang Luo
- Department of Neurology, Tongji Hospital, Tongji Medical College, Huazhong University of Science and Technology, Wuhan, China
| |
Collapse
|
39
|
Sarica B, Seker DZ. New MS lesion segmentation with deep residual attention gate U-Net utilizing 2D slices of 3D MR images. Front Neurosci 2022; 16:912000. [PMID: 35968389 PMCID: PMC9365701 DOI: 10.3389/fnins.2022.912000] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/03/2022] [Accepted: 06/27/2022] [Indexed: 11/13/2022] Open
Abstract
Multiple sclerosis (MS) is an autoimmune disease that causes lesions in the central nervous system of humans due to demyelinating axons. Magnetic resonance imaging (MRI) is widely used for monitoring and measuring MS lesions. Automated methods for MS lesion segmentation have usually been performed on individual MRI scans. Recently, tracking lesion activity for quantifying and monitoring MS disease progression, especially detecting new lesions, has become an important biomarker. In this study, a unique pipeline with a deep neural network that combines U-Net, attention gates, and residual learning is proposed to improve new MS lesion segmentation using baseline and follow-up 3D FLAIR MR images. The proposed network has a similar architecture to U-Net and is formed from residual units, which facilitate the training of deep networks. The skip connections of U-Net and the residual units, which propagate information without degradation, allow networks with fewer parameters to be designed with better performance. Attention gates also learn to focus on salient features of the target structures of various sizes and shapes. The MSSEG-2 dataset was used for training and testing the proposed pipeline, and the results were compared with those of the other pipelines and experts who participated in the same challenge. Over the testing set, mean lesion-wise F1 and Dice scores of 48% and 44.30% were obtained; for the no-lesion cases, the mean number and volume of detected lesions were 0.148 and 1.488, respectively. The proposed pipeline outperformed 22 competing pipelines and ranked 8th in the challenge.
Collapse
Affiliation(s)
- Beytullah Sarica
- Department of Applied Informatics, Graduate School, Istanbul Technical University, Istanbul, Turkey
| | - Dursun Zafer Seker
- Department of Geomatics Engineering, Faculty of Civil Engineering, Istanbul Technical University, Istanbul, Turkey
| |
Collapse
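New-lesion analysis as described above reduces, at evaluation time, to comparing the baseline and follow-up lesion masks and counting connected components. A minimal sketch (hypothetical names, pure NumPy, 2D and 4-connectivity for brevity; the study itself works on 3D FLAIR volumes):

```python
import numpy as np
from collections import deque

def new_lesion_mask(baseline, followup):
    """Voxels lesional at follow-up but not at baseline."""
    return followup.astype(bool) & ~baseline.astype(bool)

def count_lesions(mask):
    """Count 4-connected components in a 2D binary mask via BFS labeling."""
    mask = mask.astype(bool)
    seen = np.zeros_like(mask)
    count = 0
    for i, j in zip(*np.nonzero(mask)):
        if seen[i, j]:
            continue
        count += 1                      # found an unvisited lesion
        q = deque([(i, j)])
        seen[i, j] = True
        while q:
            y, x = q.popleft()
            for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                ny, nx = y + dy, x + dx
                if (0 <= ny < mask.shape[0] and 0 <= nx < mask.shape[1]
                        and mask[ny, nx] and not seen[ny, nx]):
                    seen[ny, nx] = True
                    q.append((ny, nx))
    return count

baseline = np.zeros((6, 6), dtype=bool)
baseline[1, 1] = True                   # one pre-existing lesion
followup = baseline.copy()              # the old lesion persists
followup[4, 4] = True                   # one new lesion appears
followup[0, 5] = True                   # and another

new = new_lesion_mask(baseline, followup)
```

In practice a library labeling routine (e.g. `scipy.ndimage.label`) would replace the hand-rolled BFS; it is written out here only to keep the sketch dependency-free.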
|
40
|
Abstract
Brain tumor segmentation is one of the most challenging problems in medical image analysis. The goal of brain tumor segmentation is to generate accurate delineation of brain tumor regions. In recent years, deep learning methods have shown promising performance in solving various computer vision problems, such as image classification, object detection and semantic segmentation. A number of deep learning based methods have been applied to brain tumor segmentation and achieved promising results. Considering the remarkable breakthroughs made by state-of-the-art technologies, we provide this survey with a comprehensive study of recently developed deep learning based brain tumor segmentation techniques. More than 150 scientific papers are selected and discussed in this survey, extensively covering technical aspects such as network architecture design, segmentation under imbalanced conditions, and multi-modality processes. We also provide insightful discussions for future development directions.
Collapse
|
41
|
Minaee S, Boykov Y, Porikli F, Plaza A, Kehtarnavaz N, Terzopoulos D. Image Segmentation Using Deep Learning: A Survey. IEEE TRANSACTIONS ON PATTERN ANALYSIS AND MACHINE INTELLIGENCE 2022; 44:3523-3542. [PMID: 33596172 DOI: 10.1109/tpami.2021.3059968] [Citation(s) in RCA: 380] [Impact Index Per Article: 126.7] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/12/2023]
Abstract
Image segmentation is a key task in computer vision and image processing with important applications such as scene understanding, medical image analysis, robotic perception, video surveillance, augmented reality, and image compression, among others, and numerous segmentation algorithms are found in the literature. Against this backdrop, the broad success of deep learning (DL) has prompted the development of new image segmentation approaches leveraging DL models. We provide a comprehensive review of this recent literature, covering the spectrum of pioneering efforts in semantic and instance segmentation, including convolutional pixel-labeling networks, encoder-decoder architectures, multiscale and pyramid-based approaches, recurrent networks, visual attention models, and generative models in adversarial settings. We investigate the relationships, strengths, and challenges of these DL-based segmentation models, examine the widely used datasets, compare performances, and discuss promising research directions.
Collapse
|
42
|
Khodadadi Shoushtari F, Sina S, Dehkordi ANV. Automatic segmentation of glioblastoma multiform brain tumor in MRI images: Using Deeplabv3+ with pre-trained Resnet18 weights. Phys Med 2022; 100:51-63. [PMID: 35732092 DOI: 10.1016/j.ejmp.2022.06.007] [Citation(s) in RCA: 7] [Impact Index Per Article: 2.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 10/19/2021] [Revised: 06/05/2022] [Accepted: 06/11/2022] [Indexed: 10/17/2022] Open
Abstract
PURPOSE To assess the effectiveness of deep learning algorithms in the automated segmentation of magnetic resonance brain images for determining the enhanced tumor, peri-tumoral edema, necrotic/non-enhancing tumor, and normal tissue volumes. METHODS AND MATERIALS A new deep neural network algorithm, Deep-Net, was developed for semantic segmentation of glioblastoma tumors in MR images, using the Deeplabv3+ architecture and pre-trained Resnet18 initial weights. The MR image dataset used for training the network was taken from the BraTS 2020 training set, with the ground-truth labels for different tumor subregions manually drawn by a group of expert neuroradiologists. In this work, two multi-modal MRI scans, i.e., T1ce and FLAIR, of 293 patients with high-grade glioma (HGG) were used for training the deep network (Deep-Net). The performance of the network was assessed for different hyper-parameters to obtain the optimum set. The similarity scores were used for the evaluation of the optimized network. RESULTS According to the results of this study, epoch 37 is the optimum epoch, giving the best global accuracy (97.53%) and loss (0.14). The Deep-Net sensitivity in the delineation of the enhanced tumor is more than 90%. CONCLUSIONS The results indicate that Deep-Net was able to segment GBM tumors with high accuracy.
Collapse
Affiliation(s)
| | - Sedigheh Sina
- Nuclear Engineering Department, Shiraz University, Shiraz, Iran; Radiation Research Center, Shiraz University, Shiraz, Iran
| | - Azimeh N V Dehkordi
- Department of Physics, Najafabad Branch, Islamic Azad University, Najafabad, Iran.
| |
Collapse
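Global accuracy, the headline metric above, is simply the fraction of correctly labeled pixels across all classes. A minimal multi-class sketch (hypothetical names and class coding, not the paper's code):

```python
import numpy as np

def global_accuracy(pred_labels, true_labels):
    """Fraction of pixels assigned the correct class label."""
    return np.mean(pred_labels == true_labels)

def confusion_matrix(pred_labels, true_labels, n_classes):
    """n_classes x n_classes count matrix: rows = truth, cols = prediction."""
    idx = true_labels.ravel() * n_classes + pred_labels.ravel()
    return np.bincount(idx, minlength=n_classes ** 2).reshape(n_classes, n_classes)

# Toy 4-class label maps (0 = normal tissue, 1 = enhanced tumor,
# 2 = edema, 3 = necrotic/non-enhancing core -- an assumed coding).
true_map = np.array([[0, 0, 1, 1],
                     [0, 2, 2, 3]])
pred_map = np.array([[0, 0, 1, 2],
                     [0, 2, 2, 3]])

acc = global_accuracy(pred_map, true_map)   # 7 of 8 pixels correct
cm = confusion_matrix(pred_map, true_map, 4)
```

Per-class sensitivities (such as the >90% enhanced-tumor figure above) follow from the confusion matrix as each diagonal entry divided by its row sum.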
|
43
|
Fu H, Wang G, Lei W, Xu W, Zhao Q, Zhang S, Li K, Zhang S. HMRNet: High-and-Multi- Resolution Network With Bidirectional Feature Calibration for Brain Structure Segmentation in Radiotherapy. IEEE J Biomed Health Inform 2022; 26:4519-4529. [PMID: 35687645 DOI: 10.1109/jbhi.2022.3181462] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/09/2022]
Abstract
Accurate segmentation of Anatomical brain Barriers to Cancer spread (ABCs) plays an important role in the automatic delineation of the Clinical Target Volume (CTV) of brain tumors in radiotherapy. Although variants of U-Net are state-of-the-art segmentation models, they have limited performance when dealing with ABCs structures of various shapes and sizes, especially thin structures (e.g., the falx cerebri) that span only a few slices. To deal with this problem, we propose a High-and-Multi-Resolution Network (HMRNet) that consists of a multi-scale feature learning branch and a high-resolution branch, which can maintain high-resolution contextual information and extract more robust representations of anatomical structures with various scales. We further design a Bidirectional Feature Calibration (BFC) block to enable the two branches to generate spatial attention maps for mutual feature calibration. Considering the different sizes and positions of ABCs structures, our network was applied after a rough localization of each structure to obtain fine segmentation results. Experiments on the MICCAI 2020 ABCs challenge dataset showed that: 1) our proposed two-stage segmentation strategy largely outperformed methods segmenting all the structures in just one stage; 2) the proposed HMRNet with two branches can maintain high-resolution representations and is effective in improving the performance on thin structures; and 3) the proposed BFC block outperformed existing attention methods using monodirectional feature calibration. Our method won second place in the ABCs 2020 challenge and has potential for more accurate and reasonable delineation of the CTV of brain tumors.
Collapse
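The bidirectional-calibration idea, each branch reweighting the other through a spatial attention map, can be sketched in a few lines. This is a simplified stand-in for the BFC block with hypothetical names: in the real network the attention maps come from learned convolutions, not a raw channel mean.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def spatial_attention(feat):
    """Collapse a (C, H, W) feature map into a (1, H, W) attention
    map with values in (0, 1)."""
    return sigmoid(feat.mean(axis=0, keepdims=True))

def bidirectional_calibration(high_res, multi_scale):
    """Each branch is reweighted by a spatial attention map derived
    from the OTHER branch, so the two mutually calibrate."""
    return (high_res * spatial_attention(multi_scale),
            multi_scale * spatial_attention(high_res))

rng = np.random.default_rng(0)
hr = rng.standard_normal((8, 16, 16))   # high-resolution branch features
ms = rng.standard_normal((8, 16, 16))   # multi-scale branch features
hr_cal, ms_cal = bidirectional_calibration(hr, ms)
```

The point of the bidirectionality is symmetry: neither branch is privileged, unlike monodirectional attention where only one branch modulates the other.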
|
44
|
Sadeghibakhi M, Pourreza H, Mahyar H. Multiple Sclerosis Lesions Segmentation Using Attention-Based CNNs in FLAIR Images. IEEE JOURNAL OF TRANSLATIONAL ENGINEERING IN HEALTH AND MEDICINE 2022; 10:1800411. [PMID: 35711337 PMCID: PMC9191687 DOI: 10.1109/jtehm.2022.3172025] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 01/05/2022] [Revised: 03/05/2022] [Accepted: 04/08/2022] [Indexed: 11/17/2022]
Abstract
Objective: Multiple sclerosis (MS) is an autoimmune and demyelinating disease that leads to lesions in the central nervous system. This disease can be tracked and diagnosed using magnetic resonance imaging (MRI). A multitude of multimodality automatic biomedical approaches are used to segment lesions, which is not beneficial for patients in terms of cost, time, and usability. The authors of the present paper propose a method employing just one modality (the FLAIR image) to segment MS lesions accurately. Methods: A patch-based convolutional neural network (CNN) is designed, inspired by 3D-ResNet and a spatial-channel attention module, to segment MS lesions. The proposed method consists of three stages: (1) contrast-limited adaptive histogram equalization (CLAHE) is applied to the original images and concatenated to the extracted edges to create 4D images; (2) patches of size [Formula: see text] are randomly selected from the 4D images; and (3) the extracted patches are passed into an attention-based CNN which is used to segment the lesions. Finally, the proposed method was compared to previous studies on the same dataset. Results: The current study evaluates the model with a test set of ISBI challenge data. Experimental results illustrate that the proposed approach significantly surpasses existing methods in Dice similarity and absolute volume difference while using just one modality (FLAIR) to segment the lesions. Conclusion: The authors have introduced an automated approach to segment the lesions, which is based on, at most, two modalities as an input. The proposed architecture comprises convolution, deconvolution, and an SCA-VoxRes module as an attention module. The results show that the proposed method compares well with other methods.
Collapse
Affiliation(s)
- Mehdi Sadeghibakhi
- MV Laboratory, Department of Computer Engineering, Faculty of Engineering, Ferdowsi University of Mashhad, Mashhad 9177948974, Iran
| | - Hamidreza Pourreza
- MV Laboratory, Department of Computer Engineering, Faculty of Engineering, Ferdowsi University of Mashhad, Mashhad 9177948974, Iran
| | - Hamidreza Mahyar
- Faculty of Engineering, W Booth School of Engineering Practice and Technology, McMaster University, Hamilton, ON L8S 4L8, Canada
| |
Collapse
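Stage (2) of the pipeline above, random patch selection from the 4D (image + edge) volume, can be sketched as follows. All names are hypothetical, and the 16³ patch size is an arbitrary placeholder, since the paper's actual patch size is elided in the abstract:

```python
import numpy as np

rng = np.random.default_rng(0)

def random_patches(volume4d, patch_size, n_patches, rng):
    """Randomly crop spatial patches from a (C, D, H, W) volume.

    The channel axis (e.g. CLAHE-enhanced image + edge map) is kept
    whole; only the three spatial axes are cropped.
    """
    c, d, h, w = volume4d.shape
    pd, ph, pw = patch_size
    patches = []
    for _ in range(n_patches):
        z = rng.integers(0, d - pd + 1)   # random top-left-front corner
        y = rng.integers(0, h - ph + 1)
        x = rng.integers(0, w - pw + 1)
        patches.append(volume4d[:, z:z + pd, y:y + ph, x:x + pw])
    return np.stack(patches)

# 2-channel toy volume standing in for a CLAHE FLAIR image + its edge map.
vol = rng.random((2, 32, 32, 32))
batch = random_patches(vol, (16, 16, 16), n_patches=4, rng=rng)
```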
|
45
|
Balwant M. A Review on Convolutional Neural Networks for Brain Tumor Segmentation: Methods, Datasets, Libraries, and Future Directions. Ing Rech Biomed 2022. [DOI: 10.1016/j.irbm.2022.05.002] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/28/2022]
|
46
|
Bi K, Wang S. Unsupervised domain adaptation with hyperbolic graph convolution network for segmentation of X-ray breast mass. JOURNAL OF INTELLIGENT & FUZZY SYSTEMS 2022. [DOI: 10.3233/jifs-202630] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/15/2022]
Abstract
Deep learning has been widely used in medical image segmentation, such as breast tumor segmentation, prostate MR image segmentation, and so on. However, labeling the datasets takes a lot of time. Although the emergence of unsupervised domain adaptation fills this technical gap, existing domain adaptation methods for breast segmentation do not consider the alignment of the breast mass structure between the source and target domains. This paper proposes a hyperbolic graph convolutional network architecture. First, a hyperbolic graph convolutional network is used to align the source and target domains structurally. Second, we adopt a hyperbolic-space mapping model, which has better expressive ability than Euclidean space for graph structures. In particular, when constructing the graph structure, we added a completion adjacency matrix, so that the graph structure can change after each feature mapping, which further improves segmentation accuracy. Extensive comparative and ablation experiments were performed on two common breast datasets (CBIS-DDSM and INbreast). Experiments show that the proposed method outperforms the most advanced models. When CBIS-DDSM and INbreast are used as the source domain, the segmentation accuracy reaches 89.1% and 80.7%, respectively.
Collapse
Affiliation(s)
- Kai Bi
- College of Software Engineering, Jilin University, Changchun, China
- Key Laboratory of Symbolic Computation and Knowledge Engineering of Ministry of Education, Jilin University, Changchun, China
| | - ShengSheng Wang
- College of Computer Science and Technology, Jilin University, Changchun, China
- Key Laboratory of Symbolic Computation and Knowledge Engineering of Ministry of Education, Jilin University, Changchun, China
| |
Collapse
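Mapping Euclidean features into hyperbolic space, as a hyperbolic graph convolution requires, is commonly done with the exponential map at the origin of the Poincaré ball. A minimal sketch of that standard formula (not the paper's implementation; names are hypothetical):

```python
import numpy as np

def expmap0(v, c=1.0, eps=1e-8):
    """Exponential map at the origin of the Poincare ball of curvature -c:
    exp_0(v) = tanh(sqrt(c) * ||v||) * v / (sqrt(c) * ||v||).
    Maps Euclidean vectors into the open ball of radius 1/sqrt(c)."""
    sqrt_c = np.sqrt(c)
    norm = np.linalg.norm(v, axis=-1, keepdims=True)
    norm = np.maximum(norm, eps)          # avoid division by zero at the origin
    return np.tanh(sqrt_c * norm) * v / (sqrt_c * norm)

# Three Euclidean feature vectors of very different magnitudes.
feats = np.array([[3.0, 4.0],
                  [0.1, -0.2],
                  [10.0, 0.0]])
ball = expmap0(feats)
norms = np.linalg.norm(ball, axis=-1)     # all strictly inside the unit ball
```

The tanh saturation is what gives hyperbolic embeddings their tree-like capacity: arbitrarily large Euclidean features land ever closer to the ball's boundary without leaving it.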
|
47
|
Fully Automatic Analysis of Muscle B-Mode Ultrasound Images Based on the Deep Residual Shrinkage U-Net. ELECTRONICS 2022. [DOI: 10.3390/electronics11071093] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 12/10/2022]
Abstract
The parameters of muscle ultrasound images reflect the function and state of muscles. They are of great significance to the diagnosis of muscle diseases. Because manual labeling is time-consuming and laborious, the automatic labeling of muscle ultrasound image parameters has become a research topic. In recent years, there have been many methods that apply image processing and deep learning to automatically analyze muscle ultrasound images. However, these methods have limitations, such as being non-automatic, not being applicable to images with complex noise, and only being able to measure a single parameter. This paper proposes a fully automatic muscle ultrasound image analysis method based on image segmentation to solve these problems. This method is based on the Deep Residual Shrinkage U-Net (RS-Unet) to accurately segment ultrasound images. Compared with the existing methods, the accuracy of our method shows a great improvement. The mean differences of pennation angle, fascicle length and muscle thickness are about 0.09°, 0.4 mm and 0.63 mm, respectively. Experimental results show that the proposed method realizes the accurate measurement of muscle parameters and exhibits stability and robustness.
Collapse
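Of the parameters measured above, the pennation angle is the angle between a fascicle and the aponeurosis; once both are segmented, it reduces to a vector computation. An illustrative sketch (hypothetical names, not the paper's code):

```python
import numpy as np

def pennation_angle(fascicle_vec, aponeurosis_vec):
    """Angle in degrees between a fascicle direction vector and the
    aponeurosis direction vector (both in image coordinates)."""
    f = np.asarray(fascicle_vec, dtype=float)
    a = np.asarray(aponeurosis_vec, dtype=float)
    cos = np.dot(f, a) / (np.linalg.norm(f) * np.linalg.norm(a))
    # Clip guards against tiny floating-point excursions outside [-1, 1].
    return np.degrees(np.arccos(np.clip(cos, -1.0, 1.0)))

# Aponeurosis running along the x-axis; fascicle rising at 45 degrees.
angle = pennation_angle([1.0, 1.0], [1.0, 0.0])
```

Fascicle length and muscle thickness follow similarly from the segmented boundaries (distance along the fascicle line and distance between the two aponeuroses, respectively).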
|
48
|
Deep 3D Neural Network for Brain Structures Segmentation Using Self-Attention Modules in MRI Images. SENSORS 2022; 22:s22072559. [PMID: 35408173 PMCID: PMC9002763 DOI: 10.3390/s22072559] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 02/07/2022] [Revised: 03/15/2022] [Accepted: 03/21/2022] [Indexed: 01/03/2023]
Abstract
In recent years, the use of deep learning-based models for developing advanced healthcare systems has been growing due to the results they can achieve. However, the majority of the proposed deep learning models largely use convolutional and pooling operations, causing a loss of valuable data and focusing on local information. In this paper, we propose a deep learning-based approach that uses the global and local features that are important in the medical image segmentation process. To train the architecture, we used three-dimensional (3D) blocks extracted at the full magnetic resonance image resolution, which were sent through a set of successive convolutional neural network (CNN) layers free of pooling operations to extract local information. We then sent the resulting feature maps through successive layers of self-attention modules to obtain the global context, whose output was later dispatched to a decoder pipeline composed mostly of upsampling layers. The model was trained using the Mindboggle-101 dataset. The experimental results showed that the self-attention modules allow segmentation with a higher mean Dice score of 0.90 ± 0.036 compared with other UNet-based approaches. The average segmentation time was approximately 0.038 s per brain structure. The proposed model tackles the brain structure segmentation task properly. Exploiting the global context that the self-attention modules incorporate allows for more precise and faster segmentation. We segmented 37 brain structures and, to the best of our knowledge, this is the largest number of structures segmented under a 3D approach using attention mechanisms.
Collapse
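The global context described above comes from self-attention: every spatial position attends to every other position. A single-head, scaled dot-product sketch in NumPy (hypothetical names; the paper's modules are trained 3D layers, not random matrices):

```python
import numpy as np

def self_attention(x, wq, wk, wv):
    """Single-head scaled dot-product self-attention over a sequence.

    x: (n, d) feature vectors (e.g. flattened 3D feature-map positions);
    wq, wk, wv: (d, d) query/key/value projection matrices.
    Every output vector is a weighted mix of ALL positions, which is how
    the module injects global context that plain convolutions lack.
    """
    q, k, v = x @ wq, x @ wk, x @ wv
    scores = q @ k.T / np.sqrt(x.shape[1])          # scaled similarities
    weights = np.exp(scores - scores.max(axis=1, keepdims=True))
    weights /= weights.sum(axis=1, keepdims=True)   # row-wise softmax
    return weights @ v, weights

rng = np.random.default_rng(1)
n, d = 5, 8                     # 5 positions, 8 features each
x = rng.standard_normal((n, d))
wq, wk, wv = (rng.standard_normal((d, d)) for _ in range(3))
out, attn = self_attention(x, wq, wk, wv)
```

The quadratic (n × n) attention matrix is why such modules are typically applied to downsampled feature maps or extracted blocks rather than full-resolution volumes.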
|
49
|
Zhu X, Wu Y, Hu H, Zhuang X, Yao J, Ou D, Li W, Song M, Feng N, Xu D. Medical lesion segmentation by combining multi‐modal images with modality weighted UNet. Med Phys 2022; 49:3692-3704. [PMID: 35312077 DOI: 10.1002/mp.15610] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/18/2021] [Revised: 02/25/2022] [Accepted: 03/04/2022] [Indexed: 11/09/2022] Open
Affiliation(s)
- Xiner Zhu
- College of Information Science and Electronic Engineering, Zhejiang University, Hangzhou, China
| | - Yichao Wu
- College of Information Science and Electronic Engineering, Zhejiang University, Hangzhou, China
| | - Haoji Hu
- College of Information Science and Electronic Engineering, Zhejiang University, Hangzhou, China
| | - Xianwei Zhuang
- College of Information Science and Electronic Engineering, Zhejiang University, Hangzhou, China
| | - Jincao Yao
- Cancer Hospital of the University of Chinese Academy of Sciences (Zhejiang Cancer Hospital), Hangzhou, China
- Institute of Basic Medicine and Cancer (IBMC), Chinese Academy of Sciences, Hangzhou, China
| | - Di Ou
- Cancer Hospital of the University of Chinese Academy of Sciences (Zhejiang Cancer Hospital), Hangzhou, China
- Institute of Basic Medicine and Cancer (IBMC), Chinese Academy of Sciences, Hangzhou, China
| | - Wei Li
- Cancer Hospital of the University of Chinese Academy of Sciences (Zhejiang Cancer Hospital), Hangzhou, China
- Institute of Basic Medicine and Cancer (IBMC), Chinese Academy of Sciences, Hangzhou, China
| | - Mei Song
- Cancer Hospital of the University of Chinese Academy of Sciences (Zhejiang Cancer Hospital), Hangzhou, China
- Institute of Basic Medicine and Cancer (IBMC), Chinese Academy of Sciences, Hangzhou, China
| | - Na Feng
- Cancer Hospital of the University of Chinese Academy of Sciences (Zhejiang Cancer Hospital), Hangzhou, China
- Institute of Basic Medicine and Cancer (IBMC), Chinese Academy of Sciences, Hangzhou, China
| | - Dong Xu
- Cancer Hospital of the University of Chinese Academy of Sciences (Zhejiang Cancer Hospital), Hangzhou, China
- Institute of Basic Medicine and Cancer (IBMC), Chinese Academy of Sciences, Hangzhou, China
| |
Collapse
|
50
|
Kursad Poyraz A, Dogan S, Akbal E, Tuncer T. Automated brain disease classification using exemplar deep features. Biomed Signal Process Control 2022. [DOI: 10.1016/j.bspc.2021.103448] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/02/2022]
|