1
Martinez-Murcia FJ, Arco JE, Jimenez-Mesa C, Segovia F, Illan IA, Ramirez J, Gorriz JM. Bridging Imaging and Clinical Scores in Parkinson's Progression via Multimodal Self-Supervised Deep Learning. Int J Neural Syst 2024; 34:2450043. PMID: 38770651. DOI: 10.1142/s0129065724500436.
Abstract
Neurodegenerative diseases pose a formidable challenge to medical research, demanding a nuanced understanding of their progressive nature. In this regard, latent generative models can effectively be used for data-driven modeling of the different dimensions of neurodegeneration, framed within the context of the manifold hypothesis. This paper proposes a joint framework for a multi-modal, common latent generative model to address the need for a more comprehensive understanding of the neurodegenerative landscape in the context of Parkinson's disease (PD). The proposed architecture uses coupled variational autoencoders (VAEs) to jointly model a common latent space for both neuroimaging and clinical data from the Parkinson's Progression Markers Initiative (PPMI). Alternative loss functions, different normalization procedures, and the interpretability and explainability of latent generative models are addressed, leading to a model that was able to predict clinical symptomatology in the test set, as measured by the unified Parkinson's disease rating scale (UPDRS), with R2 up to 0.86 for same-modality prediction and up to 0.441 for cross-modality prediction (using neuroimaging alone). The findings provide a foundation for further advancements in clinical research and practice, with potential applications in decision-making processes for PD. The study also highlights the limitations and capabilities of the proposed model, emphasizing its direct interpretability and potential impact on understanding and interpreting neuroimaging patterns associated with PD symptomatology.
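The R2 values quoted above are coefficients of determination. For readers unfamiliar with the metric, a minimal sketch of how such a score is computed from observed and predicted scores (the function name and data are illustrative, not taken from the paper):

```python
def r_squared(y_true, y_pred):
    """Coefficient of determination: 1 - SS_res / SS_tot."""
    mean_y = sum(y_true) / len(y_true)
    ss_res = sum((t - p) ** 2 for t, p in zip(y_true, y_pred))
    ss_tot = sum((t - mean_y) ** 2 for t in y_true)
    return 1.0 - ss_res / ss_tot

# Perfect prediction gives 1.0; always predicting the mean gives 0.0.
print(r_squared([10, 20, 30], [12, 19, 31]))  # 0.97
```

A score of 0.86 therefore means the model explains 86% of the variance of the observed UPDRS scores in the test set.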
Affiliation(s)
- Francisco J Martinez-Murcia
- Department of Signal Processing, Networking and Communications, University of Granada, Granada, Spain
- Center for Advanced Studies, Ludwig-Maximilians-Universität München, München, Germany
- Andalusian Research Institute in Data Science and Computational Intelligence (DaSCI), University of Granada, Granada, Spain
- Juan Eloy Arco
- Department of Signal Processing, Networking and Communications, University of Granada, Granada, Spain
- Andalusian Research Institute in Data Science and Computational Intelligence (DaSCI), University of Granada, Granada, Spain
- Carmen Jimenez-Mesa
- Department of Signal Processing, Networking and Communications, University of Granada, Granada, Spain
- Andalusian Research Institute in Data Science and Computational Intelligence (DaSCI), University of Granada, Granada, Spain
- Fermin Segovia
- Department of Signal Processing, Networking and Communications, University of Granada, Granada, Spain
- Andalusian Research Institute in Data Science and Computational Intelligence (DaSCI), University of Granada, Granada, Spain
- Ignacio A Illan
- Department of Signal Processing, Networking and Communications, University of Granada, Granada, Spain
- Andalusian Research Institute in Data Science and Computational Intelligence (DaSCI), University of Granada, Granada, Spain
- Javier Ramirez
- Department of Signal Processing, Networking and Communications, University of Granada, Granada, Spain
- Andalusian Research Institute in Data Science and Computational Intelligence (DaSCI), University of Granada, Granada, Spain
- Juan Manuel Gorriz
- Department of Signal Processing, Networking and Communications, University of Granada, Granada, Spain
- Center for Advanced Studies, Ludwig-Maximilians-Universität München, München, Germany
- Andalusian Research Institute in Data Science and Computational Intelligence (DaSCI), University of Granada, Granada, Spain
2
Coll L, Pareto D, Carbonell-Mirabent P, Cobo-Calvo Á, Arrambide G, Vidal-Jordana Á, Comabella M, Castilló J, Rodríguez-Acevedo B, Zabalza A, Galán I, Midaglia L, Nos C, Auger C, Alberich M, Río J, Sastre-Garriga J, Oliver A, Montalban X, Rovira À, Tintoré M, Lladó X, Tur C. Global and Regional Deep Learning Models for Multiple Sclerosis Stratification From MRI. J Magn Reson Imaging 2024; 60:258-267. PMID: 37803817. DOI: 10.1002/jmri.29046.
Abstract
BACKGROUND The combination of anatomical MRI and deep learning-based methods such as convolutional neural networks (CNNs) is a promising strategy to build predictive models of multiple sclerosis (MS) prognosis. However, studies assessing the effect of different input strategies on model performance are lacking. PURPOSE To compare whole-brain input sampling strategies and regional/specific-tissue strategies, which focus on a priori known relevant areas for disability accrual, to stratify MS patients based on their disability level. STUDY TYPE Retrospective. SUBJECTS Three hundred nineteen MS patients (382 brain MRI scans) with clinical assessment of disability level performed within the following 6 months (~70% training/~15% validation/~15% inference in-house dataset) and 440 MS patients from multiple centers (independent external validation cohort). FIELD STRENGTH/SEQUENCE Single vendor 1.5 T or 3.0 T. Magnetization-Prepared Rapid Gradient-Echo and Fluid-Attenuated Inversion Recovery sequences. ASSESSMENT A 7-fold patient cross-validation strategy was used to train a 3D-CNN to classify patients into two groups, Expanded Disability Status Scale score (EDSS) ≥ 3.0 or EDSS < 3.0. Two strategies were investigated: 1) a global approach, taking the whole brain volume as input, and 2) regional approaches using five different regions-of-interest: white matter, gray matter, subcortical gray matter, ventricles, and brainstem structures. The performance of the models was assessed in the in-house and independent external cohorts. STATISTICAL TESTS Balanced accuracy, sensitivity, specificity, and area under the receiver operating characteristic (ROC) curve (AUC). RESULTS With the in-house dataset, the gray matter regional model showed the highest stratification accuracy (81%), followed by the global approach (79%). In the external dataset, without any further retraining, an accuracy of 72% was achieved for the white matter model and 71% for the global approach.
DATA CONCLUSION The global approach offered the best trade-off between internal performance and external validation to stratify MS patients based on accumulated disability. EVIDENCE LEVEL 4 TECHNICAL EFFICACY: Stage 2.
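The balanced accuracy used to evaluate these stratification models is the mean of sensitivity and specificity, which avoids rewarding a classifier that favors the majority class. A minimal sketch for binary EDSS labels (the encoding 1 = EDSS ≥ 3.0 is an assumption for illustration, not from the paper):

```python
def stratification_metrics(y_true, y_pred):
    """Sensitivity, specificity and balanced accuracy for binary labels
    (here 1 is taken to mean EDSS >= 3.0, 0 to mean EDSS < 3.0)."""
    pairs = list(zip(y_true, y_pred))
    tp = sum(1 for t, p in pairs if t == 1 and p == 1)
    tn = sum(1 for t, p in pairs if t == 0 and p == 0)
    fp = sum(1 for t, p in pairs if t == 0 and p == 1)
    fn = sum(1 for t, p in pairs if t == 1 and p == 0)
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    return sensitivity, specificity, (sensitivity + specificity) / 2

# One missed positive out of two: sensitivity 0.5, specificity 1.0.
print(stratification_metrics([1, 1, 0, 0], [1, 0, 0, 0]))
```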
Affiliation(s)
- Llucia Coll
- Multiple Sclerosis Centre of Catalonia (Cemcat), Hospital Universitari Vall d'Hebron, Universitat Autònoma de Barcelona, Barcelona, Spain
- Deborah Pareto
- Section of Neuroradiology, Department of Radiology, Hospital Universitari Vall d'Hebron, Universitat Autònoma de Barcelona, Barcelona, Spain
- Pere Carbonell-Mirabent
- Multiple Sclerosis Centre of Catalonia (Cemcat), Hospital Universitari Vall d'Hebron, Universitat Autònoma de Barcelona, Barcelona, Spain
- Álvaro Cobo-Calvo
- Multiple Sclerosis Centre of Catalonia (Cemcat), Hospital Universitari Vall d'Hebron, Universitat Autònoma de Barcelona, Barcelona, Spain
- Georgina Arrambide
- Multiple Sclerosis Centre of Catalonia (Cemcat), Hospital Universitari Vall d'Hebron, Universitat Autònoma de Barcelona, Barcelona, Spain
- Ángela Vidal-Jordana
- Multiple Sclerosis Centre of Catalonia (Cemcat), Hospital Universitari Vall d'Hebron, Universitat Autònoma de Barcelona, Barcelona, Spain
- Manuel Comabella
- Multiple Sclerosis Centre of Catalonia (Cemcat), Hospital Universitari Vall d'Hebron, Universitat Autònoma de Barcelona, Barcelona, Spain
- Joaquín Castilló
- Multiple Sclerosis Centre of Catalonia (Cemcat), Hospital Universitari Vall d'Hebron, Universitat Autònoma de Barcelona, Barcelona, Spain
- Breogán Rodríguez-Acevedo
- Multiple Sclerosis Centre of Catalonia (Cemcat), Hospital Universitari Vall d'Hebron, Universitat Autònoma de Barcelona, Barcelona, Spain
- Ana Zabalza
- Multiple Sclerosis Centre of Catalonia (Cemcat), Hospital Universitari Vall d'Hebron, Universitat Autònoma de Barcelona, Barcelona, Spain
- Ingrid Galán
- Multiple Sclerosis Centre of Catalonia (Cemcat), Hospital Universitari Vall d'Hebron, Universitat Autònoma de Barcelona, Barcelona, Spain
- Luciana Midaglia
- Multiple Sclerosis Centre of Catalonia (Cemcat), Hospital Universitari Vall d'Hebron, Universitat Autònoma de Barcelona, Barcelona, Spain
- Carlos Nos
- Multiple Sclerosis Centre of Catalonia (Cemcat), Hospital Universitari Vall d'Hebron, Universitat Autònoma de Barcelona, Barcelona, Spain
- Cristina Auger
- Section of Neuroradiology, Department of Radiology, Hospital Universitari Vall d'Hebron, Universitat Autònoma de Barcelona, Barcelona, Spain
- Manel Alberich
- Section of Neuroradiology, Department of Radiology, Hospital Universitari Vall d'Hebron, Universitat Autònoma de Barcelona, Barcelona, Spain
- Jordi Río
- Multiple Sclerosis Centre of Catalonia (Cemcat), Hospital Universitari Vall d'Hebron, Universitat Autònoma de Barcelona, Barcelona, Spain
- Jaume Sastre-Garriga
- Multiple Sclerosis Centre of Catalonia (Cemcat), Hospital Universitari Vall d'Hebron, Universitat Autònoma de Barcelona, Barcelona, Spain
- Arnau Oliver
- Research Institute of Computer Vision and Robotics, University of Girona, Girona, Spain
- Xavier Montalban
- Multiple Sclerosis Centre of Catalonia (Cemcat), Hospital Universitari Vall d'Hebron, Universitat Autònoma de Barcelona, Barcelona, Spain
- Àlex Rovira
- Section of Neuroradiology, Department of Radiology, Hospital Universitari Vall d'Hebron, Universitat Autònoma de Barcelona, Barcelona, Spain
- Mar Tintoré
- Multiple Sclerosis Centre of Catalonia (Cemcat), Hospital Universitari Vall d'Hebron, Universitat Autònoma de Barcelona, Barcelona, Spain
- Xavier Lladó
- Research Institute of Computer Vision and Robotics, University of Girona, Girona, Spain
- Carmen Tur
- Multiple Sclerosis Centre of Catalonia (Cemcat), Hospital Universitari Vall d'Hebron, Universitat Autònoma de Barcelona, Barcelona, Spain
3
Krüger J, Opfer R, Spies L, Hedderich D, Buchert R. Voxel-based morphometry in single subjects without a scanner-specific normal database using a convolutional neural network. Eur Radiol 2024; 34:3578-3587. PMID: 37943313. DOI: 10.1007/s00330-023-10356-1.
Abstract
OBJECTIVES Reliable detection of disease-specific atrophy in individual T1w-MRI by voxel-based morphometry (VBM) requires scanner-specific normal databases (NDB), which often are not available. The aim of this retrospective study was to design, train, and test a deep convolutional neural network (CNN) for single-subject VBM without the need for a NDB (CNN-VBM). MATERIALS AND METHODS The training dataset comprised 8945 T1w scans from 65 different scanners. The gold standard VBM maps were obtained by conventional VBM with a scanner-specific NDB for each of the 65 scanners. CNN-VBM was tested in an independent dataset comprising healthy controls (n = 37) and subjects with Alzheimer's disease (AD, n = 51) or frontotemporal lobar degeneration (FTLD, n = 30). A scanner-specific NDB for the generation of the gold standard VBM maps was also available for the test set. The technical performance of CNN-VBM was characterized by the Dice coefficient of CNN-VBM maps relative to VBM maps from scanner-specific VBM. For clinical testing, VBM maps were categorized visually according to the clinical diagnoses in the test set by two independent readers, separately for both VBM methods. RESULTS The VBM maps from CNN-VBM were similar to the scanner-specific VBM maps (median Dice coefficient 0.85, interquartile range [0.81, 0.90]). Overall accuracy of the visual categorization of the VBM maps for the detection of AD or FTLD was 89.8% for CNN-VBM and 89.0% for scanner-specific VBM. CONCLUSION CNN-VBM without a NDB provides performance similar to conventional VBM in the detection of AD- and FTLD-specific atrophy. CLINICAL RELEVANCE STATEMENT A deep convolutional neural network for voxel-based morphometry eliminates the need for scanner-specific normal databases without relevant performance loss and, therefore, could pave the way for the widespread clinical use of voxel-based morphometry to support the diagnosis of neurodegenerative diseases.
KEY POINTS • The need for normal databases is a barrier to widespread use of voxel-based brain morphometry. • A convolutional neural network achieved performance for the detection of atrophy similar to that of conventional voxel-based morphometry. • Convolutional neural networks can pave the way for widespread clinical use of voxel-based morphometry.
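The Dice coefficient used above to compare CNN-VBM maps against scanner-specific VBM maps is a standard overlap measure between two binary segmentations. A minimal sketch over flattened binary voxel masks (function and variable names are illustrative):

```python
def dice_coefficient(mask_a, mask_b):
    """Dice similarity: 2*|A ∩ B| / (|A| + |B|) for binary voxel masks."""
    intersection = sum(1 for a, b in zip(mask_a, mask_b) if a and b)
    total = sum(mask_a) + sum(mask_b)
    # Two empty masks are conventionally treated as a perfect match.
    return 2.0 * intersection / total if total else 1.0

# Two masks sharing one of their two foreground voxels each.
print(dice_coefficient([1, 1, 0, 0], [1, 0, 1, 0]))  # 0.5
```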
Affiliation(s)
- Dennis Hedderich
- Department of Neuroradiology, Klinikum rechts der Isar, Technical University of Munich, Munich, Germany
- Ralph Buchert
- Department of Diagnostic and Interventional Radiology and Nuclear Medicine, University Medical Center Hamburg-Eppendorf, Martinistr. 52, 20246, Hamburg, Germany.
4
Zhang X, Tian L, Guo S, Liu Y. STF-Net: sparsification transformer coding guided network for subcortical brain structure segmentation. Biomed Tech (Berl) 2024; bmt-2023-0121. PMID: 38712825. DOI: 10.1515/bmt-2023-0121.
Abstract
Subcortical brain structure segmentation plays an important role in neuroimaging-based diagnosis and has become the basis of computer-aided diagnosis. Due to the blurred boundaries and complex shapes of subcortical brain structures, labeling them by hand is a time-consuming and subjective task, greatly limiting their potential for clinical applications. Thus, this paper proposes the sparsification transformer (STF) module for accurate brain structure segmentation. The self-attention mechanism is used to establish global dependencies, efficiently extracting the global information of the feature map with low computational complexity. A shallow network is also used to compensate for low-level detail information through the localization of convolutional operations, promoting the representation capability of the network. In addition, a hybrid residual dilated convolution (HRDC) module is introduced at the bottom layer of the network to extend the receptive field and extract multi-scale contextual information. Meanwhile, an octave convolution edge feature extraction (OCT) module is applied at the skip connections of the network to pay more attention to the edge features of brain structures. The proposed network is trained with a hybrid loss function. Experimental evaluation on two public datasets, IBSR and MALC, shows outstanding performance in terms of objective and subjective quality.
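The receptive-field growth that motivates an HRDC-style stack of dilated convolutions can be illustrated with the standard formula for stride-1 convolutions: each layer enlarges the field by (kernel_size − 1) × dilation. The kernel sizes and dilation rates below are illustrative, not taken from the paper:

```python
def receptive_field(kernel_sizes, dilations):
    """Receptive field of a stack of stride-1 convolutions:
    each layer adds (kernel_size - 1) * dilation to the field."""
    rf = 1
    for k, d in zip(kernel_sizes, dilations):
        rf += (k - 1) * d
    return rf

# Three 3x3 layers with dilations 1, 2, 4 cover 15 voxels per axis,
# versus 7 for the same stack without dilation.
print(receptive_field([3, 3, 3], [1, 2, 4]))  # 15
print(receptive_field([3, 3, 3], [1, 1, 1]))  # 7
```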
Affiliation(s)
- Xiufeng Zhang
- School of Mechanical and Electrical Engineering, Dalian Minzu University, Dalian, Liaoning, China
- Lingzhuo Tian
- School of Mechanical and Electrical Engineering, Dalian Minzu University, Dalian, Liaoning, China
- Shengjin Guo
- School of Mechanical and Electrical Engineering, Dalian Minzu University, Dalian, Liaoning, China
- Yansong Liu
- School of Mechanical and Electrical Engineering, Dalian Minzu University, Dalian, Liaoning, China
5
Patel K, Xie Z, Yuan H, Islam SMS, Xie Y, He W, Zhang W, Gottlieb A, Chen H, Giancardo L, Knaack A, Fletcher E, Fornage M, Ji S, Zhi D. Unsupervised deep representation learning enables phenotype discovery for genetic association studies of brain imaging. Commun Biol 2024; 7:414. PMID: 38580839. PMCID: PMC10997628. DOI: 10.1038/s42003-024-06096-7.
Abstract
Understanding the genetic architecture of brain structure is challenging, partly due to difficulties in designing robust, non-biased descriptors of brain morphology. Until recently, brain measures for genome-wide association studies (GWAS) have consisted of expert-defined or software-derived image-derived phenotypes (IDPs) that are often based on theoretical preconceptions or computed from limited amounts of data. Here, we present an approach to derive brain imaging phenotypes using unsupervised deep representation learning. We train a 3-D convolutional autoencoder model with reconstruction loss on 6130 UK Biobank (UKBB) participants' T1 or T2-FLAIR (T2) brain MRIs to create a 128-dimensional representation known as Unsupervised Deep learning derived Imaging Phenotypes (UDIPs). GWAS of these UDIPs in held-out UKBB subjects (n = 22,880 discovery and n = 12,359/11,265 replication cohorts for T1/T2) identified 9457 significant SNPs organized into 97 independent genetic loci, of which 60 loci were replicated. Twenty-six loci were not reported in earlier T1 and T2 IDP-based UK Biobank GWAS. We developed a perturbation-based decoder interpretation approach to show that these loci are associated with UDIPs mapped to multiple relevant brain regions. Our results established that unsupervised deep learning can derive robust, unbiased, heritable, and interpretable brain imaging phenotypes.
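The perturbation-based decoder interpretation can be sketched as: shift one latent dimension, decode both codes, and inspect the voxelwise difference to see which brain regions that dimension drives. The toy linear "decoder" below stands in for the paper's 3-D convolutional decoder; all names and values are illustrative:

```python
def perturbation_map(decode, z, dim, delta=1.0):
    """Voxelwise change in the decoded image when latent dimension
    `dim` is shifted by `delta`."""
    z_perturbed = list(z)
    z_perturbed[dim] += delta
    base = decode(z)
    shifted = decode(z_perturbed)
    return [s - b for s, b in zip(shifted, base)]

# Toy linear "decoder": 2 latent dimensions -> 3 voxels.
W = [[1.0, 0.0], [0.5, 2.0], [0.0, -1.0]]
decode = lambda z: [sum(w * v for w, v in zip(row, z)) for row in W]

# Perturbing dimension 1 highlights only the voxels it influences.
print(perturbation_map(decode, [0.0, 0.0], 1))  # [0.0, 2.0, -1.0]
```

For a linear decoder the map is exactly one column of the weight matrix; for a convolutional decoder the same recipe yields an empirical sensitivity map.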
Affiliation(s)
- Khush Patel
- McWilliams School of Biomedical Informatics, University of Texas Health Science Center, Houston, TX, 77030, USA
- Ziqian Xie
- McWilliams School of Biomedical Informatics, University of Texas Health Science Center, Houston, TX, 77030, USA
- Hao Yuan
- Department of Computer Science and Engineering, Texas A&M University, College Station, TX, 77843, USA
- Yaochen Xie
- Department of Computer Science and Engineering, Texas A&M University, College Station, TX, 77843, USA
- Wei He
- McWilliams School of Biomedical Informatics, University of Texas Health Science Center, Houston, TX, 77030, USA
- Wanheng Zhang
- School of Public Health, University of Texas Health Science Center, Houston, TX, 77030, USA
- Assaf Gottlieb
- McWilliams School of Biomedical Informatics, University of Texas Health Science Center, Houston, TX, 77030, USA
- Han Chen
- McWilliams School of Biomedical Informatics, University of Texas Health Science Center, Houston, TX, 77030, USA
- School of Public Health, University of Texas Health Science Center, Houston, TX, 77030, USA
- Luca Giancardo
- McWilliams School of Biomedical Informatics, University of Texas Health Science Center, Houston, TX, 77030, USA
- Alexander Knaack
- Department of Neurology and Imaging of Dementia and Aging (IDeA) Laboratory, University of California at Davis, Davis, CA, 95618, USA
- Evan Fletcher
- Department of Neurology and Imaging of Dementia and Aging (IDeA) Laboratory, University of California at Davis, Davis, CA, 95618, USA
- Myriam Fornage
- School of Public Health, University of Texas Health Science Center, Houston, TX, 77030, USA
- McGovern Medical School, University of Texas Health Science Center, Houston, TX, 77030, USA
- Shuiwang Ji
- Department of Computer Science and Engineering, Texas A&M University, College Station, TX, 77843, USA
- Degui Zhi
- McWilliams School of Biomedical Informatics, University of Texas Health Science Center, Houston, TX, 77030, USA.
6
Schultz V, Hedderich DM, Schmitz-Koep B, Schinz D, Zimmer C, Yakushev I, Apostolova I, Özden C, Opfer R, Buchert R. Removing outliers from the normative database improves regional atrophy detection in single-subject voxel-based morphometry. Neuroradiology 2024; 66:507-519. PMID: 38378906. PMCID: PMC10937771. DOI: 10.1007/s00234-024-03304-3.
Abstract
PURPOSE Single-subject voxel-based morphometry (VBM) compares an individual T1-weighted MRI to a sample of normal MRI in a normative database (NDB) to detect regional atrophy. Outliers in the NDB might result in reduced sensitivity of VBM. The primary aim of the current study was to propose a method for outlier removal ("NDB cleaning") and to test its impact on the performance of VBM for detection of Alzheimer's disease (AD) and frontotemporal lobar degeneration (FTLD). METHODS T1-weighted MRI of 81 patients with biomarker-confirmed AD (n = 51) or FTLD (n = 30) and 37 healthy subjects with simultaneous FDG-PET/MRI were included as test dataset. Two different NDBs were used: a scanner-specific NDB (37 healthy controls from the test dataset) and a non-scanner-specific NDB comprising 164 normal T1-weighted MRI from 164 different MRI scanners. Three different quality metrics based on leave-one-out testing of the scans in the NDB were implemented. A scan was removed if it was an outlier with respect to one or more quality metrics. VBM maps generated with and without NDB cleaning were assessed visually for the presence of AD or FTLD. RESULTS Specificity of visual interpretation of the VBM maps for detection of AD or FTLD was 100% in all settings. Sensitivity was increased by NDB cleaning with both NDBs. The effect was statistically significant for the multiple-scanner NDB (from 0.47 [95%-CI 0.36-0.58] to 0.61 [0.49-0.71]). CONCLUSION NDB cleaning has the potential to improve the sensitivity of VBM for the detection of AD or FTLD without increasing the risk of false positive findings.
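As a rough illustration of the "NDB cleaning" idea, the sketch below applies a leave-one-out z-score rule to a single scalar quality metric per scan: a scan is dropped if it lies far from the statistics of the remaining scans. The paper combines three quality metrics; the single metric, function name, and threshold here are illustrative assumptions:

```python
import statistics

def clean_ndb(metric_values, z_thresh=3.0):
    """Keep a scan only if its quality metric lies within z_thresh
    standard deviations of the mean of the *other* scans."""
    kept = []
    for i, v in enumerate(metric_values):
        rest = metric_values[:i] + metric_values[i + 1:]
        mu = statistics.mean(rest)
        sd = statistics.stdev(rest)
        if sd == 0 or abs(v - mu) / sd <= z_thresh:
            kept.append(v)
    return kept

# One grossly deviant scan is dropped; the rest survive.
print(clean_ndb([1.00, 1.10, 0.90, 1.05, 0.95, 10.0]))
```

Leaving the candidate scan out of the mean and standard deviation matters: a gross outlier would otherwise inflate the spread and mask itself.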
Affiliation(s)
- Vivian Schultz
- Department of Neuroradiology, Klinikum rechts der Isar, Technical University of Munich, School of Medicine and Health, Ismaninger Str. 22, 81675, Munich, Germany.
- Dennis M Hedderich
- Department of Neuroradiology, Klinikum rechts der Isar, Technical University of Munich, School of Medicine and Health, Ismaninger Str. 22, 81675, Munich, Germany
- Benita Schmitz-Koep
- Department of Neuroradiology, Klinikum rechts der Isar, Technical University of Munich, School of Medicine and Health, Ismaninger Str. 22, 81675, Munich, Germany
- David Schinz
- Department of Neuroradiology, Klinikum rechts der Isar, Technical University of Munich, School of Medicine and Health, Ismaninger Str. 22, 81675, Munich, Germany
- Institute of Radiology, University Hospital Erlangen, Friedrich-Alexander-Universität Erlangen (FAU), Nürnberg, Germany
- Claus Zimmer
- Department of Neuroradiology, Klinikum rechts der Isar, Technical University of Munich, School of Medicine and Health, Ismaninger Str. 22, 81675, Munich, Germany
- Igor Yakushev
- Department of Nuclear Medicine, Klinikum rechts der Isar, Technical University of Munich, School of Medicine and Health, Munich, Germany
- Ivayla Apostolova
- Department of Diagnostic and Interventional Radiology and Nuclear Medicine, University Medical Center Hamburg-Eppendorf, Hamburg, Germany
- Cansu Özden
- Department of Diagnostic and Interventional Radiology and Nuclear Medicine, University Medical Center Hamburg-Eppendorf, Hamburg, Germany
- Ralph Buchert
- Department of Diagnostic and Interventional Radiology and Nuclear Medicine, University Medical Center Hamburg-Eppendorf, Hamburg, Germany
7
Kim MJ, Hong E, Yum MS, Lee YJ, Kim J, Ko TS. Deep learning-based, fully automated, pediatric brain segmentation. Sci Rep 2024; 14:4344. PMID: 38383725. PMCID: PMC10881508. DOI: 10.1038/s41598-024-54663-z.
Abstract
The purpose of this study was to demonstrate the performance of a fully automated, deep learning-based brain segmentation (DLS) method in healthy controls and in patients under eleven years of age with neurodevelopmental disorders associated with an SCN1A mutation. The whole-brain, cortical, and subcortical volumes of 21 previously enrolled participants under 11 years of age with an SCN1A mutation and 42 healthy controls were obtained using the DLS method and compared to volumes measured by FreeSurfer with manual correction. Additionally, the volumes calculated with the DLS method were compared between the patients and the control group. In healthy controls, the total brain gray and white matter volumes obtained with the DLS method were consistent with those measured by FreeSurfer with manual correction. Among the 68 parcellated cortical volumes, only 7 areas measured by the DLS method differed significantly from those measured by FreeSurfer with manual correction, and the differences decreased with increasing age in the subgroup analysis. The subcortical volumes measured by the DLS method were relatively smaller than those from the FreeSurfer volume analysis. Further, the DLS method could perfectly detect the reduced volumes identified by FreeSurfer with manual correction in patients with SCN1A mutations compared with healthy controls. In a pediatric population, this new, fully automated DLS method is compatible with the classic volumetric analysis performed with FreeSurfer and manual correction, and it can also detect brain morphological changes in children with a neurodevelopmental disorder.
Affiliation(s)
- Min-Jee Kim
- Department of Pediatrics, Asan Medical Center Children's Hospital, Ulsan University College of Medicine, 88, Olympic-ro 43-Gil, Songpa-Gu, Seoul, 05505, South Korea
- Mi-Sun Yum
- Department of Pediatrics, Asan Medical Center Children's Hospital, Ulsan University College of Medicine, 88, Olympic-ro 43-Gil, Songpa-Gu, Seoul, 05505, South Korea.
- Yun-Jeong Lee
- Department of Pediatrics, Kyungpook National University Hospital and School of Medicine, Kyungpook National University, Daegu, South Korea
- Tae-Sung Ko
- Department of Pediatrics, Asan Medical Center Children's Hospital, Ulsan University College of Medicine, 88, Olympic-ro 43-Gil, Songpa-Gu, Seoul, 05505, South Korea
8
Teng Y, Chen C, Shu X, Zhao F, Zhang L, Xu J. Automated, fast, robust brain extraction on contrast-enhanced T1-weighted MRI in presence of brain tumors: an optimized model based on multi-center datasets. Eur Radiol 2024; 34:1190-1199. PMID: 37615767. PMCID: PMC10853304. DOI: 10.1007/s00330-023-10078-4.
Abstract
OBJECTIVES Existing brain extraction models should be further optimized to provide more information for oncological analysis. We aimed to develop an nnU-Net-based deep learning model for automated brain extraction on contrast-enhanced T1-weighted (T1CE) images in the presence of brain tumors. METHODS This is a multi-center, retrospective study involving 920 patients. A total of 720 cases with four types of intracranial tumors from private institutions were collected and set as the training group and the internal test group. The Mann-Whitney U test (U test) was used to investigate whether model performance was associated with pathological type and tumor characteristics. The generalization of the model was then independently tested on public datasets consisting of 100 glioma and 100 vestibular schwannoma cases. RESULTS In the internal test, the model achieved promising performance, with a median Dice similarity coefficient (DSC) of 0.989 (interquartile range (IQR), 0.988-0.991) and a Hausdorff distance (HD) of 6.403 mm (IQR, 5.099-8.426 mm). The U test suggested slightly lower performance in the meningioma and vestibular schwannoma groups. The results of the U test also suggested a significant difference in the peritumoral edema group, with a median DSC of 0.990 (IQR, 0.989-0.991, p = 0.002) and a median HD of 5.916 mm (IQR, 5.000-8.000 mm, p = 0.049). In the external test, our model also showed robust performance, with a median DSC of 0.991 (IQR, 0.983-0.998) and HD of 8.972 mm (IQR, 6.164-13.710 mm). CONCLUSIONS For automated processing of MRI neuroimaging data in the presence of brain tumors, the proposed model can perform brain extraction that retains superficial structures important for oncological analysis. CLINICAL RELEVANCE STATEMENT The proposed model serves as a radiological tool for image preprocessing in tumor cases, focusing on superficial brain structures, which could streamline the workflow and enhance the efficiency of subsequent radiological assessments.
KEY POINTS • The nnU-Net-based model is capable of segmenting significant superficial structures in brain extraction. • The proposed model showed feasible performance, regardless of pathological types or tumor characteristics. • The model showed generalization in the public datasets.
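The Hausdorff distance reported alongside Dice measures the worst-case boundary disagreement between two segmentations: the largest distance from any point of one boundary to the nearest point of the other. A minimal sketch over small 2-D point sets (coordinates are illustrative):

```python
def hausdorff_distance(points_a, points_b):
    """Symmetric Hausdorff distance between two finite point sets."""
    def euclid(p, q):
        return sum((x - y) ** 2 for x, y in zip(p, q)) ** 0.5

    def directed(src, dst):
        # Worst-case nearest-neighbor distance from src to dst.
        return max(min(euclid(p, q) for q in dst) for p in src)

    return max(directed(points_a, points_b), directed(points_b, points_a))

# The farthest boundary point of one set from the other dominates.
print(hausdorff_distance([(0, 0), (1, 0)], [(0, 0), (3, 0)]))  # 2.0
```

Unlike Dice, which rewards bulk overlap, the Hausdorff distance is sensitive to a single stray boundary voxel, which is why the two metrics are usually reported together.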
Affiliation(s)
- Yuen Teng
- Department of Neurosurgery, West China Hospital, Sichuan University, Chengdu, China
- Department of Radiology, West China Hospital, Sichuan University, Chengdu, China
- Chaoyue Chen
- Department of Neurosurgery, West China Hospital, Sichuan University, Chengdu, China.
- Department of Radiology, West China Hospital, Sichuan University, Chengdu, China.
- West China Hospital, No. 37, GuoXue Alley, Chengdu, 610041, People's Republic of China.
- Xin Shu
- College of Computer Science, Sichuan University, Chengdu, People's Republic of China
- Fumin Zhao
- Department of Radiology, West China Second University Hospital, Sichuan University, Chengdu, China
- Lei Zhang
- College of Computer Science, Sichuan University, Chengdu, People's Republic of China.
- Jianguo Xu
- Department of Neurosurgery, West China Hospital, Sichuan University, Chengdu, China.
- Department of Radiology, West China Hospital, Sichuan University, Chengdu, China.
- West China Hospital, No. 37, GuoXue Alley, Chengdu, 610041, People's Republic of China.
9
Kumar K, Yeo AU, McIntosh L, Kron T, Wheeler G, Franich RD. Deep Learning Auto-Segmentation Network for Pediatric Computed Tomography Data Sets: Can We Extrapolate From Adults? Int J Radiat Oncol Biol Phys 2024: S0360-3016(24)00245-1. PMID: 38246249. DOI: 10.1016/j.ijrobp.2024.01.201.
Abstract
PURPOSE: Artificial intelligence (AI)-based auto-segmentation models hold promise for enhanced efficiency and consistency in organ contouring for adaptive radiation therapy and radiation therapy planning. However, their performance on pediatric computed tomography (CT) data and their cross-scanner compatibility remain unclear. This study aimed to evaluate the performance of AI-based auto-segmentation models trained on adult CT data when applied to pediatric data sets, to explore the improvement gained by including pediatric training data, and to examine the models' ability to accurately segment CT data acquired from different scanners.
METHODS AND MATERIALS: Using the nnU-Net framework, segmentation models were trained on data sets of adult, pediatric, and combined CT scans for 7 pelvic/thoracic organs, with 290 to 300 cases per category and organ. Training data sets combined clinical data and several open repositories. The study incorporated a database of 459 pediatric (0-16 years) and 950 adult (>18 years) CT scans, all with human expert ground-truth contours of the selected organs. Performance was evaluated using Dice similarity coefficients (DSC) of the model-generated contours.
RESULTS: AI models trained exclusively on adult data underperformed on pediatric data, especially for the 0 to 2 age group: mean DSC was below 0.5 for the bladder and spleen. Adding pediatric training data yielded significant improvement for all age groups, achieving a mean DSC above 0.85 for all organs in every age group. Larger organs such as the liver and kidneys maintained consistent performance across age groups for all models. No significant difference emerged in the cross-scanner evaluation, suggesting robust cross-scanner generalization.
CONCLUSIONS: For optimal segmentation across age groups, it is important to include pediatric data in the training of segmentation models. The successful cross-scanner generalization also supports the real-world clinical applicability of these AI models. This study emphasizes the significance of data set diversity in training robust AI systems for medical image interpretation tasks.
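The evaluation metric used above, the Dice similarity coefficient, has the closed form DSC = 2|A∩B| / (|A| + |B|) for binary masks A and B. As a reference point, a minimal NumPy sketch (the function name and toy masks are illustrative, not from the paper):

```python
import numpy as np

def dice(pred, gt):
    """Dice similarity coefficient between two binary masks."""
    pred = pred.astype(bool)
    gt = gt.astype(bool)
    inter = np.logical_and(pred, gt).sum()
    denom = pred.sum() + gt.sum()
    # Convention: two empty masks count as a perfect match.
    return 2.0 * inter / denom if denom else 1.0

pred = np.array([[1, 1, 0], [0, 1, 0]])
gt   = np.array([[1, 0, 0], [0, 1, 1]])
score = dice(pred, gt)   # 2 overlapping pixels, 3 + 3 total → 2*2/6 ≈ 0.667
```

Identical masks score 1.0 and disjoint masks 0.0, which puts the sub-0.5 bladder/spleen results and the post-improvement 0.85+ results above on a common scale.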
Affiliation(s)
- Kartik Kumar
- Physical Sciences Department, Peter MacCallum Cancer Centre, Victoria, Australia; School of Science, RMIT University, Melbourne, Victoria, Australia
- Adam U Yeo
- Physical Sciences Department, Peter MacCallum Cancer Centre, Victoria, Australia; School of Science, RMIT University, Melbourne, Victoria, Australia; Sir Peter MacCallum Department of Oncology, University of Melbourne, Melbourne, Victoria, Australia
- Lachlan McIntosh
- Physical Sciences Department, Peter MacCallum Cancer Centre, Victoria, Australia; School of Science, RMIT University, Melbourne, Victoria, Australia
- Tomas Kron
- Physical Sciences Department, Peter MacCallum Cancer Centre, Victoria, Australia; School of Science, RMIT University, Melbourne, Victoria, Australia; Sir Peter MacCallum Department of Oncology, University of Melbourne, Melbourne, Victoria, Australia; Centre for Medical Radiation Physics, University of Wollongong, Wollongong, New South Wales, Australia
- Greg Wheeler
- Physical Sciences Department, Peter MacCallum Cancer Centre, Victoria, Australia; Sir Peter MacCallum Department of Oncology, University of Melbourne, Melbourne, Victoria, Australia
- Rick D Franich
- Physical Sciences Department, Peter MacCallum Cancer Centre, Victoria, Australia; School of Science, RMIT University, Melbourne, Victoria, Australia

10
Zhou D, Xu L, Wang T, Wei S, Gao F, Lai X, Cao J. M-DDC: MRI based demyelinative diseases classification with U-Net segmentation and convolutional network. Neural Netw 2024; 169:108-119. PMID: 37890361. DOI: 10.1016/j.neunet.2023.10.010.
Abstract
Childhood demyelinative disease classification (DDC) from brain magnetic resonance imaging (MRI) is crucial to clinical diagnosis, yet little attention has been paid to it in the past. Accurately differentiating pediatric-onset neuromyelitis optica spectrum disorder (NMOSD) from acute disseminated encephalomyelitis (ADEM) on MRI is a particular challenge in DDC. In this paper, a novel architecture, M-DDC, is developed that joins a U-Net segmentation network with a deep convolutional classification network. The U-Net branch provides pixel-level structural information, which helps localize lesion areas and estimate their size. The classification branch detects the regions of interest inside MRIs, including the white matter regions where lesions appear. The performance of the proposed method is evaluated on MRIs of 201 subjects recorded at the Children's Hospital of Zhejiang University School of Medicine. The comparisons show that the proposed method achieves the highest accuracy of 99.19% for ADEM/NMOSD classification and a Dice coefficient of 71.1% for segmentation.
Affiliation(s)
- Deyang Zhou
- Machine Learning and I-health International Cooperation Base of Zhejiang Province, Hangzhou Dianzi University, 310018, China; Artificial Intelligence Institute, Hangzhou Dianzi University, Zhejiang, 310018, China; HDU-ITMO Joint Institute, Hangzhou Dianzi University, Zhejiang, 310018, China
- Lu Xu
- Department of Neurology, Children's Hospital, Zhejiang University School of Medicine, 310018, China
- Tianlei Wang
- Machine Learning and I-health International Cooperation Base of Zhejiang Province, Hangzhou Dianzi University, 310018, China; Artificial Intelligence Institute, Hangzhou Dianzi University, Zhejiang, 310018, China
- Shaonong Wei
- Machine Learning and I-health International Cooperation Base of Zhejiang Province, Hangzhou Dianzi University, 310018, China; Artificial Intelligence Institute, Hangzhou Dianzi University, Zhejiang, 310018, China; HDU-ITMO Joint Institute, Hangzhou Dianzi University, Zhejiang, 310018, China
- Feng Gao
- Department of Neurology, Children's Hospital, Zhejiang University School of Medicine, 310018, China
- Xiaoping Lai
- Machine Learning and I-health International Cooperation Base of Zhejiang Province, Hangzhou Dianzi University, 310018, China; Artificial Intelligence Institute, Hangzhou Dianzi University, Zhejiang, 310018, China
- Jiuwen Cao
- Machine Learning and I-health International Cooperation Base of Zhejiang Province, Hangzhou Dianzi University, 310018, China; Artificial Intelligence Institute, Hangzhou Dianzi University, Zhejiang, 310018, China

11
Qiang YR, Zhang SW, Li JN, Li Y, Zhou QY. Diagnosis of Alzheimer's disease by joining dual attention CNN and MLP based on structural MRIs, clinical and genetic data. Artif Intell Med 2023; 145:102678. PMID: 37925204. DOI: 10.1016/j.artmed.2023.102678.
Abstract
Alzheimer's disease (AD) is an irreversible central nervous system degenerative disease, while mild cognitive impairment (MCI) is a precursor state of AD. Accurate early diagnosis of AD is conducive to its prevention and early intervention treatment. Although some computational methods have been developed for AD diagnosis, most employ only neuroimaging, ignoring other data (e.g., genetic, clinical) that may carry potential disease information. In addition, the results of some methods lack interpretability. In this work, we proposed a novel method (called DANMLP) that joins a dual attention convolutional neural network (CNN) and a multilayer perceptron (MLP) for computer-aided AD diagnosis by integrating multi-modality data: structural magnetic resonance imaging (sMRI), clinical data (i.e., demographics, neuropsychology), and APOE genetic data. DANMLP consists of four primary components: (1) a patch-CNN for extracting image characteristics from each local patch; (2) a position self-attention block for capturing dependencies between features within a patch; (3) a channel self-attention block for capturing dependencies of inter-patch features; and (4) two MLP networks for extracting clinical features and outputting the AD classification results, respectively. Compared with other state-of-the-art methods in the five-fold cross-validation (5CV) test, DANMLP achieves 93% and 82.4% classification accuracy for the AD vs. MCI and MCI vs. NC tasks on the ADNI database, which is 0.2%-15.2% and 3.4%-26.8% higher than that of the other five methods, respectively. The individualized visualization of focal areas can also help clinicians in the early diagnosis of AD. These results indicate that DANMLP can be effectively used for diagnosing AD and MCI patients.
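DANMLP's integration of sMRI-derived features with clinical and genetic vectors is an instance of late fusion by concatenation: each modality is embedded separately, the embeddings are concatenated, and a shared classifier head produces the prediction. A minimal NumPy sketch of that idea (the feature sizes, random weights, and two-layer MLP are illustrative assumptions, not the paper's architecture):

```python
import numpy as np

rng = np.random.default_rng(0)

def mlp(x, W1, b1, W2, b2):
    """Two-layer perceptron: ReLU hidden layer, linear output."""
    h = np.maximum(0.0, x @ W1 + b1)
    return h @ W2 + b2

# Hypothetical embeddings: a 64-D image feature and an 8-D clinical vector.
img_feat = rng.normal(size=64)
clin_feat = rng.normal(size=8)

# Late fusion by concatenation, then a shared classifier head.
fused = np.concatenate([img_feat, clin_feat])          # shape (72,)
W1 = rng.normal(size=(72, 16)); b1 = np.zeros(16)
W2 = rng.normal(size=(16, 2));  b2 = np.zeros(2)
logits = mlp(fused, W1, b1, W2, b2)                    # 2-way class scores
```

The design choice is that each modality keeps its own feature extractor (CNN for images, MLP for tabular data) and only the compact embeddings are mixed.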
Affiliation(s)
- Yan-Rui Qiang
- Key Laboratory of Information Fusion Technology, School of Automation, Northwestern Polytechnical University, Xi'an, 710072, China
- Shao-Wu Zhang
- Key Laboratory of Information Fusion Technology, School of Automation, Northwestern Polytechnical University, Xi'an, 710072, China
- Jia-Ni Li
- Key Laboratory of Information Fusion Technology, School of Automation, Northwestern Polytechnical University, Xi'an, 710072, China
- Yan Li
- Key Laboratory of Information Fusion Technology, School of Automation, Northwestern Polytechnical University, Xi'an, 710072, China
- Qin-Yi Zhou
- Key Laboratory of Information Fusion Technology, School of Automation, Northwestern Polytechnical University, Xi'an, 710072, China

12
Peng T, Gu Y, Ruan SJ, Wu QJ, Cai J. Novel Solution for Using Neural Networks for Kidney Boundary Extraction in 2D Ultrasound Data. Biomolecules 2023; 13:1548. PMID: 37892229. PMCID: PMC10604927. DOI: 10.3390/biom13101548.
Abstract
Background and Objective: Kidney ultrasound (US) imaging is a significant imaging modality for evaluating kidney health and is essential for diagnosis, treatment, surgical intervention planning, and follow-up assessments. Kidney US image segmentation consists of extracting useful objects or regions from the total image, which helps determine tissue organization and improve diagnosis. Thus, obtaining accurate kidney segmentation data is an important first step for precisely diagnosing kidney diseases. However, manual delineation of the kidney in US images is complex and tedious in clinical practice. To overcome these challenges, we developed a novel automatic method for US kidney segmentation. Methods: Our method comprises two cascaded steps. The first step uses a coarse segmentation procedure based on a deep fusion learning network to roughly segment each input US kidney image. The second step uses a refinement procedure that fine-tunes the result of the first step by combining an automatic searching polygon tracking method with a machine learning network, in which a suitable and explainable mathematical formula for the kidney contour is expressed in terms of basic parameters. Results: Our method is assessed using 1380 trans-abdominal US kidney images obtained from 115 patients. Across comprehensive comparisons at different noise levels, our method achieves accurate and robust kidney segmentation, and ablation experiments confirm the significance of each component of the method. Compared with state-of-the-art methods, the evaluation metrics of our method are significantly higher: its Dice similarity coefficient (DSC) is 94.6 ± 3.4%, higher than those of recent deep learning and hybrid algorithms (89.4 ± 7.1% and 93.7 ± 3.8%, respectively). Conclusions: We develop a coarse-to-refined architecture for the accurate segmentation of US kidney images. Precise extraction of kidney contour features is important because segmentation errors can cause under-dosing of the target or over-dosing of neighboring normal tissues during US-guided brachytherapy. Hence, our method can be used to increase the rigor of kidney US segmentation.
Affiliation(s)
- Tao Peng
- School of Future Science and Engineering, Soochow University, Suzhou 215006, China
- Department of Health Technology and Informatics, The Hong Kong Polytechnic University, Hong Kong, China
- Department of Radiation Oncology, UT Southwestern Medical Center, Dallas, TX 75390, USA
- Yidong Gu
- Department of Medical Ultrasound, Suzhou Municipal Hospital, Suzhou 215000, China
- Shanq-Jang Ruan
- Department of Electronic and Computer Engineering, National Taiwan University of Science and Technology, Taipei City 10607, Taiwan
- Qingrong Jackie Wu
- Department of Radiation Oncology, Duke University Medical Center, Durham, NC 27710, USA
- Jing Cai
- Department of Health Technology and Informatics, The Hong Kong Polytechnic University, Hong Kong, China

13
Zhang C, Li M, Luo Z, Xiao R, Li B, Shi J, Zeng C, Sun B, Xu X, Yang H. Deep learning-driven MRI trigeminal nerve segmentation with SEVB-net. Front Neurosci 2023; 17:1265032. PMID: 37920295. PMCID: PMC10618361. DOI: 10.3389/fnins.2023.1265032.
Abstract
Purpose: Trigeminal neuralgia (TN) poses significant challenges in diagnosis and treatment due to the extreme pain it causes. Magnetic resonance imaging (MRI) plays a crucial role in diagnosing TN and understanding its pathogenesis. Manual delineation of the trigeminal nerve in volumetric images is time-consuming and subjective. This study introduces the Squeeze and Excitation with BottleNeck V-Net (SEVB-Net), a novel approach for automatic segmentation of the trigeminal nerve in three-dimensional T2 MRI volumes. Methods: We enrolled 88 patients with trigeminal neuralgia and 99 healthy volunteers, dividing them into training and testing groups. SEVB-Net was designed for end-to-end training, taking three-dimensional T2 images as input and producing a segmentation volume of the same size. We assessed the performance of the basic V-Net, nnUNet, and SEVB-Net models by calculating the Dice similarity coefficient (DSC), sensitivity, precision, and network complexity. Additionally, we used the Mann-Whitney U test to compare the time required for manual segmentation against automatic segmentation with manual modification. Results: In the testing group, the proposed method achieved state-of-the-art performance. SEVB-Net combined with the ωDoubleLoss loss function achieved a DSC ranging from 0.6070 to 0.7923. Both SEVB-Net with ωDoubleLoss and nnUNet with DoubleLoss achieved DSC, sensitivity, and precision values exceeding 0.7. However, SEVB-Net substantially reduced the number of parameters (2.20 M), memory consumption (11.41 MB), and model size (17.02 MB), resulting in improved computation and forward time compared with nnUNet. The difference in average time between manual segmentation and automatic segmentation with manual modification was statistically significant for both radiologists (p < 0.001). Conclusion: The experimental results demonstrate that the proposed method can automatically segment the root and three main branches of the trigeminal nerve in three-dimensional T2 images. Compared with the basic V-Net model, SEVB-Net showed improved segmentation performance, reaching a level similar to nnUNet. The segmentation volumes of both SEVB-Net and nnUNet aligned with expert annotations, but SEVB-Net was more lightweight.
Affiliation(s)
- Chuan Zhang
- The First Affiliated Hospital, Jinan University, Guangzhou, China
- Department of Radiology, Affiliated Hospital of North Sichuan Medical College, Nanchong, China
- Man Li
- Shanghai United Imaging Intelligence, Co., Ltd., Shanghai, China
- Zheng Luo
- Department of Radiology, Affiliated Hospital of North Sichuan Medical College, Nanchong, China
- Ruhui Xiao
- Department of Radiology, Affiliated Hospital of North Sichuan Medical College, Nanchong, China
- Bing Li
- Department of Radiology, Affiliated Hospital of North Sichuan Medical College, Nanchong, China
- Jing Shi
- Department of Radiology, Affiliated Hospital of North Sichuan Medical College, Nanchong, China
- Chen Zeng
- Department of Radiology, Affiliated Hospital of North Sichuan Medical College, Nanchong, China
- BaiJinTao Sun
- Department of Radiology, Affiliated Hospital of North Sichuan Medical College, Nanchong, China
- Xiaoxue Xu
- Department of Radiology, Affiliated Hospital of North Sichuan Medical College, Nanchong, China
- Hanfeng Yang
- The First Affiliated Hospital, Jinan University, Guangzhou, China
- Department of Radiology, Affiliated Hospital of North Sichuan Medical College, Nanchong, China

14
Peng T, Wu Y, Gu Y, Xu D, Wang C, Li Q, Cai J. Intelligent contour extraction approach for accurate segmentation of medical ultrasound images. Front Physiol 2023; 14:1177351. PMID: 37675280. PMCID: PMC10479019. DOI: 10.3389/fphys.2023.1177351.
Abstract
Introduction: Accurate contour extraction in ultrasound images is of great interest for image-guided organ interventions and disease diagnosis. Nevertheless, it remains problematic owing to missing or ambiguous outlines between organs (i.e., prostate and kidney) and surrounding tissues, the appearance of shadow artifacts, and the large variability in organ shape. Methods: To address these issues, we devised a method comprising four stages. In the first stage, the data sequence is acquired using an improved adaptive-selection principal curve method, in which a limited number of radiologist-defined data points are adopted as the prior. The second stage uses an enhanced quantum evolution network to help acquire the optimal neural network. The third stage increases the precision of the experimental outcomes after training the neural network, using the data sequence as the input. In the final stage, the contour is smoothed using an explicable mathematical formula expressed by the model parameters of the neural network. Results: Our experiments showed that our approach outperformed other current methods, including hybrid and Transformer-based deep-learning methods, achieving an average Dice similarity coefficient, Jaccard similarity coefficient, and accuracy of 95.7 ± 2.4%, 94.6 ± 2.6%, and 95.3 ± 2.6%, respectively. Discussion: This work develops an intelligent contour-extraction approach for ultrasound images that obtained more satisfactory outcomes than recent state-of-the-art approaches. Knowledge of precise organ boundaries is significant for sparing risk structures. Our approach therefore has the potential to enhance disease diagnosis and therapeutic outcomes.
Affiliation(s)
- Tao Peng
- School of Future Science and Engineering, Soochow University, Suzhou, China
- Department of Health Technology and Informatics, The Hong Kong Polytechnic University, Kowloon, Hong Kong SAR, China
- Department of Radiation Oncology, UT Southwestern Medical Center, Dallas, TX, United States
- Yiyun Wu
- Department of Ultrasound, Jiangsu Province Hospital of Chinese Medicine, Nanjing, Jiangsu, China
- Yidong Gu
- Department of Medical Ultrasound, The Affiliated Suzhou Hospital of Nanjing Medical University, Suzhou Municipal Hospital, Suzhou, Jiangsu, China
- Daqiang Xu
- Department of Radiology, The Affiliated Suzhou Hospital of Nanjing Medical University, Suzhou Municipal Hospital, Suzhou, Jiangsu, China
- Caishan Wang
- Department of Ultrasound, The Second Affiliated Hospital of Soochow University, Suzhou, China
- Quan Li
- Center of Stomatology, The Second Affiliated Hospital of Soochow University, Suzhou, China
- Jing Cai
- Department of Health Technology and Informatics, The Hong Kong Polytechnic University, Kowloon, Hong Kong SAR, China

15
Yousaf F, Iqbal S, Fatima N, Kousar T, Shafry Mohd Rahim M. Multi-class disease detection using deep learning and human brain medical imaging. Biomed Signal Process Control 2023. DOI: 10.1016/j.bspc.2023.104875.
16
Maken P, Gupta A, Gupta MK. A systematic review of the techniques for automatic segmentation of the human upper airway using volumetric images. Med Biol Eng Comput 2023; 61:1901-1927. PMID: 37248380. DOI: 10.1007/s11517-023-02842-x.
Abstract
The human upper airway comprises many anatomical volumes. Obstructions in these volumes need to be diagnosed, which requires volumetric segmentation. Manual segmentation is time-consuming and requires expertise in the field; automatic segmentation provides reliable results and saves the expert time and effort. The objective of this study is to systematically review the literature on the techniques used for automatic segmentation of human upper airway regions in volumetric images. PRISMA guidelines were followed to conduct the systematic review. Four online databases (Scopus, Google Scholar, PubMed, and JURN) were searched for relevant papers, which were then shortlisted using inclusion and exclusion eligibility criteria. Three review questions were formulated and explored. The best technique across the included studies, based on the Dice coefficient and precision, was identified and justified through the analysis. This systematic review provides insight so that researchers can overcome the prominent issues in the field identified from the literature. The outcome of the review is based on several parameters, e.g., accuracy, techniques, challenges, datasets, and segmentation of different sub-regions. A flowchart of the search process as per the PRISMA guidelines, along with the inclusion and exclusion criteria, is provided.
Affiliation(s)
- Payal Maken
- School of Computer Science and Engineering, Shri Mata Vaishno Devi University, Katra, India
- Abhishek Gupta
- Biomedical Application Division, CSIR-Central Scientific Instruments Organisation, Chandigarh, 160030, India
- Manoj Kumar Gupta
- School of Computer Science and Engineering, Shri Mata Vaishno Devi University, Katra, India

17
Mujahid M, Rehman A, Alam T, Alamri FS, Fati SM, Saba T. An Efficient Ensemble Approach for Alzheimer's Disease Detection Using an Adaptive Synthetic Technique and Deep Learning. Diagnostics (Basel) 2023; 13:2489. PMID: 37568852. PMCID: PMC10417320. DOI: 10.3390/diagnostics13152489.
Abstract
Alzheimer's disease is an incurable neurological disorder that leads to a gradual decline in cognitive abilities, but early detection can significantly mitigate symptoms. Automatic diagnosis of Alzheimer's disease is especially important given the shortage of expert medical staff, because it reduces the burden on staff and improves diagnostic results. Accurate diagnosis requires a detailed analysis of the affected brain tissue via segmented magnetic resonance imaging (MRI). Several studies have used traditional machine-learning approaches to diagnose the disease from MRI, but manually extracted features are complex and time-consuming, require substantial involvement from expert medical staff, and do not provide an accurate diagnosis. Deep learning, by contrast, extracts features automatically and optimizes the training process. The MRI Alzheimer's disease dataset consists of four classes: mild demented (896 images), moderate demented (64 images), non-demented (3200 images), and very mild demented (2240 images). Because the dataset is highly imbalanced, we used the adaptive synthetic oversampling technique to balance it. An ensemble of VGG16 and EfficientNet was used to detect Alzheimer's disease on both the imbalanced and balanced datasets to validate model performance. The proposed method combines the predictions of multiple models into an ensemble that learns complex and nuanced patterns from the data: the outputs of both models were concatenated and passed through additional layers to form a more robust model. In this study, we proposed an ensemble of EfficientNet-B2 and VGG-16 to diagnose the disease at an early stage with the highest accuracy. Experiments were performed on two publicly available datasets. The experimental results showed that the proposed method achieved 97.35% accuracy and 99.64% AUC for the multiclass dataset and 97.09% accuracy and 99.59% AUC for the binary-class dataset, providing superior performance on both datasets compared with previous methods.
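The adaptive synthetic oversampling used above balances classes by synthesizing new minority-class samples rather than duplicating existing ones: each synthetic point lies on the segment between a minority sample and one of its minority neighbours. A simplified SMOTE/ADASYN-style interpolation sketch in NumPy (the density weighting and k-nearest-neighbour selection of full ADASYN are omitted; all names and data are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)

def oversample_minority(X_min, n_new, rng):
    """Generate n_new synthetic minority samples by linear interpolation
    between random pairs of existing minority samples."""
    n = len(X_min)
    out = []
    for _ in range(n_new):
        i, j = rng.choice(n, size=2, replace=False)  # a sample and a "neighbour"
        lam = rng.random()                           # interpolation factor in [0, 1)
        out.append(X_min[i] + lam * (X_min[j] - X_min[i]))
    return np.stack(out)

# Toy 2-D minority class with 3 samples; synthesize 5 more.
X_min = np.array([[0.0, 0.0], [1.0, 1.0], [2.0, 0.0]])
X_new = oversample_minority(X_min, 5, rng)
```

Every synthetic point stays inside the convex hull of the minority class, which is why interpolation-based oversampling tends to generalize better than plain duplication.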
Affiliation(s)
- Muhammad Mujahid
- Department of Computer Science, Khwaja Fareed University of Engineering and Information Technology, Rahim Yar Khan 64200, Pakistan
- Amjad Rehman
- Artificial Intelligence & Data Analytics Lab CCIS, Prince Sultan University, Riyadh 11586, Saudi Arabia
- Teg Alam
- Department of Industrial Engineering, College of Engineering, Prince Sattam bin Abdulaziz University, Al Kharj 11942, Saudi Arabia
- Faten S. Alamri
- Department of Mathematical Sciences, College of Science, Princess Nourah Bint Abdulrahman University, P.O.Box 84428, Riyadh 11671, Saudi Arabia
- Suliman Mohamed Fati
- Artificial Intelligence & Data Analytics Lab CCIS, Prince Sultan University, Riyadh 11586, Saudi Arabia
- Tanzila Saba
- Artificial Intelligence & Data Analytics Lab CCIS, Prince Sultan University, Riyadh 11586, Saudi Arabia

18
Raad JD, Chinnam RB, Arslanturk S, Tan S, Jeong JW, Mody S. Unsupervised abnormality detection in neonatal MRI brain scans using deep learning. Sci Rep 2023; 13:11489. PMID: 37460615. PMCID: PMC10352269. DOI: 10.1038/s41598-023-38430-0.
Abstract
Analysis of 3D medical imaging data has been a large focus in machine learning/artificial intelligence, though little work has been done on algorithmic (particularly unsupervised) analysis of neonatal brain MRIs. A myriad of conditions can manifest at an early age, including neonatal encephalopathy (NE), which can result in lifelong physical consequences, so there is a dire need for better biomarkers of NE and other conditions. The objective of this study is to improve the identification of anomalies in, and prognostication from, neonatal MRI brain scans. We introduce a framework designed to support the analysis and assessment of neonatal MRI brain scans, whose results can be used as an aid to neuroradiologists. We explored the efficacy of the framework through iterations of several deep convolutional autoencoder (AE) unsupervised modeling architectures designed to learn the normalcy of neonatal brain structure. We tested this framework on the developing Human Connectome Project (dHCP) dataset of 97 patients that were previously categorized by severity. The framework demonstrated the models' ability to identify and distinguish subtle morphological signatures present in brain structures: normal and abnormal neonatal brain scans can be distinguished with reasonable accuracy, with correct categorization in up to 83% of cases. Most critically, new brain anomalies originally missed during the radiological reading were identified and corroborated by a neuroradiologist. This framework and our modeling approach demonstrate an ability to improve the prognostication of neonatal brain conditions and to localize new anomalies.
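Autoencoder-based anomaly detection of this kind rests on reconstruction error: a model trained only on normal anatomy reconstructs normal inputs well and abnormal ones poorly, so a threshold on the error flags anomalies. A linear stand-in for the paper's convolutional AE, using a one-component PCA projection as the "autoencoder" (all data, names, and the 99th-percentile threshold are synthetic illustrations, not the study's settings):

```python
import numpy as np

rng = np.random.default_rng(1)

# "Normal" training data lies near a 1-D line (y ≈ 2x) in 2-D space.
t = rng.normal(size=(200, 1))
X_train = np.hstack([t, 2 * t]) + 0.05 * rng.normal(size=(200, 2))

# Linear "autoencoder": encode onto the top principal component, decode back.
mu = X_train.mean(axis=0)
_, _, Vt = np.linalg.svd(X_train - mu, full_matrices=False)
W = Vt[:1]                            # 1-D latent basis

def recon_error(x):
    z = (x - mu) @ W.T                # encode
    xr = z @ W + mu                   # decode
    return float(np.sum((x - xr) ** 2))

# Threshold at the 99th percentile of training reconstruction errors.
threshold = np.quantile([recon_error(x) for x in X_train], 0.99)

normal_err  = recon_error(np.array([1.0, 2.0]))   # on the learned manifold
anomaly_err = recon_error(np.array([2.0, -2.0]))  # far off the manifold
```

Deep AEs replace the linear projection with a learned nonlinear manifold, but the flagging rule, reconstruction error above a normal-data quantile, is the same.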
Affiliation(s)
- Jad Dino Raad
- Industrial and Systems Engineering Department, Wayne State University, Detroit, MI, 48201, USA
- Ratna Babu Chinnam
- Industrial and Systems Engineering Department, Wayne State University, Detroit, MI, 48201, USA
- Suzan Arslanturk
- Computer Science Department, Wayne State University, Detroit, MI, 48201, USA
- Sidhartha Tan
- Department of Pediatrics, Wayne State University, Detroit, MI, 48201, USA
- Jeong-Won Jeong
- Department of Pediatrics, Wayne State University, Detroit, MI, 48201, USA
- Swati Mody
- Division of Pediatric Radiology, University of Michigan, Ann Arbor, MI, 48109, USA

19
Terzi R. An Ensemble of Deep Learning Object Detection Models for Anatomical and Pathological Regions in Brain MRI. Diagnostics (Basel) 2023; 13:diagnostics13081494. PMID: 37189595. DOI: 10.3390/diagnostics13081494.
Abstract
This paper proposes ensemble strategies for deep learning object detection models, formed by combining variants of a single model and by combining different models, to enhance anatomical and pathological object detection performance in brain MRI. Using the novel Gazi Brains 2020 dataset, five anatomical parts and one pathological part observable in brain MRI were identified: the region of interest, eye, optic nerves, lateral ventricles, third ventricle, and the whole tumor. First, a comprehensive benchmark of nine state-of-the-art object detection models was carried out to determine their capability in detecting the anatomical and pathological parts. Then, four ensemble strategies were applied to the nine detectors to boost detection performance using the bounding-box fusion technique. Ensembling the variants of an individual model increased anatomical and pathological object detection performance by up to 10% in mean average precision (mAP), and an improvement of up to 18% in class-based average precision (AP) was achieved for the anatomical parts. Similarly, the ensemble of the best different models outperformed the best individual model by 3.3% mAP. Additionally, an up to 7% better FAUC (the area under the TPR vs. FPPI curve) was achieved on the Gazi Brains 2020 dataset and a 2% better FAUC on the BraTS 2020 dataset. The proposed ensemble strategies proved much more efficient at finding anatomical parts with a small number of anatomic objects, such as the optic nerve and third ventricle, and at producing higher TPR values, especially at low FPPI values, compared with the best individual methods.
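Bounding-box fusion merges overlapping detections from several models into a single score-weighted box instead of discarding all but one, as non-maximum suppression would. A greedy sketch of the idea (a simplification of weighted boxes fusion; the IoU threshold, box format, and data are illustrative):

```python
def iou(a, b):
    """Intersection-over-union of two boxes given as (x1, y1, x2, y2)."""
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    union = area(a) + area(b) - inter
    return inter / union if union else 0.0

def fuse(boxes, scores, thr=0.5):
    """Greedily group boxes whose IoU exceeds thr, then replace each group
    with its confidence-weighted average box and mean group score."""
    order = sorted(range(len(boxes)), key=lambda i: -scores[i])
    used, fused = set(), []
    for i in order:
        if i in used:
            continue
        group = [j for j in order if j not in used and iou(boxes[i], boxes[j]) >= thr]
        used.update(group)
        w = sum(scores[j] for j in group)
        fused.append((
            tuple(sum(scores[j] * boxes[j][k] for j in group) / w for k in range(4)),
            w / len(group),
        ))
    return fused

# Two detectors agree on one object; a third detection is elsewhere.
boxes = [(0.0, 0.0, 10.0, 10.0), (1.0, 1.0, 11.0, 11.0), (50.0, 50.0, 60.0, 60.0)]
scores = [0.9, 0.8, 0.7]
fused = fuse(boxes, scores)   # two fused detections remain
```

Because every overlapping detection contributes in proportion to its confidence, fusion tends to tighten box coordinates where models agree, which is the behavior the mAP gains above rely on.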
Affiliation(s)
- Ramazan Terzi
- Department of Big Data and Artificial Intelligence, Digital Transformation Office of the Presidency of Republic of Türkiye, Ankara 06100, Turkey
- Department of Computer Engineering, Amasya University, Amasya 05100, Turkey

20
Vaisband M, Schubert M, Gassner FJ, Geisberger R, Greil R, Zaborsky N, Hasenauer J. Validation of genetic variants from NGS data using deep convolutional neural networks. BMC Bioinformatics 2023; 24:158. PMID: 37081386. PMCID: PMC10116675. DOI: 10.1186/s12859-023-05255-7.
Abstract
Accurate somatic variant calling from next-generation sequencing data is one of the most important tasks in personalised cancer therapy. The sophistication of the available technologies is ever-increasing, yet manual candidate refinement is still a necessary step in state-of-the-art processing pipelines. This limits reproducibility and introduces a bottleneck with respect to scalability. We demonstrate that the validation of genetic variants can be improved using a machine learning approach resting on a convolutional neural network, trained using existing human annotation. In contrast to existing approaches, we introduce a way in which contextual data from sequencing tracks can be included in the automated assessment. A rigorous evaluation shows that the resulting model is robust and performs on par with trained researchers following a published standard operating procedure.
Affiliation(s)
- Marc Vaisband
- Department of Internal Medicine III with Haematology, Medical Oncology, Haemostaseology, Infectiology and Rheumatology, Oncologic Center; Salzburg Cancer Research Institute - Laboratory for Immunological and Molecular Cancer Research (SCRI-LIMCR); Cancer Cluster Salzburg, Paracelsus Medical University, Salzburg, Austria.
- Life and Medical Sciences Institute, University of Bonn, Bonn, Germany.
- Maria Schubert
- Department of Internal Medicine III with Haematology, Medical Oncology, Haemostaseology, Infectiology and Rheumatology, Oncologic Center; Salzburg Cancer Research Institute - Laboratory for Immunological and Molecular Cancer Research (SCRI-LIMCR); Cancer Cluster Salzburg, Paracelsus Medical University, Salzburg, Austria
- Franz Josef Gassner
- Department of Internal Medicine III with Haematology, Medical Oncology, Haemostaseology, Infectiology and Rheumatology, Oncologic Center; Salzburg Cancer Research Institute - Laboratory for Immunological and Molecular Cancer Research (SCRI-LIMCR); Cancer Cluster Salzburg, Paracelsus Medical University, Salzburg, Austria
- Roland Geisberger
- Department of Internal Medicine III with Haematology, Medical Oncology, Haemostaseology, Infectiology and Rheumatology, Oncologic Center; Salzburg Cancer Research Institute - Laboratory for Immunological and Molecular Cancer Research (SCRI-LIMCR); Cancer Cluster Salzburg, Paracelsus Medical University, Salzburg, Austria
- Richard Greil
- Department of Internal Medicine III with Haematology, Medical Oncology, Haemostaseology, Infectiology and Rheumatology, Oncologic Center; Salzburg Cancer Research Institute - Laboratory for Immunological and Molecular Cancer Research (SCRI-LIMCR); Cancer Cluster Salzburg, Paracelsus Medical University, Salzburg, Austria
- Nadja Zaborsky
- Department of Internal Medicine III with Haematology, Medical Oncology, Haemostaseology, Infectiology and Rheumatology, Oncologic Center; Salzburg Cancer Research Institute - Laboratory for Immunological and Molecular Cancer Research (SCRI-LIMCR); Cancer Cluster Salzburg, Paracelsus Medical University, Salzburg, Austria
- Jan Hasenauer
- Life and Medical Sciences Institute, University of Bonn, Bonn, Germany
21
Rajapakse D, Meckstroth J, Jantz DT, Camarda KV, Yao Z, Leonard KC. Deconvoluting Kinetic Rate Constants of Catalytic Substrates from Scanning Electrochemical Approach Curves with Artificial Neural Networks. ACS Meas Sci Au 2023; 3:103-112. [PMID: 37090257 PMCID: PMC10120032 DOI: 10.1021/acsmeasuresciau.2c00056]
Abstract
Extracting information from experimental measurements in the chemical sciences typically requires curve fitting, deconvolution, and/or solving the governing partial differential equations via numerical (e.g., finite element analysis) or analytical methods. However, using numerical or analytical methods for high-throughput data analysis typically requires significant postprocessing efforts. Here, we show that deep learning artificial neural networks can be a very effective tool for extracting information from experimental data. As an example, reactivity and topography information from scanning electrochemical microscopy (SECM) approach curves are highly convoluted. This study utilized multilayer perceptrons and convolutional neural networks trained on simulated SECM data to extract kinetic rate constants of catalytic substrates. Our key findings were that multilayer perceptron models performed very well when the experimental data were close to the ideal conditions with which the model was trained. However, convolutional neural networks, which analyze images as opposed to direct data, were able to accurately predict the kinetic rate constant of Fe-doped nickel (oxy)hydroxide catalyst at different applied potentials even though the experimental approach curves were not ideal. Due to the speed at which machine learning models can analyze data, we believe this study shows that artificial neural networks could become powerful tools in high-throughput data analysis.
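The mapping the networks learn, from a whole approach curve to a kinetic rate constant, can be mimicked with a toy nearest-neighbour lookup over simulated curves. The curve model below is an illustrative stand-in, not the real SECM feedback physics, and all names are hypothetical:

```python
import numpy as np

d = np.linspace(0.1, 2.0, 50)       # normalized tip-substrate distances

def approach_curve(k):
    """Toy surrogate for a simulated SECM approach curve parameterized by
    the substrate rate constant k (illustrative shape only)."""
    return (1.0 + k * np.exp(-d)) / (1.0 + k)

ks = np.linspace(0.1, 10.0, 200)                     # "training" grid of rate constants
library = np.stack([approach_curve(k) for k in ks])  # simulated training curves

def predict_k(measured):
    """Stand-in for the trained regressor: match the measured curve to the
    closest simulated one and return its rate constant."""
    i = np.argmin(((library - measured) ** 2).sum(axis=1))
    return ks[i]
```

In the study, multilayer perceptrons and CNNs replace this brittle lookup, which is what lets the prediction generalize to noisy, non-ideal experimental curves.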
Affiliation(s)
- Dinuka Rajapakse
- Department of Chemical & Petroleum Engineering, The University of Kansas, 4132 Learned Hall, 1530 West 15th Street, Lawrence, Kansas 66045, United States
- Center for Environmentally Beneficial Catalysis, The University of Kansas, LSRL Building A, Suite 110, 1501 Wakarusa Drive, Lawrence, Kansas 66047, United States
- Josh Meckstroth
- Department of Chemical & Petroleum Engineering, The University of Kansas, 4132 Learned Hall, 1530 West 15th Street, Lawrence, Kansas 66045, United States
- Dylan T. Jantz
- Department of Chemical & Petroleum Engineering, The University of Kansas, 4132 Learned Hall, 1530 West 15th Street, Lawrence, Kansas 66045, United States
- Center for Environmentally Beneficial Catalysis, The University of Kansas, LSRL Building A, Suite 110, 1501 Wakarusa Drive, Lawrence, Kansas 66047, United States
- Kyle Vincent Camarda
- Department of Chemical & Petroleum Engineering, The University of Kansas, 4132 Learned Hall, 1530 West 15th Street, Lawrence, Kansas 66045, United States
- Zijun Yao
- Department of Electrical Engineering & Computer Science, The University of Kansas, 2001 Eaton Hall, 1520 West 15th Street, Lawrence, Kansas 66045, United States
- Kevin C. Leonard
- Department of Chemical & Petroleum Engineering, The University of Kansas, 4132 Learned Hall, 1530 West 15th Street, Lawrence, Kansas 66045, United States
- Center for Environmentally Beneficial Catalysis, The University of Kansas, LSRL Building A, Suite 110, 1501 Wakarusa Drive, Lawrence, Kansas 66047, United States
22
Chang HH, Yeh SJ, Chiang MC, Hsieh ST. RU-Net: skull stripping in rat brain MR images after ischemic stroke with rat U-Net. BMC Med Imaging 2023; 23:44. [PMID: 36973775 PMCID: PMC10045128 DOI: 10.1186/s12880-023-00994-8]
Abstract
BACKGROUND Experimental ischemic stroke models play a fundamental role in interpreting the mechanism of cerebral ischemia and appraising the development of pathological extent. An accurate and automatic skull stripping tool for rat brain image volumes with magnetic resonance imaging (MRI) is crucial in experimental stroke analysis. Due to the deficiency of reliable rat brain segmentation methods and motivated by the demand for preclinical studies, this paper develops a new skull stripping algorithm to extract the rat brain region in MR images after stroke, named Rat U-Net (RU-Net). METHODS Based on a U-shaped deep learning architecture, the proposed framework integrates batch normalization with the residual network to achieve efficient end-to-end segmentation. A pooling index transmission mechanism between the encoder and decoder is exploited to reinforce the spatial correlation. Two modalities, diffusion-weighted imaging (DWI) and T2-weighted MRI (T2WI), corresponding to two in-house datasets of 55 subjects each, were employed to evaluate the performance of the proposed RU-Net. RESULTS Extensive experiments indicated high segmentation accuracy across diverse rat brain MR images. Our rat skull stripping network outperformed several state-of-the-art methods and achieved the highest average Dice scores of 98.04% (p < 0.001) and 97.67% (p < 0.001) on the DWI and T2WI datasets, respectively. CONCLUSION The proposed RU-Net is believed to hold potential for advancing preclinical stroke investigation and providing an efficient tool for pathological rat brain image extraction, where accurate segmentation of the rat brain region is fundamental.
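The Dice scores reported above are computed from binary mask overlap; a minimal, generic sketch (not tied to the RU-Net code):

```python
import numpy as np

def dice_score(pred, truth):
    """Sørensen-Dice similarity coefficient between two binary masks:
    2|A ∩ B| / (|A| + |B|), ranging from 0 (no overlap) to 1 (identical)."""
    pred = np.asarray(pred, dtype=bool)
    truth = np.asarray(truth, dtype=bool)
    inter = np.logical_and(pred, truth).sum()
    denom = pred.sum() + truth.sum()
    return 2.0 * inter / denom if denom else 1.0  # two empty masks agree
```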
Affiliation(s)
- Herng-Hua Chang
- Computational Biomedical Engineering Laboratory (CBEL), Department of Engineering Science and Ocean Engineering, National Taiwan University, No. 1 Sec. 4 Roosevelt Road, Daan, Taipei, 10617, Taiwan.
- Shin-Joe Yeh
- Department of Neurology and Stroke Center, National Taiwan University Hospital, Taipei, 10002, Taiwan
- Ming-Chang Chiang
- Department of Biomedical Engineering, National Yang Ming Chiao Tung University, Taipei, 11221, Taiwan
- Sung-Tsang Hsieh
- Department of Neurology and Stroke Center, National Taiwan University Hospital, Taipei, 10002, Taiwan
- Graduate Institute of Anatomy and Cell Biology, College of Medicine, National Taiwan University, Taipei, 10051, Taiwan
- Graduate Institute of Clinical Medicine, College of Medicine, National Taiwan University, Taipei, 10051, Taiwan
- Graduate Institute of Brain and Mind Sciences, College of Medicine, National Taiwan University, Taipei, 10051, Taiwan
- Center of Precision Medicine, College of Medicine, National Taiwan University, Taipei, 10051, Taiwan
23
Kang H, Witanto JN, Pratama K, Lee D, Choi KS, Choi SH, Kim KM, Kim MS, Kim JW, Kim YH, Park SJ, Park CK. Fully Automated MRI Segmentation and Volumetric Measurement of Intracranial Meningioma Using Deep Learning. J Magn Reson Imaging 2023; 57:871-881. [PMID: 35775971 DOI: 10.1002/jmri.28332]
Abstract
BACKGROUND Accurate and rapid measurement of the MRI volume of meningiomas is essential in clinical practice to determine the growth rate of the tumor. Imperfect automation and disappointing performance on small meningiomas limit the use of previous automated volumetric tools in routine clinical practice. PURPOSE To develop and validate a computational model for fully automated meningioma segmentation and volume measurement on contrast-enhanced MRI scans using deep learning. STUDY TYPE Retrospective. POPULATION A total of 659 intracranial meningioma patients (median age, 59.0 years; interquartile range: 53.0-66.0 years), including 554 women and 105 men. FIELD STRENGTH/SEQUENCE 1.0 T, 1.5 T, and 3.0 T; three-dimensional, T1-weighted gradient-echo imaging with contrast enhancement. ASSESSMENT The tumors were manually segmented by two neurosurgeons, H.K. and C.-K.P., with 10 and 26 years of clinical experience, respectively, for use as the ground truth. Deep learning models based on U-Net and nnU-Net were trained on 459 subjects and tested on 100 patients from a single institution (internal validation set [IVS]) and 100 patients from 24 other institutions (external validation set [EVS]), respectively. The performance of each model was evaluated with the Sørensen-Dice similarity coefficient (DSC) against the ground truth. STATISTICAL TESTS According to the normality of the data distribution verified by the Shapiro-Wilk test, variables with three or more categories were compared by the Kruskal-Wallis test with Dunn's post hoc analysis. RESULTS A two-dimensional (2D) nnU-Net showed the highest median DSCs of 0.922 and 0.893 for the IVS and EVS, respectively. The nnU-Nets achieved superior performance in meningioma segmentation over the U-Nets. The DSCs of the 2D nnU-Net for small meningiomas (less than 1 cm3) were 0.769 and 0.780 on the IVS and EVS, respectively. DATA CONCLUSION A fully automated and accurate volumetric measurement tool for meningioma was developed using nnU-Net, with clinically applicable performance even for small meningiomas. EVIDENCE LEVEL 3. TECHNICAL EFFICACY Stage 2.
Affiliation(s)
- Ho Kang
- Department of Neurosurgery, Seoul National University Hospital, Seoul National University College of Medicine, Seoul, Korea
- Kevin Pratama
- Research and Science Division, Research and Development Center, MEDICALIP Co. Ltd, Seoul, Korea
- Doohee Lee
- Research and Science Division, Research and Development Center, MEDICALIP Co. Ltd, Seoul, Korea
- Kyu Sung Choi
- Department of Radiology, Seoul National University Hospital, Seoul National University College of Medicine, Seoul, Korea
- Seung Hong Choi
- Department of Radiology, Seoul National University Hospital, Seoul National University College of Medicine, Seoul, Korea
- Kyung-Min Kim
- Department of Neurosurgery, Seoul National University Hospital, Seoul National University College of Medicine, Seoul, Korea
- Min-Sung Kim
- Department of Neurosurgery, Seoul National University Hospital, Seoul National University College of Medicine, Seoul, Korea
- Jin Wook Kim
- Department of Neurosurgery, Seoul National University Hospital, Seoul National University College of Medicine, Seoul, Korea
- Yong Hwy Kim
- Department of Neurosurgery, Seoul National University Hospital, Seoul National University College of Medicine, Seoul, Korea
- Sang Joon Park
- Research and Science Division, Research and Development Center, MEDICALIP Co. Ltd, Seoul, Korea
- Department of Radiology, Seoul National University Hospital, Seoul National University College of Medicine, Seoul, Korea
- Chul-Kee Park
- Department of Neurosurgery, Seoul National University Hospital, Seoul National University College of Medicine, Seoul, Korea
24
Automatic segmentation of trabecular and cortical compartments in HR-pQCT images using an embedding-predicting U-Net and morphological post-processing. Sci Rep 2023; 13:252. [PMID: 36604534 PMCID: PMC9816121 DOI: 10.1038/s41598-022-27350-0]
Abstract
High-resolution peripheral quantitative computed tomography (HR-pQCT) is an emerging in vivo imaging modality for quantification of bone microarchitecture. However, extraction of quantitative microarchitectural parameters from HR-pQCT images requires an accurate segmentation of the image. The current standard protocol using semi-automated contouring for HR-pQCT image segmentation is laborious, introduces inter-operator biases into research data, and poses a barrier to streamlined clinical implementation. In this work, we propose and validate a fully automated algorithm for segmentation of HR-pQCT radius and tibia images. A multi-slice 2D U-Net produces initial segmentation predictions, which are post-processed via a sequence of traditional morphological image filters. The U-Net was trained on a large dataset containing 1822 images from 896 unique participants. Predicted segmentations were compared to reference segmentations on a disjoint dataset containing 386 images from 190 unique participants, and 156 pairs of repeated images were used to compare the precision of the novel and current protocols. The agreement of morphological parameters obtained using the predicted segmentation relative to the reference standard was excellent (R2 between 0.938 and > 0.999). Precision was significantly improved for several outputs, most notably cortical porosity. This novel and robust algorithm for automated segmentation will increase the feasibility of using HR-pQCT in research and clinical settings.
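The morphological post-processing stage can be illustrated with a 4-neighbourhood binary closing built from shift-based dilation and erosion. This numpy sketch (which wraps at image borders via np.roll, unlike a production filter with proper border handling) shows the kind of hole-filling applied to raw network predictions; it is a conceptual stand-in, not the paper's filter sequence:

```python
import numpy as np

def dilate(mask):
    """Binary dilation over the 4-neighbourhood (note: np.roll wraps at edges)."""
    out = mask.copy()
    for ax in (0, 1):
        for s in (1, -1):
            out |= np.roll(mask, s, axis=ax)
    return out

def erode(mask):
    """Binary erosion over the 4-neighbourhood."""
    out = mask.copy()
    for ax in (0, 1):
        for s in (1, -1):
            out &= np.roll(mask, s, axis=ax)
    return out

def close_mask(mask, iterations=1):
    """Morphological closing (dilation then erosion): fills small holes
    and gaps in a segmentation mask without growing its overall extent."""
    for _ in range(iterations):
        mask = dilate(mask)
    for _ in range(iterations):
        mask = erode(mask)
    return mask
```

A one-pixel hole inside a solid region is filled by one closing pass, while the region's outer boundary is left essentially unchanged.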
25
Zhang H, Xi Q, Zhang F, Li Q, Jiao Z, Ni X. Application of Deep Learning in Cancer Prognosis Prediction Model. Technol Cancer Res Treat 2023; 22:15330338231199287. [PMID: 37709267 PMCID: PMC10503281 DOI: 10.1177/15330338231199287]
Abstract
As an important branch of artificial intelligence and machine learning, deep learning (DL) has been widely used in various aspects of cancer auxiliary diagnosis, among which cancer prognosis is the most important part. High-accuracy cancer prognosis is beneficial to the clinical management of patients with cancer. Compared with other methods, DL models can significantly improve the accuracy of prediction. Therefore, this article is a systematic review of the latest research on DL in cancer prognosis prediction. First, the data types, construction process, and performance evaluation indices of DL models are introduced in detail. Then, the current mainstream baseline DL cancer prognosis prediction models, namely, deep neural networks, convolutional neural networks, deep belief networks, deep residual networks, and vision transformers, are discussed, including their network architectures, latest applications in cancer prognosis, and respective characteristics. Next, some key factors that affect the predictive performance of these models and common performance enhancement techniques are listed. Finally, the limitations of DL cancer prognosis prediction models in clinical practice are summarized, and future research directions are outlined. This article provides relevant researchers with a comprehensive understanding of DL cancer prognostic models and is expected to promote further research progress in cancer prognosis prediction.
Affiliation(s)
- Heng Zhang
- Department of Radiotherapy Oncology, Changzhou No.2 People's Hospital, Nanjing Medical University, Changzhou, China
- Jiangsu Province Engineering Research Center of Medical Physics, Changzhou, China
- Medical Physics Research Center, Nanjing Medical University, Changzhou, China
- Key Laboratory of Medical Physics in Changzhou, Changzhou, China
- Qianyi Xi
- Department of Radiotherapy Oncology, Changzhou No.2 People's Hospital, Nanjing Medical University, Changzhou, China
- Jiangsu Province Engineering Research Center of Medical Physics, Changzhou, China
- Medical Physics Research Center, Nanjing Medical University, Changzhou, China
- Key Laboratory of Medical Physics in Changzhou, Changzhou, China
- School of Microelectronics and Control Engineering, Changzhou University, Changzhou, China
- Fan Zhang
- Department of Radiotherapy Oncology, Changzhou No.2 People's Hospital, Nanjing Medical University, Changzhou, China
- Jiangsu Province Engineering Research Center of Medical Physics, Changzhou, China
- Medical Physics Research Center, Nanjing Medical University, Changzhou, China
- Key Laboratory of Medical Physics in Changzhou, Changzhou, China
- School of Microelectronics and Control Engineering, Changzhou University, Changzhou, China
- Qixuan Li
- Department of Radiotherapy Oncology, Changzhou No.2 People's Hospital, Nanjing Medical University, Changzhou, China
- Jiangsu Province Engineering Research Center of Medical Physics, Changzhou, China
- Medical Physics Research Center, Nanjing Medical University, Changzhou, China
- Key Laboratory of Medical Physics in Changzhou, Changzhou, China
- School of Microelectronics and Control Engineering, Changzhou University, Changzhou, China
- Zhuqing Jiao
- School of Microelectronics and Control Engineering, Changzhou University, Changzhou, China
- Xinye Ni
- Department of Radiotherapy Oncology, Changzhou No.2 People's Hospital, Nanjing Medical University, Changzhou, China
- Jiangsu Province Engineering Research Center of Medical Physics, Changzhou, China
- Medical Physics Research Center, Nanjing Medical University, Changzhou, China
- Key Laboratory of Medical Physics in Changzhou, Changzhou, China
26
Zhou Z, Huber NR, Inoue A, McCollough CH, Yu L. Multislice input for 2D and 3D residual convolutional neural network noise reduction in CT. J Med Imaging (Bellingham) 2023; 10:014003. [PMID: 36743869 PMCID: PMC9888548 DOI: 10.1117/1.jmi.10.1.014003]
Abstract
Purpose Deep convolutional neural network (CNN)-based methods are increasingly used for reducing image noise in computed tomography (CT). Current attempts at CNN denoising are based on 2D or 3D CNN models with a single-slice or multiple-slice input. Our work aims to investigate whether the multiple-slice input improves the denoising performance compared with the single-slice input, and whether a 3D network architecture is better than a 2D version at utilizing the multislice input. Approach Two categories of network architectures can be used for the multislice input. First, multislice images can be stacked channel-wise as the multichannel input to a 2D CNN model. Second, multislice images can be employed as the 3D volumetric input to a 3D CNN model, in which 3D convolution layers are adopted. We make performance comparisons among 2D CNN models with one, three, and seven input slices and two versions of 3D CNN models with seven input slices and one or three output slices. Evaluation was performed on liver CT images using three quantitative metrics with full-dose images as reference. Visual assessment was made by an experienced radiologist. Results When the number of input channels of the 2D CNN model increases from one to three to seven, a trend of improved performance was observed. Comparing the three models with the seven-slice input, the 3D CNN model with a one-slice output outperforms the other models in terms of noise texture and homogeneity in liver parenchyma as well as subjective visualization of vessels. Conclusions We conclude that the multislice input is an effective strategy for improving the performance of 2D deep CNN denoising models. The pure 3D CNN model tends to perform better than the other models in terms of continuity across axial slices, but the difference was not significant compared with the 2D CNN model with the same number of input slices.
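The channel-wise stacking described in the Approach can be sketched as gathering neighbouring axial slices around a target slice; the edge-replication boundary handling here is an assumption for illustration, not necessarily what the authors did:

```python
import numpy as np

def multislice_stack(volume, center, n_slices):
    """Gather n_slices neighbouring axial slices around `center` and stack
    them as the channels of a 2D CNN input, clamping indices at the volume
    boundaries (edge replication)."""
    half = n_slices // 2
    idx = np.clip(np.arange(center - half, center + half + 1),
                  0, volume.shape[0] - 1)
    return volume[idx]            # shape: (n_slices, H, W)

vol = np.random.rand(40, 64, 64).astype(np.float32)   # toy CT volume
x = multislice_stack(vol, center=0, n_slices=7)
```

The same seven-slice block, kept as a (7, H, W) sub-volume rather than seven channels, would instead feed the 3D-convolution variant.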
Affiliation(s)
- Zhongxing Zhou
- Mayo Clinic, Department of Radiology, Rochester, Minnesota, United States
- Nathan R. Huber
- Mayo Clinic, Department of Radiology, Rochester, Minnesota, United States
- Akitoshi Inoue
- Mayo Clinic, Department of Radiology, Rochester, Minnesota, United States
- Lifeng Yu
- Mayo Clinic, Department of Radiology, Rochester, Minnesota, United States
27
Clipped DeepControl: Deep neural network two-dimensional pulse design with an amplitude constraint layer. Artif Intell Med 2023; 135:102460. [PMID: 36628795 DOI: 10.1016/j.artmed.2022.102460]
Abstract
Advanced radio-frequency pulse design used in magnetic resonance imaging has recently been demonstrated with deep learning of (convolutional) neural networks and reinforcement learning. For two-dimensionally selective radio-frequency pulses, the (convolutional) neural network pulse prediction time (a few milliseconds) was more than three orders of magnitude faster than the conventional optimal control computation. Through supervised training, the network pulses were capable of compensating for scan-subject-dependent inhomogeneities of the B0 and B1+ fields. Unfortunately, the network produced a small percentage of pulse amplitude overshoots in the test subset, even though the optimal control pulses used in training were fully constrained. Here, we have extended the convolutional neural network with a custom-made clipping layer that completely eliminates the risk of pulse amplitude overshoots, while preserving the ability to compensate for the inhomogeneous field conditions.
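The amplitude-constraint idea can be sketched as a final clamp applied to the predicted pulse waveform. The hard np.clip below is a minimal stand-in for the paper's custom clipping layer inside the network, and the variable names are hypothetical:

```python
import numpy as np

def clipped_output_layer(pulse, amp_max):
    """Final network layer enforcing |amplitude| <= amp_max exactly, so no
    predicted pulse can overshoot the hardware amplitude limit, regardless
    of what the preceding layers output."""
    return np.clip(pulse, -amp_max, amp_max)

raw = np.array([0.3, 1.4, -2.0, 0.9])         # hypothetical unconstrained outputs
safe = clipped_output_layer(raw, amp_max=1.0)  # no value exceeds ±1.0
```

Because the clamp is part of the model itself rather than a post-processing step, the constraint also shapes the gradients seen during training.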
28
Balaha HM, Hassan AES. A variate brain tumor segmentation, optimization, and recognition framework. Artif Intell Rev 2022. [DOI: 10.1007/s10462-022-10337-8]
29
Liu Z, Zhu Y, Zhang L, Jiang W, Liu Y, Tang Q, Cai X, Li J, Wang L, Tao C, Yin X, Li X, Hou S, Jiang D, Liu K, Zhou X, Zhang H, Liu M, Fan C, Tian Y. Structural and functional imaging of brains. Sci China Chem 2022; 66:324-366. [PMID: 36536633 PMCID: PMC9753096 DOI: 10.1007/s11426-022-1408-5]
Abstract
Analyzing the complex structures and functions of the brain is key to understanding physiological and pathological processes. Although neuronal morphology and the local distribution of neurons/blood vessels in the brain are known, resolving the subcellular structures of cells remains challenging, especially in the live brain. In addition, the complicated functions of the brain involve numerous functional molecules, but the concentrations, distributions, and interactions of these molecules in the brain are still poorly understood. In this review, frontier techniques available for multiscale structural imaging, from organelles to the whole brain, are first overviewed, including magnetic resonance imaging (MRI), computed tomography (CT), positron emission tomography (PET), serial-section electron microscopy (ssEM), light microscopy (LM), and synchrotron-based X-ray microscopy (XRM). Notably, XRM for three-dimensional (3D) imaging of large-scale brain tissue with high resolution and fast imaging speed is highlighted. Additionally, the development of elegant methods for the acquisition of brain functions from electrical/chemical signals in the brain is outlined. In particular, new electrophysiology technologies for neural recordings at the single-neuron level and in the brain are summarized. We also focus on the construction of electrochemical probes based on a dual-recognition strategy and surface/interface chemistry for the determination of chemical species in the brain with high selectivity and long-term stability, as well as electrochemophysiological microarrays for the simultaneous recording of electrochemical and electrophysiological signals in the brain. Moreover, the recent development of brain MRI probes with high contrast-to-noise ratio (CNR) and sensitivity based on hyperpolarized techniques and multi-nuclear chemistry is introduced. Furthermore, multiple optical probes and instruments, especially optophysiological Raman probes and fiber Raman photometry, for imaging and biosensing in the live brain are emphasized. Finally, a brief perspective on existing challenges and further research development is provided.
Affiliation(s)
- Zhichao Liu
- Shanghai Key Laboratory of Green Chemistry and Chemical Processes, School of Chemistry and Molecular Engineering, East China Normal University, Shanghai, 200241 China
- Ying Zhu
- Interdisciplinary Research Center, Shanghai Synchrotron Radiation Facility, Zhangjiang Laboratory, Shanghai Advanced Research Institute, Chinese Academy of Sciences, University of Chinese Academy of Sciences, Shanghai, 201210 China
- Liming Zhang
- Shanghai Key Laboratory of Green Chemistry and Chemical Processes, School of Chemistry and Molecular Engineering, East China Normal University, Shanghai, 200241 China
- Weiping Jiang
- State Key Laboratory of Magnetic Resonance and Atomic and Molecular Physics, Chinese Academy of Sciences, Wuhan National Laboratory for Optoelectronics, Wuhan, 430071 China
- Yawei Liu
- State Key Laboratory of Rare Earth Resource Utilization, Changchun Institute of Applied Chemistry, Chinese Academy of Sciences, Changchun, 130022 China
- Qiaowei Tang
- Interdisciplinary Research Center, Shanghai Synchrotron Radiation Facility, Zhangjiang Laboratory, Shanghai Advanced Research Institute, Chinese Academy of Sciences, University of Chinese Academy of Sciences, Shanghai, 201210 China
- Xiaoqing Cai
- Interdisciplinary Research Center, Shanghai Synchrotron Radiation Facility, Zhangjiang Laboratory, Shanghai Advanced Research Institute, Chinese Academy of Sciences, University of Chinese Academy of Sciences, Shanghai, 201210 China
- Jiang Li
- Interdisciplinary Research Center, Shanghai Synchrotron Radiation Facility, Zhangjiang Laboratory, Shanghai Advanced Research Institute, Chinese Academy of Sciences, University of Chinese Academy of Sciences, Shanghai, 201210 China
- Lihua Wang
- Interdisciplinary Research Center, Shanghai Synchrotron Radiation Facility, Zhangjiang Laboratory, Shanghai Advanced Research Institute, Chinese Academy of Sciences, University of Chinese Academy of Sciences, Shanghai, 201210 China
- Changlu Tao
- Interdisciplinary Center for Brain Information, Brain Cognition and Brain Disease Institute, Shenzhen-Hong Kong Institute of Brain Science-Shenzhen Fundamental Research Institutions, Faculty of Life and Health Sciences, Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen, 518055 China
- Xiaowei Li
- School of Biomedical Engineering, Shanghai Jiao Tong University, Shanghai, 200240 China
- Shangguo Hou
- Institute of Systems and Physical Biology, Shenzhen Bay Laboratory, Shenzhen, 518055 China
- Dawei Jiang
- Department of Nuclear Medicine, Union Hospital, Tongji Medical College, Huazhong University of Science and Technology, Wuhan, 430022 China
- Kai Liu
- Department of Chemistry, Tsinghua University, Beijing, 100084 China
- Xin Zhou
- State Key Laboratory of Magnetic Resonance and Atomic and Molecular Physics, Chinese Academy of Sciences, Wuhan National Laboratory for Optoelectronics, Wuhan, 430071 China
- Hongjie Zhang
- State Key Laboratory of Rare Earth Resource Utilization, Changchun Institute of Applied Chemistry, Chinese Academy of Sciences, Changchun, 130022 China
- Department of Chemistry, Tsinghua University, Beijing, 100084 China
- Maili Liu
- State Key Laboratory of Magnetic Resonance and Atomic and Molecular Physics, Chinese Academy of Sciences, Wuhan National Laboratory for Optoelectronics, Wuhan, 430071 China
- Chunhai Fan
- School of Chemistry and Chemical Engineering, Frontiers Science Center for Transformative Molecules, Institute of Translational Medicine, Shanghai Jiao Tong University, Shanghai, 200240 China
- Yang Tian
- Shanghai Key Laboratory of Green Chemistry and Chemical Processes, School of Chemistry and Molecular Engineering, East China Normal University, Shanghai, 200241 China
30
Nazari-Farsani S, Yu Y, Duarte Armindo R, Lansberg M, Liebeskind DS, Albers G, Christensen S, Levin CS, Zaharchuk G. Predicting final ischemic stroke lesions from initial diffusion-weighted images using a deep neural network. Neuroimage Clin 2022; 37:103278. [PMID: 36481696 PMCID: PMC9727698 DOI: 10.1016/j.nicl.2022.103278]
Abstract
BACKGROUND For prognosis of stroke, measurement of the diffusion-perfusion mismatch is a common practice for estimating tissue at risk of infarction in the absence of timely reperfusion. However, perfusion-weighted imaging (PWI) adds time and expense to the acute stroke imaging workup. We explored whether a deep convolutional neural network (DCNN) model trained with diffusion-weighted imaging obtained at admission could predict final infarct volume and location in acute stroke patients. METHODS In 445 patients, we trained and validated an attention-gated (AG) DCNN to predict final infarcts as delineated on follow-up studies obtained 3 to 7 days after stroke. The input channels consisted of MR diffusion-weighted imaging (DWI), apparent diffusion coefficient (ADC) maps, and thresholded ADC maps with values less than 620 × 10⁻⁶ mm²/s, while the output was a voxel-by-voxel probability map of tissue infarction. We evaluated performance of the model using the area under the receiver-operator characteristic curve (AUC), the Dice similarity coefficient (DSC), absolute lesion volume error, and the concordance correlation coefficient (ρc) of the predicted and true infarct volumes. RESULTS The model obtained a median AUC of 0.91 (IQR: 0.84-0.96). After thresholding at an infarction probability of 0.5, the median sensitivity and specificity were 0.60 (IQR: 0.16-0.84) and 0.97 (IQR: 0.93-0.99), respectively, while the median DSC and absolute volume error were 0.50 (IQR: 0.17-0.66) and 27 ml (IQR: 7-60 ml), respectively. The model's predicted lesion volumes showed high correlation with ground truth volumes (ρc = 0.73, p < 0.01). CONCLUSION An AG-DCNN using diffusion information alone upon admission was able to predict infarct volumes at 3-7 days after stroke onset with comparable accuracy to models that consider both DWI and PWI. This may enable treatment decisions to be made with shorter stroke imaging protocols.
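Of the reported metrics, the concordance correlation coefficient is the least standard; Lin's ρc can be computed directly from the paired volumes (a generic sketch, not the authors' code):

```python
import numpy as np

def concordance_cc(pred, true):
    """Lin's concordance correlation coefficient between predicted and
    ground-truth lesion volumes: 1.0 means perfect agreement on the
    identity line. Unlike Pearson's r, it penalizes bias and scale shifts."""
    pred, true = np.asarray(pred, float), np.asarray(true, float)
    cov = ((pred - pred.mean()) * (true - true.mean())).mean()
    return 2.0 * cov / (pred.var() + true.var() + (pred.mean() - true.mean()) ** 2)
```

A systematic over-prediction of every volume by the same offset leaves Pearson's r at 1 but pulls ρc below 1, which is why it is the right agreement metric here.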
Affiliation(s)
- Yannan Yu
- Department of Radiology, Stanford University, CA, USA; Internal Medicine Department, University of Massachusetts Memorial Medical Center, University of Massachusetts, Boston, USA
- Rui Duarte Armindo
- Department of Radiology, Stanford University, CA, USA; Department of Neuroradiology, Hospital Beatriz Ângelo, Loures, Lisbon, Portugal
- David S Liebeskind
- Department of Neurology, University of California Los Angeles, Los Angeles, CA, USA
- Craig S Levin
- Department of Radiology, Stanford University, CA, USA
|
31
|
How feasible is end-to-end deep learning for clinical neuroimaging? J Neuroradiol 2022; 49:399-400. [DOI: 10.1016/j.neurad.2022.10.002] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/09/2022] [Accepted: 10/09/2022] [Indexed: 11/06/2022]
|
32
|
Hu Q, Wei Y, Li X, Wang C, Li J, Wang Y. EA-Net: Edge-aware network for brain structure segmentation via decoupled high and low frequency features. Comput Biol Med 2022; 150:106139. [PMID: 36209556 DOI: 10.1016/j.compbiomed.2022.106139] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/09/2022] [Revised: 09/08/2022] [Accepted: 09/18/2022] [Indexed: 11/21/2022]
Abstract
Automatic brain structure segmentation in Magnetic Resonance Imaging (MRI) plays an important role in the diagnosis of various neuropsychiatric diseases. However, most existing methods yield unsatisfactory results due to blurred boundaries and complex structures. Improving segmentation requires the model to be explicit about the spatial localization and shape appearance of targets, which correspond to the low-frequency content features and the high-frequency edge features, respectively. Therefore, in this paper, to extract rich edge and content feature representations, we focus on the composition of the feature and utilize a frequency decoupling (FD) block to separate the low-frequency and high-frequency parts of the feature. Further, a novel edge-aware network (EA-Net) is proposed for jointly learning to segment brain structures and detect object edges. First, an encoder-decoder sub-network is utilized to obtain multi-level information from the input MRI, which is then sent to the FD block to complete the frequency separation. Then, we use different mechanisms to optimize the low-frequency and high-frequency features. Finally, these two parts are fused to generate the final prediction. In particular, we extract the content mask and the edge mask from the optimized feature with different supervisions, which forces the network to learn the boundary features of the object. Extensive experiments are performed on two public brain MRI T1 scan datasets (the IBSR dataset and the MALC dataset) to evaluate the effectiveness of the proposed algorithm. The experiments show that the EA-Net achieves the best performance compared with the state-of-the-art methods, improving the segmentation DSC score by up to 1.31% compared with the U-Net model and its variants. Moreover, we evaluate the EA-Net under different noise disturbances, and the results demonstrate the robustness and superiority of our method on low-quality, noisy MRI.
Code is available at https://github.com/huqian999/EA-Net.
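The frequency-decoupling idea behind the FD block, separating a smooth low-frequency content part from a high-frequency edge residual, can be illustrated on a 1-D signal. This is only a sketch using a moving-average low-pass filter, not the paper's 2-D feature-map implementation:

```python
def decouple_frequencies(signal, k=3):
    """Split a 1-D signal into a low-frequency part (moving average over a
    window of width k) and a high-frequency residual, so that low + high
    reconstructs the original signal exactly."""
    half = k // 2
    low = []
    for i in range(len(signal)):
        window = signal[max(0, i - half): i + half + 1]
        low.append(sum(window) / len(window))
    high = [s - l for s, l in zip(signal, low)]
    return low, high
```

In the FD analogy, `low` carries spatial-localization content and `high` carries edge information; a segmentation head and an edge head would each be supervised on one part.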
Affiliation(s)
- Qian Hu
- College of Information Science and Engineering, Northeastern University, Shenyang 110819, China.
- Ying Wei
- College of Information Science and Engineering, Northeastern University, Shenyang 110819, China; Information Technology R&D Innovation Center of Peking University, Shaoxing, China; Changsha Hisense Intelligent System Research Institute Co., Ltd., China.
- Xiang Li
- College of Information Science and Engineering, Northeastern University, Shenyang 110819, China.
- Chuyuan Wang
- College of Information Science and Engineering, Northeastern University, Shenyang 110819, China.
- Jiaguang Li
- College of Information Science and Engineering, Northeastern University, Shenyang 110819, China.
- Yuefeng Wang
- College of Information Science and Engineering, Northeastern University, Shenyang 110819, China.
|
33
|
Gruber N, Galijasevic M, Regodic M, Grams AE, Siedentopf C, Steiger R, Hammerl M, Haltmeier M, Gizewski ER, Janjic T. A deep learning pipeline for the automated segmentation of posterior limb of internal capsule in preterm neonates. Artif Intell Med 2022; 132:102384. [DOI: 10.1016/j.artmed.2022.102384] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/15/2021] [Revised: 08/08/2022] [Accepted: 08/19/2022] [Indexed: 11/15/2022]
|
34
|
Zhao Y, Kang Z, Chen L, Guo Y, Mu Q, Wang S, Zhao B, Feng C. Quality classification of kiwifruit under different storage conditions based on deep learning and hyperspectral imaging technology. JOURNAL OF FOOD MEASUREMENT AND CHARACTERIZATION 2022. [DOI: 10.1007/s11694-022-01554-4] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 10/14/2022]
|
35
|
Region Convolutional Neural Network for Brain Tumor Segmentation. COMPUTATIONAL INTELLIGENCE AND NEUROSCIENCE 2022; 2022:8335255. [PMID: 36124122 PMCID: PMC9482475 DOI: 10.1155/2022/8335255] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 05/25/2022] [Revised: 07/20/2022] [Accepted: 07/22/2022] [Indexed: 11/17/2022]
Abstract
Gliomas are often difficult to detect and delineate with typical manual segmentation approaches because of their wide variation in size, shape, and appearance. Furthermore, manual annotation of tumor tissue under the close supervision of a human professional is both time-consuming and exhausting. Automated segmentation and survival-rate prediction models promise faster and more accurate diagnosis and treatment. In this article, a segmentation model is designed using a region convolutional neural network (RCNN) that enables automatic prognosis of brain tumors from MRI. The study adopts a U-Net encoder for capturing features during training. Geometric features are extracted for the estimation of tumor size, since the shape, location, and size of a tumor are significant factors in the estimation of prognosis. Experiments were conducted to test the efficacy of the model, and the simulation results show that the proposed method achieves a lower error rate and higher accuracy than competing methods.
|
36
|
Koteswara Rao Chinnam S, Sistla V, Krishna Kishore Kolli V. Multimodal attention-gated cascaded U-Net model for automatic brain tumor detection and segmentation. Biomed Signal Process Control 2022. [DOI: 10.1016/j.bspc.2022.103907] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/28/2022]
|
37
|
Fully Convolutional Neural Network for Improved Brain Segmentation. ARABIAN JOURNAL FOR SCIENCE AND ENGINEERING 2022. [DOI: 10.1007/s13369-022-07169-7] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 11/02/2022]
|
38
|
Deep learning models and traditional automated techniques for brain tumor segmentation in MRI: a review. Artif Intell Rev 2022. [DOI: 10.1007/s10462-022-10245-x] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/02/2022]
|
39
|
Park S, Ahn S, Kim JY, Kim J, Han HJ, Hwang D, Park J, Park HS, Park S, Kim GM, Sohn J, Jeong J, Song YU, Lee H, Kim SI. Blood Test for Breast Cancer Screening through the Detection of Tumor-Associated Circulating Transcripts. Int J Mol Sci 2022; 23:ijms23169140. [PMID: 36012405 PMCID: PMC9409068 DOI: 10.3390/ijms23169140] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/07/2022] [Revised: 08/05/2022] [Accepted: 08/10/2022] [Indexed: 11/29/2022] Open
Abstract
Liquid biopsy is emerging for early screening and treatment monitoring at each cancer stage. However, current blood-based diagnostic tools in breast cancer have not been sufficient to capture the patient-derived molecular features of aggressive tumors individually. Herein, we aimed to develop a blood test for the early detection of breast cancer with cost-effective and high-throughput considerations in order to combat the challenges associated with precision oncology using mRNA-based tests. We prospectively evaluated 719 blood samples from 404 breast cancer patients and 315 healthy controls, and identified 10 mRNA transcripts whose expression is increased in the blood of breast cancer patients relative to healthy controls. Modeling of the tumor-associated circulating transcripts (TACTs) is performed by means of four different machine learning techniques (artificial neural network (ANN), decision tree (DT), logistic regression (LR), and support vector machine (SVM)). The ANN model had superior sensitivity (90.2%), specificity (80.0%), and accuracy (85.7%) compared with the other three models. Relative to the value of 90.2% achieved using the TACT assay on our test set, the sensitivity values of other conventional assays (mammogram, CEA, and CA 15-3) were comparable or much lower, at 89%, 7%, and 5%, respectively. The sensitivity, specificity, and accuracy of the TACT assay were appreciably consistent across the different breast cancer stages, suggesting its potential for early diagnosis and for predicting poor outcomes. Our study potentially paves the way for a simple and accurate diagnostic and prognostic tool for liquid biopsy.
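The sensitivity, specificity, and accuracy figures quoted for the TACT models follow from a standard confusion-matrix computation; a minimal sketch, assuming binary labels with 1 = cancer:

```python
def screening_metrics(y_true, y_pred):
    """Sensitivity, specificity, and accuracy from binary labels and predictions."""
    pairs = list(zip(y_true, y_pred))
    tp = sum(t == 1 and p == 1 for t, p in pairs)  # cancers correctly flagged
    tn = sum(t == 0 and p == 0 for t, p in pairs)  # controls correctly cleared
    fp = sum(t == 0 and p == 1 for t, p in pairs)
    fn = sum(t == 1 and p == 0 for t, p in pairs)
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    accuracy = (tp + tn) / len(pairs)
    return sensitivity, specificity, accuracy
```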
Affiliation(s)
- Sunyoung Park
- Department of Biomedical Laboratory Science, College of Health Sciences, Yonsei University, Wonju 26493, Korea
- Sungwoo Ahn
- Department of Biomedical Laboratory Science, College of Health Sciences, Yonsei University, Wonju 26493, Korea
- Jee Ye Kim
- Department of Surgery, Yonsei University College of Medicine, Seoul 03722, Korea
- Jungho Kim
- Department of Clinical Laboratory Science, College of Health Sciences, Catholic University of Pusan, Busan 46252, Korea
- Hyun Ju Han
- Avison Biomedical Research Center, Yonsei University College of Medicine, Seoul 03722, Korea
- Dasom Hwang
- Department of Biomedical Laboratory Science, College of Health Sciences, Yonsei University, Wonju 26493, Korea
- Jungmin Park
- Department of Surgery, Yonsei University College of Medicine, Seoul 03722, Korea
- Hyung Seok Park
- Department of Surgery, Yonsei University College of Medicine, Seoul 03722, Korea
- Seho Park
- Department of Surgery, Yonsei University College of Medicine, Seoul 03722, Korea
- Gun Min Kim
- Department of Medical Oncology, Yonsei University College of Medicine, Seoul 03722, Korea
- Joohyuk Sohn
- Department of Medical Oncology, Yonsei University College of Medicine, Seoul 03722, Korea
- Joon Jeong
- Department of Surgery, Gangnam Severance Hospital, Yonsei University College of Medicine, Seoul 06273, Korea
- Yong Uk Song
- Division of Business Administration, College of Government and Business, Yonsei University, Wonju 26493, Korea
- Hyeyoung Lee
- Department of Biomedical Laboratory Science, College of Health Sciences, Yonsei University, Wonju 26493, Korea
- Correspondence: (H.L.); (S.I.K.)
- Seung Il Kim
- Department of Surgery, Yonsei University College of Medicine, Seoul 03722, Korea
- Correspondence: (H.L.); (S.I.K.)
|
40
|
A novel automatic approach for glioma segmentation. Neural Comput Appl 2022. [DOI: 10.1007/s00521-022-07583-w] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/17/2022]
|
41
|
Abstract
Brain tumor segmentation is one of the most challenging problems in medical image analysis. The goal of brain tumor segmentation is to generate accurate delineation of brain tumor regions. In recent years, deep learning methods have shown promising performance in solving various computer vision problems, such as image classification, object detection and semantic segmentation. A number of deep learning based methods have been applied to brain tumor segmentation and achieved promising results. Considering the remarkable breakthroughs made by state-of-the-art technologies, we provide this survey with a comprehensive study of recently developed deep learning based brain tumor segmentation techniques. More than 150 scientific papers are selected and discussed in this survey, extensively covering technical aspects such as network architecture design, segmentation under imbalanced conditions, and multi-modality processes. We also provide insightful discussions for future development directions.
|
42
|
Brain Tumor Segmentation Using Deep Capsule Network and Latent-Dynamic Conditional Random Fields. J Imaging 2022; 8:jimaging8070190. [PMID: 35877634 PMCID: PMC9322984 DOI: 10.3390/jimaging8070190] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/26/2022] [Revised: 06/30/2022] [Accepted: 07/05/2022] [Indexed: 02/04/2023] Open
Abstract
Because of the large variability in brain tumors, automating segmentation remains a difficult task. We propose an automated method to segment brain tumors by integrating the deep capsule network (CapsNet) and the latent-dynamic conditional random field (LDCRF). The method consists of three main processes: pre-processing, segmentation, and post-processing. In pre-processing, the N4ITK process corrects each MR image's bias field before normalizing the intensity. After that, image patches are used to train CapsNet during the segmentation process. Then, with the CapsNet parameters fixed, we employ image slices from an axial view to learn the LDCRF-CapsNet. Finally, we use a simple thresholding method to correct the labels of some pixels and remove small 3D-connected regions from the segmentation outcomes. We trained and evaluated our method on the BRATS 2015 and BRATS 2021 datasets and found that it is competitive with, and in some settings outperforms, state-of-the-art methods under comparable conditions.
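The post-processing step of removing small connected regions can be sketched with a flood fill; shown here in 2-D with 4-connectivity for brevity, whereas the paper operates on 3-D-connected regions:

```python
def remove_small_regions(mask, min_size):
    """Zero out connected foreground regions smaller than min_size voxels
    (4-connectivity, 2-D nested lists of 0/1)."""
    h, w = len(mask), len(mask[0])
    seen = [[False] * w for _ in range(h)]
    out = [row[:] for row in mask]
    for i in range(h):
        for j in range(w):
            if mask[i][j] and not seen[i][j]:
                # Depth-first flood fill to collect one connected region.
                stack, region = [(i, j)], []
                seen[i][j] = True
                while stack:
                    y, x = stack.pop()
                    region.append((y, x))
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if 0 <= ny < h and 0 <= nx < w and mask[ny][nx] and not seen[ny][nx]:
                            seen[ny][nx] = True
                            stack.append((ny, nx))
                if len(region) < min_size:
                    for y, x in region:
                        out[y][x] = 0
    return out
```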
|
43
|
Bayesian Depth-Wise Convolutional Neural Network Design for Brain Tumor MRI Classification. Diagnostics (Basel) 2022; 12:diagnostics12071657. [PMID: 35885560 PMCID: PMC9320360 DOI: 10.3390/diagnostics12071657] [Citation(s) in RCA: 5] [Impact Index Per Article: 2.5] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/06/2022] [Revised: 06/30/2022] [Accepted: 07/04/2022] [Indexed: 11/17/2022] Open
Abstract
In recent years, deep learning has been applied to many medical imaging fields, including medical image processing, bioinformatics, medical image classification, segmentation, and prediction tasks. Computer-aided detection systems have been widely adopted in brain tumor classification, prediction, detection, diagnosis, and segmentation tasks. This work proposes a novel model that combines the Bayesian algorithm with depth-wise separable convolutions for accurate classification and prediction of brain tumors. We combine Bayesian modeling and Convolutional Neural Network learning for accurate prediction, providing radiologists with the means to classify Magnetic Resonance Imaging (MRI) images rapidly. After thorough experimental analysis, our proposed model outperforms other state-of-the-art models in terms of validation accuracy, training accuracy, F1-score, recall, and precision, achieving 99.03% training accuracy, 94.32% validation accuracy, and F1-score, precision, and recall values of 0.94, 0.95, and 0.94, respectively. To the best of our knowledge, this is the first neural network model that combines the hybrid effect of depth-wise separable convolutions with the Bayesian algorithm using encoders.
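The efficiency of depth-wise separable convolutions comes from factorizing a standard k × k convolution into a per-channel depth-wise filter plus a 1 × 1 point-wise mix. A weight-count sketch makes the saving concrete (biases ignored; the layer shapes in the usage note are illustrative, not taken from the paper):

```python
def conv_weights(k, c_in, c_out):
    """Weight counts for a standard k x k convolution versus its
    depth-wise separable factorization (biases omitted)."""
    standard = k * k * c_in * c_out
    separable = k * k * c_in + c_in * c_out  # depth-wise + point-wise
    return standard, separable
```

For k = 3 with 64 input and 128 output channels this gives 73,728 versus 8,768 weights, roughly an 8.4× reduction.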
|
44
|
Li X, Wei Y, Wang C, Hu Q, Liu C. Contextual-wise discriminative feature extraction and robust network learning for subcortical structure segmentation. APPL INTELL 2022. [DOI: 10.1007/s10489-022-03848-y] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/02/2022]
|
45
|
Baxter JSH, Jannin P. Combining simple interactivity and machine learning: a separable deep learning approach to subthalamic nucleus localization and segmentation in MRI for deep brain stimulation surgical planning. J Med Imaging (Bellingham) 2022; 9:045001. [PMID: 35836671 DOI: 10.1117/1.jmi.9.4.045001] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/30/2021] [Accepted: 06/16/2022] [Indexed: 11/14/2022] Open
Abstract
Purpose: Deep brain stimulation (DBS) is an interventional treatment for some neurological and neurodegenerative diseases. For example, in Parkinson's disease, DBS electrodes are positioned at particular locations within the basal ganglia to alleviate the patient's motor symptoms. These interventions depend greatly on a preoperative planning stage in which potential targets and electrode trajectories are identified in a preoperative MRI. Due to the small size and low contrast of targets such as the subthalamic nucleus (STN), their segmentation is a difficult task. Machine learning provides a potential avenue for development, but it has difficulty in segmenting such small structures in volumetric images due to additional problems such as segmentation class imbalance. Approach: We present a two-stage separable learning workflow for STN segmentation consisting of a localization step that detects the STN and crops the image to a small region and a segmentation step that delineates the structure within that region. The goal of this decoupling is to improve accuracy and efficiency and to provide an intermediate representation that can be easily corrected by a clinical user. This correction capability was then studied through a human-computer interaction experiment with seven novice participants and one expert neurosurgeon. Results: Our two-step segmentation significantly outperforms the comparative registration-based method currently used in clinic and approaches the fundamental limit on variability due to the image resolution. In addition, the human-computer interaction experiment shows that the additional interaction mechanism allowed by separating STN segmentation into two steps significantly improves the users' ability to correct errors and further improves performance. Conclusions: Our method shows that separable learning not only is feasible for fully automatic STN segmentation but also leads to improved interactivity that can ease its translation into clinical use.
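The localization-to-segmentation hand-off described above amounts to cropping a fixed-size patch around the detected structure before running the fine segmentation model. A 2-D sketch with boundary clamping follows; the actual pipeline works on 3-D MRI volumes, and `crop_around` is an illustrative name, not the authors' API:

```python
def crop_around(image, center, size):
    """Crop a size x size patch centred on `center`, clamping the window so it
    stays inside the image bounds (image: 2-D nested lists)."""
    h, w = len(image), len(image[0])
    cy, cx = center
    # Clamp the top-left corner so the full patch fits in the image.
    y0 = min(max(cy - size // 2, 0), h - size)
    x0 = min(max(cx - size // 2, 0), w - size)
    return [row[x0:x0 + size] for row in image[y0:y0 + size]]
```

A clinician correcting the intermediate localization simply moves `center`, and the downstream segmentation re-runs on the new patch.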
Affiliation(s)
- John S H Baxter
- Université de Rennes 1, Laboratoire Traitement du Signal et de l'Image (INSERM UMR 1099), Rennes, France
- Pierre Jannin
- Université de Rennes 1, Laboratoire Traitement du Signal et de l'Image (INSERM UMR 1099), Rennes, France
|
46
|
Kawahara K, Ishikawa R, Sasano S, Shibata N, Ikuhara Y. Atomic-Resolution STEM Image Denoising by Total Variation Regularization. Microscopy (Oxf) 2022; 71:302-310. [PMID: 35713554 DOI: 10.1093/jmicro/dfac032] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/02/2021] [Revised: 05/31/2022] [Accepted: 06/16/2022] [Indexed: 11/13/2022] Open
Abstract
Atomic-resolution electron microscopy imaging of solid-state materials is a powerful method for structural analysis. Scanning transmission electron microscopy (STEM) is one of the most actively used techniques to directly observe atoms in materials. However, some materials are easily damaged by electron beam irradiation, and only noisy images are available when the electron dose is decreased to avoid beam damage. Therefore, a denoising process is necessary for precise structural analysis in low-dose STEM. In this study, we propose a total variation (TV) denoising algorithm to remove quantum noise from a STEM image. We defined an entropy of the STEM image that corresponds to the image contrast in order to determine a hyperparameter, and we found that there is a hyperparameter value that maximizes this entropy. We acquired an atomic-resolution STEM image of CaF2 viewed along the [001] direction and applied TV denoising. The atomic columns of Ca and F are clearly visualized after TV denoising, and the atomic positions of Ca and F are determined with errors of ±1 pm and ±4 pm, respectively.
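A minimal flavor of TV denoising can be given in 1-D by gradient descent on a fidelity term plus a smoothed TV penalty. This is an illustrative sketch under those assumptions, not the authors' algorithm or their entropy-based hyperparameter selection:

```python
import math

def tv_denoise_1d(f, lam=0.5, step=0.1, iters=300, eps=1e-8):
    """Approximately minimize 0.5*||u - f||^2 + lam * TV(u) for a 1-D signal f,
    with |.| in the TV term smoothed as sqrt(x^2 + eps) to keep it differentiable."""
    u = list(f)
    n = len(u)
    for _ in range(iters):
        grad = [u[i] - f[i] for i in range(n)]  # fidelity gradient
        for i in range(n - 1):
            d = u[i + 1] - u[i]
            g = d / math.sqrt(d * d + eps)      # d/du of smoothed |u[i+1]-u[i]|
            grad[i] -= lam * g
            grad[i + 1] += lam * g
        u = [u[i] - step * grad[i] for i in range(n)]
    return u
```

Larger `lam` flattens more noise at the cost of contrast, which is exactly the trade-off the paper's entropy criterion is designed to balance.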
Affiliation(s)
- Kazuaki Kawahara
- Institute of Engineering Innovation, The University of Tokyo, Bunkyo, Tokyo 113-8656, Japan
- Ryo Ishikawa
- Institute of Engineering Innovation, The University of Tokyo, Bunkyo, Tokyo 113-8656, Japan
- Shun Sasano
- Institute of Engineering Innovation, The University of Tokyo, Bunkyo, Tokyo 113-8656, Japan
- Naoya Shibata
- Institute of Engineering Innovation, The University of Tokyo, Bunkyo, Tokyo 113-8656, Japan; Nanostructures Research Laboratory, Japan Fine Ceramics Center, Atsuta, Nagoya 456-8587, Japan
- Yuichi Ikuhara
- Institute of Engineering Innovation, The University of Tokyo, Bunkyo, Tokyo 113-8656, Japan; Nanostructures Research Laboratory, Japan Fine Ceramics Center, Atsuta, Nagoya 456-8587, Japan
|
47
|
SAR Image Fusion Classification Based on the Decision-Level Combination of Multi-Band Information. REMOTE SENSING 2022. [DOI: 10.3390/rs14092243] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 12/10/2022]
Abstract
Synthetic aperture radar (SAR) is an active coherent microwave remote sensing system. SAR systems working in different bands produce different imaging results for the same area, giving each band different advantages and limitations for SAR image classification. Therefore, to fuse the classification information of SAR images across different bands, an SAR image fusion classification method based on the decision-level combination of multi-band information is proposed in this paper. Within the proposed method, Dempster–Shafer evidence theory is introduced to model the uncertainty of the classification result of each pixel and to combine the classification results of multiple band SAR images. A convolutional neural network is used to classify single-band SAR images. The belief entropy of each pixel is calculated to measure the uncertainty of the single-band classification and to generate the basic probability assignment (BPA) function. The idea of term frequency-inverse document frequency from natural language processing is combined with the conflict coefficient to obtain the weight of different bands. Meanwhile, the neighborhood classification of each pixel in the different band sensors is considered to obtain the total weight of each band sensor, generate a weighted-average BPA, and obtain the final ground-object classification result after fusion. The validity of the proposed method is verified in two groups of multi-band SAR image classification experiments, and the proposed method effectively improves the accuracy compared to the modified average approach.
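The decision-level fusion step rests on Dempster's rule of combination. A dictionary-based sketch over a hypothetical two-class frame follows; the paper additionally weights the BPAs by belief entropy and TF-IDF-derived band weights before combining:

```python
def dempster_combine(m1, m2):
    """Combine two basic probability assignments, given as dicts mapping
    frozenset hypotheses to mass, using Dempster's rule. Returns the
    normalized combined BPA and the conflict coefficient K."""
    combined = {}
    conflict = 0.0
    for a, ma in m1.items():
        for b, mb in m2.items():
            inter = a & b
            if inter:
                combined[inter] = combined.get(inter, 0.0) + ma * mb
            else:
                conflict += ma * mb  # mass assigned to contradictory pairs
    norm = 1.0 - conflict
    return {k: v / norm for k, v in combined.items()}, conflict
```

When two band classifiers both lean toward the same land-cover class, the fused mass on that class grows; a high conflict coefficient K flags disagreement between bands.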
|
48
|
Balwant M. A Review on Convolutional Neural Networks for Brain Tumor Segmentation: Methods, Datasets, Libraries, and Future Directions. Ing Rech Biomed 2022. [DOI: 10.1016/j.irbm.2022.05.002] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/28/2022]
|
49
|
Wu L, Hu S, Liu C. MR brain segmentation based on DE-ResUnet combining texture features and background knowledge. Biomed Signal Process Control 2022. [DOI: 10.1016/j.bspc.2022.103541] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/21/2022]
|
50
|
Deep Transfer Learning for Automatic Prediction of Hemorrhagic Stroke on CT Images. COMPUTATIONAL AND MATHEMATICAL METHODS IN MEDICINE 2022; 2022:3560507. [PMID: 35469220 PMCID: PMC9034929 DOI: 10.1155/2022/3560507] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 03/03/2022] [Accepted: 03/29/2022] [Indexed: 11/21/2022]
Abstract
Intracerebral hemorrhage (ICH) is the most common type of hemorrhagic stroke, occurring when a weakened blood vessel in brain tissue ruptures. It is a serious medical emergency that needs immediate treatment. Large numbers of noncontrast computed tomography (NCCT) brain images are analyzed manually by radiologists to diagnose hemorrhagic stroke, which is a difficult and time-consuming process. In this study, we propose an automated deep transfer learning method that combines ResNet-50 and a dense layer for accurate prediction of intracranial hemorrhage on NCCT brain images. A total of 1164 NCCT brain images were collected from 62 patients with hemorrhagic stroke at the Kalinga Institute of Medical Science, Bhubaneswar, and used to evaluate the model. The proposed model takes individual CT images as input and classifies them as hemorrhagic or normal. This deep transfer learning approach reached 99.6% accuracy, 99.7% specificity, and 99.4% sensitivity, better results than those of ResNet-50 alone. The deep transfer learning model has clear advantages for automatic diagnosis of hemorrhagic stroke and the potential to be used as a clinical decision support tool to assist radiologists in stroke diagnosis.
|