1. Fisch L, Zumdick S, Barkhau C, Emden D, Ernsting J, Leenings R, Sarink K, Winter NR, Risse B, Dannlowski U, Hahn T. deepbet: Fast brain extraction of T1-weighted MRI using Convolutional Neural Networks. Comput Biol Med 2024;179:108845. PMID: 39002314. DOI: 10.1016/j.compbiomed.2024.108845.
Abstract
BACKGROUND Brain extraction in magnetic resonance imaging (MRI) data is an important segmentation step in many neuroimaging preprocessing pipelines. Image segmentation is one of the research fields in which deep learning has had the biggest impact in recent years. Consequently, traditional brain extraction methods are now being replaced by deep learning-based methods. METHOD Here, we used a unique dataset compilation comprising 7837 T1-weighted (T1w) MR images from 191 different OpenNeuro datasets in combination with advanced deep learning methods to build a fast, high-precision brain extraction tool called deepbet. RESULTS deepbet sets a new state of the art during cross-dataset validation with a median Dice score (DSC) of 99.0 on unseen datasets, outperforming the current best-performing deep learning (DSC=97.9) and classic (DSC=96.5) methods. While current methods are more sensitive to outliers, deepbet achieves a Dice score of >97.4 across all 7837 images from 191 different datasets. This robustness was additionally tested on 5 external datasets, which included challenging clinical MR images. During visual inspection of the output for which each method produced its lowest Dice score, major errors were found for all of the tested tools except deepbet. Finally, deepbet uses a compute-efficient variant of the UNet architecture, which accelerates brain extraction by a factor of ≈10 compared to current methods, enabling one image to be processed in ≈2 s on low-level hardware. CONCLUSIONS In conclusion, deepbet demonstrates superior performance and reliability in brain extraction across a wide range of T1w MR images of adults, outperforming existing top tools. Its high minimal Dice score and minimal objective errors, even in challenging conditions, validate deepbet as a highly dependable tool for accurate brain extraction. deepbet can be conveniently installed via "pip install deepbet" and is publicly accessible at https://github.com/wwu-mmll/deepbet.
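The Dice score (DSC) used throughout these comparisons is the standard overlap metric between a predicted and a reference brain mask. A minimal sketch of its computation over sets of foreground voxel coordinates (illustrative only; `dice_score` is a hypothetical helper, not part of the deepbet package):

```python
def dice_score(pred, truth):
    """Dice similarity coefficient between two binary masks,
    each given as a collection of foreground voxel coordinates."""
    pred, truth = set(pred), set(truth)
    if not pred and not truth:
        return 1.0  # both masks empty: treat as perfect agreement
    return 2.0 * len(pred & truth) / (len(pred) + len(truth))

# Toy example: the predicted mask overlaps the reference in 3 voxels
pred = [(0, 1), (0, 2), (1, 1), (1, 2)]
truth = [(0, 1), (0, 2), (1, 1)]
print(round(dice_score(pred, truth), 3))  # 2*3 / (4+3) ≈ 0.857
```

On real volumes the same formula is applied to the boolean voxel arrays; the set formulation above just keeps the arithmetic visible.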
Affiliation(s)
- Lukas Fisch: University of Münster, Institute for Translational Psychiatry, Münster, Germany
- Stefan Zumdick: University of Münster, Institute for Translational Psychiatry, Münster, Germany
- Carlotta Barkhau: University of Münster, Institute for Translational Psychiatry, Münster, Germany
- Daniel Emden: University of Münster, Institute for Translational Psychiatry, Münster, Germany
- Jan Ernsting: University of Münster, Institute for Translational Psychiatry, Münster, Germany; Department of Mathematics and Computer Science, University of Münster, Münster, Germany
- Ramona Leenings: University of Münster, Institute for Translational Psychiatry, Münster, Germany
- Kelvin Sarink: University of Münster, Institute for Translational Psychiatry, Münster, Germany
- Nils R Winter: University of Münster, Institute for Translational Psychiatry, Münster, Germany
- Benjamin Risse: Department of Mathematics and Computer Science, University of Münster, Münster, Germany
- Udo Dannlowski: University of Münster, Institute for Translational Psychiatry, Münster, Germany
- Tim Hahn: University of Münster, Institute for Translational Psychiatry, Münster, Germany
2. Faghihpirayesh R, Karimi D, Erdoğmuş D, Gholipour A. Fetal-BET: Brain Extraction Tool for Fetal MRI. IEEE Open Journal of Engineering in Medicine and Biology 2024;5:551-562. PMID: 39157057. PMCID: PMC11329220. DOI: 10.1109/ojemb.2024.3426969.
Abstract
Goal: In this study, we address the critical challenge of fetal brain extraction from MRI sequences. Fetal MRI has played a crucial role in prenatal neurodevelopmental studies and in advancing our knowledge of fetal brain development in utero. Fetal brain extraction is a necessary first step in most computational fetal brain MRI pipelines. However, it poses significant challenges due to 1) non-standard fetal head positioning, 2) fetal movements during examination, and 3) the vastly heterogeneous appearance of the developing fetal brain and the neighboring fetal and maternal anatomy across gestation and across sequences and scanning conditions. Developing a machine learning method to effectively address this task requires a large, richly labeled dataset that has not previously been available. Currently, there is no method for accurate fetal brain extraction across the various fetal MRI sequences. Methods: In this work, we first built a large annotated dataset of approximately 72,000 2D fetal brain MRI images. Our dataset covers the three common MRI sequences, including T2-weighted, diffusion-weighted, and functional MRI acquired with different scanners, and includes images of both normal and pathological brains. Using this dataset, we developed and validated deep learning methods that exploit the power of U-Net-style architectures, the attention mechanism, feature learning across multiple MRI modalities, and data augmentation for fast, accurate, and generalizable automatic fetal brain extraction. Results: Evaluations on independent test data, including data from other centers, show that our method achieves accurate brain extraction on heterogeneous test data acquired with different scanners, on pathological brains, and at various gestational stages. Conclusions: By leveraging rich information from diverse multi-modality fetal MRI data, our proposed deep learning solution enables precise delineation of the fetal brain on various fetal MRI sequences. The robustness of our deep learning model underscores its potential utility for fetal brain imaging.
Affiliation(s)
- Razieh Faghihpirayesh: Electrical and Computer Engineering Department, Northeastern University, Boston, MA 02115, USA; Radiology Department, Boston Children's Hospital and Harvard Medical School, Boston, MA 02115, USA
- Davood Karimi: Radiology Department, Boston Children's Hospital and Harvard Medical School, Boston, MA 02115, USA
- Deniz Erdoğmuş: Electrical and Computer Engineering Department, Northeastern University, Boston, MA 02115, USA
- Ali Gholipour: Radiology Department, Boston Children's Hospital and Harvard Medical School, Boston, MA 02115, USA
3. Valverde S, Coll L, Valencia L, Clèrigues A, Oliver A, Vilanova JC, Ramió-Torrentà L, Rovira À, Lladó X. Assessing the Accuracy and Reproducibility of PARIETAL: A Deep Learning Brain Extraction Algorithm. J Magn Reson Imaging 2024;59:1991-2000. PMID: 34137113. DOI: 10.1002/jmri.27776.
Abstract
BACKGROUND Manual brain extraction from magnetic resonance (MR) images is time-consuming and prone to intra- and inter-rater variability. Several automated approaches have been developed to alleviate these constraints, including deep learning pipelines. However, these methods tend to perform worse on unseen magnetic resonance imaging (MRI) scanner vendors and different imaging protocols. PURPOSE To present and evaluate for clinical use PARIETAL, a pre-trained deep learning brain extraction method. We compare its reproducibility in a scan/rescan analysis and its robustness across scanners of different manufacturers. STUDY TYPE Retrospective. POPULATION Twenty-one subjects (12 women), age range 22-48 years, scanned on three different MRI machines with scan/rescan acquisitions on each. FIELD STRENGTH/SEQUENCE T1-weighted images acquired on a 3-T Siemens scanner with a magnetization-prepared rapid gradient-echo sequence and on two 1.5-T scanners, Philips and GE, with spin-echo and spoiled gradient-recalled (SPGR) sequences, respectively. ASSESSMENT Analysis of the intracranial cavity volumes obtained for each subject on the three different scanners and the scan/rescan acquisitions. STATISTICAL TESTS Parametric permutation tests of the differences in volumes to rank and statistically evaluate the performance of PARIETAL compared to state-of-the-art methods. RESULTS The mean absolute intracranial volume differences obtained by PARIETAL in the scan/rescan analysis were 1.88 mL, 3.91 mL, and 4.71 mL for the Siemens, GE, and Philips scanners, respectively. PARIETAL was the best-ranked method on the Siemens and GE scanners, dropping to Rank 2 on the Philips images. Intracranial volume differences for the same subject between scanners were 5.46 mL, 27.16 mL, and 30.44 mL for the GE/Philips, Siemens/Philips, and Siemens/GE comparisons, respectively. The permutation tests revealed that PARIETAL was always in Rank 1, obtaining the most similar volumetric results between scanners. DATA CONCLUSION PARIETAL accurately segments the brain and generalizes to images acquired at different sites without the need for retraining or fine-tuning. PARIETAL is publicly available. LEVEL OF EVIDENCE: 2. TECHNICAL EFFICACY STAGE: 2.
Affiliation(s)
- Sergi Valverde: Research Institute of Computer Vision and Robotics, University of Girona, Girona, Spain
- Llucia Coll: Research Institute of Computer Vision and Robotics, University of Girona, Girona, Spain
- Liliana Valencia: Research Institute of Computer Vision and Robotics, University of Girona, Girona, Spain
- Albert Clèrigues: Research Institute of Computer Vision and Robotics, University of Girona, Girona, Spain
- Arnau Oliver: Research Institute of Computer Vision and Robotics, University of Girona, Girona, Spain; REEM, Red Española de Esclerosis Múltiple
- Lluís Ramió-Torrentà: REEM, Red Española de Esclerosis Múltiple; Multiple Sclerosis and Neuroimmunology Unit, Neurology Department, Dr. Josep Trueta University Hospital, Institut d'Investigació Biomèdica, Girona, Spain; Medical Sciences Department, University of Girona, Girona, Spain
- Àlex Rovira: Magnetic Resonance Unit, Department of Radiology, Vall d'Hebron University Hospital, Barcelona, Spain
- Xavier Lladó: Research Institute of Computer Vision and Robotics, University of Girona, Girona, Spain; REEM, Red Española de Esclerosis Múltiple
4. Chen JV, Li Y, Tang F, Chaudhari G, Lew C, Lee A, Rauschecker AM, Haskell-Mendoza AP, Wu YW, Calabrese E. Automated neonatal nnU-Net brain MRI extractor trained on a large multi-institutional dataset. Sci Rep 2024;14:4583. PMID: 38403673. PMCID: PMC10894871. DOI: 10.1038/s41598-024-54436-8.
Abstract
Brain extraction, or skull-stripping, is an essential data preprocessing step for machine learning approaches to brain MRI analysis. Currently, there are limited extraction algorithms for the neonatal brain. We aim to adapt an established deep learning algorithm for the automatic segmentation of neonatal brains from MRI, trained on a large multi-institutional dataset for improved generalizability across image acquisition parameters. Our model, ANUBEX (automated neonatal nnU-Net brain MRI extractor), was designed using nnU-Net and was trained on a subset of participants (N = 433) enrolled in the High-dose Erythropoietin for Asphyxia and Encephalopathy (HEAL) study. We compared the performance of our model to five publicly available models (BET, BSE, CABINET, iBEATv2, ROBEX) spanning conventional and machine learning methods, tested on two public datasets (NIH and dHCP). We found that our model had a significantly higher Dice score on the aggregate of both datasets and comparable or significantly higher Dice scores on the NIH (low-resolution) and dHCP (high-resolution) datasets independently. ANUBEX performs similarly when trained on sequence-agnostic or motion-degraded MRI, but slightly worse on preterm brains. In conclusion, we created an automatic deep learning-based neonatal brain extraction algorithm that demonstrates accurate performance with both high- and low-resolution MRIs with fast computation time.
Affiliation(s)
- Joshua V Chen: Department of Radiology, University of California San Francisco, San Francisco, CA, USA
- Yi Li: Department of Radiology, University of California San Francisco, San Francisco, CA, USA
- Felicia Tang: Department of Radiology, University of California San Francisco, San Francisco, CA, USA
- Gunvant Chaudhari: Department of Radiology, University of California San Francisco, San Francisco, CA, USA
- Christopher Lew: Division of Neuroradiology, Department of Radiology, Duke University Medical Center, Durham, NC 27710, USA
- Amanda Lee: Division of Neuroradiology, Department of Radiology, Duke University Medical Center, Durham, NC 27710, USA
- Andreas M Rauschecker: Department of Radiology, University of California San Francisco, San Francisco, CA, USA
- Yvonne W Wu: University of California San Francisco Weill Institute for Neurosciences, San Francisco, CA, USA
- Evan Calabrese: Division of Neuroradiology, Department of Radiology, Duke University Medical Center, Durham, NC 27710, USA; Duke Center for Artificial Intelligence in Radiology (DAIR), Durham, NC, USA
5. Park JS, Fadnavis S, Garyfallidis E. Multi-scale V-net architecture with deep feature CRF layers for brain extraction. Communications Medicine 2024;4:29. PMID: 38396078. PMCID: PMC10891085. DOI: 10.1038/s43856-024-00452-8.
Abstract
BACKGROUND Brain extraction is a computational necessity for researchers using brain imaging data. However, the complex structure of the interfaces between the brain, meninges, and skull has not allowed a highly robust solution to emerge. While previous methods have used machine learning with structural and geometric priors in mind, the development of Deep Learning (DL) has brought an increase in neural network-based methods. Most proposed DL models focus on improving the training data, despite the clear gap between research groups in the amount and quality of accessible training data. METHODS We propose an architecture we call Efficient V-net with Additional Conditional Random Field Layers (EVAC+). EVAC+ has 3 major characteristics: (1) a smart augmentation strategy that improves training efficiency, (2) a unique way of using a Conditional Random Fields Recurrent Layer that improves accuracy, and (3) an additional loss function that fine-tunes the segmentation output. We compare our model to state-of-the-art non-DL and DL methods. RESULTS Results show that even with limited training resources, EVAC+ outperforms in most cases, achieving a high and stable Dice Coefficient and Jaccard Index along with a desirably low Surface (Hausdorff) Distance. More importantly, our approach accurately segmented clinical and pediatric data, despite the fact that the training dataset only contains healthy adults. CONCLUSIONS Ultimately, our model provides a reliable way of accurately reducing segmentation errors in complex multi-tissue interfacing areas of the brain. We expect our method, which is publicly available and open-source, to be beneficial to a wide range of researchers.
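EVAC+ is evaluated with both the Dice Coefficient and the Jaccard Index. The two overlap metrics are deterministically related, so either can be derived from the other; a small sketch of the conversion (generic metric algebra, not code from the paper):

```python
def jaccard_from_dice(d):
    """Jaccard index (IoU) from a Dice coefficient: J = D / (2 - D)."""
    return d / (2.0 - d)

def dice_from_jaccard(j):
    """Inverse relation: D = 2J / (1 + J)."""
    return 2.0 * j / (1.0 + j)

# A Dice of 0.90 corresponds to a Jaccard of about 0.818,
# which is why reported Jaccard values always sit below Dice.
print(round(jaccard_from_dice(0.90), 3))
```

Because the mapping is monotonic, the two metrics rank methods identically; reporting both (as here) mainly aids comparison with prior work.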
Affiliation(s)
- Jong Sung Park: Intelligent Systems Engineering, Indiana University Bloomington, Bloomington, IN, USA
- Shreyas Fadnavis: Massachusetts General Hospital, Harvard Medical School, Boston, MA, USA
6. Zhang J, Cui Z, Zhou L, Sun Y, Li Z, Liu Z, Shen D. Breast Fibroglandular Tissue Segmentation for Automated BPE Quantification With Iterative Cycle-Consistent Semi-Supervised Learning. IEEE Transactions on Medical Imaging 2023;42:3944-3955. PMID: 37756174. DOI: 10.1109/tmi.2023.3319646.
Abstract
Background Parenchymal Enhancement (BPE) quantification in Dynamic Contrast-Enhanced Magnetic Resonance Imaging (DCE-MRI) plays a pivotal role in clinical breast cancer diagnosis and prognosis. However, the emerging deep learning-based breast fibroglandular tissue segmentation, a crucial step in automated BPE quantification, often suffers from limited training samples with accurate annotations. To address this challenge, we propose a novel iterative cycle-consistent semi-supervised framework that improves segmentation performance by using a large amount of paired pre-/post-contrast images without annotations. Specifically, we design a reconstruction network, cascaded with the segmentation network, to learn a mapping from the pre-contrast images and segmentation predictions to the post-contrast images. We can thus implicitly use the reconstruction task to explore the inter-relationship between these two-phase images, which in turn guides the segmentation task. Moreover, the reconstructed post-contrast images across multiple auto-context modeling-based iterations can be viewed as new augmentations, facilitating cycle-consistent constraints across each segmentation output. Extensive experiments on two datasets with various data distributions show strong segmentation and BPE quantification accuracy compared with other state-of-the-art semi-supervised methods. Importantly, our method achieves an 11.80-fold improvement in quantification accuracy and a 10-fold speed-up compared with clinical physicians, demonstrating its potential for automated BPE quantification. The code is available at https://github.com/ZhangJD-ong/Iterative-Cycle-consistent-Semi-supervised-Learning-for-fibroglandular-tissue-segmentation.
7. Du W, Yin K, Shi J. Dimensionality Reduction Hybrid U-Net for Brain Extraction in Magnetic Resonance Imaging. Brain Sci 2023;13:1549. PMID: 38002509. PMCID: PMC10669566. DOI: 10.3390/brainsci13111549.
Abstract
In various applications, such as disease diagnosis, surgical navigation, human brain atlas analysis, and other neuroimage processing scenarios, brain extraction is typically regarded as the initial stage in MRI image processing. Whole-brain semantic segmentation algorithms, such as U-Net, have demonstrated the ability to achieve relatively satisfactory results even with a limited number of training samples. To enhance the precision of brain semantic segmentation, various frameworks have been developed, including 3D U-Net, slice U-Net, and auto-context U-Net. However, the processing methods employed in these models are relatively complex when applied to 3D data. In this article, we aim to reduce the complexity of the model while maintaining appropriate performance. As an initial step to enhance segmentation accuracy, full-scale information is extracted from the magnetic resonance images during preprocessing with a clustering tool. Subsequently, three multi-input hybrid U-Net frameworks are tested and compared. Finally, we propose fusing the two-dimensional segmentation outcomes from different planes to attain improved results. The performance of the proposed framework was tested on the publicly accessible benchmark dataset LPBA40, on which we obtained a Dice overlap coefficient of 98.05%. Our algorithm achieved improvements over several previous studies.
Affiliation(s)
- Wentao Du: Nanjing Research Institute of Electronic Technology, Nanjing 210019, China
- Kuiying Yin: Nanjing Research Institute of Electronic Technology, Nanjing 210019, China
- Jingping Shi: Department of Neurology, The Affiliated Brain Hospital of Nanjing Medical University, Nanjing 210029, China
8. Ciceri T, Squarcina L, Giubergia A, Bertoldo A, Brambilla P, Peruzzo D. Review on deep learning fetal brain segmentation from Magnetic Resonance images. Artif Intell Med 2023;143:102608. PMID: 37673558. DOI: 10.1016/j.artmed.2023.102608.
Abstract
Brain segmentation is often the first and most critical step in quantitative analysis of the brain for many clinical applications, including fetal imaging. Different aspects challenge the segmentation of the fetal brain in magnetic resonance imaging (MRI), such as the non-standard position of the fetus owing to his/her movements during the examination, rapid brain development, and the limited availability of imaging data. In recent years, several segmentation methods have been proposed for automatically partitioning the fetal brain from MR images. These algorithms aim to define regions of interest with different shapes and intensities, encompassing the entire brain, or isolating specific structures. Deep learning techniques, particularly convolutional neural networks (CNNs), have become a state-of-the-art approach in the field because they can provide reliable segmentation results over heterogeneous datasets. Here, we review the deep learning algorithms developed in the field of fetal brain segmentation and categorize them according to their target structures. Finally, we discuss the perceived research gaps in the literature of the fetal domain, suggesting possible future research directions that could impact the management of fetal MR images.
Affiliation(s)
- Tommaso Ciceri: NeuroImaging Laboratory, Scientific Institute IRCCS Eugenio Medea, Bosisio Parini, Italy; Department of Information Engineering, University of Padua, Padua, Italy
- Letizia Squarcina: Department of Pathophysiology and Transplantation, University of Milan, Milan, Italy
- Alice Giubergia: NeuroImaging Laboratory, Scientific Institute IRCCS Eugenio Medea, Bosisio Parini, Italy; Department of Information Engineering, University of Padua, Padua, Italy
- Alessandra Bertoldo: Department of Information Engineering, University of Padua, Padua, Italy; Padova Neuroscience Center, University of Padua, Padua, Italy
- Paolo Brambilla: Department of Pathophysiology and Transplantation, University of Milan, Milan, Italy; Department of Neurosciences and Mental Health, Fondazione IRCCS Ca' Granda Ospedale Maggiore Policlinico, Milan, Italy
- Denis Peruzzo: NeuroImaging Laboratory, Scientific Institute IRCCS Eugenio Medea, Bosisio Parini, Italy
9. Vahedifard F, Ai HA, Supanich MP, Marathu KK, Liu X, Kocak M, Ansari SM, Akyuz M, Adepoju JO, Adler S, Byrd S. Automatic Ventriculomegaly Detection in Fetal Brain MRI: A Step-by-Step Deep Learning Model for Novel 2D-3D Linear Measurements. Diagnostics (Basel) 2023;13:2355. PMID: 37510099. PMCID: PMC10378043. DOI: 10.3390/diagnostics13142355.
Abstract
In this study, we developed an automated workflow using a deep learning (DL) model to measure the lateral ventricle linearly in fetal brain MRI and classify each case as normal or ventriculomegaly, defined as a diameter wider than 10 mm at the level of the thalamus and choroid plexus. To accomplish this, we first trained a UNet-based deep learning model to segment the fetal brain into seven different tissue categories using a public dataset (FeTA 2022) consisting of fetal T2-weighted images. An automated workflow was then developed to perform the lateral ventricle measurement at the level of the thalamus and choroid plexus. The test dataset included 22 cases of normal and abnormal T2-weighted fetal brain MRIs. Measurements performed by our AI model were compared with manual measurements performed by a general radiologist and a neuroradiologist. The AI model correctly classified 95% of fetal brain MRI cases as normal or ventriculomegaly. It could measure the lateral ventricle diameter in 95% of cases with less than a 1.7 mm error. The average difference between measurements was 0.90 mm for AI vs. general radiologist and 0.82 mm for AI vs. neuroradiologist, comparable to the difference between the two radiologists of 0.51 mm. In addition, the AI model enabled the researchers to create 3D-reconstructed images, which represent real anatomy better than 2D images. When a manual measurement is performed, the 3D model can also provide both the right and left ventricles in a single cut, instead of two. The measurement difference between the general radiologist and the algorithm (p = 0.9827), and between the neuroradiologist and the algorithm (p = 0.2378), was not statistically significant. In contrast, the difference between the general radiologist and the neuroradiologist was statistically significant (p = 0.0043). To the best of our knowledge, this is the first study to perform 2D linear measurement of ventriculomegaly with a 3D model based on an artificial intelligence approach. The paper presents a step-by-step approach for designing an AI model based on several radiological criteria. Overall, this study showed that AI can automatically measure the lateral ventricle in fetal brain MRIs and accurately classify cases as abnormal or normal.
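Once the automated linear measurement is available, the classification rule stated in the abstract (ventriculomegaly when the diameter exceeds 10 mm at the level of the thalamus and choroid plexus) reduces to a simple threshold. A hedged sketch of that final step; the function name and interface are illustrative, not the authors' code:

```python
VENTRICULOMEGALY_CUTOFF_MM = 10.0  # threshold stated in the study

def classify_ventricle(diameter_mm):
    """Label a lateral-ventricle diameter (mm) measured at the level
    of the thalamus and choroid plexus as normal or ventriculomegaly."""
    if diameter_mm < 0:
        raise ValueError("diameter must be non-negative")
    if diameter_mm > VENTRICULOMEGALY_CUTOFF_MM:
        return "ventriculomegaly"
    return "normal"

print(classify_ventricle(9.2))   # normal
print(classify_ventricle(12.5))  # ventriculomegaly
```

The deep learning contribution of the paper lies in producing the diameter automatically and reproducibly; the diagnostic cutoff itself is this fixed clinical rule.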
Affiliation(s)
- Farzan Vahedifard: Department of Diagnostic Radiology and Nuclear Medicine, Rush University Medical Center, Rush Medical College, Chicago, IL 60612, USA
- H Asher Ai: Division for Diagnostic Medical Physics, Department of Radiology and Nuclear Medicine, Rush University Medical Center, Rush Medical College, Chicago, IL 60612, USA
- Mark P Supanich: Division for Diagnostic Medical Physics, Department of Radiology and Nuclear Medicine, Rush University Medical Center, Rush Medical College, Chicago, IL 60612, USA
- Kranthi K Marathu: Department of Diagnostic Radiology and Nuclear Medicine, Rush University Medical Center, Rush Medical College, Chicago, IL 60612, USA
- Xuchu Liu: Department of Diagnostic Radiology and Nuclear Medicine, Rush University Medical Center, Rush Medical College, Chicago, IL 60612, USA
- Mehmet Kocak: Department of Diagnostic Radiology and Nuclear Medicine, Rush University Medical Center, Rush Medical College, Chicago, IL 60612, USA
- Shehbaz M Ansari: Department of Diagnostic Radiology and Nuclear Medicine, Rush University Medical Center, Rush Medical College, Chicago, IL 60612, USA
- Melih Akyuz: Department of Diagnostic Radiology and Nuclear Medicine, Rush University Medical Center, Rush Medical College, Chicago, IL 60612, USA
- Jubril O Adepoju: Department of Diagnostic Radiology and Nuclear Medicine, Rush University Medical Center, Rush Medical College, Chicago, IL 60612, USA
- Seth Adler: Department of Diagnostic Radiology and Nuclear Medicine, Rush University Medical Center, Rush Medical College, Chicago, IL 60612, USA
- Sharon Byrd: Department of Diagnostic Radiology and Nuclear Medicine, Rush University Medical Center, Rush Medical College, Chicago, IL 60612, USA
10. Vahedifard F, Adepoju JO, Supanich M, Ai HA, Liu X, Kocak M, Marathu KK, Byrd SE. Review of deep learning and artificial intelligence models in fetal brain magnetic resonance imaging. World J Clin Cases 2023;11:3725-3735. PMID: 37383127. PMCID: PMC10294149. DOI: 10.12998/wjcc.v11.i16.3725.
Abstract
Central nervous system abnormalities in fetuses are fairly common, occurring in 0.1% to 0.2% of live births and in 3% to 6% of stillbirths, so early detection and categorization of fetal brain abnormalities are critical. Manually detecting and segmenting fetal brain magnetic resonance imaging (MRI) can be time-consuming and susceptible to interpreter experience. Artificial intelligence (AI) algorithms and machine learning approaches have high potential for assisting in the early detection of these problems, improving the diagnostic process and follow-up procedures. The use of AI and machine learning techniques in fetal brain MRI was the subject of this narrative review. Using AI, anatomic fetal brain MRI processing has been investigated with models that automatically predict specific landmarks and segmentations. All gestational ages (17-38 wk) and different AI models (mainly Convolutional Neural Networks and U-Net) have been used, with some models achieving accuracy of 95% or more. AI can help preprocess, post-process, and reconstruct fetal images. It can also be used for gestational age prediction (with one-week accuracy), fetal brain extraction, fetal brain segmentation, and placenta detection. Some fetal brain linear measurements, such as cerebral and bone biparietal diameter, have been suggested. Classification of brain pathology was studied using diagonal quadratic discriminant analysis, K-nearest neighbor, random forest, naive Bayes, and radial basis function neural network classifiers. Deep learning methods will become more powerful as more large-scale, labeled datasets become available. Sharing fetal brain MRI datasets is crucial because few fetal brain images are available. Physicians, particularly neuroradiologists, general radiologists, and perinatologists, should also be aware of AI's role in fetal brain MRI.
Affiliation(s)
- Farzan Vahedifard: Department of Diagnostic Radiology and Nuclear Medicine, Rush Medical College, Chicago, IL 60612, United States
- Jubril O Adepoju: Department of Diagnostic Radiology and Nuclear Medicine, Rush Medical College, Chicago, IL 60612, United States
- Mark Supanich: Division for Diagnostic Medical Physics, Department of Radiology and Nuclear Medicine, Rush University Medical Center, Chicago, IL 60612, United States
- Hua Asher Ai: Division for Diagnostic Medical Physics, Department of Radiology and Nuclear Medicine, Rush University Medical Center, Chicago, IL 60612, United States
- Xuchu Liu: Department of Diagnostic Radiology and Nuclear Medicine, Rush Medical College, Chicago, IL 60612, United States
- Mehmet Kocak: Department of Diagnostic Radiology and Nuclear Medicine, Rush Medical College, Chicago, IL 60612, United States
- Kranthi K Marathu: Department of Diagnostic Radiology and Nuclear Medicine, Rush Medical College, Chicago, IL 60612, United States
- Sharon E Byrd: Department of Diagnostic Radiology and Nuclear Medicine, Rush Medical College, Chicago, IL 60612, United States
11
|
Mu N, Lyu Z, Rezaeitaleshmahalleh M, Zhang X, Rasmussen T, McBane R, Jiang J. Automatic segmentation of abdominal aortic aneurysms from CT angiography using a context-aware cascaded U-Net. Comput Biol Med 2023; 158:106569. [PMID: 36989747 PMCID: PMC10625464 DOI: 10.1016/j.compbiomed.2023.106569] [Citation(s) in RCA: 5] [Impact Index Per Article: 5.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/10/2022] [Revised: 11/22/2022] [Accepted: 01/22/2023] [Indexed: 01/24/2023]
Abstract
We delineate abdominal aortic aneurysms, including the lumen and intraluminal thrombosis (ILT), from contrast-enhanced computed tomography angiography (CTA) data in 70 patients with complete automation. A novel context-aware cascaded U-Net configuration enables automated image segmentation. Notably, an auto-context structure, in conjunction with dilated convolutions, an anisotropic context module, hierarchical supervision, and a multi-class loss function, is proposed to improve the delineation of ILT in an unbalanced, low-contrast multi-class labeling problem. A quantitative analysis shows that the automated image segmentation produces results comparable to those of trained human users (e.g., DICE scores of 0.945 and 0.804 for lumen and ILT, respectively). The resultant morphological metrics (e.g., volume, surface area, etc.) are highly correlated with the parameters generated by trained human users. In conclusion, the proposed automated multi-class image segmentation tool has the potential to be further developed as a translational software tool that can be used to improve the clinical management of AAAs.
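The DICE scores quoted above measure voxel overlap between automated and human segmentations. As a reminder of how the metric behaves, a minimal pure-Python sketch (illustrative only, not the authors' code; the mask values are invented):

```python
def dice_score(pred, truth):
    """Dice similarity coefficient between two binary masks,
    each given as a set of voxel coordinates."""
    pred, truth = set(pred), set(truth)
    if not pred and not truth:
        return 1.0  # both masks empty: perfect agreement by convention
    return 2 * len(pred & truth) / (len(pred) + len(truth))

# Toy example: predicted and reference "lumen" masks sharing 3 of 4 voxels
lumen_pred = {1, 2, 3, 4}
lumen_true = {2, 3, 4, 5}
print(round(dice_score(lumen_pred, lumen_true), 3))  # 0.75
```

A Dice of 0.945 for the lumen thus means the automated and manual masks overlap almost completely, while 0.804 for ILT reflects the harder, lower-contrast boundary.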
Affiliation(s)
- Nan Mu
- Biomedical Engineering, Michigan Technological University, Houghton, MI, 49931, USA
- Zonghan Lyu
- Biomedical Engineering, Michigan Technological University, Houghton, MI, 49931, USA
- Jingfeng Jiang
- Biomedical Engineering, Michigan Technological University, Houghton, MI, 49931, USA; Center for Biocomputing and Digital Health, Health Research Institute, Institute of Computing and Cybernetics, Michigan Technological University, Houghton, MI, 49931, USA
12
Tong J, Wang C. A dual tri-path CNN system for brain tumor segmentation. Biomed Signal Process Control 2023. [DOI: 10.1016/j.bspc.2022.104411] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/27/2022]
13
Wang M, Jiang H. Memory-Net: Coupling feature maps extraction and hierarchical feature maps reuse for efficient and effective PET/CT multi-modality image-based tumor segmentation. Knowl Based Syst 2023. [DOI: 10.1016/j.knosys.2023.110399] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 02/18/2023]
14
Zhang R, Jia S, Adamu MJ, Nie W, Li Q, Wu T. HMNet: Hierarchical Multi-Scale Brain Tumor Segmentation Network. J Clin Med 2023; 12:jcm12020538. [PMID: 36675470 PMCID: PMC9861819 DOI: 10.3390/jcm12020538] [Citation(s) in RCA: 3] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/02/2022] [Revised: 12/30/2022] [Accepted: 01/04/2023] [Indexed: 01/11/2023] Open
Abstract
An accurate and efficient automatic brain tumor segmentation algorithm is important for clinical practice. In recent years, there has been much interest in automatic segmentation algorithms that use convolutional neural networks. In this paper, we propose a novel hierarchical multi-scale segmentation network (HMNet), which contains a high-resolution branch and parallel multi-resolution branches. The high-resolution branch can keep track of the brain tumor's spatial details, and the multi-resolution feature exchange and fusion allow the network's receptive fields to adapt to brain tumors of different shapes and sizes. In particular, to overcome the large computational overhead caused by expensive 3D convolution, we propose a lightweight conditional channel weighting block to reduce GPU memory use and improve the efficiency of HMNet. We also propose a lightweight multi-resolution feature fusion (LMRF) module to further reduce model complexity and the redundancy of the feature maps. We evaluated the proposed network on the BraTS 2020 dataset. The dice similarity coefficients of HMNet for ET, WT, and TC are 0.781, 0.901, and 0.823, respectively. Extensive comparative experiments on the BraTS 2020 dataset and two other datasets show that our proposed HMNet achieves satisfactory performance compared with SOTA approaches.
Affiliation(s)
- Ruifeng Zhang
- School of Microelectronics, Tianjin University, Tianjin 300072, China
- Shasha Jia
- School of Microelectronics, Tianjin University, Tianjin 300072, China
- Weizhi Nie
- School of Electrical and Information Engineering, Tianjin University, Tianjin 300072, China
- Correspondence: (W.N.); (Q.L.)
- Qiang Li
- School of Microelectronics, Tianjin University, Tianjin 300072, China
- Correspondence: (W.N.); (Q.L.)
- Ting Wu
- Department of Cardiopulmonary Bypass, Chest Hospital, Tianjin University, Tianjin 300072, China
15
Radiomics-Based Machine Learning to Predict Recurrence in Glioma Patients Using Magnetic Resonance Imaging. J Comput Assist Tomogr 2023; 47:129-135. [PMID: 36194851 DOI: 10.1097/rct.0000000000001386] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/07/2022]
Abstract
OBJECTIVE Recurrence is a major factor in the poor prognosis of patients with glioma. The aim of this study was to predict glioma recurrence using machine learning based on radiomic features. METHODS We recruited 77 glioma patients, consisting of 57 newly diagnosed patients and 20 patients with recurrence. After extracting the radiomic features from T2-weighted images, the data set was randomly divided into training (58 patients) and testing (19 patients) cohorts. An automated machine learning method (the Tree-based Pipeline Optimization Tool) was applied to generate 10 independent recurrence prediction models. The final model was determined based on the area under the curve (AUC) and average specificity. Moreover, an independent validation set of 20 patients with glioma was used to verify model performance. RESULTS Recurrence in glioma patients was successfully predicted by machine learning using radiomic features. Among the 10 recurrence prediction models, the best model achieved an accuracy of 0.81, an AUC of 0.85, and a specificity of 0.69 in the testing cohort, and an accuracy of 0.75 and an AUC of 0.87 in the independent validation set. CONCLUSIONS Our machine learning-generated algorithm exhibits promising power and may predict recurrence noninvasively, thereby offering potential value for the early development of interventions to delay or prevent recurrence in glioma patients.
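The AUC used above to select the final model can be read as a rank statistic: the probability that a randomly chosen recurrent case receives a higher model score than a randomly chosen non-recurrent case, with ties counting half. A minimal pure-Python sketch of this equivalence (illustrative only; the scores are invented, not the study's data):

```python
def auc(scores_pos, scores_neg):
    """Area under the ROC curve via the Mann-Whitney U statistic:
    the fraction of (positive, negative) pairs ranked correctly,
    with ties counted as half a win."""
    wins = 0.0
    for p in scores_pos:
        for n in scores_neg:
            if p > n:
                wins += 1.0
            elif p == n:
                wins += 0.5
    return wins / (len(scores_pos) * len(scores_neg))

# Toy example: model scores for recurrent vs. non-recurrent cases
recurrent = [0.9, 0.8, 0.6]
stable = [0.7, 0.4, 0.2]
print(round(auc(recurrent, stable), 3))  # 0.889 (8 of 9 pairs ranked correctly)
```

An AUC of 0.85 therefore says the model ranks a recurrent case above a non-recurrent one about 85% of the time, independent of any decision threshold.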
16
Praveenkumar S, Kalaiselvi T, Somasundaram K. Methods of Brain Extraction from Magnetic Resonance Images of Human Head: A Review. Crit Rev Biomed Eng 2023; 51:1-40. [PMID: 37581349 DOI: 10.1615/critrevbiomedeng.2023047606] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 08/16/2023]
Abstract
Medical images provide vital information that aids physicians in diagnosing diseases affecting the organs of the human body. Magnetic resonance imaging is an important modality for capturing the soft tissues of the brain. Segmenting and extracting the brain is essential for studying the structure and pathological condition of the brain. Several methods have been developed for this purpose. Researchers in brain extraction or segmentation need to know the current status of the work that has been done; such information is also important for improving existing methods to obtain more accurate results or to reduce algorithmic complexity. In this paper we review classical methods and convolutional neural network-based deep learning methods for brain extraction.
Affiliation(s)
- T Kalaiselvi
- Department of Computer Science and Applications, Gandhigram Rural Institute, Gandhigram 624302, Tamil Nadu, India
17
Yao X, Wang X, Wang SH, Zhang YD. A comprehensive survey on convolutional neural network in medical image analysis. MULTIMEDIA TOOLS AND APPLICATIONS 2022; 81:41361-41405. [DOI: 10.1007/s11042-020-09634-7] [Citation(s) in RCA: 8] [Impact Index Per Article: 4.0] [Reference Citation Analysis] [Track Full Text] [Subscribe] [Scholar Register] [Received: 05/21/2020] [Revised: 07/30/2020] [Accepted: 08/13/2020] [Indexed: 08/30/2023]
18
Zheng P, Zhu X, Guo W. Brain tumour segmentation based on an improved U-Net. BMC Med Imaging 2022; 22:199. [PMCID: PMC9673428 DOI: 10.1186/s12880-022-00931-1] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/28/2022] [Accepted: 11/08/2022] [Indexed: 11/19/2022] Open
Abstract
Background
Automatic segmentation of brain tumours using deep learning algorithms is currently one of the research hotspots in the medical image segmentation field. An improved U-Net network is proposed to improve the segmentation of brain tumours.
Methods
Other brain tumour segmentation models such as U-Net suffer from an insufficient ability to segment edge details and reuse feature information, as well as poor extraction of location information; moreover, the commonly used binary cross-entropy and Dice losses are often ineffective as loss functions for brain tumour segmentation models. To solve these problems, we propose a serial encoding-decoding structure that achieves improved segmentation performance by adding hybrid dilated convolution (HDC) modules and concatenation between each module of the two serial networks. In addition, we propose a new loss function that focuses the model on samples that are difficult to segment and classify. We compared the results of our proposed model and commonly used segmentation models under the IOU, PA, Dice, precision, Hausdorff95, and ASD metrics.
Results
The performance of the proposed method outperforms other segmentation models in each metric. In addition, the schematic diagram of the segmentation results shows that the segmentation results of our algorithm are closer to the ground truth, showing more brain tumour details, while the segmentation results of other algorithms are smoother.
Conclusions
Our algorithm has better semantic segmentation performance than other commonly used segmentation algorithms. The proposed technology can be used in brain tumour diagnosis to provide better protection for patients' subsequent treatment.
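The hybrid dilated convolution (HDC) modules mentioned in the Methods enlarge a network's receptive field without adding parameters or downsampling. A back-of-the-envelope sketch of that effect, assuming stride-1 convolutions throughout (illustrative only, not the paper's code):

```python
def receptive_field(kernel_sizes, dilations):
    """Receptive field (in pixels, one dimension) of a stack of
    stride-1 convolutions: each layer adds (k - 1) * d pixels."""
    rf = 1
    for k, d in zip(kernel_sizes, dilations):
        rf += (k - 1) * d
    return rf

# Three 3x3 layers: plain vs. an HDC-style dilation pattern [1, 2, 5]
print(receptive_field([3, 3, 3], [1, 1, 1]))  # 7
print(receptive_field([3, 3, 3], [1, 2, 5]))  # 17
```

With the same three 3x3 kernels (and hence the same parameter count), the dilated stack sees more than twice as much context, which is what helps the model capture location information for large tumours.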
19
Munir K, Frezza F, Rizzi A. Deep Learning Hybrid Techniques for Brain Tumor Segmentation. SENSORS (BASEL, SWITZERLAND) 2022; 22:8201. [PMID: 36365900 PMCID: PMC9658353 DOI: 10.3390/s22218201] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Figures] [Subscribe] [Scholar Register] [Received: 10/05/2022] [Revised: 10/17/2022] [Accepted: 10/19/2022] [Indexed: 06/16/2023]
Abstract
Medical images play an important role in medical diagnosis and treatment. Oncologists analyze images to determine the different characteristics of deadly diseases, plan the therapy, and observe the evolution of the disease. The objective of this paper is to propose a method for the detection of brain tumors. Brain tumors are identified from Magnetic Resonance (MR) images by performing suitable segmentation procedures. The latest technical literature concerning radiographic images of the brain shows that deep learning methods can be implemented to extract specific features of brain tumors, aiding clinical diagnosis. For this reason, most data scientists and AI researchers work on Machine Learning methods for designing automatic screening procedures. Indeed, an automated method would yield quicker segmentation findings and provide output that is robust to differences in data sources, mostly due to different procedures in data recording and storing, resulting in a more consistent identification of brain tumors. To improve the performance of the segmentation procedure, new architectures are proposed and tested in this paper. We propose deep neural networks for the detection of brain tumors, trained on the MRI scans of patients' brains. The proposed architectures are based on convolutional neural networks and inception modules for brain tumor segmentation. A comparison of these proposed architectures with the baseline reference ones shows very interesting results. MI-Unet showed a performance increase over the baseline Unet architecture of 7.5% in dice score, 23.91% in sensitivity, and 7.09% in specificity. Depth-wise separable MI-Unet showed a performance increase of 10.83% in dice score, 2.97% in sensitivity, and 12.72% in specificity compared to the baseline Unet architecture. The hybrid Unet architecture achieved performance improvements of 9.71% in dice score, 3.56% in sensitivity, and 12.6% in specificity, whereas the depth-wise separable hybrid Unet architecture outperformed the baseline architecture by 15.45% in dice score, 20.56% in sensitivity, and 12.22% in specificity.
20
Hoopes A, Mora JS, Dalca AV, Fischl B, Hoffmann M. SynthStrip: skull-stripping for any brain image. Neuroimage 2022; 260:119474. [PMID: 35842095 PMCID: PMC9465771 DOI: 10.1016/j.neuroimage.2022.119474] [Citation(s) in RCA: 53] [Impact Index Per Article: 26.5] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/18/2022] [Revised: 06/17/2022] [Accepted: 07/11/2022] [Indexed: 01/18/2023] Open
Abstract
The removal of non-brain signal from magnetic resonance imaging (MRI) data, known as skull-stripping, is an integral component of many neuroimage analysis streams. Despite their abundance, popular classical skull-stripping methods are usually tailored to images with specific acquisition properties, namely near-isotropic resolution and T1-weighted (T1w) MRI contrast, which are prevalent in research settings. As a result, existing tools tend to adapt poorly to other image types, such as stacks of thick slices acquired with fast spin-echo (FSE) MRI that are common in the clinic. While learning-based approaches for brain extraction have gained traction in recent years, these methods face a similar burden, as they are only effective for image types seen during the training procedure. To achieve robust skull-stripping across a landscape of imaging protocols, we introduce SynthStrip, a rapid, learning-based brain-extraction tool. By leveraging anatomical segmentations to generate an entirely synthetic training dataset with anatomies, intensity distributions, and artifacts that far exceed the realistic range of medical images, SynthStrip learns to successfully generalize to a variety of real acquired brain images, removing the need for training data with target contrasts. We demonstrate the efficacy of SynthStrip for a diverse set of image acquisitions and resolutions across subject populations, ranging from newborn to adult. We show substantial improvements in accuracy over popular skull-stripping baselines - all with a single trained model. Our method and labeled evaluation data are available at https://w3id.org/synthstrip.
Affiliation(s)
- Andrew Hoopes
- Athinoula A. Martinos Center for Biomedical Imaging, Massachusetts General Hospital, 149 13th St, Charlestown, MA, USA
- Jocelyn S Mora
- Athinoula A. Martinos Center for Biomedical Imaging, Massachusetts General Hospital, 149 13th St, Charlestown, MA, USA
- Adrian V Dalca
- Athinoula A. Martinos Center for Biomedical Imaging, Massachusetts General Hospital, 149 13th St, Charlestown, MA, USA; Department of Radiology, Harvard Medical School, 25 Shattuck St, Boston, MA, USA; Computer Science and Artificial Intelligence Lab, Massachusetts Institute of Technology, 32 Vassar St, Cambridge, MA, USA
- Bruce Fischl
- Athinoula A. Martinos Center for Biomedical Imaging, Massachusetts General Hospital, 149 13th St, Charlestown, MA, USA; Department of Radiology, Harvard Medical School, 25 Shattuck St, Boston, MA, USA; Computer Science and Artificial Intelligence Lab, Massachusetts Institute of Technology, 32 Vassar St, Cambridge, MA, USA; Harvard-MIT Division of Health Sciences and Technology, 77 Massachusetts Ave, Cambridge, MA, USA
- Malte Hoffmann
- Athinoula A. Martinos Center for Biomedical Imaging, Massachusetts General Hospital, 149 13th St, Charlestown, MA, USA; Department of Radiology, Harvard Medical School, 25 Shattuck St, Boston, MA, USA
21
Afzal HMR, Luo S, Ramadan S, Khari M, Chaudhary G, Lechner-Scott J. Prediction of Conversion from CIS to Clinically Definite Multiple Sclerosis Using Convolutional Neural Networks. COMPUTATIONAL AND MATHEMATICAL METHODS IN MEDICINE 2022; 2022:5154896. [PMID: 35872945 PMCID: PMC9307372 DOI: 10.1155/2022/5154896] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 02/17/2022] [Accepted: 06/13/2022] [Indexed: 11/18/2022]
Abstract
Multiple sclerosis (MS) is a chronic neurological disease of the central nervous system (CNS). Early diagnosis of MS is highly desirable, as treatments are more effective in preventing MS-related disability when given in the early stages of the disease. The main aim of this research is to predict the occurrence of a second MS-related clinical event, which indicates the conversion of clinically isolated syndrome (CIS) to clinically definite MS (CDMS). In this study, we apply a branch of artificial intelligence known as deep learning and develop a fully automated algorithm based on a convolutional neural network (CNN) that learns from MRI scan features. The basic architecture of our algorithm is that of the VGG16 CNN model, amended so that it can handle MRI DICOM images. A dataset comprising scans acquired using two different scanners was used to verify the algorithm. A group of 49 patients had volumetric MRI scans taken at disease onset and again one year later using one of the two scanners, yielding 7360 images in total for training, validation, and testing of the algorithm. These raw images first went through 4 preprocessing steps. To boost the efficiency of the process, we pretrained our algorithm on the publicly available ADNI dataset used to classify Alzheimer's disease, and then trained and tested it on our preprocessed dataset. Clinical evaluation conducted a year after the first time point revealed that 26 of the 49 patients had converted to CDMS, while the remaining 23 had not. Testing showed that our algorithm predicted the clinical results with an accuracy of 88.8% and an area under the curve (AUC) of 91%. In summary, a highly accurate CNN-based algorithm was developed to reliably predict conversion of patients with CIS to CDMS using MRI data from two different scanners.
Affiliation(s)
- H. M. Rehan Afzal
- School of Electrical Engineering and Computing, University of Newcastle, Callaghan, NSW 2308, Australia
- Suhuai Luo
- School of Electrical Engineering and Computing, University of Newcastle, Callaghan, NSW 2308, Australia
- Saadallah Ramadan
- Hunter Medical Research Institute, New Lambton Heights, NSW 2305, Australia
- Manju Khari
- Jawaharlal Nehru University, New Delhi, India
22
Moser F, Huang R, Papież BW, Namburete AIL. BEAN: Brain Extraction and Alignment Network for 3D Fetal Neurosonography. Neuroimage 2022; 258:119341. [PMID: 35654376 DOI: 10.1016/j.neuroimage.2022.119341] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/27/2021] [Revised: 04/08/2022] [Accepted: 05/28/2022] [Indexed: 01/18/2023] Open
Abstract
Brain extraction (masking of extra-cerebral tissues) and alignment are fundamental first steps of most neuroimage analysis pipelines. The lack of automated solutions for 3D ultrasound (US) has therefore limited its potential as a neuroimaging modality for studying fetal brain development using routinely acquired scans. In this work, we propose a convolutional neural network (CNN) that accurately and consistently aligns and extracts the fetal brain from minimally pre-processed 3D US scans. Our multi-task CNN, Brain Extraction and Alignment Network (BEAN), consists of two independent branches: 1) a fully-convolutional encoder-decoder branch for brain extraction of unaligned scans, and 2) a two-step regression-based branch for similarity alignment of the brain to a common coordinate space. BEAN was tested on 356 fetal head 3D scans spanning the gestational range of 14 to 30 weeks, significantly outperforming all current alternatives for fetal brain extraction and alignment. BEAN achieved state-of-the-art performance for both tasks, with a mean Dice Similarity Coefficient (DSC) of 0.94 for the brain extraction masks, and a mean DSC of 0.93 for the alignment of the target brain masks. The presented experimental results show that brain structures such as the thalamus, choroid plexus, cavum septum pellucidum, and Sylvian fissure, are consistently aligned throughout the dataset and remain clearly visible when the scans are averaged together. The BEAN implementation and related code can be found under www.github.com/felipemoser/kelluwen.
Affiliation(s)
- Felipe Moser
- Oxford Machine Learning in Neuroimaging laboratory, OMNI, Department of Computer Science, University of Oxford, Oxford, UK
- Ruobing Huang
- Institute of Biomedical Engineering, Department of Engineering Science, University of Oxford, Oxford, UK
- Nuffield Department of Women's and Reproductive Health, John Radcliffe Hospital, University of Oxford, Oxford, UK
- Bartłomiej W Papież
- Institute of Biomedical Engineering, Department of Engineering Science, University of Oxford, Oxford, UK; Big Data Institute, Li Ka Shing Centre for Health Information and Discovery, University of Oxford, Oxford, UK
- Ana I L Namburete
- Oxford Machine Learning in Neuroimaging laboratory, OMNI, Department of Computer Science, University of Oxford, Oxford, UK; Wellcome Centre for Integrative Neuroimaging, FMRIB, Nuffield Department of Clinical Neurosciences, University of Oxford, United Kingdom
23
Chuang KH, Wu PH, Li Z, Fan KH, Weng JC. Deep learning network for integrated coil inhomogeneity correction and brain extraction of mixed MRI data. Sci Rep 2022; 12:8578. [PMID: 35595829 PMCID: PMC9123199 DOI: 10.1038/s41598-022-12587-6] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/22/2021] [Accepted: 05/13/2022] [Indexed: 12/02/2022] Open
Abstract
Magnetic Resonance Imaging (MRI) has been widely used to acquire structural and functional information about the brain. In group- or voxel-wise analysis, it is essential to correct the bias field of the radiofrequency coil and to extract the brain for accurate registration to the brain template. Although automatic methods have been developed, manual editing is still required, particularly for echo-planar imaging (EPI), owing to its lower spatial resolution and larger geometric distortion. The need for user intervention slows down data processing and leads to variable results between operators. Deep learning networks have been successfully used for automatic postprocessing; however, most networks are designed only for a specific processing step and/or a single image contrast (e.g., spin-echo or gradient-echo), which markedly restricts their application and generalization. To address these limitations, we developed a deep learning network based on the generative adversarial net (GAN) to automatically correct coil inhomogeneity and extract the brain from both spin- and gradient-echo EPI without user intervention. Using various quantitative indices, we show that this method achieved high similarity to the reference target and performed consistently across datasets acquired from rodents. These results highlight the potential of deep networks to integrate different postprocessing methods and adapt to different image contrasts. The use of the same network to process multimodality data would be a critical step toward a fully automatic postprocessing pipeline that could facilitate the analysis of large datasets with high consistency.
Affiliation(s)
- Kai-Hsiang Chuang
- Queensland Brain Institute and Centre for Advanced Imaging, University of Queensland, Brisbane, Australia
- Pei-Huan Wu
- Department of Medical Imaging and Radiological Sciences, and Graduate Institute of Artificial Intelligence, Chang Gung University, No. 259, Wenhua 1st Rd., Guishan Dist., Taoyuan, 33302, Taiwan
- Zengmin Li
- Queensland Brain Institute and Centre for Advanced Imaging, University of Queensland, Brisbane, Australia
- Kang-Hsing Fan
- Department of Radiation Oncology, Chang Gung Memorial Hospital at Linkou, Taoyuan, Taiwan
- Jun-Cheng Weng
- Department of Medical Imaging and Radiological Sciences, and Graduate Institute of Artificial Intelligence, Chang Gung University, No. 259, Wenhua 1st Rd., Guishan Dist., Taoyuan, 33302, Taiwan; Medical Imaging Research Center, Institute for Radiological Research, Chang Gung University and Chang Gung Memorial Hospital at Linkou, Taoyuan, Taiwan; Department of Psychiatry, Chang Gung Memorial Hospital, Chiayi, Taiwan
24
Wu L, Hu S, Liu C. MR brain segmentation based on DE-ResUnet combining texture features and background knowledge. Biomed Signal Process Control 2022. [DOI: 10.1016/j.bspc.2022.103541] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/21/2022]
25
Ranjbar S, Singleton KW, Curtin L, Rickertsen CR, Paulson LE, Hu LS, Mitchell JR, Swanson KR. Weakly Supervised Skull Stripping of Magnetic Resonance Imaging of Brain Tumor Patients. FRONTIERS IN NEUROIMAGING 2022; 1:832512. [PMID: 37555156 PMCID: PMC10406204 DOI: 10.3389/fnimg.2022.832512] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Figures] [Subscribe] [Scholar Register] [Received: 12/09/2021] [Accepted: 02/21/2022] [Indexed: 08/10/2023]
Abstract
Automatic brain tumor segmentation is particularly challenging on magnetic resonance imaging (MRI) with marked pathologies, such as brain tumors, which usually cause large displacement, abnormal appearance, and deformation of brain tissue. Despite an abundance of previous literature on learning-based methodologies for MRI segmentation, few works have focused on tackling MRI skull stripping of brain tumor patient data. This gap in the literature can be associated with the lack of publicly available data (due to concerns about patient identification) and the labor-intensive nature of generating ground truth labels for model training. In this retrospective study, we assessed the performance of Dense-Vnet, trained on our large multi-institutional brain tumor patient dataset, in skull stripping brain tumor patient MRI. Our data included pretreatment MRI of 668 patients from our in-house institutional review board-approved multi-institutional brain tumor repository. Because of the absence of ground truth, we used imperfect automatically generated training labels produced with SPM12 software. We trained the network using common MRI sequences in oncology: T1-weighted with gadolinium contrast, T2-weighted fluid-attenuated inversion recovery, or both. We measured model performance against 30 independent brain tumor test cases with available manual brain masks. All images were harmonized for voxel spacing and volumetric dimensions before model training. Model training was performed using the modularly structured deep learning platform NiftyNet, which is tailored toward simplifying medical image analysis. Our proposed approach showed the success of a weakly supervised deep learning approach in MRI brain extraction even in the presence of pathology. Our best model achieved an average Dice score, sensitivity, and specificity of, respectively, 94.5, 96.4, and 98.5% on the multi-institutional independent brain tumor test set. To further contextualize our results within the existing literature on healthy brain segmentation, we tested the model against healthy subjects from the benchmark LBPA40 dataset. For this dataset, the model achieved an average Dice score, sensitivity, and specificity of 96.2, 96.6, and 99.2%, which, although comparable to other publications, are slightly lower than the performance of models trained on healthy subjects. We attribute this drop in performance to the use of brain tumor data for model training and its influence on brain appearance.
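The Dice score, sensitivity, and specificity reported above all derive from the voxel-wise confusion matrix between the predicted and reference brain masks. A minimal sketch of how the three relate (illustrative only, not the study's evaluation code; the toy masks are invented):

```python
def mask_metrics(pred, truth, universe):
    """Confusion-matrix metrics for binary masks given as voxel sets;
    `universe` is the set of all voxels in the volume."""
    tp = len(pred & truth)           # brain voxels correctly kept
    fp = len(pred - truth)           # non-brain voxels wrongly kept
    fn = len(truth - pred)           # brain voxels wrongly removed
    tn = len(universe) - tp - fp - fn  # non-brain correctly removed
    return {
        "sensitivity": tp / (tp + fn),
        "specificity": tn / (tn + fp),
        "dice": 2 * tp / (2 * tp + fp + fn),
    }

# Toy 10-voxel volume: 5 true brain voxels, one miss, one false positive
universe = set(range(10))
truth = {0, 1, 2, 3, 4}
pred = {1, 2, 3, 4, 5}
print(mask_metrics(pred, truth, universe))
```

Note that specificity is computed over all non-brain voxels in the volume, which is why it tends to sit higher than Dice and sensitivity in skull-stripping papers: the background dominates the image.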
Affiliation(s)
- Sara Ranjbar
- Mathematical NeuroOncology Lab, Department of Neurosurgery, Mayo Clinic, Phoenix, AZ, United States
- Kyle W. Singleton
- Mathematical NeuroOncology Lab, Department of Neurosurgery, Mayo Clinic, Phoenix, AZ, United States
- Lee Curtin
- Mathematical NeuroOncology Lab, Department of Neurosurgery, Mayo Clinic, Phoenix, AZ, United States
- Cassandra R. Rickertsen
- Mathematical NeuroOncology Lab, Department of Neurosurgery, Mayo Clinic, Phoenix, AZ, United States
- Lisa E. Paulson
- Mathematical NeuroOncology Lab, Department of Neurosurgery, Mayo Clinic, Phoenix, AZ, United States
- Leland S. Hu
- Mathematical NeuroOncology Lab, Department of Neurosurgery, Mayo Clinic, Phoenix, AZ, United States; Department of Diagnostic Imaging and Interventional Radiology, Mayo Clinic, Phoenix, AZ, United States
- Joseph Ross Mitchell
- Department of Medicine, Faculty of Medicine & Dentistry and the Alberta Machine Intelligence Institute, University of Alberta, Edmonton, AB, Canada; Provincial Clinical Excellence Portfolio, Alberta Health Services, Edmonton, AB, Canada
- Kristin R. Swanson
- Mathematical NeuroOncology Lab, Department of Neurosurgery, Mayo Clinic, Phoenix, AZ, United States
26
Meshaka R, Gaunt T, Shelmerdine SC. Artificial intelligence applied to fetal MRI: A scoping review of current research. Br J Radiol 2022:20211205. [PMID: 35286139 DOI: 10.1259/bjr.20211205] [Citation(s) in RCA: 2] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/17/2022] Open
Abstract
Artificial intelligence (AI) is defined as the development of computer systems to perform tasks normally requiring human intelligence. A subset of AI, known as machine learning (ML), takes this further by drawing inferences from patterns in data to 'learn' and 'adapt' without explicit instructions, meaning that computer systems can 'evolve' and, ideally, improve without necessarily requiring external human influence. The potential of this novel technology has generated great interest in the medical community regarding how it can be applied in healthcare. Within radiology, the focus has mostly been on applications in oncological imaging, although new roles in other subspecialty fields are slowly emerging. In this scoping review, we performed a literature search of the current state of the art and emerging trends for the use of artificial intelligence as applied to fetal magnetic resonance imaging (MRI). Our search yielded several publications covering AI tools for anatomical organ segmentation, improved imaging sequences, and diagnostic applications such as automated biometric fetal measurements and the detection of congenital and acquired abnormalities. We highlight perceived gaps in this literature and suggest avenues for future research. We hope the information presented highlights the varied ways in which novel digital technology could make an impact on future clinical practice with regard to fetal MRI.
Affiliation(s)
- Riwa Meshaka
- Department of Clinical Radiology, Great Ormond Street Hospital for Children NHS Foundation Trust, Great Ormond Street, London, UK
- Trevor Gaunt
- Department of Radiology, University College London Hospitals NHS Foundation Trust, London, UK
- Susan C Shelmerdine
- Department of Clinical Radiology, Great Ormond Street Hospital for Children NHS Foundation Trust, Great Ormond Street, London, UK; UCL Great Ormond Street Institute of Child Health, Great Ormond Street Hospital for Children, London, UK; NIHR Great Ormond Street Hospital Biomedical Research Centre, 30 Guilford Street, Bloomsbury, London, UK; Department of Radiology, St. George's Hospital, Blackshaw Road, London, UK

27
Lima AA, Mridha MF, Das SC, Kabir MM, Islam MR, Watanobe Y. A Comprehensive Survey on the Detection, Classification, and Challenges of Neurological Disorders. Biology 2022; 11:469. [PMID: 35336842] [PMCID: PMC8945195] [DOI: 10.3390/biology11030469]
Abstract
Neurological disorders (NDs) are becoming more common, posing a concern for pregnant women, parents, and otherwise healthy infants and children. Neurological disorders arise in a wide variety of forms, each with its own origins, complications, and outcomes. In recent years, neuroimaging modalities such as magnetic resonance imaging (MRI), magnetoencephalography (MEG), and positron emission tomography (PET) have yielded a better understanding of the intricacy of brain function. Combined with high-performance computational tools and various machine learning (ML) and deep learning (DL) methods, these modalities have opened exciting possibilities for identifying and diagnosing neurological disorders. This study follows a computer-aided diagnosis methodology, leading to an overview of pre-processing and feature extraction techniques. The performance of existing ML and DL approaches for detecting NDs is critically reviewed and compared. A comprehensive portion of this study also surveys the various modalities and disease-specific datasets that record images, signals, and speech. The limited related work on NDs is also summarized, as this domain has significantly fewer studies focused on disease and detection criteria. Standard evaluation metrics are presented to support result analysis and comparison. The research is organized in a consistent workflow, and a concluding discussion section elaborates on open research challenges and directions for future work in this emerging field.
Affiliation(s)
- Aklima Akter Lima
- Department of Computer Science and Engineering, Bangladesh University of Business and Technology, Dhaka 1216, Bangladesh
- M. Firoz Mridha
- Department of Computer Science and Engineering, Bangladesh University of Business and Technology, Dhaka 1216, Bangladesh
- Sujoy Chandra Das
- Department of Computer Science and Engineering, Bangladesh University of Business and Technology, Dhaka 1216, Bangladesh
- Muhammad Mohsin Kabir
- Department of Computer Science and Engineering, Bangladesh University of Business and Technology, Dhaka 1216, Bangladesh
- Md. Rashedul Islam
- Department of Computer Science and Engineering, University of Asia Pacific, Dhaka 1216, Bangladesh
- Yutaka Watanobe
- Department of Computer Science and Engineering, University of Aizu, Aizu-Wakamatsu 965-8580, Japan

28
Ning Z, Zhong S, Feng Q, Chen W, Zhang Y. SMU-Net: Saliency-Guided Morphology-Aware U-Net for Breast Lesion Segmentation in Ultrasound Image. IEEE Trans Med Imaging 2022; 41:476-490. [PMID: 34582349] [DOI: 10.1109/tmi.2021.3116087]
Abstract
Deep learning methods, especially convolutional neural networks, have been successfully applied to lesion segmentation in breast ultrasound (BUS) images. However, pattern complexity and intensity similarity between the surrounding tissues (i.e., background) and lesion regions (i.e., foreground) make lesion segmentation challenging. Although rich texture information is contained in the background, very few methods have tried to explore and exploit background-salient representations to assist foreground segmentation. Additionally, other characteristics of BUS images, namely 1) low-contrast appearance and blurry boundaries, and 2) significant variation in lesion shape and position, also increase the difficulty of accurate lesion segmentation. In this paper, we present a saliency-guided morphology-aware U-Net (SMU-Net) for lesion segmentation in BUS images. SMU-Net comprises a main network with an additional middle stream and an auxiliary network. Specifically, we first generate foreground and background saliency maps that incorporate both low-level and high-level image structure. These saliency maps then guide the main and auxiliary networks to learn foreground-salient and background-salient representations, respectively. Furthermore, we devise a middle stream consisting of background-assisted fusion, shape-aware, edge-aware, and position-aware units. This stream receives coarse-to-fine representations from the main and auxiliary networks, efficiently fusing the foreground-salient and background-salient features and enhancing the network's ability to learn morphological information. Extensive experiments on five datasets demonstrate higher performance and superior robustness to dataset scale than several state-of-the-art deep learning approaches for breast ultrasound lesion segmentation.
29
Rutherford S, Sturmfels P, Angstadt M, Hect J, Wiens J, van den Heuvel MI, Scheinost D, Sripada C, Thomason M. Automated Brain Masking of Fetal Functional MRI with Open Data. Neuroinformatics 2022; 20:173-185. [PMID: 34129169] [PMCID: PMC9437772] [DOI: 10.1007/s12021-021-09528-5]
Abstract
Fetal resting-state functional magnetic resonance imaging (rs-fMRI) has emerged as a critical new approach for characterizing brain development before birth. Despite the rapid and widespread growth of this approach, at present, we lack neuroimaging processing pipelines suited to address the unique challenges inherent in this data type. Here, we solve the most challenging processing step, rapid and accurate isolation of the fetal brain from surrounding tissue across thousands of non-stationary 3D brain volumes. Leveraging our library of 1,241 manually traced fetal fMRI images from 207 fetuses, we trained a Convolutional Neural Network (CNN) that achieved excellent performance across two held-out test sets from separate scanners and populations. Furthermore, we unite the auto-masking model with additional fMRI preprocessing steps from existing software and provide insight into our adaptation of each step. This work represents an initial advancement towards a fully comprehensive, open-source workflow, with openly shared code and data, for fetal functional MRI data preprocessing.
Affiliation(s)
- Saige Rutherford
- Donders Institute, Radboud University Medical Center, Nijmegen, The Netherlands
- Department of Psychiatry, University of Michigan, Ann Arbor, MI, USA
- Pascal Sturmfels
- Department of Electrical Engineering and Computer Science, University of Michigan, Ann Arbor, MI, USA
- Mike Angstadt
- Department of Psychiatry, University of Michigan, Ann Arbor, MI, USA
- Jasmine Hect
- Department of Psychology, Wayne State University, Detroit, MI, USA
- Jenna Wiens
- Department of Electrical Engineering and Computer Science, University of Michigan, Ann Arbor, MI, USA
- Dustin Scheinost
- Department of Radiology and Biomedical Imaging, Yale School of Medicine, New Haven, CT, USA
- Department of Statistics and Data Science, Yale University, New Haven, CT, USA
- Child Study Center, Yale School of Medicine, New Haven, CT, USA
- Chandra Sripada
- Department of Psychiatry, University of Michigan, Ann Arbor, MI, USA
- Moriah Thomason
- Department of Child and Adolescent Psychiatry, New York University School of Medicine, New York, NY, USA
- Department of Population Health, New York University School of Medicine, New York, NY, USA

30
Bae I, Chae JH, Han Y. A brain extraction algorithm for infant T2 weighted magnetic resonance images based on fuzzy c-means thresholding. Sci Rep 2021; 11:23347. [PMID: 34857824] [PMCID: PMC8640033] [DOI: 10.1038/s41598-021-02722-0]
Abstract
It is challenging to extract the brain region from T2-weighted magnetic resonance infant brain images because conventional brain segmentation algorithms are generally optimized for adult brain images, which differ from infant brain images in spatial resolution, intensity dynamics, and brain size and shape. In this study, we propose a brain extraction algorithm for infant T2-weighted images. The proposed method utilizes histogram partitioning to separate brain regions from the background. Fuzzy c-means thresholding is then performed to obtain a rough brain mask for each image slice, followed by refinement steps. For slices that contain eye regions, an additional eye removal algorithm eliminates the eyes from the brain mask. The proposed method thus generates accurate masks for infant T2-weighted brain images. For validation, we applied the proposed algorithm and conventional methods to T2 infant images (0–24 months of age) acquired with 2D and 3D sequences at 3T MRI. The Dice coefficients and Precision scores, calculated as quantitative measures, were highest for the proposed method: for images acquired with a 2D imaging sequence, the average Dice coefficients were 0.9650 ± 0.006 for the proposed method, 0.9262 ± 0.006 for iBEAT, and 0.9490 ± 0.006 for BET; for data acquired with a 3D imaging sequence, they were 0.9746 ± 0.008, 0.9448 ± 0.004, and 0.9622 ± 0.01, respectively. The average Precision was 0.9638 ± 0.009 and 0.9565 ± 0.016 for the proposed method, 0.8981 ± 0.01 and 0.8968 ± 0.008 for iBEAT, and 0.9346 ± 0.014 and 0.9282 ± 0.019 for BET for images acquired with 2D and 3D imaging sequences, respectively, demonstrating that the proposed method can be used efficiently for brain extraction in T2-weighted infant images.
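To illustrate the core idea of fuzzy c-means thresholding on voxel intensities, here is a hedged sketch: a two-cluster fuzzy c-means run on 1D intensities, with the threshold taken between the cluster centres. The function name and the midpoint rule are our own assumptions; the authors' pipeline adds histogram partitioning, per-slice refinement, and eye removal on top.

```python
import numpy as np

def fcm_threshold(intensities, m=2.0, n_iter=50, seed=0):
    """Two-cluster fuzzy c-means on 1D intensities; returns the midpoint
    between the two cluster centres as a foreground/background threshold."""
    x = np.asarray(intensities, dtype=float).ravel()
    rng = np.random.default_rng(seed)
    u = rng.random((2, x.size))            # fuzzy memberships (2 clusters)
    u /= u.sum(axis=0)                     # normalise per sample
    for _ in range(n_iter):
        um = u ** m
        centers = (um @ x) / um.sum(axis=1)             # fuzzy-weighted means
        d = np.abs(x[None, :] - centers[:, None]) + 1e-12
        u = 1.0 / (d ** (2.0 / (m - 1.0)))              # inverse-distance update
        u /= u.sum(axis=0)                              # renormalise memberships
    return centers.mean()

# Bimodal toy intensities: dark background vs. bright brain tissue.
vals = np.concatenate([np.full(100, 10.0), np.full(100, 200.0)])
t = fcm_threshold(vals)  # lands roughly midway between the two modes
```

On real slices the threshold would be applied per slice, with morphological refinement afterwards.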
Affiliation(s)
- Inyoung Bae
- Department of Health Sciences and Technology, Gachon Advanced Institute for Health Sciences and Technology, Gachon University, Incheon, Republic of Korea
- Jong-Hee Chae
- Department of Pediatrics, Seoul National University College of Medicine, Seoul, Republic of Korea
- Yeji Han
- Department of Health Sciences and Technology, Gachon Advanced Institute for Health Sciences and Technology, Gachon University, Incheon, Republic of Korea; Department of Biomedical Engineering, College of Health Sciences, Gachon University, Incheon, Republic of Korea

31
Ding Y, Zheng W, Geng J, Qin Z, Choo KKR, Qin Z, Hou X. MVFusFra: A Multi-View Dynamic Fusion Framework for Multimodal Brain Tumor Segmentation. IEEE J Biomed Health Inform 2021; 26:1570-1581. [PMID: 34699375] [DOI: 10.1109/jbhi.2021.3122328]
Abstract
Medical practitioners generally rely on multimodal brain images, for example drawing on the axial, coronal, and sagittal views, to inform brain tumor diagnosis. Hence, to further utilize the 3D information embedded in such datasets, this paper proposes a multi-view dynamic fusion framework (hereafter, MVFusFra) to improve the performance of brain tumor segmentation. The proposed framework consists of three key building blocks. First, a multi-view deep neural network architecture, comprising multiple learning networks that segment the brain tumor from different views, with each network corresponding to multi-modal brain images from a single view. Second, a dynamic decision fusion method, used to fuse the segmentation results from the multiple views into an integrated result; two fusion strategies (voting and weighted averaging) are evaluated. Third, a multi-view fusion loss (comprising segmentation loss, transition loss, and decision loss) that facilitates training of the multi-view learning networks, ensuring consistency in appearance and space both when fusing segmentation results and when training the learning networks. We evaluate MVFusFra on the BRATS 2015 and BRATS 2018 datasets. The findings suggest that multi-view fusion achieves better performance than single-view segmentation, and also imply the effectiveness of the proposed multi-view fusion loss. A comparative summary further shows that MVFusFra achieves better segmentation performance and efficiency than competing approaches.
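The weighted-averaging style of decision fusion can be pictured with a generic sketch (ours, not the paper's exact formulation): per-view probability maps are averaged with view weights and then thresholded into the fused mask.

```python
import numpy as np

def fuse_views(prob_maps, weights=None, thresh=0.5):
    """Fuse per-view segmentation probability maps by weighted averaging,
    then threshold the fused map into a binary mask."""
    maps = np.stack([np.asarray(p, dtype=float) for p in prob_maps])
    if weights is None:
        weights = np.ones(len(prob_maps))   # uniform weights = plain averaging
    w = np.asarray(weights, dtype=float)
    w = w / w.sum()                         # normalise view weights
    fused = np.tensordot(w, maps, axes=1)   # weighted average over views
    return (fused >= thresh).astype(np.uint8)

# Axial/coronal/sagittal toy probabilities for two voxels.
views = [np.array([0.9, 0.2]), np.array([0.8, 0.4]), np.array([0.1, 0.3])]
mask = fuse_views(views)
print(mask)  # [1 0]
```

Majority voting corresponds to thresholding each view first and averaging the resulting binary masks.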
32
Sun YC, Hsieh AT, Fang ST, Wu HM, Kao LW, Chung WY, Chen HH, Liou KD, Lin YS, Guo WY, Lu HHS. Can 3D artificial intelligence models outshine 2D ones in the detection of intracranial metastatic tumors on magnetic resonance images? J Chin Med Assoc 2021; 84:956-962. [PMID: 34613943] [DOI: 10.1097/jcma.0000000000000614]
Abstract
BACKGROUND This study aimed to compare the prediction performance of two-dimensional (2D) and three-dimensional (3D) semantic segmentation models for intracranial metastatic tumors with a volume ≥ 0.3 mL. METHODS We used postcontrast T1 whole-brain magnetic resonance (MR) images collected from Taipei Veterans General Hospital (TVGH); the study was approved by the institutional review board (IRB) of TVGH. A 2D image segmentation model does not fully use the spatial information between neighboring slices, whereas a 3D segmentation model does. We used the U-Net as the basic model for both the 2D and 3D architectures. RESULTS For the prediction of intracranial metastatic tumors, the area under the curve (AUC) was 87.6% for the 3D model and 81.5% for the 2D model. CONCLUSION Building a semantic segmentation model based on 3D deep convolutional neural networks may be crucial to achieving a high detection rate in clinical applications for intracranial metastatic tumors.
Affiliation(s)
- Ying-Chou Sun
- Department of Radiology, Taipei Veterans General Hospital, Taipei, Taiwan, ROC
- School of Medicine, National Yang Ming Chiao Tung University, Taipei, Taiwan, ROC
- Ang-Ting Hsieh
- Institute of Data Science and Engineering, National Yang Ming Chiao Tung University, Hsinchu, Taiwan, ROC
- Ssu-Ting Fang
- Institute of Statistics, National Yang Ming Chiao Tung University, Hsinchu, Taiwan, ROC
- Hsiu-Mei Wu
- Department of Radiology, Taipei Veterans General Hospital, Taipei, Taiwan, ROC
- School of Medicine, National Yang Ming Chiao Tung University, Taipei, Taiwan, ROC
- Liang-Wei Kao
- Institute of Statistics, National Yang Ming Chiao Tung University, Hsinchu, Taiwan, ROC
- Wen-Yuh Chung
- Division of Functional Neurosurgery, Taipei Veterans General Hospital, Taipei, Taiwan, ROC
- Institute of Neurological, Yang Ming Chiao Tung University, Taipei, Taiwan, ROC
- Hung-Hsun Chen
- Center of Teaching and Learning Development, National Yang Ming Chiao Tung University, Hsinchu, Taiwan, ROC
- Kang-Du Liou
- Division of Functional Neurosurgery, Taipei Veterans General Hospital, Taipei, Taiwan, ROC
- Institute of Neurological, Yang Ming Chiao Tung University, Taipei, Taiwan, ROC
- Yu-Shiou Lin
- Institute of Data Science and Engineering, National Yang Ming Chiao Tung University, Hsinchu, Taiwan, ROC
- Wan-Yuo Guo
- Department of Radiology, Taipei Veterans General Hospital, Taipei, Taiwan, ROC
- School of Medicine, National Yang Ming Chiao Tung University, Taipei, Taiwan, ROC
- Henry Horng-Shing Lu
- Institute of Data Science and Engineering, National Yang Ming Chiao Tung University, Hsinchu, Taiwan, ROC
- Institute of Statistics, National Yang Ming Chiao Tung University, Hsinchu, Taiwan, ROC
- Department of Medical Research, Taipei Veterans General Hospital, Taipei, Taiwan, ROC

33
Single-Input Multi-Output U-Net for Automated 2D Foetal Brain Segmentation of MR Images. J Imaging 2021; 7:200. [PMID: 34677286] [PMCID: PMC8536962] [DOI: 10.3390/jimaging7100200]
Abstract
In this work, we develop the Single-Input Multi-Output U-Net (SIMOU-Net), a hybrid network for foetal brain segmentation inspired by the original U-Net fused with the holistically nested edge detection (HED) network. SIMOU-Net is similar to the original U-Net but has a deeper architecture and takes account of the features extracted from each side output. It acts similarly to an ensemble neural network; however, instead of averaging the outputs of several independently trained models, which is computationally expensive, our approach combines outputs from a single network to reduce the variance of predictions and generalization errors. Experimental results using 200 normal foetal brains comprising over 11,500 2D images produced Dice and Jaccard coefficients of 94.2 ± 5.9% and 88.7 ± 6.9%, respectively. We further tested the proposed network on 54 abnormal cases (over 3500 images) and achieved Dice and Jaccard coefficients of 91.2 ± 6.8% and 85.7 ± 6.6%, respectively.
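Dice and Jaccard coefficients are deterministically related on any single image, which is a handy sanity check when reading such result pairs. A sketch of the identity (the relation holds per image; averages over many slices, as reported above, need not satisfy it exactly):

```python
def jaccard_from_dice(d: float) -> float:
    """Per-image identity J = D / (2 - D)."""
    return d / (2.0 - d)

def dice_from_jaccard(j: float) -> float:
    """Inverse identity D = 2J / (1 + J)."""
    return 2.0 * j / (1.0 + j)

print(round(jaccard_from_dice(0.942), 3))  # 0.89
```

A Dice of 0.942 thus implies a per-image Jaccard near 0.89, close to the averaged 88.7% reported.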
34
Wu H, Chen X, Li P, Wen Z. Automatic Symmetry Detection From Brain MRI Based on a 2-Channel Convolutional Neural Network. IEEE Trans Cybern 2021; 51:4464-4475. [PMID: 31794419] [DOI: 10.1109/tcyb.2019.2952937]
Abstract
Symmetry detection is a method to extract the ideal mid-sagittal plane (MSP) from brain magnetic resonance (MR) images, which can significantly improve the diagnostic accuracy of brain diseases. In this article, we propose an automatic symmetry detection method for brain MR images in 2D slices based on a 2-channel convolutional neural network (CNN). Unlike existing detection methods that mainly rely on local image features (gradient, edge, etc.) to determine the MSP, we use a CNN-based model that requires no local feature detection or feature matching. By training on a wide variety of benchmarks in brain images, the 2-channel CNN learns to evaluate the similarity between pairs of brain patches, which are randomly extracted from the whole brain slice based on Poisson sampling. Finally, a scoring and ranking scheme identifies the optimal symmetry axis for each input brain MR slice. Our method was evaluated on 2166 artificially synthesized brain images and 3064 in vivo MR images, including both healthy and pathological cases. The experimental results show that our method achieves excellent performance for symmetry detection, and comparisons with state-of-the-art methods demonstrate its effectiveness and higher accuracy relative to previous competitors.
35
Zhang W, Wu Y, Yang B, Hu S, Wu L, Dhelimd S. Overview of Multi-Modal Brain Tumor MR Image Segmentation. Healthcare (Basel) 2021; 9:1051. [PMID: 34442188] [PMCID: PMC8392341] [DOI: 10.3390/healthcare9081051]
Abstract
The precise segmentation of brain tumor images is a vital step towards accurate diagnosis and effective treatment of brain tumors. Magnetic Resonance Imaging (MRI) can generate brain images without tissue damage or skull artifacts, providing important discriminant information for clinicians in the study of brain tumors and other brain diseases. In this paper, we survey the field of brain tumor MRI image segmentation. First, we present the commonly used databases. Then, we summarize multi-modal brain tumor MRI image segmentation methods, divided into three categories: conventional segmentation methods, methods based on classical machine learning, and methods based on deep learning. The principles, structures, advantages, and disadvantages of typical algorithms in each category are summarized. Finally, we analyze the challenges and outline prospective future development trends.
Affiliation(s)
- Wenyin Zhang
- School of Information Science and Engineering, Linyi University, Linyi 276000, China
- Yong Wu
- School of Information Science and Engineering, Linyi University, Linyi 276000, China
- Bo Yang
- Shandong Provincial Key Laboratory of Network Based Intelligent Computing, Jinan 250022, China
- Shunbo Hu
- School of Information Science and Engineering, Linyi University, Linyi 276000, China
- Liang Wu
- School of Control Science and Engineering, Shandong University, Jinan 250061, China
- Sahraoui Dhelimd
- School of Computer and Communication Engineering, University of Science and Technology Beijing, Beijing 100083, China

36
Wang M, Jiang H, Shi T, Yao YD. HD-RDS-UNet: Leveraging Spatial-Temporal Correlation between the Decoder Feature Maps for Lymphoma Segmentation. IEEE J Biomed Health Inform 2021; 26:1116-1127. [PMID: 34351864] [DOI: 10.1109/jbhi.2021.3102612]
Abstract
Lymphoma is a group of malignant tumors originating in the lymphatic system. Automatic and accurate lymphoma segmentation in PET/CT volumes is critical yet challenging in clinical practice. Recently, UNet-like architectures have been widely used for medical image segmentation. Pure UNet-like architectures model the spatial correlation between feature maps very well but discard the critical temporal correlation. Some prior work combines UNet with recurrent neural networks (RNNs) to utilize the spatial and temporal correlation simultaneously; however, it is inconvenient to incorporate advanced UNet techniques into RNNs, which hampers further improvement. In this paper, we propose a recurrent dense siamese decoder architecture that simulates RNNs and can densely utilize the spatial-temporal correlation between decoder feature maps in a UNet fashion. We combine it with a modified hyper dense encoder; the proposed model is therefore a UNet with a hyper dense encoder and a recurrent dense siamese decoder (HD-RDS-UNet). To stabilize the training process, we propose a weighted Dice loss with stable gradients and self-adaptive parameters. We performed patient-independent five-fold cross-validation on 3D volumes collected from whole-body PET/CT scans of patients with lymphomas. The volume-wise average Dice score and sensitivity are 85.58% and 94.63%, respectively; the patient-wise averages are 85.85% and 95.01%, respectively. The different configurations of HD-RDS-UNet consistently show superiority in the performance comparison. Besides, a trained HD-RDS-UNet can be easily pruned, significantly reducing inference time and memory usage while keeping very good segmentation performance.
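The unweighted soft Dice loss that such formulations start from can be sketched as follows; the paper's weighted variant with stable gradients and self-adaptive parameters is a modification of this baseline, and the function below is our illustrative assumption, not the authors' code.

```python
import numpy as np

def soft_dice_loss(probs: np.ndarray, target: np.ndarray, eps: float = 1e-6) -> float:
    """1 - soft Dice between predicted probabilities and a binary target.
    eps keeps the ratio defined when both masks are empty."""
    p = probs.ravel().astype(float)
    t = target.ravel().astype(float)
    dice = (2.0 * (p * t).sum() + eps) / (p.sum() + t.sum() + eps)
    return 1.0 - dice

# A perfect prediction yields (near-)zero loss.
target = np.array([0.0, 1.0, 1.0, 0.0])
print(soft_dice_loss(target, target) < 1e-6)  # True
```

Because the loss is differentiable in the probabilities, it can be minimized directly by gradient descent during network training.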
37
Su YH, Jiang W, Chitrakar D, Huang K, Peng H, Hannaford B. Local Style Preservation in Improved GAN-Driven Synthetic Image Generation for Endoscopic Tool Segmentation. Sensors (Basel) 2021; 21:5163. [PMID: 34372398] [PMCID: PMC8346972] [DOI: 10.3390/s21155163]
Abstract
Accurate semantic image segmentation from medical imaging can enable intelligent vision-based assistance in robot-assisted minimally invasive surgery. The human body and surgical procedures are highly dynamic, and while machine vision presents a promising approach, sufficiently large training image sets for robust performance are either costly or unavailable. This work examines three novel generative adversarial network (GAN) methods for producing usable synthetic tool images using only surgical background images and a few real tool images. The best of the three generates realistic tool textures while preserving local background content by incorporating both a style-preservation and a content-loss component into the proposed multi-level loss function. The approach is quantitatively evaluated, and the results suggest that the synthetically generated training tool images enhance UNet tool segmentation performance. More specifically, on a random set of 100 cadaver and live endoscopic images from the University of Washington Sinus Dataset, the UNet trained with synthetically generated images using the presented method achieved 35.7% and 30.6% improvements over using purely real images in mean Dice coefficient and Intersection over Union scores, respectively. These results are promising for using more widely available, routine screening endoscopy to preoperatively generate synthetic training tool images for intraoperative UNet tool segmentation.
Affiliation(s)
- Yun-Hsuan Su
- Department of Computer Science, Mount Holyoke College, 50 College Street, South Hadley, MA 01075, USA
- Wenfan Jiang
- Department of Computer Science, Mount Holyoke College, 50 College Street, South Hadley, MA 01075, USA
- Digesh Chitrakar
- Department of Engineering, Trinity College, 300 Summit St., Hartford, CT 06106, USA
- Kevin Huang
- Department of Engineering, Trinity College, 300 Summit St., Hartford, CT 06106, USA
- Haonan Peng
- Department of Electrical and Computer Engineering, University of Washington, 185 Stevens Way, Paul Allen Center, Seattle, WA 98105, USA
- Blake Hannaford
- Department of Electrical and Computer Engineering, University of Washington, 185 Stevens Way, Paul Allen Center, Seattle, WA 98105, USA

38
Liu Z, Tong L, Chen L, Zhou F, Jiang Z, Zhang Q, Wang Y, Shan C, Li L, Zhou H. CANet: Context Aware Network for Brain Glioma Segmentation. IEEE Trans Med Imaging 2021; 40:1763-1777. [PMID: 33720830] [DOI: 10.1109/tmi.2021.3065918]
Abstract
Automated segmentation of brain glioma plays an active role in diagnostic decision-making, progression monitoring, and surgery planning. Based on deep neural networks, previous studies have shown promising techniques for brain glioma segmentation. However, these approaches lack powerful strategies for incorporating contextual information about tumor cells and their surroundings, which has been proven to be a fundamental cue for resolving local ambiguity. In this work, we propose a novel approach named Context-Aware Network (CANet) for brain glioma segmentation. CANet captures high-dimensional and discriminative features with context from both the convolutional space and feature interaction graphs. We further propose context-guided attentive conditional random fields that selectively aggregate features. We evaluate our method using the publicly accessible brain glioma segmentation datasets BRATS2017, BRATS2018, and BRATS2019. The experimental results show that the proposed algorithm achieves better or competitive performance against several state-of-the-art approaches under different segmentation metrics on the training and validation sets.
39
Zhang Z, Li J, Tian C, Zhong Z, Jiao Z, Gao X. Quality-driven deep active learning method for 3D brain MRI segmentation. Neurocomputing 2021. [DOI: 10.1016/j.neucom.2021.03.050]
40
Fletcher E, DeCarli C, Fan AP, Knaack A. Convolutional Neural Net Learning Can Achieve Production-Level Brain Segmentation in Structural Magnetic Resonance Imaging. Front Neurosci 2021; 15:683426. [PMID: 34234642] [PMCID: PMC8255694] [DOI: 10.3389/fnins.2021.683426] [Citation(s) in RCA: 13] [Impact Index Per Article: 4.3]
Abstract
Deep learning implementations using convolutional neural nets have recently demonstrated promise in many areas of medical imaging. In this article we lay out the methods by which we have achieved consistently high quality, high throughput computation of intra-cranial segmentation from whole head magnetic resonance images, an essential but typically time-consuming bottleneck for brain image analysis. We refer to this output as “production-level” because it is suitable for routine use in processing pipelines. Training and testing with an extremely large archive of structural images, our segmentation algorithm performs uniformly well over a wide variety of separate national imaging cohorts, giving Dice metric scores exceeding those of other recent deep learning brain extractions. We describe the components involved to achieve this performance, including size, variety and quality of ground truth, and appropriate neural net architecture. We demonstrate the crucial role of appropriately large and varied datasets, suggesting a less prominent role for algorithm development beyond a threshold of capability.
Affiliation(s)
- Evan Fletcher
- Department of Neurology, University of California, Davis, Davis, CA, United States
- Charles DeCarli
- Department of Neurology, University of California, Davis, Davis, CA, United States
- Audrey P Fan
- Department of Neurology, University of California, Davis, Davis, CA, United States; Department of Biomedical Engineering, University of California, Davis, Davis, CA, United States
- Alexander Knaack
- Department of Neurology, University of California, Davis, Davis, CA, United States
41
Guo Y, Duan X, Wang C, Guo H. Segmentation and recognition of breast ultrasound images based on an expanded U-Net. PLoS One 2021; 16:e0253202. [PMID: 34129619] [PMCID: PMC8205136] [DOI: 10.1371/journal.pone.0253202] [Citation(s) in RCA: 6] [Impact Index Per Article: 2.0]
Abstract
This paper establishes a fully automatic real-time image segmentation and recognition system for breast ultrasound intervention robots. It adopts the basic architecture of a U-shaped convolutional network (U-Net), analyses the actual application scenarios of semantic segmentation of breast ultrasound images, and adds dropout layers to the U-Net architecture to reduce the redundancy in texture details and prevent overfitting. The main innovation of this paper is proposing an expanded training approach to obtain an expanded U-Net. The output map of the expanded U-Net can retain texture details and edge features of breast tumours. Using the grey-level probability labels to train the U-Net is faster than using ordinary labels. The average Dice coefficient (standard deviation) and the average IoU coefficient (standard deviation) are 90.5% (±0.02) and 82.7% (±0.02), respectively, when using the expanded training approach. The Dice coefficient of the expanded U-Net is 7.6 points higher than that of a general U-Net, and the IoU coefficient of the expanded U-Net is 11 points higher than that of the general U-Net. The context of breast ultrasound images can be extracted, and texture details and edge features of tumours can be retained by the expanded U-Net. Using an expanded U-Net can quickly and automatically achieve precise segmentation and multi-class recognition of breast ultrasound images.
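The Dice and IoU figures quoted throughout these abstracts are plain set-overlap ratios between a predicted mask and a ground-truth mask. A minimal NumPy sketch of both metrics (illustrative only, not any cited paper's actual evaluation code):

```python
import numpy as np

def dice_coefficient(pred: np.ndarray, target: np.ndarray) -> float:
    """Dice = 2|A ∩ B| / (|A| + |B|) for binary masks."""
    pred = pred.astype(bool)
    target = target.astype(bool)
    intersection = np.logical_and(pred, target).sum()
    denom = pred.sum() + target.sum()
    return 2.0 * intersection / denom if denom else 1.0

def iou_coefficient(pred: np.ndarray, target: np.ndarray) -> float:
    """IoU = |A ∩ B| / |A ∪ B| for binary masks."""
    pred = pred.astype(bool)
    target = target.astype(bool)
    union = np.logical_or(pred, target).sum()
    return np.logical_and(pred, target).sum() / union if union else 1.0
```

For any mask pair, Dice is at least as large as IoU (Dice = 2I/(|A|+|B|) versus IoU = I/(|A|+|B|−I)), which is why the Dice scores reported in these studies run higher than the corresponding IoU scores.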
Affiliation(s)
- Yanjun Guo
- School of Mechatronical Engineering, Beijing Institute of Technology, Beijing, China
- Xingguang Duan
- School of Mechatronical Engineering, Beijing Institute of Technology, Beijing, China
- Chengyi Wang
- Institute of Remote Sensing and Digital Earth, Chinese Academy of Sciences, Beijing, China
- Huiqin Guo
- Ultrasonic Diagnosis Department, Chengcheng County Hospital, Weinan, Shanxi Province, China
42
Vasung L, Rollins CK, Yun HJ, Velasco-Annis C, Zhang J, Wagstyl K, Evans A, Warfield SK, Feldman HA, Grant PE, Gholipour A. Quantitative In vivo MRI Assessment of Structural Asymmetries and Sexual Dimorphism of Transient Fetal Compartments in the Human Brain. Cereb Cortex 2021; 30:1752-1767. [PMID: 31602456] [DOI: 10.1093/cercor/bhz200] [Citation(s) in RCA: 38] [Impact Index Per Article: 12.7]
Abstract
Structural asymmetries and sexual dimorphism of the human cerebral cortex have been identified in newborns, infants, children, adolescents, and adults. Some of these findings were linked with cognitive and neuropsychiatric disorders, which have roots in altered prenatal brain development. However, little is known about structural asymmetries or sexual dimorphism of transient fetal compartments that arise in utero. Thus, we aimed to identify structural asymmetries and sexual dimorphism in the volume of transient fetal compartments (cortical plate [CP] and subplate [SP]) across 22 regions. For this purpose, we used in vivo structural T2-weighted MRIs of 42 healthy fetuses (16.43-36.86 gestational weeks old, 15 females). We found significant leftward asymmetry in the volume of the CP and SP in the inferior frontal gyrus. The orbitofrontal cortex showed significant rightward asymmetry in the volume of CP merged with SP. Males had significantly larger volumes in regions belonging to limbic, occipital, and frontal lobes, which were driven by a significantly larger SP. Lastly, we did not observe sexual dimorphism in the growth trajectories of the CP or SP. In conclusion, these results support the hypothesis that structural asymmetries and sexual dimorphism in relative volumes of cortical regions are present during prenatal brain development.
Affiliation(s)
- Lana Vasung
- Fetal-Neonatal Neuroimaging & Developmental Science Center (FNNDSC), Boston, MA 02115, USA; Division of Newborn Medicine, Boston Children's Hospital, Harvard Medical School, Boston, MA 02115, USA
- Caitlin K Rollins
- Computational Radiology Laboratory, Boston Children's Hospital, Harvard Medical School, Boston, MA 02115, USA; Department of Neurology, Boston Children's Hospital, Harvard Medical School, Boston, MA 02115, USA
- Hyuk Jin Yun
- Fetal-Neonatal Neuroimaging & Developmental Science Center (FNNDSC), Boston, MA 02115, USA; Division of Newborn Medicine, Boston Children's Hospital, Harvard Medical School, Boston, MA 02115, USA
- Clemente Velasco-Annis
- Computational Radiology Laboratory, Boston Children's Hospital, Harvard Medical School, Boston, MA 02115, USA
- Jennings Zhang
- Fetal-Neonatal Neuroimaging & Developmental Science Center (FNNDSC), Boston, MA 02115, USA; McGill Centre for Integrative Neuroscience/Montreal Neurological Institute, McGill University, Montreal, QC H3A 2B4, Canada
- Alan Evans
- McGill Centre for Integrative Neuroscience/Montreal Neurological Institute, McGill University, Montreal, QC H3A 2B4, Canada
- Simon K Warfield
- Computational Radiology Laboratory, Boston Children's Hospital, Harvard Medical School, Boston, MA 02115, USA
- Henry A Feldman
- Division of Newborn Medicine, Boston Children's Hospital, Harvard Medical School, Boston, MA 02115, USA; Institutional Centers for Clinical and Translational Research, Boston Children's Hospital, Harvard Medical School, Boston, MA 02115, USA
- P Ellen Grant
- Fetal-Neonatal Neuroimaging & Developmental Science Center (FNNDSC), Boston, MA 02115, USA; Division of Newborn Medicine, Boston Children's Hospital, Harvard Medical School, Boston, MA 02115, USA
- Ali Gholipour
- Computational Radiology Laboratory, Boston Children's Hospital, Harvard Medical School, Boston, MA 02115, USA
43
Li H, Yan G, Luo W, Liu T, Wang Y, Liu R, Zheng W, Zhang Y, Li K, Zhao L, Limperopoulos C, Zou Y, Wu D. Mapping fetal brain development based on automated segmentation and 4D brain atlasing. Brain Struct Funct 2021; 226:1961-1972. [PMID: 34050792] [DOI: 10.1007/s00429-021-02303-x] [Citation(s) in RCA: 17] [Impact Index Per Article: 5.7]
Abstract
Fetal brain MRI has become an important tool for in utero assessment of brain development and disorders. However, quantitative analysis of fetal brain MRI remains difficult, partially due to the limited tools for automated preprocessing and the lack of normative brain templates. In this paper, we proposed an automated pipeline for fetal brain extraction, super-resolution reconstruction, and fetal brain atlasing to quantitatively map in utero fetal brain development during mid-to-late gestation in a Chinese population. First, we designed a U-net convolutional neural network for automated fetal brain extraction, which achieved an average accuracy of 97%. We then generated a developing fetal brain atlas, using an iterative linear and nonlinear registration approach. Based on the 4D spatiotemporal atlas, we quantified the morphological development of the fetal brain between 23 and 36 weeks of gestation. The proposed pipeline enabled the fully automated volumetric reconstruction for clinically available fetal brain MRI data, and the 4D fetal brain atlas provided normative templates for the quantitative characterization of fetal brain development, especially in the Chinese population.
Affiliation(s)
- Haotian Li
- Key Laboratory for Biomedical Engineering of Ministry of Education, Department of Biomedical Engineering, College of Biomedical Engineering & Instrument Science, Zhejiang University, Hangzhou, Zhejiang, China
- Guohui Yan
- Department of Radiology, School of Medicine, Women's Hospital, Zhejiang University, Hangzhou, Zhejiang, China
- Wanrong Luo
- Key Laboratory for Biomedical Engineering of Ministry of Education, Department of Biomedical Engineering, College of Biomedical Engineering & Instrument Science, Zhejiang University, Hangzhou, Zhejiang, China
- Tingting Liu
- Key Laboratory for Biomedical Engineering of Ministry of Education, Department of Biomedical Engineering, College of Biomedical Engineering & Instrument Science, Zhejiang University, Hangzhou, Zhejiang, China
- Yan Wang
- Key Laboratory for Biomedical Engineering of Ministry of Education, Department of Biomedical Engineering, College of Biomedical Engineering & Instrument Science, Zhejiang University, Hangzhou, Zhejiang, China
- Ruibin Liu
- Key Laboratory for Biomedical Engineering of Ministry of Education, Department of Biomedical Engineering, College of Biomedical Engineering & Instrument Science, Zhejiang University, Hangzhou, Zhejiang, China
- Weihao Zheng
- Key Laboratory for Biomedical Engineering of Ministry of Education, Department of Biomedical Engineering, College of Biomedical Engineering & Instrument Science, Zhejiang University, Hangzhou, Zhejiang, China
- Yi Zhang
- Key Laboratory for Biomedical Engineering of Ministry of Education, Department of Biomedical Engineering, College of Biomedical Engineering & Instrument Science, Zhejiang University, Hangzhou, Zhejiang, China; Department of Neurology, The First Affiliated Hospital, Zhejiang University, Hangzhou, China
- Kui Li
- Department of Radiology, School of Medicine, Women's Hospital, Zhejiang University, Hangzhou, Zhejiang, China
- Li Zhao
- Center for the Developing Brain, Diagnostic Imaging and Radiology, Children's National Medical Center, Washington, DC, USA
- Catherine Limperopoulos
- Center for the Developing Brain, Diagnostic Imaging and Radiology, Children's National Medical Center, Washington, DC, USA
- Yu Zou
- Department of Radiology, School of Medicine, Women's Hospital, Zhejiang University, Hangzhou, Zhejiang, China
- Dan Wu
- Key Laboratory for Biomedical Engineering of Ministry of Education, Department of Biomedical Engineering, College of Biomedical Engineering & Instrument Science, Zhejiang University, Hangzhou, Zhejiang, China
44
Conze PH, Kavur AE, Cornec-Le Gall E, Gezer NS, Le Meur Y, Selver MA, Rousseau F. Abdominal multi-organ segmentation with cascaded convolutional and adversarial deep networks. Artif Intell Med 2021; 117:102109. [PMID: 34127239] [DOI: 10.1016/j.artmed.2021.102109] [Citation(s) in RCA: 40] [Impact Index Per Article: 13.3]
Abstract
Abdominal anatomy segmentation is crucial for numerous applications, from computer-assisted diagnosis to image-guided surgery. In this context, we address fully-automated multi-organ segmentation from abdominal CT and MR images using deep learning. The proposed model extends standard conditional generative adversarial networks. In addition to the discriminator, which pushes the model to create realistic organ delineations, it embeds cascaded, partially pre-trained convolutional encoder-decoders as the generator. Encoder fine-tuning from a large amount of non-medical images alleviates data scarcity limitations. The network is trained end-to-end to benefit from simultaneous multi-level segmentation refinements using auto-context. Employed for healthy liver, kidneys and spleen segmentation, our pipeline provides promising results, outperforming state-of-the-art encoder-decoder schemes. Applied to the Combined Healthy Abdominal Organ Segmentation (CHAOS) challenge organized in conjunction with the IEEE International Symposium on Biomedical Imaging 2019, it earned first rank in three competition categories: liver CT, liver MR and multi-organ MR segmentation. Combining cascaded convolutional and adversarial networks strengthens the ability of deep learning pipelines to automatically delineate multiple abdominal organs, with good generalization capability. The comprehensive evaluation provided suggests that better guidance could be achieved to help clinicians in abdominal image interpretation and clinical decision making.
Affiliation(s)
- Pierre-Henri Conze
- IMT Atlantique, Technopôle Brest-Iroise, 29238 Brest, France; LaTIM UMR 1101, Inserm, 22 avenue Camille Desmoulins, 29238 Brest, France
- Ali Emre Kavur
- Dokuz Eylul University, Cumhuriyet Bulvarı, 35210 Izmir, Turkey
- Emilie Cornec-Le Gall
- Department of Nephrology, University Hospital, 2 avenue Foch, 29609 Brest, France; UMR 1078, Inserm, 22 avenue Camille Desmoulins, 29238 Brest, France
- Naciye Sinem Gezer
- Dokuz Eylul University, Cumhuriyet Bulvarı, 35210 Izmir, Turkey; Department of Radiology, Faculty of Medicine, Cumhuriyet Bulvarı, 35210 Izmir, Turkey
- Yannick Le Meur
- Department of Nephrology, University Hospital, 2 avenue Foch, 29609 Brest, France; LBAI UMR 1227, Inserm, 5 avenue Foch, 29609 Brest, France
- M Alper Selver
- Dokuz Eylul University, Cumhuriyet Bulvarı, 35210 Izmir, Turkey
- François Rousseau
- IMT Atlantique, Technopôle Brest-Iroise, 29238 Brest, France; LaTIM UMR 1101, Inserm, 22 avenue Camille Desmoulins, 29238 Brest, France
45
Vasung L, Zhao C, Barkovich M, Rollins CK, Zhang J, Lepage C, Corcoran T, Velasco-Annis C, Yun HJ, Im K, Warfield SK, Evans AC, Huang H, Gholipour A, Grant PE. Association between Quantitative MR Markers of Cortical Evolving Organization and Gene Expression during Human Prenatal Brain Development. Cereb Cortex 2021; 31:3610-3621. [PMID: 33836056] [PMCID: PMC8258434] [DOI: 10.1093/cercor/bhab035] [Citation(s) in RCA: 9] [Impact Index Per Article: 3.0]
Abstract
The relationship between structural changes of the cerebral cortex revealed by Magnetic Resonance Imaging (MRI) and gene expression in the human fetal brain has not been explored. In this study, we aimed to test the hypothesis that relative regional thickness (a measure of cortical evolving organization) of fetal cortical compartments (cortical plate [CP] and subplate [SP]) is associated with expression levels of genes with known cortical phenotype. Mean regional SP/CP thickness ratios across age measured on in utero MRI of 25 healthy fetuses (20-33 gestational weeks [GWs]) were correlated with publicly available regional gene expression levels (23-24 GW fetuses). Larger SP/CP thickness ratios (more pronounced cortical evolving organization) were found in perisylvian regions. Furthermore, we found a significant association between SP/CP thickness ratio and expression levels of the FLNA gene (mutated in periventricular heterotopia, congenital heart disease, and vascular malformations). Further work is needed to identify early MRI biomarkers of gene expression that lead to abnormal cortical development.
Affiliation(s)
- Lana Vasung
- The Fetal-Neonatal Neuroimaging and Developmental Science Center, Boston Children's Hospital, Boston, MA 02115, USA; Division of Newborn Medicine, Boston Children's Hospital, Boston, MA 02115, USA; Department of Pediatrics, Harvard Medical School, Boston, MA 02115, USA; Intelligent Medical Imaging Research Group, Boston Children's Hospital, Boston, MA 02115, USA
- Chenying Zhao
- Department of Radiology, Children's Hospital of Philadelphia, Philadelphia, PA 19104, USA; Department of Bioengineering, School of Engineering and Applied Science, University of Pennsylvania, Philadelphia, PA 19104, USA
- Matthew Barkovich
- Department of Radiology, UCSF Benioff Children's Hospital, San Francisco, CA 94158, USA; Department of Radiology & Biomedical Imaging, University of California, San Francisco, CA 94115, USA
- Caitlin K Rollins
- Intelligent Medical Imaging Research Group, Boston Children's Hospital, Boston, MA 02115, USA; Department of Neurology, Boston Children's Hospital, Boston, MA 02115, USA; Department of Neurology, Harvard Medical School, Boston, MA 02115, USA
- Jennings Zhang
- The Fetal-Neonatal Neuroimaging and Developmental Science Center, Boston Children's Hospital, Boston, MA 02115, USA; Division of Newborn Medicine, Boston Children's Hospital, Boston, MA 02115, USA
- Claude Lepage
- ACELab, McGill Centre for Integrative Neuroscience, McGill University, Montreal, QC H3A 2B4, Canada
- Teddy Corcoran
- Massachusetts Institute of Technology, Cambridge, MA 02139, USA
- Clemente Velasco-Annis
- Intelligent Medical Imaging Research Group, Boston Children's Hospital, Boston, MA 02115, USA; Computational Radiology Laboratory, Boston Children's Hospital, Boston, MA 02115, USA; Department of Radiology, Boston Children's Hospital and Harvard Medical School, Boston, MA 02115, USA
- Hyuk Jin Yun
- The Fetal-Neonatal Neuroimaging and Developmental Science Center, Boston Children's Hospital, Boston, MA 02115, USA; Division of Newborn Medicine, Boston Children's Hospital, Boston, MA 02115, USA; Department of Pediatrics, Harvard Medical School, Boston, MA 02115, USA
- Kiho Im
- The Fetal-Neonatal Neuroimaging and Developmental Science Center, Boston Children's Hospital, Boston, MA 02115, USA; Division of Newborn Medicine, Boston Children's Hospital, Boston, MA 02115, USA; Department of Pediatrics, Harvard Medical School, Boston, MA 02115, USA
- Simon Keith Warfield
- Computational Radiology Laboratory, Boston Children's Hospital, Boston, MA 02115, USA; Department of Radiology, Boston Children's Hospital and Harvard Medical School, Boston, MA 02115, USA
- Alan Charles Evans
- ACELab, McGill Centre for Integrative Neuroscience, McGill University, Montreal, QC H3A 2B4, Canada
- Hao Huang
- Department of Radiology, Children's Hospital of Philadelphia, Philadelphia, PA 19104, USA; Department of Radiology, Perelman School of Medicine, University of Pennsylvania, Philadelphia, PA 19104, USA
- Ali Gholipour
- Intelligent Medical Imaging Research Group, Boston Children's Hospital, Boston, MA 02115, USA; Computational Radiology Laboratory, Boston Children's Hospital, Boston, MA 02115, USA; Department of Radiology, Boston Children's Hospital and Harvard Medical School, Boston, MA 02115, USA
- Patricia Ellen Grant
- The Fetal-Neonatal Neuroimaging and Developmental Science Center, Boston Children's Hospital, Boston, MA 02115, USA; Division of Newborn Medicine, Boston Children's Hospital, Boston, MA 02115, USA; Department of Pediatrics, Harvard Medical School, Boston, MA 02115, USA; Department of Radiology, Boston Children's Hospital and Harvard Medical School, Boston, MA 02115, USA
46
Lyu I, Bao S, Hao L, Yao J, Miller JA, Voorhies W, Taylor WD, Bunge SA, Weiner KS, Landman BA. Labeling lateral prefrontal sulci using spherical data augmentation and context-aware training. Neuroimage 2021; 229:117758. [PMID: 33497773] [PMCID: PMC8366030] [DOI: 10.1016/j.neuroimage.2021.117758] [Citation(s) in RCA: 12] [Impact Index Per Article: 4.0]
Abstract
The inference of cortical sulcal labels often focuses on deep (primary and secondary) sulcal regions, whereas shallow (tertiary) sulcal regions are largely overlooked in the literature due to the scarcity of manual/well-defined annotations and their large neuroanatomical variability. In this paper, we present an automated framework for regional labeling of both primary/secondary and tertiary sulci of the dorsal portion of lateral prefrontal cortex (LPFC) using spherical convolutional neural networks. We propose two core components that enhance the inference of sulcal labels to overcome such large neuroanatomical variability: (1) surface data augmentation and (2) context-aware training. (1) To take into account neuroanatomical variability, we synthesize training data from the proposed feature space that embeds intermediate deformation trajectories of spherical data in a rigid to non-rigid fashion, which bridges an augmentation gap in conventional rotation data augmentation. (2) Moreover, we design a two-stage training process to improve labeling accuracy of tertiary sulci by informing the biological associations in neuroanatomy: inference of primary/secondary sulci and then their spatial likelihood to guide the definition of tertiary sulci. In the experiments, we evaluate our method on 13 deep and shallow sulci of human LPFC in two independent data sets with different age ranges: pediatric (N=60) and adult (N=36) cohorts. We compare the proposed method with a conventional multi-atlas approach and spherical convolutional neural networks without/with rotation data augmentation. In both cohorts, the proposed data augmentation improves labeling accuracy of deep and shallow sulci over the baselines, and the proposed context-aware training offers further improvement in the labeling of shallow sulci over the proposed data augmentation. 
We share our tools with the field and discuss applications of our results for understanding neuroanatomical-functional organization of LPFC and the rest of cortex (https://github.com/ilwoolyu/SphericalLabeling).
Affiliation(s)
- Ilwoo Lyu
- Electrical Engineering and Computer Science, Vanderbilt University, Nashville, TN 37235, USA
- Shuxing Bao
- Electrical Engineering and Computer Science, Vanderbilt University, Nashville, TN 37235, USA
- Lingyan Hao
- Institute for Computational & Mathematical Engineering, Stanford University, Stanford, CA 94305, USA
- Jewelia Yao
- Department of Psychology, The University of California, Berkeley, CA 94720, USA
- Jacob A Miller
- Helen Wills Neuroscience Institute, The University of California, Berkeley, CA 94720, USA
- Willa Voorhies
- Department of Psychology, The University of California, Berkeley, CA 94720, USA; Helen Wills Neuroscience Institute, The University of California, Berkeley, CA 94720, USA
- Warren D Taylor
- Psychiatry & Behavioral Sciences, Vanderbilt University Medical Center, Nashville, TN 37203, USA
- Silvia A Bunge
- Department of Psychology, The University of California, Berkeley, CA 94720, USA; Helen Wills Neuroscience Institute, The University of California, Berkeley, CA 94720, USA
- Kevin S Weiner
- Department of Psychology, The University of California, Berkeley, CA 94720, USA; Helen Wills Neuroscience Institute, The University of California, Berkeley, CA 94720, USA
- Bennett A Landman
- Electrical Engineering and Computer Science, Vanderbilt University, Nashville, TN 37235, USA
47
Dou H, Karimi D, Rollins CK, Ortinau CM, Vasung L, Velasco-Annis C, Ouaalam A, Yang X, Ni D, Gholipour A. A Deep Attentive Convolutional Neural Network for Automatic Cortical Plate Segmentation in Fetal MRI. IEEE Transactions on Medical Imaging 2021; 40:1123-1133. [PMID: 33351755] [PMCID: PMC8016740] [DOI: 10.1109/tmi.2020.3046579] [Citation(s) in RCA: 9] [Impact Index Per Article: 3.0]
Abstract
Fetal cortical plate segmentation is essential in quantitative analysis of fetal brain maturation and cortical folding. Manual segmentation of the cortical plate, or manual refinement of automatic segmentations is tedious and time-consuming. Automatic segmentation of the cortical plate, on the other hand, is challenged by the relatively low resolution of the reconstructed fetal brain MRI scans compared to the thin structure of the cortical plate, partial voluming, and the wide range of variations in the morphology of the cortical plate as the brain matures during gestation. To reduce the burden of manual refinement of segmentations, we have developed a new and powerful deep learning segmentation method. Our method exploits new deep attentive modules with mixed kernel convolutions within a fully convolutional neural network architecture that utilizes deep supervision and residual connections. We evaluated our method quantitatively based on several performance measures and expert evaluations. Results show that our method outperforms several state-of-the-art deep models for segmentation, as well as a state-of-the-art multi-atlas segmentation technique. We achieved average Dice similarity coefficient of 0.87, average Hausdorff distance of 0.96 mm, and average symmetric surface difference of 0.28 mm on reconstructed fetal brain MRI scans of fetuses scanned in the gestational age range of 16 to 39 weeks (28.6± 5.3). With a computation time of less than 1 minute per fetal brain, our method can facilitate and accelerate large-scale studies on normal and altered fetal brain cortical maturation and folding.
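The Hausdorff distance reported above measures worst-case boundary disagreement between two segmentations: the largest distance from any point on one surface to the nearest point on the other, symmetrized over both directions. A small NumPy sketch over explicit point sets (an illustration of the metric itself, not the authors' implementation):

```python
import numpy as np

def hausdorff_distance(points_a: np.ndarray, points_b: np.ndarray) -> float:
    """Symmetric Hausdorff distance between two (N, d) point sets."""
    # Pairwise Euclidean distances, shape (len(points_a), len(points_b)).
    diffs = points_a[:, None, :] - points_b[None, :, :]
    dists = np.sqrt((diffs ** 2).sum(axis=-1))
    # Directed distances: worst nearest-neighbour distance in each direction.
    d_ab = dists.min(axis=1).max()
    d_ba = dists.min(axis=0).max()
    return float(max(d_ab, d_ba))
```

Unlike the volume-overlap Dice score, this surface metric is sensitive to isolated outlier voxels, which is why papers such as this one report both.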
48
Hussain R, Lalande A, Girum KB, Guigou C, Bozorg Grayeli A. Automatic segmentation of inner ear on CT-scan using auto-context convolutional neural network. Sci Rep 2021; 11:4406. [PMID: 33623074] [PMCID: PMC7902630] [DOI: 10.1038/s41598-021-83955-x] [Citation(s) in RCA: 13] [Impact Index Per Article: 4.3]
Abstract
Temporal bone CT-scan is a prerequisite in most surgical procedures concerning the ear, such as cochlear implants. The 3D vision of inner ear structures is crucial for diagnostic and surgical preplanning purposes. Since clinical CT-scans are acquired at relatively low resolutions, improved performance can be achieved by registering patient-specific CT images to a high-resolution inner ear model built from accurate 3D segmentations based on micro-CT of human temporal bone specimens. This paper presents a framework based on convolutional neural networks for human inner ear segmentation from micro-CT images, which can be used to build such a model from an extensive database. The proposed approach employs an auto-context based cascaded 2D U-net architecture with 3D connected component refinement to segment the cochlear scalae, semicircular canals, and the vestibule. The system was evaluated on a data set composed of 17 micro-CT volumes from the public Hear-EU dataset. A Dice coefficient of 0.90 and a Hausdorff distance of 0.74 mm were obtained. The system yielded precise and fast automatic inner-ear segmentations.
Affiliation(s)
- Raabid Hussain
- ImViA Laboratory, University of Burgundy Franche Comte, Dijon, France
- Alain Lalande
- ImViA Laboratory, University of Burgundy Franche Comte, Dijon, France; Medical Imaging Department, University Hospital of Dijon, Dijon, France
- Caroline Guigou
- ImViA Laboratory, University of Burgundy Franche Comte, Dijon, France; Otolaryngology Department, University Hospital of Dijon, Dijon, France
- Alexis Bozorg Grayeli
- ImViA Laboratory, University of Burgundy Franche Comte, Dijon, France; Otolaryngology Department, University Hospital of Dijon, Dijon, France
49
Su R, Zhang D, Liu J, Cheng C. MSU-Net: Multi-Scale U-Net for 2D Medical Image Segmentation. Front Genet 2021; 12:639930. [PMID: 33679900] [PMCID: PMC7928319] [DOI: 10.3389/fgene.2021.639930] [Citation(s) in RCA: 23] [Impact Index Per Article: 7.7]
Abstract
To address U-Net's limitations of a convolution kernel with a fixed receptive field and an unknown prior on the optimal network width, we propose multi-scale U-Net (MSU-Net) for medical image segmentation. First, multiple convolution sequences are used to extract more semantic features from the images. Second, convolution kernels with different receptive fields are used to make features more diverse. The problem of unknown network width is alleviated by the efficient integration of convolution kernels with different receptive fields. In addition, the multi-scale block is extended to other variants of the original U-Net to verify its universality. Five different medical image segmentation datasets are used to evaluate MSU-Net. A variety of imaging modalities are included in these datasets, such as electron microscopy, dermoscopy, ultrasound, etc. The Intersection over Union (IoU) scores of MSU-Net on these datasets are 0.771, 0.867, 0.708, 0.900, and 0.702, respectively. Experimental results show that MSU-Net achieves the best performance on the different datasets. Our implementation is available at https://github.com/CN-zdy/MSU_Net.
Affiliation(s)
- Run Su
- Institute of Intelligent Machines, Hefei Institutes of Physical Science, Chinese Academy of Sciences, Hefei, China
- Science Island Branch of Graduate School, University of Science and Technology of China, Hefei, China
- Deyun Zhang
- School of Engineering, Anhui Agricultural University, Hefei, China
- Jinhuai Liu
- Institute of Intelligent Machines, Hefei Institutes of Physical Science, Chinese Academy of Sciences, Hefei, China
- Science Island Branch of Graduate School, University of Science and Technology of China, Hefei, China
- Chuandong Cheng
- Department of Neurosurgery, The First Affiliated Hospital of University of Science and Technology of China (USTC), Hefei, China
- Division of Life Sciences and Medicine, University of Science and Technology of China, Hefei, China
- Anhui Province Key Laboratory of Brain Function and Brain Disease, Hefei, China
50
Zhang Y, Jiang K, Jiang W, Wang N, Wright AJ, Liu A, Wang J. Multi-task convolutional neural network-based design of radio frequency pulse and the accompanying gradients for magnetic resonance imaging. NMR Biomed 2021; 34:e4443. [PMID: 33200468] [DOI: 10.1002/nbm.4443]
Abstract
Modern MRI systems usually load predesigned RF pulses and the accompanying gradients during clinical scans, with minimal adaptation to the specific requirements of each scan. Here, we describe a neural network-based method for real-time design of excitation RF pulses and the accompanying gradient waveforms to achieve spatially two-dimensional selectivity. Nine thousand sets of radio frequency (RF) and gradient waveforms with two-dimensional spatial selectivity were generated as the training dataset using the Shinnar-Le Roux (SLR) method. Neural networks were created and trained with five strategies (TS-1 to TS-5). The network-designed RF and gradients were compared with their SLR-designed counterparts and underwent Bloch simulation and phantom imaging to assess their performance in spin manipulation. We demonstrate that a convolutional neural network with multi-task learning (TS-5) yields both the RF pulses and the accompanying two channels of gradient waveforms in compliance with the SLR design; these designs also provide excitation spatial profiles comparable with SLR pulses in both simulation (normalized root mean square error [NRMSE] of 0.0075 ± 0.0038 over the 400 sets of testing data between TS-5 and SLR) and phantom imaging. The output RF and gradient waveforms of the neural network and SLR methods were also compared: the joint NRMSE, considering both the RF and the two gradient channels, was 0.0098 ± 0.0024 between TS-5 and SLR. The RF and gradients were generated on a commercially available workstation, taking ~130 ms for TS-5. In conclusion, we present a convolutional neural network with multi-task learning, trained on SLR transformation pairs, that simultaneously generates the RF pulse and two channels of gradient waveforms for a desired spatially two-dimensional excitation profile.
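The multi-task layout the abstract describes, a shared network body with separate outputs for the RF waveform and the two gradient channels, can be sketched as a single forward pass in NumPy. This is only a structural illustration, not the paper's TS-5 network: the layer sizes, random weights, and function names are assumptions, and a real design network would be trained on SLR input-output pairs.

```python
import numpy as np

rng = np.random.default_rng(0)

def relu(x):
    return np.maximum(x, 0.0)

# Shared trunk: maps a flattened 2D target excitation profile
# to a common feature vector used by every task head.
n_in, n_hid, n_time = 64, 32, 16          # illustrative sizes only
W_shared = rng.standard_normal((n_hid, n_in)) * 0.1

# Two task heads on top of the shared features: one emits the RF
# waveform, the other emits both gradient channels (Gx and Gy),
# mirroring the multi-task setup in the abstract.
W_rf = rng.standard_normal((n_time, n_hid)) * 0.1
W_grad = rng.standard_normal((2 * n_time, n_hid)) * 0.1

def design_pulse(profile):
    """One forward pass: shared features, then per-task outputs."""
    h = relu(W_shared @ profile)
    rf = W_rf @ h                             # RF waveform, n_time samples
    grads = (W_grad @ h).reshape(2, n_time)   # Gx and Gy waveforms
    return rf, grads

profile = rng.standard_normal(n_in)  # toy 8x8 target profile, flattened
rf, grads = design_pulse(profile)
print(rf.shape, grads.shape)  # (16,) (2, 16)
```

Because both heads read the same shared features, one inference pass produces all three waveforms at once, which is what makes the ~130 ms real-time design time plausible compared with running the full SLR optimization per scan.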
Affiliation(s)
- Yajing Zhang
- MR Clinical Science, Philips Healthcare (Suzhou), Suzhou, China
- Ke Jiang
- MSC Clinical & Technical Solutions, Philips Healthcare, Beijing, China
- Weiwei Jiang
- MR Clinical Science, Philips Healthcare (Suzhou), Suzhou, China
- Nan Wang
- Department of Radiology, the First Affiliated Hospital of Dalian Medical University, Dalian, China
- Alan J Wright
- Cancer Research UK Cambridge Institute, University of Cambridge, Li Ka Shing Centre, Cambridge, UK
- Ailian Liu
- Department of Radiology, the First Affiliated Hospital of Dalian Medical University, Dalian, China
- Jiazheng Wang
- MSC Clinical & Technical Solutions, Philips Healthcare, Beijing, China