1. Hsu LM, Wang S, Chang SW, Lee YL, Yang JT, Lin CP, Tsai YH. Automatic Segmentation of the Cisternal Segment of Trigeminal Nerve on MRI Using Deep Learning. Int J Biomed Imaging 2025;2025:6694599. PMID: 39989710; PMCID: PMC11847612; DOI: 10.1155/ijbi/6694599.
Abstract
Purpose: Accurate segmentation of the cisternal segment of the trigeminal nerve plays a critical role in identifying and treating different trigeminal nerve-related disorders, including trigeminal neuralgia (TN). However, the current manual segmentation process is prone to interobserver variability and consumes a significant amount of time. To overcome this challenge, we propose a deep learning-based approach, U-Net, that automatically segments the cisternal segment of the trigeminal nerve. Methods: To evaluate the efficacy of the proposed approach, the U-Net model was trained and validated on healthy control images and tested on a separate dataset of TN patients. Segmentation performance was assessed with the Dice coefficient, Jaccard index, positive predictive value (PPV), sensitivity (SEN), center-of-mass distance (CMD), and Hausdorff distance. Results: Our approach achieved high accuracy in segmenting the cisternal segment of the trigeminal nerve, demonstrating robust performance and results comparable to those obtained by participating radiologists. Conclusion: The proposed deep learning-based approach, U-Net, shows promise in improving the accuracy and efficiency of segmenting the cisternal segment of the trigeminal nerve. To the best of our knowledge, this is the first fully automated segmentation method for the trigeminal nerve on anatomical MRI, and it has the potential to aid in the diagnosis and treatment of various trigeminal nerve-related disorders, such as TN.
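The overlap metrics named in the Methods are standard and easy to reproduce. Purely as an illustration (not the authors' code; `overlap_metrics` is a hypothetical helper), the sketch below derives Dice, Jaccard, PPV, and SEN from the voxel-wise confusion counts of two binary masks; the distance-based metrics (CMD and Hausdorff) are typically computed separately from mask coordinates, e.g., with SciPy.

```python
import numpy as np

def overlap_metrics(pred, truth, eps=1e-8):
    """Overlap metrics between a predicted and a ground-truth binary mask."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    tp = np.logical_and(pred, truth).sum()    # voxels labeled nerve by both
    fp = np.logical_and(pred, ~truth).sum()   # predicted nerve, truth background
    fn = np.logical_and(~pred, truth).sum()   # missed nerve voxels
    dice = 2 * tp / (2 * tp + fp + fn + eps)
    jaccard = tp / (tp + fp + fn + eps)
    ppv = tp / (tp + fp + eps)                # positive predictive value
    sen = tp / (tp + fn + eps)                # sensitivity
    return dice, jaccard, ppv, sen
```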
Affiliation(s)
- Li-Ming Hsu
- Center for Animal Magnetic Resonance Imaging, The University of North Carolina at Chapel Hill, Chapel Hill, North Carolina, USA
- Department of Radiology, The University of North Carolina at Chapel Hill, Chapel Hill, North Carolina, USA
- Shuai Wang
- School of Cyberspace, Hangzhou Dianzi University, Hangzhou, China
- Sheng-Wei Chang
- Department of Diagnostic Radiology, Chang Gung Memorial Hospital, Chiayi, Chiayi, Taiwan
- College of Medicine, Chang Gung University, Taoyuan, Taiwan
- Yu-Li Lee
- Department of Diagnostic Radiology, Chang Gung Memorial Hospital, Chiayi, Chiayi, Taiwan
- Jen-Tsung Yang
- College of Medicine, Chang Gung University, Taoyuan, Taiwan
- Department of Neurosurgery, Chang Gung Memorial Hospital, Chiayi, Chiayi, Taiwan
- Ching-Po Lin
- Institute of Neuroscience, National Yang Ming Chiao Tung University, Taipei, Taiwan
- Department of Education and Research, Taipei City Hospital, Taipei, Taiwan
- Yuan-Hsiung Tsai
- Department of Diagnostic Radiology, Chang Gung Memorial Hospital, Chiayi, Chiayi, Taiwan
- College of Medicine, Chang Gung University, Taoyuan, Taiwan
2. Chen Y, Yu L, Wang JY, Panjwani N, Obeid JP, Liu W, Liu L, Kovalchuk N, Gensheimer MF, Vitzthum LK, Beadle BM, Chang DT, Le QT, Han B, Xing L. Adaptive Region-Specific Loss for Improved Medical Image Segmentation. IEEE Trans Pattern Anal Mach Intell 2023;45:13408-13421. PMID: 37363838; PMCID: PMC11346301; DOI: 10.1109/tpami.2023.3289667.
Abstract
Defining the loss function is an important part of neural network design and critically determines the success of deep learning modeling. A significant shortcoming of conventional loss functions is that they weight all regions in the input image volume equally, even though the system is known to be heterogeneous (i.e., some regions can achieve high prediction performance more easily than others). Here, we introduce a region-specific loss to lift the implicit assumption of homogeneous weighting for better learning. We divide the entire volume into multiple sub-regions, each with an individualized loss constructed for optimal local performance. Effectively, this scheme imposes higher weightings on the sub-regions that are more difficult to segment, and vice versa. Furthermore, the regional false positive and false negative errors are computed for each input image during a training step, and the regional penalty is adjusted accordingly to enhance the overall accuracy of the prediction. Using different public and in-house medical image datasets, we demonstrate that the proposed regionally adaptive loss paradigm outperforms conventional methods in multi-organ segmentation, without any modification to the neural network architecture or additional data preparation.
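To make the idea concrete, here is a hedged sketch of a regionally adaptive loss, not the paper's exact formulation: the grid size and the Tversky-style false-positive/false-negative weighting are assumptions; the shared idea is that each sub-region gets its own penalty, adjusted by its local error balance.

```python
import torch

def region_specific_loss(pred, target, grid=(2, 2, 2), eps=1e-6):
    """Tversky-style loss computed per sub-region of a (N, C, D, H, W) volume.
    pred holds probabilities, target the binary ground truth."""
    total = 0.0
    # split depth, height, and width into a coarse grid of sub-regions
    for p_d, t_d in zip(pred.chunk(grid[0], dim=-3), target.chunk(grid[0], dim=-3)):
        for p_h, t_h in zip(p_d.chunk(grid[1], dim=-2), t_d.chunk(grid[1], dim=-2)):
            for p, t in zip(p_h.chunk(grid[2], dim=-1), t_h.chunk(grid[2], dim=-1)):
                tp = (p * t).sum()
                fp = (p * (1 - t)).sum()
                fn = ((1 - p) * t).sum()
                # shift the penalty toward whichever error dominates this region;
                # detached so the weighting itself is not differentiated through
                alpha = ((fp + eps) / (fp + fn + 2 * eps)).detach()
                total = total + 1 - (tp + eps) / (tp + alpha * fp + (1 - alpha) * fn + eps)
    return total / (grid[0] * grid[1] * grid[2])
```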
3. Wu C, Fu L, Tian Z, Liu J, Song J, Guo W, Zhao Y, Zheng D, Jin Y, Yi D, Jiang X. LWMA-Net: Light-weighted morphology attention learning for human embryo grading. Comput Biol Med 2022;151:106242. PMID: 36436483; DOI: 10.1016/j.compbiomed.2022.106242.
Abstract
Visual inspection of embryo morphology is routinely used in embryo assessment and selection. However, due to the complexity of morphologies and the large inter- and intra-observer variance among embryologists, manual evaluation remains subjective and time-consuming. We therefore propose a light-weighted morphology attention learning network (LWMA-Net) to assist automated embryo grading. The LWMA-Net integrates a morphology attention module (MAM) to identify informative features and their locations, and a multiscale fusion module (MFM) to increase feature flow within the model. The LWMA-Net was trained on a primary set of 3599 embryos from 2318 couples clinically enrolled between Sep. 2016 and Dec. 2018, and achieved areas under the receiver operating characteristic curve (AUCs) of 96.88% and 97.58% on 4- and 3-category grading, respectively. An independent test set comprising 691 embryos from 321 couples enrolled between Jan. 2019 and Jan. 2021 was used to evaluate the model's value in assisting embryo grading. Five experienced embryologists were invited to regrade the embryos in the independent set with and without the aid of the LWMA-Net, three months apart. Embryologists aided by our LWMA-Net significantly improved their grading capability, with average AUCs improving by 4.98% and 5.32% on the 4- and 3-category grading tasks, respectively, suggesting the good potential of our LWMA-Net in assisted human reproduction.
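The MAM and MFM designs are specific to LWMA-Net and are not detailed here. Purely as a generic illustration of the "what/where" attention the abstract describes (all layer widths and the CBAM-like structure are assumptions, not the paper's architecture), a channel-plus-spatial gating block might look like this:

```python
import torch.nn as nn

class ChannelSpatialAttention(nn.Module):
    """Illustrative stand-in for a morphology attention module: channel gating
    reweights *what* (feature channels), spatial gating reweights *where*."""
    def __init__(self, channels, reduction=8):
        super().__init__()
        self.channel_gate = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(channels, channels // reduction, kernel_size=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, channels, kernel_size=1),
            nn.Sigmoid(),
        )
        self.spatial_gate = nn.Sequential(
            nn.Conv2d(channels, 1, kernel_size=7, padding=3),
            nn.Sigmoid(),
        )

    def forward(self, x):
        x = x * self.channel_gate(x)     # emphasize informative channels
        return x * self.spatial_gate(x)  # emphasize informative locations
```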
Affiliation(s)
- Chongwei Wu
- Department of Biomedical Engineering, School of Intelligent Medicine, China Medical University, Shenyang, 110122, China
- Langyuan Fu
- Department of Biomedical Engineering, School of Intelligent Medicine, China Medical University, Shenyang, 110122, China
- Zhiying Tian
- Key Laboratory of Reproductive Health and Medical Genetics, National Health and Family Planning Commission, Liaoning Research Institute of Family Planning, Shenyang, 110031, China
- Jiao Liu
- Department of Reproductive Medicine, Dalian Municipal Women and Children's Medical Center (Group), Dalian, 116083, China
- Jiangdian Song
- School of Medical Informatics, China Medical University, Shenyang, 110122, China
- Wei Guo
- College of Computer Science, Shenyang Aerospace University, Shenyang, 110136, China
- Yu Zhao
- Department of Reproductive Medicine, Dalian Municipal Women and Children's Medical Center (Group), Dalian, 116083, China
- Duo Zheng
- Department of Biomedical Engineering, School of Intelligent Medicine, China Medical University, Shenyang, 110122, China
- Ying Jin
- Key Laboratory of Reproductive Health and Medical Genetics, National Health and Family Planning Commission, Liaoning Research Institute of Family Planning, Shenyang, 110031, China
- Dongxu Yi
- Key Laboratory of Reproductive Health and Medical Genetics, National Health and Family Planning Commission, Liaoning Research Institute of Family Planning, Shenyang, 110031, China
- Xiran Jiang
- Department of Biomedical Engineering, School of Intelligent Medicine, China Medical University, Shenyang, 110122, China
4. Gao F, Hu M, Zhong ME, Feng S, Tian X, Meng X, Ni-jia-ti MYDL, Huang Z, Lv M, Song T, Zhang X, Zou X, Wu X. Segmentation only uses sparse annotations: Unified weakly and semi-supervised learning in medical images. Med Image Anal 2022;80:102515. DOI: 10.1016/j.media.2022.102515.
5. Hsu LM, Wang S, Walton L, Wang TWW, Lee SH, Shih YYI. 3D U-Net Improves Automatic Brain Extraction for Isotropic Rat Brain Magnetic Resonance Imaging Data. Front Neurosci 2021;15:801008. PMID: 34975392; PMCID: PMC8716693; DOI: 10.3389/fnins.2021.801008.
Abstract
Brain extraction is a critical pre-processing step in brain magnetic resonance imaging (MRI) analytical pipelines. In rodents, this is often achieved by manually editing brain masks slice-by-slice, a time-consuming task whose workload increases with higher spatial resolution datasets. We recently demonstrated successful automatic brain extraction via a deep-learning-based framework, U-Net, using 2D convolutions. However, such an approach cannot make use of the rich 3D spatial-context information in volumetric MRI data. In this study, we advanced our previously proposed U-Net architecture by replacing all 2D operations with their 3D counterparts and created a 3D U-Net framework. We trained and validated our model using a recently released CAMRI rat brain database acquired at isotropic spatial resolution, including T2-weighted turbo-spin-echo structural MRI and T2*-weighted echo-planar-imaging functional MRI. The performance of our 3D U-Net model was compared with existing rodent brain extraction tools, including Rapid Automatic Tissue Segmentation, Pulse-Coupled Neural Network, SHape descriptor selected External Regions after Morphologically filtering, and our previously proposed 2D U-Net model. 3D U-Net demonstrated superior performance in Dice, Jaccard, center-of-mass distance, Hausdorff distance, and sensitivity. Additionally, we demonstrated the reliability of 3D U-Net under various noise levels, evaluated optimal training sample sizes, and disseminated all source code publicly, with the hope that this approach will benefit the rodent MRI research community. Significant Methodological Contribution: We proposed a deep-learning-based framework to automatically identify rodent brain boundaries in MRI. With a fully 3D convolutional network model, 3D U-Net, our proposed method demonstrated improved performance compared to current automatic brain extraction methods, as shown by several quantitative metrics (Dice, Jaccard, PPV, SEN, and Hausdorff). We trust that this tool will avoid human bias and streamline pre-processing steps during high-resolution 3D rodent brain MRI data analysis. The software developed herein has been disseminated freely to the community.
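The key architectural change the abstract describes is mechanical: every 2D operation is swapped for its 3D counterpart. A minimal sketch of one encoder stage in PyTorch (the layer widths are illustrative, not the authors' exact configuration):

```python
import torch.nn as nn

def conv_block_3d(in_ch, out_ch):
    """One 3D U-Net encoder stage: the 2D layers of the original model
    (Conv2d/BatchNorm2d) are simply replaced by their 3D counterparts."""
    return nn.Sequential(
        nn.Conv3d(in_ch, out_ch, kernel_size=3, padding=1),
        nn.BatchNorm3d(out_ch),
        nn.ReLU(inplace=True),
        nn.Conv3d(out_ch, out_ch, kernel_size=3, padding=1),
        nn.BatchNorm3d(out_ch),
        nn.ReLU(inplace=True),
    )
```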
Affiliation(s)
- Li-Ming Hsu
- Center for Animal Magnetic Resonance Imaging, The University of North Carolina at Chapel Hill, Chapel Hill, NC, United States
- Biomedical Research Imaging Center, The University of North Carolina at Chapel Hill, Chapel Hill, NC, United States
- Department of Radiology, The University of North Carolina at Chapel Hill, Chapel Hill, NC, United States
- Shuai Wang
- School of Mechanical, Electrical and Information Engineering, Shandong University, Weihai, China
- Lindsay Walton
- Center for Animal Magnetic Resonance Imaging, The University of North Carolina at Chapel Hill, Chapel Hill, NC, United States
- Biomedical Research Imaging Center, The University of North Carolina at Chapel Hill, Chapel Hill, NC, United States
- Department of Radiology, The University of North Carolina at Chapel Hill, Chapel Hill, NC, United States
- Tzu-Wen Winnie Wang
- Center for Animal Magnetic Resonance Imaging, The University of North Carolina at Chapel Hill, Chapel Hill, NC, United States
- Biomedical Research Imaging Center, The University of North Carolina at Chapel Hill, Chapel Hill, NC, United States
- Department of Radiology, The University of North Carolina at Chapel Hill, Chapel Hill, NC, United States
- Sung-Ho Lee
- Center for Animal Magnetic Resonance Imaging, The University of North Carolina at Chapel Hill, Chapel Hill, NC, United States
- Biomedical Research Imaging Center, The University of North Carolina at Chapel Hill, Chapel Hill, NC, United States
- Department of Radiology, The University of North Carolina at Chapel Hill, Chapel Hill, NC, United States
- Yen-Yu Ian Shih
- Center for Animal Magnetic Resonance Imaging, The University of North Carolina at Chapel Hill, Chapel Hill, NC, United States
- Biomedical Research Imaging Center, The University of North Carolina at Chapel Hill, Chapel Hill, NC, United States
- Department of Radiology, The University of North Carolina at Chapel Hill, Chapel Hill, NC, United States
6. Zhang Z, Zhao T, Gay H, Zhang W, Sun B. Semi-supervised semantic segmentation of prostate and organs-at-risk on 3D pelvic CT images. Biomed Phys Eng Express 2021;7. PMID: 34525455; DOI: 10.1088/2057-1976/ac26e8.
Abstract
The recent development of deep learning approaches has revolutionized medical data processing, including semantic segmentation, by dramatically improving performance. Automated segmentation can assist radiotherapy treatment planning by saving manual contouring effort and reducing intra-observer and inter-observer variations. However, training effective deep learning models usually requires a large amount of high-quality labeled data, which is often costly to collect. We developed a novel semi-supervised adversarial deep learning approach for 3D pelvic CT image semantic segmentation. Unlike supervised deep learning methods, the new approach can utilize both annotated and un-annotated data for training. It generates un-annotated synthetic data through a data augmentation scheme based on generative adversarial networks (GANs). We applied the new approach to segmenting multiple organs in male pelvic CT images. CT images without annotations and GAN-synthesized un-annotated images were used in semi-supervised learning. Experimental results, evaluated by three metrics (Dice similarity coefficient, average Hausdorff distance, and average surface Hausdorff distance), showed that the new method achieved comparable performance with substantially fewer annotated images, or better performance with the same amount of annotated data, outperforming existing state-of-the-art methods.
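As a rough sketch of how such semi-supervised adversarial training couples the two data streams (an assumption-laden illustration, not the paper's implementation; `segmenter` and `discriminator` are hypothetical modules and `lam` is an assumed weighting):

```python
import torch
import torch.nn.functional as F

def semi_supervised_step(segmenter, discriminator, labeled, masks, unlabeled, lam=0.1):
    """One generator-side update: supervised loss on annotated images plus an
    adversarial term on images without annotations (real or GAN-synthesized)."""
    # standard supervised segmentation loss on the annotated stream
    sup_loss = F.cross_entropy(segmenter(labeled), masks)
    # the discriminator scores predicted label maps; the segmenter is trained
    # to make its un-annotated predictions look "real" to the discriminator
    adv_scores = discriminator(torch.softmax(segmenter(unlabeled), dim=1))
    adv_loss = F.binary_cross_entropy_with_logits(
        adv_scores, torch.ones_like(adv_scores))
    return sup_loss + lam * adv_loss
```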
Affiliation(s)
- Zhuangzhuang Zhang
- Department of Computer Science and Engineering, Washington University, One Brookings Drive, Campus Box 1045, St. Louis, MO 63130, United States of America
- Tianyu Zhao
- Department of Radiation Oncology, Washington University School of Medicine, 4921 Parkview Place, Campus Box 8224, St. Louis, MO 63110, United States of America
- Hiram Gay
- Department of Radiation Oncology, Washington University School of Medicine, 4921 Parkview Place, Campus Box 8224, St. Louis, MO 63110, United States of America
- Weixiong Zhang
- Department of Computer Science and Engineering, Washington University, One Brookings Drive, Campus Box 1045, St. Louis, MO 63130, United States of America
- Baozhou Sun
- Department of Radiation Oncology, Washington University School of Medicine, 4921 Parkview Place, Campus Box 8224, St. Louis, MO 63110, United States of America
7. Radiotherapy planning parameters correlate with changes in the peripheral immune status of patients undergoing curative radiotherapy for localized prostate cancer. Cancer Immunol Immunother 2021;71:541-552. PMID: 34269847; PMCID: PMC8854140; DOI: 10.1007/s00262-021-03002-6.
Abstract
Purpose: The influence of radiotherapy on patient immune cell subsets has been established by several groups. Following a previously published analysis of immune changes during and after curative radiotherapy for prostate cancer, this analysis focused on correlating changes in immune cell subsets with radiation treatment parameters. Patients and methods: For 13 patients treated in a prospective trial with radiotherapy to the prostate region (primary analysis) and five patients treated with radiotherapy to the prostate and pelvic nodal regions (exploratory analysis), previously published immune monitoring data were correlated with clinical data as well as radiation planning parameters such as the clinical target volume (CTV) and the volumes receiving 20 Gy (V20) of newly contoured pelvic blood vessel and bone marrow volumes. Results: The most significant changes among immune cell subsets were observed at the end of radiotherapy. In contrast, correlations of age with CD8+ subsets (effector and memory cells) were observed early during and 3 months after radiotherapy. Ratios of T cells and T cell proliferation relative to baseline correlated with CTV. Early changes in regulatory T cells (Treg cells) and CD8+ effector T cells correlated with the V20 of blood vessel and bone volumes. Conclusions: Patient age as well as radiotherapy planning parameters correlated with immune changes during radiotherapy. Larger irradiated volumes appear to correlate with early suppression of anti-cancer immunity. For immune cell analysis during normofractionated radiotherapy and correlation with treatment planning parameters, different time points should be examined in future studies. Trial registration: NCT01376674 (registered 20.06.2011).
8. Xu X, Lian C, Wang S, Zhu T, Chen RC, Wang AZ, Royce TJ, Yap PT, Shen D, Lian J. Asymmetric multi-task attention network for prostate bed segmentation in computed tomography images. Med Image Anal 2021;72:102116. PMID: 34217953; DOI: 10.1016/j.media.2021.102116.
Abstract
Post-prostatectomy radiotherapy requires accurate annotation of the prostate bed (PB), i.e., the residual tissue after operative removal of the prostate gland, to minimize side effects on surrounding organs-at-risk (OARs). However, PB segmentation in computed tomography (CT) images is a challenging task, even for experienced physicians, because the PB is almost a "virtual" target with non-contrast boundaries and highly variable shapes depending on neighboring OARs. In this work, we propose an asymmetric multi-task attention network (AMTA-Net) for the concurrent segmentation of the PB and surrounding OARs. Our AMTA-Net mimics experts in delineating the non-contrast PB by explicitly leveraging its critical dependency on the neighboring OARs (i.e., the bladder and rectum), which are relatively easy to distinguish in CT images. Specifically, we first adopt a U-Net as the backbone network for the low-level (or prerequisite) task of OAR segmentation. Then, we build an attention sub-network upon the backbone U-Net with a series of cascaded attention modules, which hierarchically transfer the OAR features and adaptively learn discriminative representations for the high-level (or primary) task of PB segmentation. We comprehensively evaluated the proposed AMTA-Net on a clinical dataset of 186 CT images. According to the experimental results, our AMTA-Net significantly outperforms the current clinical state of the art (i.e., atlas-based segmentation methods), indicating its value in reducing time and labor in the clinical workflow. Our AMTA-Net also outperforms the technical state of the art (i.e., deep learning-based segmentation methods), especially on the most indistinguishable and clinically critical parts of the PB boundary. Source code is released at https://github.com/superxuang/amta-net.
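The released implementation is available at the GitHub link above. Purely as a schematic of the core idea (features from the easy, low-level OAR task gating the hard, high-level PB task; the 1x1 gate and residual path are my assumptions, not the AMTA-Net design), one attention step might be sketched as:

```python
import torch.nn as nn

class AttentionTransfer(nn.Module):
    """One illustrative attention step: OAR-task features spatially gate
    the prostate-bed-task features."""
    def __init__(self, channels):
        super().__init__()
        self.gate = nn.Sequential(
            nn.Conv2d(channels, channels, kernel_size=1),
            nn.Sigmoid(),
        )

    def forward(self, pb_feat, oar_feat):
        # attention derived from the easier-to-segment bladder/rectum features,
        # with a residual path so un-attended information is not lost
        return pb_feat * self.gate(oar_feat) + pb_feat
```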
Affiliation(s)
- Xuanang Xu
- Department of Radiology and Biomedical Research Imaging Center, The University of North Carolina at Chapel Hill, Chapel Hill, NC 27599, USA
- Chunfeng Lian
- Department of Radiology and Biomedical Research Imaging Center, The University of North Carolina at Chapel Hill, Chapel Hill, NC 27599, USA; School of Mathematics and Statistics, Xi'an Jiaotong University, Xi'an, Shaanxi 710049, China
- Shuai Wang
- Department of Radiology and Biomedical Research Imaging Center, The University of North Carolina at Chapel Hill, Chapel Hill, NC 27599, USA; School of Mechanical, Electrical and Information Engineering, Shandong University, Weihai, Shandong 264209, China
- Tong Zhu
- Department of Radiation Oncology, The University of North Carolina at Chapel Hill, Chapel Hill, NC 27599, USA
- Ronald C Chen
- Department of Radiation Oncology, University of Kansas Medical Center, Kansas City, KS 66160, USA
- Andrew Z Wang
- Department of Radiation Oncology, The University of North Carolina at Chapel Hill, Chapel Hill, NC 27599, USA
- Trevor J Royce
- Department of Radiation Oncology, The University of North Carolina at Chapel Hill, Chapel Hill, NC 27599, USA
- Pew-Thian Yap
- Department of Radiology and Biomedical Research Imaging Center, The University of North Carolina at Chapel Hill, Chapel Hill, NC 27599, USA
- Dinggang Shen
- School of Biomedical Engineering, ShanghaiTech University, Shanghai 201210, China; Shanghai United Imaging Intelligence Co., Ltd., Shanghai 200030, China; Department of Artificial Intelligence, Korea University, Seoul 02841, Republic of Korea
- Jun Lian
- Department of Radiation Oncology, The University of North Carolina at Chapel Hill, Chapel Hill, NC 27599, USA
9. Wang S, Cong Y, Zhu H, Chen X, Qu L, Fan H, Zhang Q, Liu M. Multi-Scale Context-Guided Deep Network for Automated Lesion Segmentation With Endoscopy Images of Gastrointestinal Tract. IEEE J Biomed Health Inform 2021;25:514-525. PMID: 32750912; DOI: 10.1109/jbhi.2020.2997760.
Abstract
Accurate lesion segmentation based on endoscopy images is a fundamental task for the automated diagnosis of gastrointestinal tract (GI Tract) diseases. Previous studies usually use hand-crafted features for representing endoscopy images, while feature definition and lesion segmentation are treated as two standalone tasks. Due to the possible heterogeneity between features and segmentation models, these methods often result in sub-optimal performance. Several fully convolutional networks have been recently developed to jointly perform feature learning and model training for GI Tract disease diagnosis. However, they generally ignore local spatial details of endoscopy images, as down-sampling operations (e.g., pooling and convolutional striding) may result in irreversible loss of image spatial information. To this end, we propose a multi-scale context-guided deep network (MCNet) for end-to-end lesion segmentation of endoscopy images in GI Tract, where both global and local contexts are captured as guidance for model training. Specifically, one global subnetwork is designed to extract the global structure and high-level semantic context of each input image. Then we further design two cascaded local subnetworks based on output feature maps of the global subnetwork, aiming to capture both local appearance information and relatively high-level semantic information in a multi-scale manner. Those feature maps learned by three subnetworks are further fused for the subsequent task of lesion segmentation. We have evaluated the proposed MCNet on 1,310 endoscopy images from the public EndoVis-Ab and CVC-ClinicDB datasets for abnormal segmentation and polyp segmentation, respectively. Experimental results demonstrate that MCNet achieves [Formula: see text] and [Formula: see text] mean intersection over union (mIoU) on two datasets, respectively, outperforming several state-of-the-art approaches in automated lesion segmentation with endoscopy images of GI Tract.
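As an illustrative sketch of the fusion step the abstract describes (not MCNet's code; the resizing strategy and 1x1 reduction are assumptions), feature maps from the global and local subnetworks can be merged like this, where `in_channels` is the sum of the channel counts of all fused maps:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class MultiScaleFusion(nn.Module):
    """Resize global- and local-subnetwork feature maps to a common
    resolution, concatenate them, and reduce with a 1x1 convolution."""
    def __init__(self, in_channels, out_channels):
        super().__init__()
        self.reduce = nn.Conv2d(in_channels, out_channels, kernel_size=1)

    def forward(self, global_feat, local_feats):
        size = local_feats[-1].shape[-2:]  # fuse at the finest local resolution
        resized = [F.interpolate(f, size=size, mode="bilinear", align_corners=False)
                   for f in (global_feat, *local_feats)]
        return self.reduce(torch.cat(resized, dim=1))
```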
10. Hsu LM, Wang S, Ranadive P, Ban W, Chao THH, Song S, Cerri DH, Walton LR, Broadwater MA, Lee SH, Shen D, Shih YYI. Automatic Skull Stripping of Rat and Mouse Brain MRI Data Using U-Net. Front Neurosci 2020;14:568614. PMID: 33117118; PMCID: PMC7575753; DOI: 10.3389/fnins.2020.568614.
Abstract
Accurate removal of magnetic resonance imaging (MRI) signal outside the brain, a.k.a. skull stripping, is a key step in brain image pre-processing pipelines. In rodents, this is mostly achieved by manually editing a brain mask, which is time-consuming and operator dependent. Automating this step is particularly challenging in rodents as compared to humans because of differences in brain/scalp tissue geometry, image resolution with respect to brain-scalp distance, and tissue contrast around the skull. In this study, we proposed a deep-learning-based framework, U-Net, to automatically identify the rodent brain boundaries in MR images. The U-Net method is robust against inter-subject variability and eliminates operator dependence. To benchmark the efficiency of this method, we trained and validated our model using both in-house collected and publicly available datasets. In comparison to current state-of-the-art methods, our approach achieved a superior average Dice similarity coefficient against ground truth for both T2-weighted rapid acquisition with relaxation enhancement and T2*-weighted echo-planar imaging data in both rats and mice (all p < 0.05), demonstrating the robust performance of our approach across various MRI protocols.
Affiliation(s)
- Li-Ming Hsu
- Center for Animal Magnetic Resonance Imaging, The University of North Carolina at Chapel Hill, Chapel Hill, NC, United States
- Biomedical Research Imaging Center, The University of North Carolina at Chapel Hill, Chapel Hill, NC, United States
- Department of Neurology, The University of North Carolina at Chapel Hill, Chapel Hill, NC, United States
- Department of Radiology, The University of North Carolina at Chapel Hill, Chapel Hill, NC, United States
- Shuai Wang
- Biomedical Research Imaging Center, The University of North Carolina at Chapel Hill, Chapel Hill, NC, United States
- Department of Radiology, The University of North Carolina at Chapel Hill, Chapel Hill, NC, United States
- Paridhi Ranadive
- Center for Animal Magnetic Resonance Imaging, The University of North Carolina at Chapel Hill, Chapel Hill, NC, United States
- Woomi Ban
- Center for Animal Magnetic Resonance Imaging, The University of North Carolina at Chapel Hill, Chapel Hill, NC, United States
- Biomedical Research Imaging Center, The University of North Carolina at Chapel Hill, Chapel Hill, NC, United States
- Tzu-Hao Harry Chao
- Center for Animal Magnetic Resonance Imaging, The University of North Carolina at Chapel Hill, Chapel Hill, NC, United States
- Biomedical Research Imaging Center, The University of North Carolina at Chapel Hill, Chapel Hill, NC, United States
- Department of Neurology, The University of North Carolina at Chapel Hill, Chapel Hill, NC, United States
- Sheng Song
- Center for Animal Magnetic Resonance Imaging, The University of North Carolina at Chapel Hill, Chapel Hill, NC, United States
- Biomedical Research Imaging Center, The University of North Carolina at Chapel Hill, Chapel Hill, NC, United States
- Department of Neurology, The University of North Carolina at Chapel Hill, Chapel Hill, NC, United States
- Domenic Hayden Cerri
- Center for Animal Magnetic Resonance Imaging, The University of North Carolina at Chapel Hill, Chapel Hill, NC, United States
- Biomedical Research Imaging Center, The University of North Carolina at Chapel Hill, Chapel Hill, NC, United States
- Department of Neurology, The University of North Carolina at Chapel Hill, Chapel Hill, NC, United States
- Lindsay R. Walton
- Center for Animal Magnetic Resonance Imaging, The University of North Carolina at Chapel Hill, Chapel Hill, NC, United States
- Biomedical Research Imaging Center, The University of North Carolina at Chapel Hill, Chapel Hill, NC, United States
- Department of Neurology, The University of North Carolina at Chapel Hill, Chapel Hill, NC, United States
- Margaret A. Broadwater
- Center for Animal Magnetic Resonance Imaging, The University of North Carolina at Chapel Hill, Chapel Hill, NC, United States
- Biomedical Research Imaging Center, The University of North Carolina at Chapel Hill, Chapel Hill, NC, United States
- Department of Neurology, The University of North Carolina at Chapel Hill, Chapel Hill, NC, United States
- Sung-Ho Lee
- Center for Animal Magnetic Resonance Imaging, The University of North Carolina at Chapel Hill, Chapel Hill, NC, United States
- Biomedical Research Imaging Center, The University of North Carolina at Chapel Hill, Chapel Hill, NC, United States
- Department of Neurology, The University of North Carolina at Chapel Hill, Chapel Hill, NC, United States
- Dinggang Shen
- Biomedical Research Imaging Center, The University of North Carolina at Chapel Hill, Chapel Hill, NC, United States
- Department of Radiology, The University of North Carolina at Chapel Hill, Chapel Hill, NC, United States
- Department of Brain and Cognitive Engineering, Korea University, Seoul, South Korea
- Yen-Yu Ian Shih
- Center for Animal Magnetic Resonance Imaging, The University of North Carolina at Chapel Hill, Chapel Hill, NC, United States
- Biomedical Research Imaging Center, The University of North Carolina at Chapel Hill, Chapel Hill, NC, United States
- Department of Neurology, The University of North Carolina at Chapel Hill, Chapel Hill, NC, United States