1
Chen Q, Zhang J, Meng R, Zhou L, Li Z, Feng Q, Shen D. Modality-Specific Information Disentanglement From Multi-Parametric MRI for Breast Tumor Segmentation and Computer-Aided Diagnosis. IEEE Trans Med Imaging 2024; 43:1958-1971. [PMID: 38206779] [DOI: 10.1109/tmi.2024.3352648]
Abstract
Breast cancer is a significant global health challenge, causing millions of fatalities annually. Magnetic Resonance Imaging (MRI) can provide various sequences for characterizing tumor morphology and internal patterns, and has become an effective tool for the detection and diagnosis of breast tumors. However, previous deep-learning-based tumor segmentation methods for multi-parametric MRI remain limited in exploring inter-modality information and in focusing on the most task-informative modalities. To address these shortcomings, we propose a Modality-Specific Information Disentanglement (MoSID) framework that extracts both inter- and intra-modality attention maps as prior knowledge for guiding tumor segmentation. Specifically, by disentangling modality-specific information, the MoSID framework generates modality-specific attention maps that provide complementary clues for the segmentation task, guiding modality selection and inter-modality evaluation. Our experiments on two 3D breast datasets and one 2D prostate dataset demonstrate that the MoSID framework outperforms other state-of-the-art multi-modality segmentation methods, even when modalities are missing. Based on the segmented lesions, we further train a classifier to predict patients' response to radiotherapy. The prediction accuracy is comparable to that obtained with manually segmented tumors, indicating the robustness and effectiveness of the proposed segmentation method. The code is available at https://github.com/Qianqian-Chen/MoSID.
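The abstract does not detail how MoSID's attention maps steer the fusion, but the general mechanism it alludes to (attention-weighted modality fusion) can be sketched; the scoring and weighting below are illustrative assumptions, not the authors' implementation.

```python
import math

def softmax(xs):
    """Numerically stable softmax over a list of scores."""
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

def fuse_modalities(features, scores):
    """Weight each modality's feature vector by a softmax attention
    score and sum them into a single fused representation."""
    weights = softmax(scores)
    fused = [0.0] * len(features[0])
    for w, feat in zip(weights, features):
        for i, v in enumerate(feat):
            fused[i] += w * v
    return fused, weights
```

Here each modality's feature vector is weighted by a softmax over relevance scores, so a more informative modality (higher score) dominates the fused representation.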
2
Morano J, Aresta G, Grechenig C, Schmidt-Erfurth U, Bogunovic H. Deep Multimodal Fusion of Data With Heterogeneous Dimensionality via Projective Networks. IEEE J Biomed Health Inform 2024; 28:2235-2246. [PMID: 38206782] [DOI: 10.1109/jbhi.2024.3352970]
Abstract
The use of multimodal imaging has led to significant improvements in the diagnosis and treatment of many diseases. Mirroring clinical practice, several works have demonstrated the benefits of multimodal fusion for automatic segmentation and classification using deep learning-based methods. However, current segmentation methods are limited to the fusion of modalities with the same dimensionality (e.g., 3D + 3D, 2D + 2D), which is not always possible, and the fusion strategies implemented by classification methods are incompatible with localization tasks. In this work, we propose a novel deep learning-based framework for the fusion of multimodal data with heterogeneous dimensionality (e.g., 3D + 2D) that is compatible with localization tasks. The proposed framework extracts the features of the different modalities and projects them into a common feature subspace. The projected features are then fused and further processed to obtain the final prediction. The framework was validated on the following tasks: segmentation of geographic atrophy (GA), a late-stage manifestation of age-related macular degeneration, and segmentation of retinal blood vessels (RBV) in multimodal retinal imaging. Our results show that the proposed method outperforms the state-of-the-art monomodal methods on GA and RBV segmentation by up to 3.10% and 4.64% Dice, respectively.
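As a rough illustration of fusing heterogeneously dimensioned data, one can project a 3D feature volume onto the 2D plane before fusing it with a 2D modality's features. The mean-pooling projection below is an illustrative assumption; the paper's projective networks learn this mapping rather than fixing it to an average.

```python
def project_volume(vol):
    """Collapse a 3D feature volume (list of 2D slices) onto the 2D
    plane by averaging along the depth axis."""
    depth = len(vol)
    rows, cols = len(vol[0]), len(vol[0][0])
    return [[sum(vol[d][r][c] for d in range(depth)) / depth
             for c in range(cols)] for r in range(rows)]

def fuse_2d(a, b):
    """Fuse two same-shaped 2D feature maps by channel concatenation
    (represented here as per-pixel pairs)."""
    return [[(a[r][c], b[r][c]) for c in range(len(a[0]))]
            for r in range(len(a))]
```

Once both modalities live in the same 2D subspace, any standard fusion (concatenation, as here, or summation) and a localization head can follow.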
3
Wang Z, Zhu H, Huang B, Wang Z, Lu W, Chen N, Wang Y. M-MSSEU: source-free domain adaptation for multi-modal stroke lesion segmentation using shadowed sets and evidential uncertainty. Health Inf Sci Syst 2023; 11:46. [PMID: 37780536] [PMCID: PMC10539264] [DOI: 10.1007/s13755-023-00247-6]
Abstract
Due to the unavailability of source-domain data in unsupervised domain adaptation, source-free domain adaptation (SFDA) has attracted a growing number of studies in recent years. To better solve the SFDA problem and effectively leverage the multi-modal information in medical images, this paper presents a novel SFDA method for multi-modal stroke lesion segmentation in which evidential deep learning is used in place of a conventional convolutional neural network. Specifically, for multi-modal stroke images, we design a multi-modal opinion fusion module that uses Dempster-Shafer evidence theory for decision fusion of the different modalities. In addition, for the SFDA problem, we adopt pseudo-label learning, obtaining pseudo labels from the pre-trained source model to drive the adaptation process. To address the unreliability of pseudo labels caused by domain shift, we propose a pseudo-label filtering scheme using shadowed sets theory and a pseudo-label refining scheme using evidential uncertainty. These two schemes automatically extract the unreliable parts of pseudo labels and jointly improve their quality at low computational cost. Experiments on two multi-modal stroke lesion datasets demonstrate the superiority of our method over other state-of-the-art SFDA methods.
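The decision-fusion step rests on Dempster's rule of combination. A minimal sketch over a two-class frame {'lesion', 'background'} follows; the frame and mass values are illustrative, not taken from the paper.

```python
def dempster_combine(m1, m2):
    """Combine two mass functions over the frame {'lesion','background'}
    (with 'theta' as the universal set) using Dempster's rule:
    products of masses on intersecting sets are accumulated, conflict
    (empty intersections) is discarded, and the result is renormalised."""
    keys = ('lesion', 'background', 'theta')
    combined = {}
    conflict = 0.0
    for a in keys:
        for b in keys:
            prod = m1[a] * m2[b]
            if a == b:
                meet = a
            elif a == 'theta':
                meet = b
            elif b == 'theta':
                meet = a
            else:  # lesion vs background: empty intersection
                meet = None
            if meet is None:
                conflict += prod
            else:
                combined[meet] = combined.get(meet, 0.0) + prod
    return {k: v / (1.0 - conflict) for k, v in combined.items()}
```

Two modalities that independently lean toward 'lesion' reinforce each other: combining two identical opinions with 0.6 mass on 'lesion' yields a combined lesion mass of about 0.79.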
Affiliation(s)
- Zhicheng Wang
- School of Information Science and Engineering, East China University of Science and Technology, No.130 Meilong Road, Shanghai, 200237 China
- Hongqing Zhu
- School of Information Science and Engineering, East China University of Science and Technology, No.130 Meilong Road, Shanghai, 200237 China
- Bingcang Huang
- Department of Radiology, Gongli Hospital of Shanghai Pudong New Area, Shanghai, 200135 China
- Ziying Wang
- School of Information Science and Engineering, East China University of Science and Technology, No.130 Meilong Road, Shanghai, 200237 China
- Weiping Lu
- Department of Radiology, Gongli Hospital of Shanghai Pudong New Area, Shanghai, 200135 China
- Ning Chen
- School of Information Science and Engineering, East China University of Science and Technology, No.130 Meilong Road, Shanghai, 200237 China
- Ying Wang
- Shanghai Health Commission Key Lab of Artificial Intelligence (AI)-Based Management of Inflammation and Chronic Diseases, Sino-French Cooperative Central Lab, Gongli Hospital of Shanghai Pudong New Area, Shanghai, 200135 China
4
Petrov Y, Malik B, Fredrickson J, Jemaa S, Carano RAD. Deep Ensembles Are Robust to Occasional Catastrophic Failures of Individual DNNs for Organs Segmentations in CT Images. J Digit Imaging 2023; 36:2060-2074. [PMID: 37291384] [PMCID: PMC10502003] [DOI: 10.1007/s10278-023-00857-2]
Abstract
Deep neural networks (DNNs) have recently shown remarkable performance in various computer vision tasks, including classification and segmentation of medical images. Deep ensembles (aggregated predictions of multiple DNNs) have been shown to improve performance in various classification tasks. Here we explore how deep ensembles perform in the image segmentation task, in particular, organ segmentation in CT (Computed Tomography) images. Ensembles of V-Nets were trained to segment multiple organs using several in-house and publicly available clinical studies. The ensemble segmentations were tested on images from a different set of studies, and the effects of ensemble size and other ensemble parameters were explored for various organs. Compared to single models, deep ensembles significantly improved the average segmentation accuracy, especially for those organs where the accuracy was lower. More importantly, deep ensembles strongly reduced both the occasional "catastrophic" segmentation failures characteristic of single models and the image-to-image variability of segmentation accuracy. To quantify this, we defined "high-risk images": images for which at least one model produced an outlier metric (in the lowest 5th percentile). These images comprised about 12% of the test images across all organs. Ensembles performed without outliers for 68%-100% of the "high-risk images", depending on the performance metric used.
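The two ingredients described above (averaging member predictions, and flagging "high-risk" images via a per-model lowest-5th-percentile cutoff) can be sketched as follows; the 0.5 threshold and the exact cutoff rule are illustrative, not the paper's precise procedure.

```python
def ensemble_mean(prob_maps):
    """Average per-voxel probabilities from several models and
    threshold at 0.5 to obtain the ensemble mask."""
    n = len(prob_maps)
    return [int(sum(ps) / n >= 0.5) for ps in zip(*prob_maps)]

def high_risk_images(scores_per_model, pct=0.05):
    """Flag an image as 'high risk' if at least one model scored it
    in that model's lowest `pct` fraction of per-image scores."""
    n_imgs = len(scores_per_model[0])
    risky = set()
    for scores in scores_per_model:
        cutoff = sorted(scores)[max(0, int(pct * n_imgs) - 1)]
        for i, s in enumerate(scores):
            if s <= cutoff:
                risky.add(i)
    return risky
```

Averaging washes out a single member's catastrophic prediction, which is why the ensemble rarely produces an outlier even on images where one member fails.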
Affiliation(s)
- Yury Petrov
- Genentech, Inc., 1 DNA Way, South San Francisco, CA, 94080, USA.
- Bilal Malik
- Genentech, Inc., 1 DNA Way, South San Francisco, CA, 94080, USA
- Skander Jemaa
- Genentech, Inc., 1 DNA Way, South San Francisco, CA, 94080, USA
5
Fischer M, Hepp T, Gatidis S, Yang B. Self-supervised contrastive learning with random walks for medical image segmentation with limited annotations. Comput Med Imaging Graph 2023; 104:102174. [PMID: 36640485] [DOI: 10.1016/j.compmedimag.2022.102174]
Abstract
Medical image segmentation has seen significant progress through the use of supervised deep learning, whereby large annotated datasets are employed to reliably segment anatomical structures. To reduce the requirement for annotated training data, self-supervised pre-training strategies on non-annotated data have been designed; in particular, contrastive learning schemes operating on dense pixel-wise representations have been introduced as an effective tool. In this work, we expand on this strategy and leverage inherent anatomical similarities in medical imaging data. We apply our approach to the task of semantic segmentation in a semi-supervised setting with limited amounts of annotated volumes. Trained alongside a segmentation loss in a single training stage, a contrastive loss helps to differentiate between salient anatomical regions that conform to the available annotations. Our approach builds upon the work of Jabri et al. (2020), who proposed cyclical contrastive random walks (CCRW) for self-supervision on palindromes of video frames. We adapt this scheme to operate on entries of paired embedded image slices. Using paths of cyclical random walks bypasses the need for negative samples, as commonly used in contrastive approaches, enabling the algorithm to discriminate among relevant salient (anatomical) regions implicitly. Further, a multi-level supervision strategy is employed, ensuring adequate representations of local and global characteristics of anatomical structures. The effectiveness of reducing the amount of required annotations is shown on three MRI datasets. A median increase of 8.01 and 5.90 percentage points in the Dice Similarity Coefficient (DSC) compared to our baseline was achieved across all three datasets in the case of one and two available annotated examples per dataset.
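The cycle-consistency idea behind CCRW can be sketched with plain arrays: walk from the entries of slice A to slice B along softmax affinities and back, and penalise round trips that fail to return to the starting entry. This toy version (cross-entropy against the identity on a raw similarity matrix) is a simplified reading of the scheme, not the authors' code.

```python
import math

def softmax_rows(sim):
    """Row-wise softmax, turning similarities into transition probabilities."""
    out = []
    for row in sim:
        m = max(row)
        exps = [math.exp(v - m) for v in row]
        s = sum(exps)
        out.append([e / s for e in exps])
    return out

def matmul(a, b):
    return [[sum(a[i][k] * b[k][j] for k in range(len(b)))
             for j in range(len(b[0]))] for i in range(len(a))]

def cycle_loss(sim_ab):
    """Walk A -> B -> A along softmax affinities; the loss is the mean
    negative log-probability of returning to the starting entry, so the
    round-trip matrix is pushed toward the identity."""
    fwd = softmax_rows(sim_ab)
    bwd = softmax_rows([list(col) for col in zip(*sim_ab)])
    roundtrip = matmul(fwd, bwd)
    n = len(roundtrip)
    return -sum(math.log(roundtrip[i][i]) for i in range(n)) / n
```

A sharply diagonal similarity matrix (entries already matched across slices) gives a much lower loss than a uniform one, which is exactly the gradient signal that shapes the embeddings without any negative samples.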
Affiliation(s)
- Marc Fischer
- Institute of Signal Processing and System Theory, University of Stuttgart, 70550 Stuttgart, Germany.
- Tobias Hepp
- Max Planck Institute for Intelligent Systems, 72076 Tübingen, Germany
- Sergios Gatidis
- Max Planck Institute for Intelligent Systems, 72076 Tübingen, Germany
- Bin Yang
- Institute of Signal Processing and System Theory, University of Stuttgart, 70550 Stuttgart, Germany
6
Fully Automated Segmentation Models of Supratentorial Meningiomas Assisted by Inclusion of Normal Brain Images. J Imaging 2022; 8:327. [PMID: 36547492] [PMCID: PMC9782766] [DOI: 10.3390/jimaging8120327]
Abstract
To train an automatic brain tumor segmentation model, a large amount of data is required. In this paper, we proposed a strategy to overcome the limited amount of clinically collected magnetic resonance imaging (MRI) data on meningiomas by pre-training a model using a larger public dataset of MRIs of gliomas and augmenting our meningioma training set with normal brain MRIs. Pre-operative MRIs of 91 meningioma patients (171 MRIs) and 10 non-meningioma patients (normal brains) were collected between 2016 and 2019. Three-dimensional (3D) U-Net was used as the base architecture. The model was pre-trained with BraTS 2019 data, then fine-tuned with our datasets consisting of 154 meningioma MRIs and 10 normal brain MRIs. To increase the utility of the normal brain MRIs, a novel balanced Dice loss (BDL) function was used instead of the conventional soft Dice loss function. The model performance was evaluated using the Dice scores across the remaining 17 meningioma MRIs. The segmentation performance of the model was sequentially improved via the pre-training and inclusion of normal brain images. The Dice scores improved from 0.72 to 0.76 when the model was pre-trained. The inclusion of normal brain MRIs to fine-tune the model improved the Dice score; it increased to 0.79. When employing BDL as the loss function, the Dice score reached 0.84. The proposed learning strategy for U-Net showed potential for use in segmenting meningioma lesions.
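The balanced Dice loss itself is not defined in the abstract; for orientation, the conventional soft Dice loss it replaces can be written as below, where the `smooth` term keeps the loss defined for the empty ground-truth masks of normal brains (BDL presumably reweights such cases differently).

```python
def soft_dice_loss(pred, target, smooth=1.0):
    """Soft Dice loss for one binary mask, given flattened predicted
    probabilities and ground-truth labels; `smooth` avoids division by
    zero when both masks are empty (e.g. normal brains)."""
    inter = sum(p * t for p, t in zip(pred, target))
    denom = sum(pred) + sum(target)
    dice = (2.0 * inter + smooth) / (denom + smooth)
    return 1.0 - dice
```

With the smoothing term, an all-background prediction on a normal brain scores a loss of 0, whereas hallucinating a lesion on a normal brain is penalised, which is what makes normal images useful training signal.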
7
Barzegar Z, Jamzad M. An Efficient Optimization Approach for Glioma Tumor Segmentation in Brain MRI. J Digit Imaging 2022; 35:1634-1647. [PMID: 35995900] [PMCID: PMC9712883] [DOI: 10.1007/s10278-022-00655-2]
Abstract
Glioma is an aggressive type of cancer that develops in the brain or spinal cord. Due to large variations in its shape and appearance, accurate segmentation of glioma, identifying all parts of the tumor and its surrounding cancerous tissues, is a challenging task. In recent research, the combination of multi-atlas segmentation and machine learning methods has provided robust and accurate results by learning from annotated atlas datasets. To overcome the limited prior information available to atlas-based segmentation and the long training phase of learning methods, we propose a semi-supervised unified framework for multi-label segmentation that formulates the problem as a Markov Random Field (MRF) energy optimization on a parametric graph. To evaluate the proposed framework, we apply it to the publicly available BRATS datasets, including low- and high-grade glioma tumors. Experimental results indicate competitive performance compared to the state-of-the-art methods. Compared with the top-ranked methods, the proposed framework obtains the best Dice score for segmentation of the "whole tumor" (WT), "tumor core" (TC) and "enhancing active tumor" (ET) regions. The achieved accuracy, characterized by the mean Dice score, is 94%. The motivation for using an MRF graph is to map the segmentation problem onto an optimization model in a graphical environment; by defining a suitable graph structure and optimal constraints and flows in the continuous max-flow model, the segmentation is performed precisely.
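The MRF formulation assigns each labeling an energy made of unary (data) costs plus pairwise smoothness penalties, which the max-flow solver then minimises. A toy Potts-model energy is sketched below; the graph, costs, and λ are illustrative, not the paper's parametric graph.

```python
def mrf_energy(labels, unary, edges, lam=1.0):
    """Markov Random Field energy: per-node unary costs plus a Potts
    pairwise penalty `lam` for each edge whose endpoints disagree.

    labels: label index per node; unary: per-node list of label costs;
    edges: (i, j) index pairs of neighbouring nodes."""
    e = sum(unary[i][labels[i]] for i in range(len(labels)))
    e += sum(lam for i, j in edges if labels[i] != labels[j])
    return e
```

A labeling that agrees with the data terms and keeps neighbours consistent has lower energy, so the optimiser naturally prefers spatially coherent tumor regions.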
Affiliation(s)
- Zeynab Barzegar
- Present Address: Sharif University of Technology, Tehran, Iran
- Mansour Jamzad
- Present Address: Sharif University of Technology, Tehran, Iran
8
Abstract
Brain tumor segmentation is one of the most challenging problems in medical image analysis. The goal of brain tumor segmentation is to generate accurate delineation of brain tumor regions. In recent years, deep learning methods have shown promising performance in solving various computer vision problems, such as image classification, object detection and semantic segmentation. A number of deep learning based methods have been applied to brain tumor segmentation and achieved promising results. Considering the remarkable breakthroughs made by state-of-the-art technologies, we provide this survey with a comprehensive study of recently developed deep learning based brain tumor segmentation techniques. More than 150 scientific papers are selected and discussed in this survey, extensively covering technical aspects such as network architecture design, segmentation under imbalanced conditions, and multi-modality processes. We also provide insightful discussions for future development directions.
9
Niyas S, Pawan S, Anand Kumar M, Rajan J. Medical image segmentation with 3D convolutional neural networks: A survey. Neurocomputing 2022. [DOI: 10.1016/j.neucom.2022.04.065]
10
Das S, Bose S, Nayak GK, Saxena S. Deep learning-based ensemble model for brain tumor segmentation using multi-parametric MR scans. Open Comput Sci 2022. [DOI: 10.1515/comp-2022-0242]
Abstract
Glioma is a type of fast-growing brain tumor in which the shape, size, and location of the tumor vary from patient to patient. Manual extraction of the region of interest (tumor) with the help of a radiologist is a very difficult and time-consuming task. To overcome this problem, we propose a fully automated deep learning-based ensemble method for brain tumor segmentation on four different 3D multimodal magnetic resonance imaging (MRI) scans. The segmentation is performed by three efficient encoder–decoder deep models, and their results are measured with well-known segmentation metrics. A statistical analysis of the models is then performed, and an ensemble model is designed by selecting, per MRI modality, the model with the highest Matthews correlation coefficient. The article makes two main contributions: first, a detailed comparison of the three models; and second, an ensemble model that combines the three models based on their segmentation accuracy. The model is evaluated on the brain tumor segmentation (BraTS) 2017 dataset, and the F1 scores of the final combined model are 0.92, 0.95, 0.93, and 0.84 for the whole tumor, core, enhancing tumor, and edema sub-tumor regions, respectively. Experimental results show that the model outperforms the state of the art.
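Model selection here hinges on the Matthews correlation coefficient, which for a binary mask reduces to the four confusion-matrix counts:

```python
import math

def mcc(tp, fp, tn, fn):
    """Matthews correlation coefficient from confusion-matrix counts;
    returns 0 when any marginal is zero (the usual convention)."""
    denom = math.sqrt((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))
    if denom == 0:
        return 0.0
    return (tp * tn - fp * fn) / denom
```

Unlike the F1 score, MCC uses all four cells of the confusion matrix, which makes it a sensible selection criterion when tumor voxels are a tiny fraction of the volume.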
Affiliation(s)
- Suchismita Das
- Computer Science & Engineering, IIIT Bhubaneswar, Bhubaneswar, Odisha, 751003, India
- KIIT University, Odisha, 751024, India
- Srijib Bose
- Computer Science & Engineering, KIIT University, Odisha, 751024, India
- Gopal Krishna Nayak
- Computer Science & Engineering, IIIT Bhubaneswar, Bhubaneswar, Odisha, 751003, India
- Sanjay Saxena
- Computer Science & Engineering, IIIT Bhubaneswar, Bhubaneswar, Odisha, 751003, India
11
Noothout JMH, Lessmann N, van Eede MC, van Harten LD, Sogancioglu E, Heslinga FG, Veta M, van Ginneken B, Išgum I. Knowledge distillation with ensembles of convolutional neural networks for medical image segmentation. J Med Imaging (Bellingham) 2022; 9:052407. [DOI: 10.1117/1.jmi.9.5.052407]
Affiliation(s)
- Julia M. H. Noothout
- Amsterdam University Medical Center, University of Amsterdam, Department of Biomedical Engineering a
- Nikolas Lessmann
- Radboud University Medical Center, Department of Medical Imaging, Nijmegen
- Matthijs C. van Eede
- Amsterdam University Medical Center, University of Amsterdam, Department of Biomedical Engineering a
- Louis D. van Harten
- Amsterdam University Medical Center, University of Amsterdam, Department of Biomedical Engineering a
- Ecem Sogancioglu
- Radboud University Medical Center, Department of Medical Imaging, Nijmegen
- Friso G. Heslinga
- Eindhoven University of Technology, Department of Biomedical Engineering, Eindhoven
- Mitko Veta
- Eindhoven University of Technology, Department of Biomedical Engineering, Eindhoven
- Bram van Ginneken
- Radboud University Medical Center, Department of Medical Imaging, Nijmegen
- Ivana Išgum
- Amsterdam University Medical Center, University of Amsterdam, Department of Biomedical Engineering a
12
Balwant M. A Review on Convolutional Neural Networks for Brain Tumor Segmentation: Methods, Datasets, Libraries, and Future Directions. Ing Rech Biomed 2022. [DOI: 10.1016/j.irbm.2022.05.002]
13
A novel 2-phase residual U-net algorithm combined with optimal mass transportation for 3D brain tumor detection and segmentation. Sci Rep 2022; 12:6452. [PMID: 35440793] [PMCID: PMC9018750] [DOI: 10.1038/s41598-022-10285-x]
Abstract
Utilizing the optimal mass transportation (OMT) technique to convert an irregular 3D brain image into a cube, a required input format for a U-net algorithm, is a brand new idea for medical imaging research. We develop a cubic volume-measure-preserving OMT (V-OMT) model for the implementation of this conversion. The contrast-enhanced histogram equalization grayscale of fluid-attenuated inversion recovery (FLAIR) in a brain image creates the corresponding density function. We then propose an effective two-phase residual U-net algorithm combined with the V-OMT algorithm for training and validation. First, we use the residual U-net and V-OMT algorithms to precisely predict the whole tumor (WT) region. Second, we expand this predicted WT region with dilation and create a smooth function by convolving the step-like function associated with the WT region in the brain image with a 5×5×5 blur tensor. Then, a new V-OMT algorithm with mesh refinement is constructed to allow the residual U-net algorithm to effectively train Net1–Net3 models. Finally, we propose ensemble voting postprocessing to validate the final labels of brain images. We randomly chose 1000 and 251 brain samples from the Brain Tumor Segmentation (BraTS) 2021 training dataset, which contains 1251 samples, for training and validation, respectively. The Dice scores of the WT, tumor core (TC) and enhanced tumor (ET) regions for validation computed by Net1–Net3 were 0.93705, 0.90617 and 0.87470, respectively. A significant improvement in brain tumor detection and segmentation with higher accuracy is achieved.
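The ensemble voting postprocessing is not spelled out in the abstract; a plausible per-voxel majority vote over the Net1–Net3 label maps (with an assumed tie-breaking rule) looks like:

```python
from collections import Counter

def majority_vote(label_maps):
    """Per-voxel majority vote across several models' flattened label
    maps (non-negative integer labels); ties are broken in favour of
    the smallest label id."""
    fused = []
    for votes in zip(*label_maps):
        counts = Counter(votes)
        # rank by (count, -label): highest count wins, smallest label on ties
        best = max(counts.items(), key=lambda kv: (kv[1], -kv[0]))
        fused.append(best[0])
    return fused
```

Voting suppresses label flips that only one of the three networks produces, which is the usual rationale for this kind of postprocessing.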
14
Alqazzaz S, Sun X, Nokes LD, Yang H, Yang Y, Xu R, Zhang Y, Yang X. Combined Features in Region of Interest for Brain Tumor Segmentation. J Digit Imaging 2022; 35:938-946. [PMID: 35293605] [PMCID: PMC9485383] [DOI: 10.1007/s10278-022-00602-1]
Abstract
Diagnosis of brain glioma is a challenging task in medical image analysis due to its complexity, the irregularity of tumor structures, and the diversity of tissue textures and shapes. Semantic segmentation approaches using deep learning have consistently outperformed previous methods on this challenging task. However, deep learning alone is insufficient to provide the required local features related to tissue texture changes due to tumor growth. This paper presents a hybrid method that addresses this need by incorporating both machine-learned and hand-crafted features. A semantic segmentation network (SegNet) generates the machine-learned features, while grey-level co-occurrence matrix (GLCM) texture features serve as the hand-crafted features. In addition, the proposed approach takes only the region of interest (ROI), which covers the extent of the complete tumor structure, as input, and suppresses the intensity of irrelevant areas. A decision tree (DT) is used to classify the pixels of ROI MRI images into different parts of the tumor, i.e. edema, necrosis and enhanced tumor. The method was evaluated on the BRATS 2017 dataset. The results demonstrate that the proposed model provides promising segmentation of brain tumor structures. The F-measures for automatic brain tumor segmentation against ground truth are 0.98, 0.75 and 0.69 for the whole tumor, core and enhanced tumor, respectively.
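GLCM texture features count how often grey-level pairs co-occur at a fixed pixel offset; Haralick contrast is then a weighted sum over the normalised matrix. A small pure-Python sketch follows (the offset and level count are illustrative; libraries such as scikit-image provide optimised equivalents):

```python
def glcm(img, dr, dc, levels):
    """Grey-level co-occurrence matrix for one pixel offset (dr, dc),
    normalised to co-occurrence probabilities. `img` holds integer
    grey levels in range(levels)."""
    mat = [[0.0] * levels for _ in range(levels)]
    total = 0
    for r in range(len(img)):
        for c in range(len(img[0])):
            r2, c2 = r + dr, c + dc
            if 0 <= r2 < len(img) and 0 <= c2 < len(img[0]):
                mat[img[r][c]][img[r2][c2]] += 1
                total += 1
    return [[v / total for v in row] for row in mat]

def glcm_contrast(p):
    """Haralick contrast: sum of p(i, j) * (i - j)^2."""
    return sum(p[i][j] * (i - j) ** 2
               for i in range(len(p)) for j in range(len(p)))
```

A flat region yields zero contrast, while alternating grey levels yield high contrast; features like these supply the local texture cues that the deep features miss.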
Affiliation(s)
- Salma Alqazzaz
- School of Engineering, Cardiff University, Cardiff, CF24 3AA, UK
- Department of Physics, College of Science for Women, Baghdad University, Baghdad, Iraq
- Xianfang Sun
- School of Computer Science and Informatics, Cardiff University, Cardiff, CF24 3AA, UK
- Len Dm Nokes
- School of Engineering, Cardiff University, Cardiff, CF24 3AA, UK
- Hong Yang
- Department of Radiology, The Second People's Hospital of Guangxi Zhuang Autonomous Region, Guilin, 541002, PR China
- Yingxia Yang
- Department of Radiology, The People's Hospital of Guangxi Zhuang Autonomous Region, Nanning, 530021, PR China
- Ronghua Xu
- Centre of Information and Network Management, The People's Hospital of Guangxi Zhuang Autonomous Region, Nanning, 530021, PR China
- Yanqiang Zhang
- State Information Center of China, Beijing, 100045, PR China
- Xin Yang
- School of Engineering, Cardiff University, Cardiff, CF24 3AA, UK
15
Kaur G, Rana PS, Arora V. State-of-the-art techniques using pre-operative brain MRI scans for survival prediction of glioblastoma multiforme patients and future research directions. Clin Transl Imaging 2022; 10:355-389. [PMID: 35261910] [PMCID: PMC8891433] [DOI: 10.1007/s40336-022-00487-8]
Abstract
Objective: Glioblastoma multiforme (GBM) is a grade IV brain tumour with very low life expectancy. Physicians and oncologists urgently require automated techniques in clinics for brain tumour segmentation (BTS) and survival prediction (SP) of GBM patients to perform precise surgery followed by chemotherapy treatment.
Methods: This study examines recent methodologies that use automated learning and radiomics to automate the process of SP. Automated techniques use pre-operative raw magnetic resonance imaging (MRI) scans and clinical data related to GBM patients. All SP methods submitted to the multimodal brain tumour segmentation (BraTS) challenge are examined to extract the generic workflow for SP.
Results: The maximum accuracies achieved by the 21 state-of-the-art SP techniques reviewed in this study are 65.5% and 61.7% on the validation and testing subsets of the BraTS dataset, respectively. Comparisons based on segmentation architectures, SP models, training parameters and hardware configurations are made.
Conclusion: The limited accuracies achieved in the literature led us to review the various automated methodologies and evaluation metrics to identify research gaps and other findings related to the survival prognosis of GBM patients, so that these accuracies can be improved in future. Finally, the paper provides the most promising future research directions to improve the performance of automated SP techniques and increase their clinical relevance.
Affiliation(s)
- Gurinderjeet Kaur
- Computer Science and Engineering Department, Thapar Institute of Engineering and Technology, Patiala, Punjab India
- Prashant Singh Rana
- Computer Science and Engineering Department, Thapar Institute of Engineering and Technology, Patiala, Punjab, India
- Vinay Arora
- Computer Science and Engineering Department, Thapar Institute of Engineering and Technology, Patiala, Punjab, India
16
Xu W, Yang H, Zhang M, Cao Z, Pan X, Liu W. Brain tumor segmentation with corner attention and high-dimensional perceptual loss. Biomed Signal Process Control 2022. [DOI: 10.1016/j.bspc.2021.103438]
17
Li D, Peng Y, Guo Y, Sun J. TAUNet: a triple-attention-based multi-modality MRI fusion U-Net for cardiac pathology segmentation. Complex Intell Syst 2022. [DOI: 10.1007/s40747-022-00660-6]
Abstract
Automated segmentation of cardiac pathology in MRI plays a significant role in the diagnosis and treatment of some cardiac diseases. In clinical practice, multi-modality MRI is widely used to improve cardiac pathology segmentation, because it can provide multiple or complementary information. Recently, deep learning methods have shown impressive performance in multi-modality medical image segmentation. However, effectively fusing the underlying multi-modality information to segment pathologies with irregular shapes and small regions at random locations remains a challenging task. In this paper, a triple-attention-based multi-modality MRI fusion U-Net is proposed to learn the complex relationships between different modalities and to pay more attention to shape information, thus achieving improved pathology segmentation. First, three independent encoders and one fusion encoder are applied to extract modality-specific and multi-modality features. Second, we concatenate the modality feature maps and use channel attention to fuse modality-specific information at every stage of the three dedicated independent encoders; the three single-modality feature maps and the channel-attention feature maps are then concatenated along the decoder path. Spatial attention is adopted in the decoder path to capture correlations between positions. Furthermore, we employ shape attention to focus on shape-dependent information. Finally, training is made efficient by introducing a deep supervision mechanism with an object contextual representations block to ensure precise boundary prediction. Our proposed network was evaluated on the public MICCAI 2020 myocardial pathology segmentation dataset, which involves patients suffering from myocardial infarction. Experiments on the dataset with three modalities demonstrate the effectiveness of the fusion mode of our model, and show that the attention mechanisms integrate the various modality information well. We demonstrate that such a deep learning approach can better fuse complementary information to improve the segmentation performance of cardiac pathology.
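The channel-attention fusion step described in the abstract can be illustrated with a small squeeze-and-excitation-style sketch. This is a generic illustration in NumPy with made-up weights and a hypothetical channel count, not the authors' implementation:

```python
import numpy as np

def channel_attention(feats, w1, w2):
    """SE-style channel attention: squeeze (global average pool),
    excite (two dense layers), and reweight each channel.
    feats: (C, H, W); w1: (C, C//r); w2: (C//r, C)."""
    squeeze = feats.mean(axis=(1, 2))                 # (C,) per-channel descriptor
    hidden = np.maximum(squeeze @ w1, 0.0)            # ReLU bottleneck
    gate = 1.0 / (1.0 + np.exp(-(hidden @ w2)))       # sigmoid weights in (0, 1)
    return feats * gate[:, None, None]                # broadcast channel reweighting

rng = np.random.default_rng(0)
C, H, W, r = 8, 4, 4, 2
# Concatenated feature maps from two modalities (4 channels each).
fused = np.concatenate([rng.normal(size=(C // 2, H, W)) for _ in range(2)])
out = channel_attention(fused, rng.normal(size=(C, C // r)), rng.normal(size=(C // r, C)))
```

Because the gate lies in (0, 1), each channel is attenuated in proportion to its learned importance, which is how a fusion encoder can favor the more informative modality.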
|
18
|
Bhalodiya JM, Lim Choi Keung SN, Arvanitis TN. Magnetic resonance image-based brain tumour segmentation methods: A systematic review. Digit Health 2022; 8:20552076221074122. [PMID: 35340900 PMCID: PMC8943308 DOI: 10.1177/20552076221074122] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/21/2021] [Revised: 11/20/2021] [Accepted: 12/27/2021] [Indexed: 01/10/2023] Open
Abstract
Background: Image segmentation is an essential step in the analysis and subsequent characterisation of brain tumours through magnetic resonance imaging. In the literature, segmentation methods are empowered by open-access magnetic resonance imaging datasets, such as the brain tumour segmentation dataset. Moreover, with the increased use of artificial intelligence methods in medical imaging, access to larger data repositories has become vital in method development. Purpose: To determine which automated brain tumour segmentation techniques medical imaging specialists and clinicians can use to identify tumour components, compared to manual segmentation. Methods: We conducted a systematic review of 572 brain tumour segmentation studies published during 2015-2020. We reviewed segmentation techniques using T1-weighted, T2-weighted, gadolinium-enhanced T1-weighted, fluid-attenuated inversion recovery, diffusion-weighted and perfusion-weighted magnetic resonance imaging sequences. Moreover, we assessed physics- or mathematics-based methods, deep learning methods, and software-based or semi-automatic methods, as applied to magnetic resonance imaging techniques. In particular, we synthesised each method according to the magnetic resonance imaging sequences used, study population, technical approach (such as deep learning) and performance measures (such as the Dice score). Statistical tests: We compared the median Dice score for segmenting the whole tumour, tumour core and enhancing tumour. Results: We found that T1-weighted, gadolinium-enhanced T1-weighted, T2-weighted and fluid-attenuated inversion recovery magnetic resonance imaging are used the most across segmentation algorithms, whereas perfusion-weighted and diffusion-weighted magnetic resonance imaging see limited use. Moreover, we found that the U-Net deep learning architecture is cited the most and has high accuracy (Dice score 0.9) for magnetic resonance imaging-based brain tumour segmentation.
Conclusion: U-Net is a promising deep learning technology for magnetic resonance imaging-based brain tumour segmentation. The community should be encouraged to contribute open-access datasets so training, testing and validation of deep learning algorithms can be improved, particularly for diffusion- and perfusion-weighted magnetic resonance imaging, where few datasets are available.
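The Dice score that the review uses as its primary performance measure is straightforward to compute from two binary masks; a minimal sketch:

```python
import numpy as np

def dice_score(pred, truth):
    """Dice similarity coefficient between two binary masks:
    2|A∩B| / (|A| + |B|), with an empty-vs-empty convention of 1.0."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    denom = pred.sum() + truth.sum()
    if denom == 0:
        return 1.0
    return 2.0 * np.logical_and(pred, truth).sum() / denom

truth = np.zeros((4, 4), dtype=int); truth[1:3, 1:3] = 1   # 4 voxels
pred = np.zeros((4, 4), dtype=int); pred[1:3, 1:4] = 1     # 6 voxels, 4 overlapping
print(dice_score(pred, truth))  # 2*4 / (6+4) = 0.8
```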
Affiliation(s)
- Jayendra M Bhalodiya
- Institute of Digital Healthcare, Warwick Manufacturing Group, The University of Warwick, UK
- Sarah N Lim Choi Keung
- Institute of Digital Healthcare, Warwick Manufacturing Group, The University of Warwick, UK
- Theodoros N Arvanitis
- Institute of Digital Healthcare, Warwick Manufacturing Group, The University of Warwick, UK
|
19
|
Nalepa J. AIM and Brain Tumors. Artif Intell Med 2022. [DOI: 10.1007/978-3-030-64573-1_284] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/29/2022]
|
20
|
Kaur A, Kaur L, Singh A. GA-UNet: UNet-based framework for segmentation of 2D and 3D medical images applicable on heterogeneous datasets. Neural Comput Appl 2021. [DOI: 10.1007/s00521-021-06134-z] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/19/2022]
|
21
|
Beliveau V, Nørgaard M, Birkl C, Seppi K, Scherfler C. Automated segmentation of deep brain nuclei using convolutional neural networks and susceptibility weighted imaging. Hum Brain Mapp 2021; 42:4809-4822. [PMID: 34322940 PMCID: PMC8449109 DOI: 10.1002/hbm.25604] [Citation(s) in RCA: 6] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/05/2021] [Revised: 07/08/2021] [Accepted: 07/13/2021] [Indexed: 01/10/2023] Open
Abstract
The advent of susceptibility-sensitive MRI techniques, such as susceptibility weighted imaging (SWI), has enabled accurate in vivo visualization and quantification of iron deposition within the human brain. Although approaches have previously been introduced to segment iron-rich brain regions, such as the substantia nigra, subthalamic nucleus, red nucleus, and dentate nucleus, these methods are largely unavailable, and manual annotation remains the most common approach for labeling these regions. Furthermore, given their recent success in outperforming other segmentation approaches, convolutional neural networks (CNNs) promise better performance. The aim of this study was thus to evaluate state-of-the-art CNN architectures for labeling deep brain nuclei in SW images. We implemented five CNN architectures and considered ensembles of these models. Furthermore, a multi-atlas segmentation model was included to provide a non-CNN comparison. We evaluated two prediction strategies: individual prediction, where a model is trained independently for each region, and combined prediction, which simultaneously predicts multiple closely located regions. On the training dataset, all models performed with high accuracy, with Dice coefficients ranging from 0.80 to 0.95. The regional SWI intensities and volumes derived from the models' labels were strongly correlated with those obtained from manual labels. Performance was reduced on the external dataset, but was higher than or comparable to the intrarater reliability, and most models achieved significantly better results than multi-atlas segmentation. CNNs can accurately capture the individual variability of deep brain nuclei and represent a highly useful tool for their segmentation in SW images.
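Ensembling segmentation models, as evaluated in this study, is commonly implemented by averaging the per-model probability maps and taking the voxel-wise argmax. A toy sketch (generic, not the authors' code):

```python
import numpy as np

def ensemble_predict(prob_maps):
    """Average per-model probability maps and take the argmax label.
    prob_maps: (n_models, n_classes, *spatial) array of softmax outputs."""
    mean_probs = prob_maps.mean(axis=0)   # (n_classes, *spatial)
    return mean_probs.argmax(axis=0)      # hard label per voxel

# Three toy "models" voting on 2 classes over a 2x2 patch;
# each model's class probabilities sum to 1 per voxel.
p = np.array([
    [[[0.9, 0.2], [0.6, 0.4]], [[0.1, 0.8], [0.4, 0.6]]],
    [[[0.7, 0.3], [0.3, 0.5]], [[0.3, 0.7], [0.7, 0.5]]],
    [[[0.8, 0.6], [0.5, 0.2]], [[0.2, 0.4], [0.5, 0.8]]],
])
labels = ensemble_predict(p)  # label 0 only where class 0 wins on average
```

Averaging probabilities (soft voting) tends to be more stable than majority voting on hard labels, since it preserves each model's confidence.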
Affiliation(s)
- Vincent Beliveau
- Department of Neurology, Medical University of Innsbruck, Innsbruck, Austria
- Neuroimaging Research Core Facility, Medical University of Innsbruck, Innsbruck, Austria
- Martin Nørgaard
- Neurobiology Research Unit & CIMBI, Copenhagen University Hospital, Copenhagen, Denmark
- Center for Reproducible Neuroscience, Department of Psychology, Stanford University, Stanford, California, USA
- Christoph Birkl
- Department of Neuroradiology, Medical University of Innsbruck, Innsbruck, Austria
- Klaus Seppi
- Department of Neurology, Medical University of Innsbruck, Innsbruck, Austria
- Neuroimaging Research Core Facility, Medical University of Innsbruck, Innsbruck, Austria
- Christoph Scherfler
- Department of Neurology, Medical University of Innsbruck, Innsbruck, Austria
- Neuroimaging Research Core Facility, Medical University of Innsbruck, Innsbruck, Austria
|
22
|
HybridCTrm: Bridging CNN and Transformer for Multimodal Brain Image Segmentation. JOURNAL OF HEALTHCARE ENGINEERING 2021; 2021:7467261. [PMID: 34630994 PMCID: PMC8500745 DOI: 10.1155/2021/7467261] [Citation(s) in RCA: 9] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 07/09/2021] [Revised: 09/03/2021] [Accepted: 09/16/2021] [Indexed: 11/17/2022]
Abstract
Multimodal segmentation is a critical problem in medical image analysis. Traditional deep learning methods encode the given images with fully convolutional neural networks (CNNs), which struggle to model long-range dependencies and generalize poorly. Recently, a sequence of Transformer-based methods has emerged in the field of image processing, bringing strong generalization and performance across various tasks. On the other hand, traditional CNNs have their own advantages, such as rapid convergence and local representations. Therefore, we analyze a hybrid multimodal segmentation method based on Transformers and CNNs and propose a novel architecture, the HybridCTrm network. We conduct experiments with HybridCTrm on two benchmark datasets and compare it with HyperDenseNet, a network based entirely on CNNs. Results show that HybridCTrm outperforms HyperDenseNet on most of the evaluation metrics. Furthermore, we analyze the influence of the depth of the Transformer on performance. Finally, we visualize the results and explore how our hybrid method improves the segmentations.
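The Transformer branch's core operation, scaled dot-product self-attention over flattened patches, is what supplies the long-range dependencies that plain convolutions lack. A generic NumPy sketch (not the HybridCTrm implementation):

```python
import numpy as np

def scaled_dot_product_attention(q, k, v):
    """Each position attends to every other position, so information can
    flow across the whole image in one step. q, k: (n, d); v: (n, d_v)."""
    scores = q @ k.T / np.sqrt(q.shape[-1])       # (n, n) pairwise similarity
    scores -= scores.max(axis=-1, keepdims=True)  # numerical stability
    attn = np.exp(scores)
    attn /= attn.sum(axis=-1, keepdims=True)      # row-wise softmax
    return attn @ v                               # convex mix of value vectors

rng = np.random.default_rng(1)
tokens = rng.normal(size=(5, 8))  # 5 flattened patches, 8-dim embeddings
out = scaled_dot_product_attention(tokens, tokens, tokens)
```

In a real model, q, k, and v come from learned linear projections of the token embeddings; self-attention here uses the raw tokens for all three.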
|
23
|
Bortsova G, Bos D, Dubost F, Vernooij MW, Ikram MK, van Tulder G, de Bruijne M. Automated Segmentation and Volume Measurement of Intracranial Internal Carotid Artery Calcification at Noncontrast CT. Radiol Artif Intell 2021; 3:e200226. [PMID: 34617024 DOI: 10.1148/ryai.2021200226] [Citation(s) in RCA: 6] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/18/2020] [Revised: 05/31/2021] [Accepted: 06/07/2020] [Indexed: 01/22/2023]
Abstract
Purpose To develop and evaluate a fully-automated deep learning-based method for assessment of intracranial internal carotid artery calcification (ICAC). Materials and Methods This was a secondary analysis of prospectively collected data from the Rotterdam study (2003-2006) to develop and validate a deep learning-based method for automated ICAC delineation and volume measurement. Two observers manually delineated ICAC on noncontrast CT scans of 2319 participants (mean age, 69 years ± 7 [standard deviation]; 1154 women [53.2%]), and a deep learning model was trained to segment ICAC and quantify its volume. Model performance was assessed by comparing manual and automated segmentations and volume measurements to those produced by an independent observer (available on 47 scans), comparing the segmentation accuracy in a blinded qualitative visual comparison by an expert observer, and comparing the association with first stroke incidence from the scan date until 2016. All method performance metrics were computed using 10-fold cross-validation. Results The automated delineation of ICAC reached a sensitivity of 83.8% and positive predictive value (PPV) of 88%. The intraclass correlation between automatic and manual ICAC volume measures was 0.98 (95% CI: 0.97, 0.98; computed in the entire dataset). Measured between the assessments of independent observers, sensitivity was 73.9%, PPV was 89.5%, and intraclass correlation coefficient was 0.91 (95% CI: 0.84, 0.95; computed in the 47-scan subset). In the blinded visual comparisons of 294 regions, automated delineations were judged as more accurate than manual delineations in 131 regions, less accurate in 94 regions, and equally accurate in the rest of the regions (131 of 225, 58.2%; P = .01). The association of ICAC volume with incident stroke was similarly strong for both automated (hazard ratio, 1.38 [95% CI: 1.12, 1.75]) and manually measured volumes (hazard ratio, 1.48 [95% CI: 1.20, 1.87]). 
Conclusion The developed model was capable of automated segmentation and volume quantification of ICAC with accuracy comparable to human experts. Keywords: CT, Neural Networks, Carotid Arteries, Calcifications/Calculi, Arteriosclerosis, Segmentation, Vision Application Domain, Stroke. Supplemental material is available for this article. © RSNA, 2021.
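The volume quantification step reduces to counting foreground voxels and multiplying by the voxel volume; a minimal sketch with hypothetical voxel spacing:

```python
import numpy as np

def lesion_volume_mm3(mask, spacing_mm):
    """Volume of a binary segmentation: voxel count x voxel volume.
    spacing_mm: (dz, dy, dx) voxel spacing in millimetres."""
    voxel_volume = float(np.prod(spacing_mm))
    return int(np.count_nonzero(mask)) * voxel_volume

mask = np.zeros((10, 10, 10), dtype=np.uint8)
mask[2:4, 3:6, 3:6] = 1                           # 2*3*3 = 18 voxels
print(lesion_volume_mm3(mask, (3.0, 0.5, 0.5)))   # 18 * 0.75 = 13.5 mm^3
```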
Affiliation(s)
- Gerda Bortsova
- Biomedical Imaging Group Rotterdam, Department of Radiology and Nuclear Medicine (G.B., M.d.B.), Department of Epidemiology (D.B., M.W.V., M.K.I.), and Department of Radiology and Nuclear Medicine (M.W.V.), Erasmus MC, PO Box 2040, 3000 CA Rotterdam, the Netherlands; Department of Biomedical Data Science, Stanford University, Stanford, Calif (F.D.); Faculty of Science, Radboud University, Nijmegen, the Netherlands (G.v.T.); and Machine Learning Section, Department of Computer Science, University of Copenhagen, Copenhagen, Denmark (M.d.B.)
- Daniel Bos
- Florian Dubost
- Meike W Vernooij
- M Kamran Ikram
- Gijs van Tulder
- Marleen de Bruijne
|
24
|
Rosas-Gonzalez S, Birgui-Sekou T, Hidane M, Zemmoura I, Tauber C. Asymmetric Ensemble of Asymmetric U-Net Models for Brain Tumor Segmentation With Uncertainty Estimation. Front Neurol 2021; 12:609646. [PMID: 34659077 PMCID: PMC8515181 DOI: 10.3389/fneur.2021.609646] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/31/2020] [Accepted: 07/22/2021] [Indexed: 11/29/2022] Open
Abstract
Accurate brain tumor segmentation is crucial for clinical assessment, follow-up, and subsequent treatment of gliomas. While convolutional neural networks (CNNs) have become the state of the art in this task, most proposed models either use 2D architectures that ignore 3D contextual information or 3D models that require large memory capacity and extensive training databases. In this study, an ensemble of two kinds of U-Net-like models, based on 3D and 2.5D convolutions respectively, is proposed to segment multimodal magnetic resonance images (MRI). The 3D model uses concatenated data in a modified U-Net architecture. In contrast, the 2.5D model is based on a multi-input strategy to extract low-level features from each modality independently, and on a new 2.5D Multi-View Inception block that merges features from different views of a 3D image, aggregating multi-scale features. The Asymmetric Ensemble of Asymmetric U-Nets (AE AU-Net) based on both is designed to balance increased multi-scale and 3D contextual information extraction against memory consumption. Experiments on the BraTS 2019 dataset show that our model improves segmentation of the enhancing tumor sub-region. Overall performance is comparable with state-of-the-art results, although with less training data and lower memory requirements. In addition, we provide voxel-wise and structure-wise uncertainties for the segmentation results, and we establish qualitative and quantitative relationships between uncertainty and prediction errors. Dice similarity coefficients for the whole tumor, tumor core, and enhancing tumor regions on the BraTS 2019 validation dataset were 0.902, 0.815, and 0.773, respectively. We also applied our method to BraTS 2018, with corresponding Dice scores of 0.908, 0.838, and 0.800.
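Voxel-wise uncertainty of the kind reported here is commonly derived from ensemble disagreement, e.g. as the predictive entropy of the ensemble-mean probabilities. A generic sketch (the paper's exact formulation may differ):

```python
import numpy as np

def voxelwise_entropy(prob_maps):
    """Voxel-wise predictive entropy of the ensemble-mean probabilities.
    prob_maps: (n_models, n_classes, *spatial). Higher = more uncertain."""
    mean_p = prob_maps.mean(axis=0)       # average the models
    eps = 1e-12                           # avoid log(0)
    return -(mean_p * np.log(mean_p + eps)).sum(axis=0)

# Two models agreeing on voxel 0 and disagreeing on voxel 1.
p = np.array([
    [[0.99, 0.9], [0.01, 0.1]],  # model 1: P(class0), P(class1) per voxel
    [[0.99, 0.1], [0.01, 0.9]],  # model 2
])
u = voxelwise_entropy(p)  # entropy peaks (ln 2) where the models disagree
```

Such maps can then be thresholded or summed per structure to obtain the structure-wise uncertainties mentioned in the abstract.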
Affiliation(s)
- Moncef Hidane
- LIFAT EA 6300, INSA Centre Val de Loire, Université de Tours, Tours, France
- Ilyess Zemmoura
- UMR Inserm U1253, iBrain, Université de Tours, Inserm, Tours, France
- Clovis Tauber
- UMR Inserm U1253, iBrain, Université de Tours, Inserm, Tours, France
|
25
|
Lin WW, Juang C, Yueh MH, Huang TM, Li T, Wang S, Yau ST. 3D brain tumor segmentation using a two-stage optimal mass transport algorithm. Sci Rep 2021; 11:14686. [PMID: 34376714 PMCID: PMC8355223 DOI: 10.1038/s41598-021-94071-1] [Citation(s) in RCA: 4] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/08/2021] [Accepted: 06/30/2021] [Indexed: 11/29/2022] Open
Abstract
Optimal mass transport (OMT) theory, the goal of which is to move any irregular 3D object (i.e., the brain) without causing significant distortion, is used to preprocess brain tumor datasets for the first time in this paper. The first stage of a two-stage OMT (TSOMT) procedure transforms the brain into a unit solid ball. The second stage transforms the unit ball into a cube, as it is easier to apply a 3D convolutional neural network to rectangular coordinates. Small variations in the local mass-measure stretch ratio among all the brain tumor datasets confirm the robustness of the transform. Additionally, the distortion is kept at a minimum with a reasonable transport cost. The original [Formula: see text] dataset is thus reduced to a cube of [Formula: see text], which is a 76.6% reduction in the total number of voxels, without losing much detail. Three typical U-Nets are trained separately to predict the whole tumor (WT), tumor core (TC), and enhanced tumor (ET) from the cube. An impressive training accuracy of 0.9822 in the WT cube is achieved at 400 epochs. An inverse TSOMT method is applied to the predicted cube to obtain the brain results. The conversion loss from the TSOMT method to the inverse TSOMT method is found to be less than one percent. For training, good Dice scores (0.9781 for the WT, 0.9637 for the TC, and 0.9305 for the ET) can be obtained. Significant improvements in brain tumor detection and the segmentation accuracy are achieved. For testing, postprocessing (rotation) is added to the TSOMT, U-Net prediction, and inverse TSOMT methods for an accuracy improvement of one to two percent. It takes 200 seconds to complete the whole segmentation process on each new brain tumor dataset.
Affiliation(s)
- Wen-Wei Lin
- Department of Applied Mathematics, National Yang Ming Chiao Tung University, Hsinchu, 300, Taiwan
- Cheng Juang
- Electronics Department, Ming Hsin University of Science and Technology, Hsinchu, 304, Taiwan
- Mei-Heng Yueh
- Department of Mathematics, National Taiwan Normal University, Taipei, 116, Taiwan
- Tsung-Ming Huang
- Department of Mathematics, National Taiwan Normal University, Taipei, 116, Taiwan
- Tiexiang Li
- School of Mathematics, Southeast University, Nanjing, 211189, People's Republic of China
- Nanjing Center for Applied Mathematics, Nanjing, 211135, People's Republic of China
- Sheng Wang
- Department of Applied Mathematics, National Yang Ming Chiao Tung University, Hsinchu, 300, Taiwan
- Shing-Tung Yau
- Department of Mathematics, Harvard University, Cambridge, USA
|
26
|
Lin M, Wynne JF, Zhou B, Wang T, Lei Y, Curran WJ, Liu T, Yang X. Artificial intelligence in tumor subregion analysis based on medical imaging: A review. J Appl Clin Med Phys 2021; 22:10-26. [PMID: 34164913 PMCID: PMC8292694 DOI: 10.1002/acm2.13321] [Citation(s) in RCA: 7] [Impact Index Per Article: 2.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/29/2020] [Revised: 04/17/2021] [Accepted: 05/22/2021] [Indexed: 12/20/2022] Open
Abstract
Medical imaging is widely used in the diagnosis and treatment of cancer, and artificial intelligence (AI) has achieved tremendous success in medical image analysis. This paper reviews AI-based tumor subregion analysis in medical imaging. We summarize the latest AI-based methods for tumor subregion analysis and their applications. Specifically, we categorize the AI-based methods by training strategy: supervised and unsupervised. A detailed review of each category is presented, highlighting important contributions and achievements. Specific challenges and potential applications of AI in tumor subregion analysis are discussed.
Affiliation(s)
- Mingquan Lin
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, Georgia, USA
- Jacob F. Wynne
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, Georgia, USA
- Boran Zhou
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, Georgia, USA
- Tonghe Wang
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, Georgia, USA
- Yang Lei
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, Georgia, USA
- Walter J. Curran
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, Georgia, USA
- Tian Liu
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, Georgia, USA
- Xiaofeng Yang
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, Georgia, USA
|
27
|
Lin M, Momin S, Lei Y, Wang H, Curran WJ, Liu T, Yang X. Fully automated segmentation of brain tumor from multiparametric MRI using 3D context deep supervised U-Net. Med Phys 2021; 48:4365-4374. [PMID: 34101845 DOI: 10.1002/mp.15032] [Citation(s) in RCA: 8] [Impact Index Per Article: 2.7] [Reference Citation Analysis] [Abstract] [Key Words] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/27/2020] [Revised: 05/14/2021] [Accepted: 05/31/2021] [Indexed: 12/19/2022] Open
Abstract
PURPOSE Owing to the histologic complexity of brain tumors, their diagnosis requires multiple modalities to obtain valuable structural information so that brain tumor subregions can be properly delineated. In the current clinical workflow, physicians typically perform slice-by-slice delineation of brain tumor subregions, which is time-consuming and susceptible to intra- and inter-rater variability, possibly leading to misclassification. To address this issue, this study aims to develop an automatic deep learning segmentation of brain tumors in MR images. METHODS In this study, we develop a context deep-supervised U-Net to segment brain tumor subregions. A context block that aggregates multiscale contextual information for dense segmentation is proposed. This approach enlarges the effective receptive field of convolutional neural networks, which, in turn, improves the segmentation accuracy of brain tumor subregions. We performed fivefold cross-validation on the Brain Tumor Segmentation Challenge (BraTS) 2020 training dataset. The BraTS 2020 testing datasets were obtained via the BraTS online website as a hold-out test. For BraTS, the evaluation system divides the tumor into three regions: whole tumor (WT), tumor core (TC), and enhancing tumor (ET). The performance of our proposed method was compared against two state-of-the-art CNNs in terms of segmentation accuracy via the Dice similarity coefficient (DSC) and Hausdorff distance (HD). The tumor volumes generated by our proposed method were compared with manually contoured volumes via Bland-Altman plots and Pearson analysis. RESULTS The proposed method achieved a DSC of 0.923 ± 0.047, 0.893 ± 0.176, and 0.846 ± 0.165 and a 95th-percentile HD (HD95) of 3.946 ± 7.041, 3.981 ± 6.670, and 10.128 ± 51.136 mm on WT, TC, and ET, respectively. Experimental results demonstrate that our method achieved segmentation accuracies comparable to, or significantly (p < 0.05) better than, those of the two state-of-the-art CNNs. Pearson correlation analysis showed a high positive correlation between the tumor volumes generated by the proposed method and the manual contours. CONCLUSION The overall qualitative and quantitative results of this work demonstrate the potential of translating the proposed technique into clinical practice for segmenting brain tumor subregions, further facilitating the brain tumor radiotherapy workflow.
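The 95th-percentile Hausdorff distance used above as an evaluation metric can be computed from the surface point sets of the two segmentations. A brute-force sketch; production pipelines typically use distance transforms for speed:

```python
import numpy as np

def hd95(points_a, points_b):
    """95th-percentile symmetric Hausdorff distance between two point sets
    (e.g. segmentation surface voxels), more outlier-robust than the max."""
    # Pairwise Euclidean distances between the two sets.
    d = np.linalg.norm(points_a[:, None, :] - points_b[None, :, :], axis=-1)
    a_to_b = d.min(axis=1)   # nearest-neighbour distance from each A point
    b_to_a = d.min(axis=0)   # ... and from each B point
    return np.percentile(np.concatenate([a_to_b, b_to_a]), 95)

a = np.array([[0.0, 0.0], [1.0, 0.0], [2.0, 0.0]])
b = np.array([[0.0, 1.0], [1.0, 1.0], [2.0, 1.0]])
print(hd95(a, b))  # every nearest-neighbour distance is 1.0
```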
Affiliation(s)
- Mingquan Lin
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA, USA
- Shadab Momin
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA, USA
- Yang Lei
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA, USA
- Hesheng Wang
- Department of Radiation Oncology, NYU Grossman School of Medicine, New York, NY, USA
- Walter J Curran
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA, USA
- Tian Liu
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA, USA
- Xiaofeng Yang
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA, USA
|
28
|
Iriondo C, Liu F, Calivà F, Kamat S, Majumdar S, Pedoia V. Towards understanding mechanistic subgroups of osteoarthritis: 8-year cartilage thickness trajectory analysis. J Orthop Res 2021; 39:1305-1317. [PMID: 32897602 DOI: 10.1002/jor.24849] [Citation(s) in RCA: 13] [Impact Index Per Article: 4.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 04/10/2020] [Revised: 07/23/2020] [Accepted: 09/02/2020] [Indexed: 02/04/2023]
Abstract
Many studies have validated cartilage thickness as a biomarker for knee osteoarthritis (OA); however, few studies investigate beyond cross-sectional observations or comparisons across two timepoints. By characterizing the trajectory of cartilage thickness changes over 8 years in healthy individuals from the Osteoarthritis Initiative dataset, this study discovers associations between the dynamics of cartilage changes and OA incidence. A fully automated cartilage segmentation and thickness measurement method was developed and validated against manual measurements: mean absolute error = 0.11-0.14 mm (n = 4129 knees) and automatic reproducibility = 0.04-0.07 mm (n = 316 knees). The mean thickness of the medial and lateral tibia (MT, LT), central weight-bearing medial and lateral femur (cMF, cLF), and patella (P) cartilage compartments was quantified for 1453 knees at seven timepoints. Trajectory subgroups were defined per cartilage compartment as stable, thinning to thickening, accelerated thickening, plateaued thickening, thickening to thinning, accelerated thinning, or plateaued thinning. For the tibiofemoral compartments, the stable (22%-36%) and plateaued thinning (22%-37%) trajectories were the most common, with average initial velocities (μm/month) and accelerations (μm/month²) for the plateaued thinning trajectories of LT: -2.66, 0.0326; MT: -2.49, 0.0365; cMF: -3.51, 0.0509; and cLF: -2.68, 0.041. In the patella compartment, the plateaued thinning (35%) and thickening to thinning (24%) trajectories were the most common, with average initial velocities and accelerations of -4.17, 0.0424 and 1.95, -0.0835, respectively. Knees with nonstable trajectories had higher adjusted odds of OA incidence than those with stable trajectories: accelerated thickening, accelerated thinning, and plateaued thinning trajectories of the MT had adjusted odds ratios (OR) of 18.9, 5.48, and 1.47, respectively; in the cMF, adjusted ORs were 8.55, 10.1, and 2.61, respectively.
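Initial velocity and acceleration parameters of the kind reported above follow naturally from a quadratic fit to each thickness time series. A sketch on synthetic data; the study's actual trajectory model may differ:

```python
import numpy as np

def trajectory_params(months, thickness_um):
    """Fit a quadratic to a cartilage-thickness time series and return
    (initial velocity in um/month, acceleration in um/month^2),
    assuming thickness(t) ~ c + v*t + (a/2)*t^2."""
    a2, a1, _ = np.polyfit(months, thickness_um, deg=2)
    return a1, 2.0 * a2   # velocity at t=0, constant acceleration

t = np.arange(0, 96, 12, dtype=float)              # 8 yearly visits, in months
thick = 2500.0 - 2.5 * t + 0.5 * 0.03 * t ** 2     # synthetic plateauing thinning
v0, acc = trajectory_params(t, thick)              # recovers -2.5 and 0.03
```

A negative initial velocity with positive acceleration, as in this synthetic series, is exactly the "plateaued thinning" pattern the study describes.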
Affiliation(s)
- Claudia Iriondo
- Department of Radiology and Biomedical Imaging, University of California, San Francisco, San Francisco, California, USA
- Department of Bioengineering, University of California, San Francisco and University of California, Berkeley Joint Graduate Group in Bioengineering, San Francisco, California, USA
- Felix Liu
- Department of Radiology and Biomedical Imaging, University of California, San Francisco, San Francisco, California, USA
- Francesco Calivà
- Department of Radiology and Biomedical Imaging, University of California, San Francisco, San Francisco, California, USA
- Sarthak Kamat
- Department of Radiology and Biomedical Imaging, University of California, San Francisco, San Francisco, California, USA
- Sharmila Majumdar
- Department of Radiology and Biomedical Imaging, University of California, San Francisco, San Francisco, California, USA
- Valentina Pedoia
- Department of Radiology and Biomedical Imaging, University of California, San Francisco, San Francisco, California, USA
|
29
|
Fu J, Singhrao K, Qi XS, Yang Y, Ruan D, Lewis JH. Three-dimensional multipath DenseNet for improving automatic segmentation of glioblastoma on pre-operative multimodal MR images. Med Phys 2021; 48:2859-2866. [PMID: 33621350 DOI: 10.1002/mp.14800] [Citation(s) in RCA: 7] [Impact Index Per Article: 2.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/12/2020] [Revised: 01/08/2021] [Accepted: 02/18/2021] [Indexed: 11/11/2022] Open
Abstract
PURPOSE Convolutional neural networks have achieved excellent results in automatic medical image segmentation. In this study, we proposed a novel three-dimensional (3D) multipath DenseNet for generating the accurate glioblastoma (GBM) tumor contour from four multimodal pre-operative MR images. We hypothesized that the multipath architecture could achieve more accurate segmentation than a singlepath architecture. METHODS Two hundred and fifty-eight GBM patients were included in this study. Each patient had four MR images (T1-weighted, contrast-enhanced T1-weighted, T2-weighted, and FLAIR) and the manually segmented tumor contour. We built a 3D multipath DenseNet that could be trained to achieve an end-to-end mapping from four MR images to the corresponding GBM tumor contour. A 3D singlepath DenseNet was also built for comparison. Both DenseNets were based on the encoder-decoder architecture. All four images were concatenated and fed into a single encoder path in the singlepath DenseNet, while each input image had its own encoder path in the multipath DenseNet. The patient cohort was randomly split into a training set of 180 patients, a validation set of 39 patients, and a testing set of 39 patients. Model performance was evaluated using the Dice similarity coefficient (DSC), average surface distance (ASD), and 95% Hausdorff distance (HD95% ). Wilcoxon signed-rank tests were conducted to assess statistical significances. RESULTS The singlepath DenseNet achieved the DSC of 0.911 ± 0.060, ASD of 1.3 ± 0.7 mm, and HD95% of 5.2 ± 7.1 mm, while the multipath DenseNet achieved the DSC of 0.922 ± 0.041, ASD of 1.1 ± 0.5 mm, and HD95% of 3.9 ± 3.3 mm. The P-values of all Wilcoxon signed-rank tests were less than 0.05. CONCLUSIONS Both DenseNets generated GBM tumor contours in good agreement with the manually segmented contours from multimodal MR images. The multipath DenseNet achieved more accurate tumor segmentation than the singlepath DenseNet. 
The 3D multipath DenseNet presented here demonstrated improved accuracy over comparable algorithms in the clinical task of GBM tumor segmentation.
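The Dice similarity coefficient reported above can be computed directly from two binary masks; a minimal NumPy sketch (illustrative only, not the authors' evaluation code):

```python
import numpy as np

def dice_coefficient(pred, truth):
    """Dice similarity coefficient, DSC = 2|A ∩ B| / (|A| + |B|).
    1.0 means perfect overlap with the manual contour, 0.0 no overlap."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    denom = pred.sum() + truth.sum()
    if denom == 0:
        return 1.0  # both masks empty: treat as perfect agreement
    return 2.0 * np.logical_and(pred, truth).sum() / denom
```

The same function applies voxel-wise to any 3D segmentation comparison in the entries below.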
Affiliation(s)
- Jie Fu
- Department of Radiation Oncology, University of California, Los Angeles, Los Angeles, CA, 90095, USA
- Kamal Singhrao
- Department of Radiation Oncology, University of California, Los Angeles, Los Angeles, CA, 90095, USA
- X Sharon Qi
- Department of Radiation Oncology, University of California, Los Angeles, Los Angeles, CA, 90095, USA
- Yingli Yang
- Department of Radiation Oncology, University of California, Los Angeles, Los Angeles, CA, 90095, USA
- Dan Ruan
- Department of Radiation Oncology, University of California, Los Angeles, Los Angeles, CA, 90095, USA
- John H Lewis
- Department of Radiation Oncology, Cedars-Sinai Medical Center, Los Angeles, CA, 90048, USA
30
Pennig L, Hoyer UCI, Krauskopf A, Shahzad R, Jünger ST, Thiele F, Laukamp KR, Grunz JP, Perkuhn M, Schlamann M, Kabbasch C, Borggrefe J, Goertz L. Deep learning assistance increases the detection sensitivity of radiologists for secondary intracranial aneurysms in subarachnoid hemorrhage. Neuroradiology 2021; 63:1985-1994. [PMID: 33837806 PMCID: PMC8589782 DOI: 10.1007/s00234-021-02697-9] [Citation(s) in RCA: 4] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/25/2021] [Accepted: 03/21/2021] [Indexed: 12/03/2022]
Abstract
Purpose To evaluate whether a deep learning model (DLM) could increase the detection sensitivity of radiologists for intracranial aneurysms on CT angiography (CTA) in aneurysmal subarachnoid hemorrhage (aSAH). Methods Three different DLMs were trained on CTA datasets of 68 aSAH patients with 79 aneurysms, with their outputs being combined applying ensemble learning (DLM-Ens). The DLM-Ens was evaluated on an independent test set of 104 aSAH patients with 126 aneurysms (mean volume 129.2 ± 185.4 mm3, 13.0% at the posterior circulation), which were determined by two radiologists and one neurosurgeon in consensus using CTA and digital subtraction angiography scans. CTA scans of the test set were then presented to three blinded radiologists (reader 1: 13, reader 2: 4, and reader 3: 3 years of experience in diagnostic neuroradiology), who assessed them individually for aneurysms. Detection sensitivities for aneurysms of the readers with and without the assistance of the DLM were compared. Results In the test set, the detection sensitivity of the DLM-Ens (85.7%) was comparable to the radiologists (reader 1: 91.2%, reader 2: 86.5%, and reader 3: 86.5%; Fleiss κ of 0.502). DLM-assistance significantly increased the detection sensitivity (reader 1: 97.6%, reader 2: 97.6%, and reader 3: 96.0%; overall P=.024; Fleiss κ of 0.878), especially for secondary aneurysms (88.2% of the additional aneurysms provided by the DLM). Conclusion Deep learning significantly improved the detection sensitivity of radiologists for aneurysms in aSAH, especially for secondary aneurysms. It therefore represents a valuable adjunct for physicians to establish an accurate diagnosis in order to optimize patient treatment.
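Combining several trained models into one, as in the DLM-Ens, can be approximated by averaging the models' voxel-wise probability maps and thresholding (soft voting); a hedged sketch, not the study's implementation:

```python
import numpy as np

def ensemble_soft_vote(prob_maps, threshold=0.5):
    """Average per-model probability maps (soft voting) and threshold
    the mean to obtain a single binary detection mask."""
    mean_map = np.mean(np.stack(prob_maps, axis=0), axis=0)
    return mean_map, mean_map >= threshold
```

Averaging tends to suppress false positives that only one base model produces, which is one motivation for ensembling detectors.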
Affiliation(s)
- Lenhard Pennig
- Institute for Diagnostic and Interventional Radiology, Faculty of Medicine and University Hospital Cologne, University of Cologne, Kerpener Straße 62, 50937, Cologne, Germany
- Ulrike Cornelia Isabel Hoyer
- Institute for Diagnostic and Interventional Radiology, Faculty of Medicine and University Hospital Cologne, University of Cologne, Kerpener Straße 62, 50937, Cologne, Germany
- Alexandra Krauskopf
- Institute for Diagnostic and Interventional Radiology, Faculty of Medicine and University Hospital Cologne, University of Cologne, Kerpener Straße 62, 50937, Cologne, Germany; Department of Diagnostic and Interventional Radiology, University Hospital Düsseldorf, Düsseldorf, Germany
- Rahil Shahzad
- Institute for Diagnostic and Interventional Radiology, Faculty of Medicine and University Hospital Cologne, University of Cologne, Kerpener Straße 62, 50937, Cologne, Germany; Innovative Technologies, Philips Healthcare, Aachen, Germany
- Stephanie T Jünger
- Center for Neurosurgery, Department of General Neurosurgery, Faculty of Medicine and University Hospital, University of Cologne, Cologne, Germany
- Frank Thiele
- Institute for Diagnostic and Interventional Radiology, Faculty of Medicine and University Hospital Cologne, University of Cologne, Kerpener Straße 62, 50937, Cologne, Germany; Innovative Technologies, Philips Healthcare, Aachen, Germany
- Kai Roman Laukamp
- Institute for Diagnostic and Interventional Radiology, Faculty of Medicine and University Hospital Cologne, University of Cologne, Kerpener Straße 62, 50937, Cologne, Germany
- Jan-Peter Grunz
- Department of Diagnostic and Interventional Radiology, University Hospital Würzburg, Würzburg, Germany
- Michael Perkuhn
- Institute for Diagnostic and Interventional Radiology, Faculty of Medicine and University Hospital Cologne, University of Cologne, Kerpener Straße 62, 50937, Cologne, Germany; Innovative Technologies, Philips Healthcare, Aachen, Germany
- Marc Schlamann
- Institute for Diagnostic and Interventional Radiology, Faculty of Medicine and University Hospital Cologne, University of Cologne, Kerpener Straße 62, 50937, Cologne, Germany
- Christoph Kabbasch
- Institute for Diagnostic and Interventional Radiology, Faculty of Medicine and University Hospital Cologne, University of Cologne, Kerpener Straße 62, 50937, Cologne, Germany
- Jan Borggrefe
- Institute for Diagnostic and Interventional Radiology, Faculty of Medicine and University Hospital Cologne, University of Cologne, Kerpener Straße 62, 50937, Cologne, Germany; Department of Radiology, Neuroradiology and Nuclear Medicine, Johannes Wesling University Hospital, Ruhr University Bochum, Bochum, Germany
- Lukas Goertz
- Institute for Diagnostic and Interventional Radiology, Faculty of Medicine and University Hospital Cologne, University of Cologne, Kerpener Straße 62, 50937, Cologne, Germany; Center for Neurosurgery, Department of General Neurosurgery, Faculty of Medicine and University Hospital, University of Cologne, Cologne, Germany
31
Luo Z, Jia Z, Yuan Z, Peng J. HDC-Net: Hierarchical Decoupled Convolution Network for Brain Tumor Segmentation. IEEE J Biomed Health Inform 2021; 25:737-745. [PMID: 32750914 DOI: 10.1109/jbhi.2020.2998146] [Citation(s) in RCA: 25] [Impact Index Per Article: 8.3] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/10/2022]
Abstract
Accurate segmentation of brain tumors from magnetic resonance images (MRIs) is crucial for clinical treatment decisions and surgical planning. Due to the large diversity of tumors and the complex boundary interactions between sub-regions, the task is highly challenging. Besides accuracy, resource constraint is another important consideration. Recently, impressive improvement has been achieved for this task by using deep convolutional networks. However, most state-of-the-art models rely on expensive 3D convolutions as well as model cascade/ensemble strategies, which result in high computational overheads and undesired system complexity. For clinical usage, the challenge is how to pursue the best accuracy within very limited computational budgets. In this study, we segment the 3D volumetric image in one pass with a hierarchical decoupled convolution network (HDC-Net), which is a lightweight but efficient pseudo-3D model. Specifically, we replace 3D convolutions with a novel hierarchical decoupled convolution (HDC) module, which can explore multi-scale multi-view spatial contexts with high efficiency. Extensive experiments on the BraTS 2018 and 2017 challenge datasets show that our method performs favorably against the state of the art in accuracy while greatly reducing computational complexity.
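The saving from replacing full 3D convolutions with decoupled ones can be illustrated by counting weights. The HDC module itself is hierarchical and multi-view, so this is only a back-of-the-envelope sketch in the spirit of pseudo-3D convolution, with an illustrative channel width:

```python
def conv3d_params(c_in, c_out, k):
    """Weight count of one 3D convolution layer (bias terms omitted)."""
    kd, kh, kw = k
    return c_in * c_out * kd * kh * kw

c = 64  # illustrative channel width, not a value from the paper
full = conv3d_params(c, c, (3, 3, 3))                          # dense 3x3x3 kernel
decoupled = conv3d_params(c, c, (1, 3, 3)) + conv3d_params(c, c, (3, 1, 1))
ratio = decoupled / full  # 12/27: under half the weights of the dense kernel
```

The decoupled pair still covers a 3×3×3 receptive field, which is the key reason pseudo-3D designs can approach full-3D accuracy at lower cost.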
32
Zhu C, Wang X, Teng Z, Chen S, Huang X, Xia M, Mao L, Bai C. Cascaded residual U-net for fully automatic segmentation of 3D carotid artery in high-resolution multi-contrast MR images. Phys Med Biol 2021; 66:045033. [PMID: 33333499 DOI: 10.1088/1361-6560/abd4bb] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/12/2022]
Abstract
Accurate and automatic carotid artery segmentation for magnetic resonance (MR) images is highly desirable: it can greatly assist a comprehensive study of atherosclerosis and accelerate clinical translation. Although many efforts have been made, identification of the inner lumen and outer wall in diseased vessels is still a challenging task due to complex vascular deformation, blurred wall boundaries, and confusing componential expression. In this paper, we introduce a novel fully automatic 3D framework for simultaneously segmenting the carotid artery from high-resolution multi-contrast MR sequences based on deep learning. First, an optimal channel fitting structure is designed for identity mapping, and a novel 3D residual U-net is used as the basic network. Second, high-resolution MR images are trained using both patch-level and global-level strategies, and the two pre-segmentation results are optimized based on structural characteristics. Third, the optimized pre-segmentation results are cascaded with the patch-cropped MR volume data and trained to segment the carotid lumen and wall. Extensive experiments demonstrate the proposed method outperforms state-of-the-art 3D U-Net-based segmentation models.
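The cascading step, feeding an optimized pre-segmentation back in alongside the patch-cropped MR volume, amounts to channel concatenation; a minimal sketch assuming a channel-first (C, D, H, W) layout (the paper's actual data layout is not specified here):

```python
import numpy as np

def cascade_input(mr_patch, pre_seg):
    """Stack a coarse pre-segmentation mask as an extra input channel
    for the second-stage network. mr_patch: (C, D, H, W) multi-contrast
    patch; pre_seg: (D, H, W) binary or soft mask."""
    extra = pre_seg[np.newaxis].astype(mr_patch.dtype)
    return np.concatenate([mr_patch, extra], axis=0)
```

The second-stage network then sees both the raw intensities and the first stage's spatial prior in a single input tensor.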
Affiliation(s)
- Chenglu Zhu
- College of Computer Science and Technology, Zhejiang University of Technology, Hangzhou, Zhejiang 310023, People's Republic of China
- Xiaoyan Wang
- College of Computer Science and Technology, Zhejiang University of Technology, Hangzhou, Zhejiang 310023, People's Republic of China
- Zhongzhao Teng
- University Department of Radiology, University of Cambridge, Cambridge, CB2 0QQ, United Kingdom
- Shengyong Chen
- Computer Science and Engineering, Tianjin University of Technology, Tianjin 300384, People's Republic of China
- Xiaojie Huang
- The Second Affiliated Hospital, School of Medicine, Zhejiang University, Hangzhou 310009, People's Republic of China
- Ming Xia
- College of Computer Science and Technology, Zhejiang University of Technology, Hangzhou, Zhejiang 310023, People's Republic of China
- Lizhao Mao
- College of Computer Science and Technology, Zhejiang University of Technology, Hangzhou, Zhejiang 310023, People's Republic of China
- Cong Bai
- College of Computer Science and Technology, Zhejiang University of Technology, Hangzhou, Zhejiang 310023, People's Republic of China
33
Umapathy L, Perez-Carrillo GG, Keerthivasan MB, Rosado-Toro JA, Altbach MI, Winegar B, Weinkauf C, Bilgin A. A Stacked Generalization of 3D Orthogonal Deep Learning Convolutional Neural Networks for Improved Detection of White Matter Hyperintensities in 3D FLAIR Images. AJNR Am J Neuroradiol 2021; 42:639-647. [PMID: 33574101 DOI: 10.3174/ajnr.a6970] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/13/2020] [Accepted: 10/26/2020] [Indexed: 01/18/2023]
Abstract
BACKGROUND AND PURPOSE Accurate and reliable detection of white matter hyperintensities and their volume quantification can provide valuable clinical information to assess neurologic disease progression. In this work, a stacked generalization ensemble of orthogonal 3D convolutional neural networks, StackGen-Net, is explored for improving automated detection of white matter hyperintensities in 3D T2-FLAIR images. MATERIALS AND METHODS Individual convolutional neural networks in StackGen-Net were trained on 2.5D patches from orthogonal reformatting of 3D-FLAIR (n = 21) to yield white matter hyperintensity posteriors. A meta convolutional neural network was trained to learn the functional mapping from orthogonal white matter hyperintensity posteriors to the final white matter hyperintensity prediction. The impact of training data and architecture choices on white matter hyperintensity segmentation performance was systematically evaluated on a test cohort (n = 9). The segmentation performance of StackGen-Net was compared with state-of-the-art convolutional neural network techniques on an independent test cohort from the Alzheimer's Disease Neuroimaging Initiative-3 (n = 20). RESULTS StackGen-Net outperformed individual convolutional neural networks in the ensemble and their combination using averaging or majority voting. In a comparison with state-of-the-art white matter hyperintensity segmentation techniques, StackGen-Net achieved a significantly higher Dice score (0.76 [SD, 0.08]), F1-lesion score (0.74 [SD, 0.13]), and area under the precision-recall curve (0.84 [SD, 0.09]), and the lowest absolute volume difference (13.3% [SD, 9.1%]). StackGen-Net performance in Dice scores (median = 0.74) did not significantly differ (P = .22) from interobserver (median = 0.73) variability between 2 experienced neuroradiologists. We found no significant difference (P = .15) in white matter hyperintensity lesion volumes from StackGen-Net predictions and ground truth annotations.
CONCLUSIONS A stacked generalization of convolutional neural networks, utilizing multiplanar lesion information with 2.5D spatial context, greatly improved the segmentation performance of StackGen-Net compared with traditional ensemble techniques and some state-of-the-art deep learning models for 3D-FLAIR.
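Stacked generalization replaces fixed fusion rules (averaging, majority voting) with a learned combiner of the base learners' posteriors. In miniature, with hypothetical learned weights standing in for the trained meta-CNN:

```python
import numpy as np

def majority_vote(posteriors, threshold=0.5):
    """Fixed fusion baseline: per-voxel majority vote over the
    orthogonal-view posteriors."""
    votes = np.stack(posteriors) >= threshold
    return votes.sum(axis=0) > len(posteriors) / 2

def meta_combine(posteriors, weights, bias):
    """Stacked generalization in miniature: a learned logistic
    combination of base-learner posteriors. The weights and bias here
    are illustrative stand-ins for the trained meta-network."""
    z = sum(w * p for w, p in zip(weights, posteriors)) + bias
    return 1.0 / (1.0 + np.exp(-z))
```

Unlike voting, the meta-learner can weight a reliable view more heavily, which is the advantage StackGen-Net reports over averaging and majority voting.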
Affiliation(s)
- L Umapathy
- From the Departments of Electrical and Computer Engineering (L.U., A.B.); Medical Imaging (L.U., G.G.P.-C., M.B.K., J.A.R.-T., M.I.A., B.W., A.B.)
- G G Perez-Carrillo
- Medical Imaging (L.U., G.G.P.-C., M.B.K., J.A.R.-T., M.I.A., B.W., A.B.)
- M B Keerthivasan
- Medical Imaging (L.U., G.G.P.-C., M.B.K., J.A.R.-T., M.I.A., B.W., A.B.)
- J A Rosado-Toro
- Medical Imaging (L.U., G.G.P.-C., M.B.K., J.A.R.-T., M.I.A., B.W., A.B.)
- M I Altbach
- Medical Imaging (L.U., G.G.P.-C., M.B.K., J.A.R.-T., M.I.A., B.W., A.B.)
- B Winegar
- Medical Imaging (L.U., G.G.P.-C., M.B.K., J.A.R.-T., M.I.A., B.W., A.B.)
- A Bilgin
- From the Departments of Electrical and Computer Engineering (L.U., A.B.); Medical Imaging (L.U., G.G.P.-C., M.B.K., J.A.R.-T., M.I.A., B.W., A.B.); Biomedical Engineering (A.B.), University of Arizona, Tucson, Arizona
34
Wang Y, Peng J, Jia Z. Brain tumor segmentation via C-dense convolutional neural network. PROGRESS IN ARTIFICIAL INTELLIGENCE 2021. [DOI: 10.1007/s13748-021-00232-8] [Citation(s) in RCA: 4] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 01/22/2023]
35
36
Nalepa J. AIM and Brain Tumors. Artif Intell Med 2021. [DOI: 10.1007/978-3-030-58080-3_284-1] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/30/2022]
37
Fully automated detection and segmentation of intracranial aneurysms in subarachnoid hemorrhage on CTA using deep learning. Sci Rep 2020; 10:21799. [PMID: 33311535 PMCID: PMC7733480 DOI: 10.1038/s41598-020-78384-1] [Citation(s) in RCA: 16] [Impact Index Per Article: 4.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/20/2020] [Accepted: 11/23/2020] [Indexed: 02/07/2023] Open
Abstract
In aneurysmal subarachnoid hemorrhage (aSAH), accurate diagnosis of the aneurysm is essential for subsequent treatment to prevent rebleeding. However, aneurysm detection proves to be challenging and time-consuming. The purpose of this study was to develop and evaluate a deep learning model (DLM) to automatically detect and segment aneurysms in patients with aSAH on computed tomography angiography. In this retrospective single-center study, three different DLMs were trained on 68 patients with 79 aneurysms treated for aSAH (2016–2017) using five-fold cross-validation. Their outputs were combined into a single DLM via ensemble learning. The DLM was evaluated on an independent test set consisting of 185 patients with 215 aneurysms (2010–2015). Independent manual segmentations of aneurysms in a 3D voxel-wise manner by two readers (neurosurgeon, radiologist) provided the reference standard. For aneurysms > 30 mm3 (mean diameter of ~ 4 mm) on the test set, the DLM provided a detection sensitivity of 87% with false positives (FPs)/scan of 0.42. Automatic segmentations achieved a median Dice similarity coefficient (DSC) of 0.80 compared to the reference standard. Aneurysm location (anterior vs. posterior circulation; P = .07) and bleeding severity (Fisher grade ≤ 3 vs. 4; P = .33) did not impede detection sensitivity or segmentation performance. For aneurysms > 100 mm3 (mean diameter of ~ 6 mm), a sensitivity of 96% with DSC of 0.87 and FPs/scan of 0.14 were obtained. In the present study, we demonstrate that the proposed DLM detects and segments aneurysms > 30 mm3 in patients with aSAH with high sensitivity independent of cerebral circulation and bleeding severity while producing FP findings of less than one per scan. Hence, the DLM can potentially assist treating physicians in aSAH by providing automated detection and segmentations of aneurysms.
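The lesion-level figures reported here reduce to two simple ratios; a sketch with illustrative counts chosen only to mirror the reported 87% sensitivity and 0.42 FPs/scan (not the study's raw data):

```python
def detection_metrics(true_positives, false_negatives, false_positives, n_scans):
    """Lesion-level detection sensitivity and false positives per scan."""
    sensitivity = true_positives / (true_positives + false_negatives)
    fps_per_scan = false_positives / n_scans
    return sensitivity, fps_per_scan

# Illustrative counts only, not taken from the paper.
sens, fps = detection_metrics(true_positives=87, false_negatives=13,
                              false_positives=42, n_scans=100)
```

Reporting FPs per scan rather than specificity is the convention for detection tasks where true negatives are not well defined.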
38
Mitchell JR, Kamnitsas K, Singleton KW, Whitmire SA, Clark-Swanson KR, Ranjbar S, Rickertsen CR, Johnston SK, Egan KM, Rollison DE, Arrington J, Krecke KN, Passe TJ, Verdoorn JT, Nagelschneider AA, Carr CM, Port JD, Patton A, Campeau NG, Liebo GB, Eckel LJ, Wood CP, Hunt CH, Vibhute P, Nelson KD, Hoxworth JM, Patel AC, Chong BW, Ross JS, Boxerman JL, Vogelbaum MA, Hu LS, Glocker B, Swanson KR. Deep neural network to locate and segment brain tumors outperformed the expert technicians who created the training data. J Med Imaging (Bellingham) 2020; 7:055501. [PMID: 33102623 PMCID: PMC7567400 DOI: 10.1117/1.jmi.7.5.055501] [Citation(s) in RCA: 4] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/08/2020] [Accepted: 09/21/2020] [Indexed: 11/17/2022] Open
Abstract
Purpose: Deep learning (DL) algorithms have shown promising results for brain tumor segmentation in MRI. However, validation is required prior to routine clinical use. We report the first randomized and blinded comparison of DL and trained technician segmentations. Approach: We compiled a multi-institutional database of 741 pretreatment MRI exams. Each contained a postcontrast T1-weighted exam, a T2-weighted fluid-attenuated inversion recovery exam, and at least one technician-derived tumor segmentation. The database included 729 unique patients (470 males and 259 females). Of these exams, 641 were used for training the DL system, and 100 were reserved for testing. We developed a platform to enable qualitative, blinded, controlled assessment of lesion segmentations made by technicians and the DL method. On this platform, 20 neuroradiologists performed 400 side-by-side comparisons of segmentations on 100 test cases. They scored each segmentation between 0 (poor) and 10 (perfect). Agreement between segmentations from technicians and the DL method was also evaluated quantitatively using the Dice coefficient, which produces values between 0 (no overlap) and 1 (perfect overlap). Results: The neuroradiologists gave technician and DL segmentations mean scores of 6.97 and 7.31, respectively (p<0.00007). The DL method achieved a mean Dice coefficient of 0.87 on the test cases. Conclusions: This was the first objective comparison of automated and human segmentation using a blinded controlled assessment study. Our DL system learned to outperform its “human teachers” and produced output that was better, on average, than its training data.
Affiliation(s)
- Joseph Ross Mitchell
- H. Lee Moffitt Cancer Center and Research Institute, Department of Machine Learning, Tampa, Florida, United States
- Kyle W Singleton
- Mayo Clinic, Mathematical NeuroOncology Lab, Phoenix, Arizona, United States
- Scott A Whitmire
- Mayo Clinic, Mathematical NeuroOncology Lab, Phoenix, Arizona, United States
- Sara Ranjbar
- Mayo Clinic, Mathematical NeuroOncology Lab, Phoenix, Arizona, United States
- Sandra K Johnston
- Mayo Clinic, Mathematical NeuroOncology Lab, Phoenix, Arizona, United States; University of Washington, Department of Radiology, Seattle, Washington, United States
- Kathleen M Egan
- H. Lee Moffitt Cancer Center and Research Institute, Department of Cancer Epidemiology, Tampa, Florida, United States
- Dana E Rollison
- H. Lee Moffitt Cancer Center and Research Institute, Department of Cancer Epidemiology, Tampa, Florida, United States
- John Arrington
- H. Lee Moffitt Cancer Center and Research Institute, Department of Diagnostic Imaging and Interventional Radiology, Tampa, Florida, United States
- Karl N Krecke
- Mayo Clinic, Department of Radiology, Rochester, Minnesota, United States
- Theodore J Passe
- Mayo Clinic, Department of Radiology, Rochester, Minnesota, United States
- Jared T Verdoorn
- Mayo Clinic, Department of Radiology, Rochester, Minnesota, United States
- Carrie M Carr
- Mayo Clinic, Department of Radiology, Rochester, Minnesota, United States
- John D Port
- Mayo Clinic, Department of Radiology, Rochester, Minnesota, United States
- Alice Patton
- Mayo Clinic, Department of Radiology, Rochester, Minnesota, United States
- Norbert G Campeau
- Mayo Clinic, Department of Radiology, Rochester, Minnesota, United States
- Greta B Liebo
- Mayo Clinic, Department of Radiology, Rochester, Minnesota, United States
- Laurence J Eckel
- Mayo Clinic, Department of Radiology, Rochester, Minnesota, United States
- Christopher P Wood
- Mayo Clinic, Department of Radiology, Rochester, Minnesota, United States
- Christopher H Hunt
- Mayo Clinic, Department of Radiology, Rochester, Minnesota, United States
- Prasanna Vibhute
- Mayo Clinic, Department of Radiology, Rochester, Minnesota, United States
- Kent D Nelson
- Mayo Clinic, Department of Radiology, Rochester, Minnesota, United States
- Joseph M Hoxworth
- Mayo Clinic, Department of Radiology, Rochester, Minnesota, United States
- Ameet C Patel
- Mayo Clinic, Department of Radiology, Rochester, Minnesota, United States
- Brian W Chong
- Mayo Clinic, Department of Radiology, Rochester, Minnesota, United States
- Jeffrey S Ross
- Mayo Clinic, Department of Radiology, Rochester, Minnesota, United States
- Jerrold L Boxerman
- Rhode Island Hospital and Alpert Medical School of Brown University, Department of Diagnostic Imaging, Providence, Rhode Island, United States
- Michael A Vogelbaum
- H. Lee Moffitt Cancer Center and Research Institute, Department of Neurosurgery, Tampa, Florida, United States
- Leland S Hu
- Mayo Clinic, Mathematical NeuroOncology Lab, Phoenix, Arizona, United States; Mayo Clinic, Department of Radiology, Rochester, Minnesota, United States
- Ben Glocker
- Imperial College, Biomedical Image Analysis Group, London, United Kingdom
- Kristin R Swanson
- Mayo Clinic, Mathematical NeuroOncology Lab, Phoenix, Arizona, United States; Mayo Clinic, Department of Neurosurgery, Phoenix, Arizona, United States
39
Eijgelaar RS, Visser M, Müller DMJ, Barkhof F, Vrenken H, van Herk M, Bello L, Conti Nibali M, Rossi M, Sciortino T, Berger MS, Hervey-Jumper S, Kiesel B, Widhalm G, Furtner J, Robe PAJT, Mandonnet E, De Witt Hamer PC, de Munck JC, Witte MG. Robust Deep Learning-based Segmentation of Glioblastoma on Routine Clinical MRI Scans Using Sparsified Training. Radiol Artif Intell 2020; 2:e190103. [PMID: 33937837 PMCID: PMC8082349 DOI: 10.1148/ryai.2020190103] [Citation(s) in RCA: 7] [Impact Index Per Article: 1.8] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/13/2019] [Revised: 04/10/2020] [Accepted: 04/16/2020] [Indexed: 12/15/2022]
Abstract
PURPOSE To improve the robustness of deep learning-based glioblastoma segmentation in a clinical setting with sparsified datasets. MATERIALS AND METHODS In this retrospective study, preoperative T1-weighted, T2-weighted, T2-weighted fluid-attenuated inversion recovery, and postcontrast T1-weighted MRI from 117 patients (median age, 64 years; interquartile range [IQR], 55-73 years; 76 men) included within the Multimodal Brain Tumor Image Segmentation (BraTS) dataset plus a clinical dataset (2012-2013) with similar imaging modalities of 634 patients (median age, 59 years; IQR, 49-69 years; 382 men) with glioblastoma from six hospitals were used. Expert tumor delineations on the postcontrast images were available, but for various clinical datasets, one or more sequences were missing. The convolutional neural network, DeepMedic, was trained on combinations of complete and incomplete data with and without site-specific data. Sparsified training was introduced, which randomly simulated missing sequences during training. The effects of sparsified training and center-specific training were tested using Wilcoxon signed rank tests for paired measurements. RESULTS A model trained exclusively on BraTS data reached a median Dice score of 0.81 for segmentation on BraTS test data but only 0.49 on the clinical data. Sparsified training improved performance (adjusted P < .05), even when excluding test data with missing sequences, to a median Dice score of 0.67. Inclusion of site-specific data during sparsified training led to higher model performance, with Dice scores greater than 0.8, on par with a model based on all complete and incomplete data. For the model using BraTS and clinical training data, inclusion of site-specific data or sparsified training was of no consequence. CONCLUSION Accurate and automatic segmentation of glioblastoma on clinical scans is feasible using a model based on large, heterogeneous, and partially incomplete datasets.
Sparsified training may boost the performance of a smaller model based on public and site-specific data.
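Sparsified training, randomly simulating missing sequences, can be sketched as randomly zeroing whole modality channels at each training step. This is a hedged illustration of the idea, not the DeepMedic patch pipeline:

```python
import numpy as np

def sparsify_modalities(volume, keep_prob=0.7, rng=None):
    """Randomly zero out whole modality channels to simulate missing
    MRI sequences during training. volume: (C, D, H, W), one channel
    per sequence. At least one channel is always kept so the network
    never sees a fully empty input. keep_prob is illustrative."""
    rng = np.random.default_rng() if rng is None else rng
    keep = rng.random(volume.shape[0]) < keep_prob
    if not keep.any():
        keep[rng.integers(volume.shape[0])] = True  # guarantee one survivor
    masked = volume.copy()
    masked[~keep] = 0.0
    return masked, keep
```

Training on such randomly masked inputs forces the network to tolerate any subset of available sequences at test time, which is the robustness gain the study reports on incomplete clinical data.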
Affiliation(s)
- Domenique M. J. Müller
- From the Department of Radiation Oncology, The Netherlands Cancer Institute, Plesmanlaan 121, 1066 CX Amsterdam, the Netherlands (R.S.E., M.v.H., M.G.W.); Department of Radiology and Nuclear Medicine, Amsterdam UMC, Location Vrije Universiteit Amsterdam, Amsterdam, the Netherlands (M.V., F.B., H.V., J.C.d.M.); Neurosurgical Center Amsterdam, Amsterdam UMC, Location Vrije Universiteit Amsterdam, Amsterdam, the Netherlands (D.M.J.M., P.C.D.W.H.); Institutes of Neurology & Healthcare Engineering, University College London, London, England (F.B.); Faculty of Biology, Medicine & Health, Division of Cancer Sciences, University of Manchester and Christie NHS Trust, Manchester, England (M.v.H.); Neurosurgical Oncology Unit, Department of Oncology and Hemato-Oncology, Università degli Studi di Milano, Humanitas Research Hospital, IRCCS, Milan, Italy (L.B., M.C.N., M.R., T.S.); Department of Neurologic Surgery, University of California–San Francisco, San Francisco, Calif (M.S.B., S.H.J.); Department of Neurosurgery, Medical University Vienna, Vienna, Austria (B.K., G.W.); Department of Biomedical Imaging and Image-guided Therapy, Medical University Vienna, Vienna, Austria (J.F.); Department of Neurology & Neurosurgery, University Medical Center Utrecht, Utrecht, the Netherlands (P.A.J.T.R.); and Department of Neurologic Surgery, Hôpital Lariboisière, Paris, France (E.M.)
- Frederik Barkhof
- Hugo Vrenken
- Marcel van Herk
- Lorenzo Bello
- Marco Conti Nibali
| | - Marco Rossi
- From the Department of Radiation Oncology, The Netherlands Cancer Institute, Plesmanlaan 121, 1066 CX Amsterdam, the Netherlands (R.S.E., M.v.H., M.G.W.); Department of Radiology and Nuclear Medicine, Amsterdam UMC, Location Vrije Universiteit Amsterdam, Amsterdam, the Netherlands (M.V., F.B., H.V., J.C.d.M.); Neurosurgical Center Amsterdam, Amsterdam UMC, Location Vrije Universiteit Amsterdam, Amsterdam, the Netherlands (D.M.J.M., P.C.D.W.H.); Institutes of Neurology & Healthcare Engineering, University College London, London, England (F.B.); Faculty of Biology, Medicine & Health, Division of Cancer Sciences, University of Manchester and Christie NHS Trust, Manchester, England (M.v.H.); Neurosurgical Oncology Unit, Department of Oncology and Hemato-Oncology, Università degli Studi di Milano, Humanitas Research Hospital, IRCCS, Milan, Italy (L.B., M.C.N., M.R., T.S.); Department of Neurologic Surgery, University of California–San Francisco, San Francisco, Calif (M.S.B., S.H.J.); Department of Neurosurgery, Medical University Vienna, Vienna, Austria (B.K., G.W.); Department of Biomedical Imaging and Image-guided Therapy, Medical University Vienna, Vienna, Austria (J.F.); Department of Neurology & Neurosurgery, University Medical Center Utrecht, Utrecht, the Netherlands (P.A.J.T.R.); and Department of Neurologic Surgery, Hôpital Lariboisière, Paris, France (E.M.)
| | - Tommaso Sciortino
- From the Department of Radiation Oncology, The Netherlands Cancer Institute, Plesmanlaan 121, 1066 CX Amsterdam, the Netherlands (R.S.E., M.v.H., M.G.W.); Department of Radiology and Nuclear Medicine, Amsterdam UMC, Location Vrije Universiteit Amsterdam, Amsterdam, the Netherlands (M.V., F.B., H.V., J.C.d.M.); Neurosurgical Center Amsterdam, Amsterdam UMC, Location Vrije Universiteit Amsterdam, Amsterdam, the Netherlands (D.M.J.M., P.C.D.W.H.); Institutes of Neurology & Healthcare Engineering, University College London, London, England (F.B.); Faculty of Biology, Medicine & Health, Division of Cancer Sciences, University of Manchester and Christie NHS Trust, Manchester, England (M.v.H.); Neurosurgical Oncology Unit, Department of Oncology and Hemato-Oncology, Università degli Studi di Milano, Humanitas Research Hospital, IRCCS, Milan, Italy (L.B., M.C.N., M.R., T.S.); Department of Neurologic Surgery, University of California–San Francisco, San Francisco, Calif (M.S.B., S.H.J.); Department of Neurosurgery, Medical University Vienna, Vienna, Austria (B.K., G.W.); Department of Biomedical Imaging and Image-guided Therapy, Medical University Vienna, Vienna, Austria (J.F.); Department of Neurology & Neurosurgery, University Medical Center Utrecht, Utrecht, the Netherlands (P.A.J.T.R.); and Department of Neurologic Surgery, Hôpital Lariboisière, Paris, France (E.M.)
| | - Mitchel S. Berger
- From the Department of Radiation Oncology, The Netherlands Cancer Institute, Plesmanlaan 121, 1066 CX Amsterdam, the Netherlands (R.S.E., M.v.H., M.G.W.); Department of Radiology and Nuclear Medicine, Amsterdam UMC, Location Vrije Universiteit Amsterdam, Amsterdam, the Netherlands (M.V., F.B., H.V., J.C.d.M.); Neurosurgical Center Amsterdam, Amsterdam UMC, Location Vrije Universiteit Amsterdam, Amsterdam, the Netherlands (D.M.J.M., P.C.D.W.H.); Institutes of Neurology & Healthcare Engineering, University College London, London, England (F.B.); Faculty of Biology, Medicine & Health, Division of Cancer Sciences, University of Manchester and Christie NHS Trust, Manchester, England (M.v.H.); Neurosurgical Oncology Unit, Department of Oncology and Hemato-Oncology, Università degli Studi di Milano, Humanitas Research Hospital, IRCCS, Milan, Italy (L.B., M.C.N., M.R., T.S.); Department of Neurologic Surgery, University of California–San Francisco, San Francisco, Calif (M.S.B., S.H.J.); Department of Neurosurgery, Medical University Vienna, Vienna, Austria (B.K., G.W.); Department of Biomedical Imaging and Image-guided Therapy, Medical University Vienna, Vienna, Austria (J.F.); Department of Neurology & Neurosurgery, University Medical Center Utrecht, Utrecht, the Netherlands (P.A.J.T.R.); and Department of Neurologic Surgery, Hôpital Lariboisière, Paris, France (E.M.)
| | - Shawn Hervey-Jumper
- From the Department of Radiation Oncology, The Netherlands Cancer Institute, Plesmanlaan 121, 1066 CX Amsterdam, the Netherlands (R.S.E., M.v.H., M.G.W.); Department of Radiology and Nuclear Medicine, Amsterdam UMC, Location Vrije Universiteit Amsterdam, Amsterdam, the Netherlands (M.V., F.B., H.V., J.C.d.M.); Neurosurgical Center Amsterdam, Amsterdam UMC, Location Vrije Universiteit Amsterdam, Amsterdam, the Netherlands (D.M.J.M., P.C.D.W.H.); Institutes of Neurology & Healthcare Engineering, University College London, London, England (F.B.); Faculty of Biology, Medicine & Health, Division of Cancer Sciences, University of Manchester and Christie NHS Trust, Manchester, England (M.v.H.); Neurosurgical Oncology Unit, Department of Oncology and Hemato-Oncology, Università degli Studi di Milano, Humanitas Research Hospital, IRCCS, Milan, Italy (L.B., M.C.N., M.R., T.S.); Department of Neurologic Surgery, University of California–San Francisco, San Francisco, Calif (M.S.B., S.H.J.); Department of Neurosurgery, Medical University Vienna, Vienna, Austria (B.K., G.W.); Department of Biomedical Imaging and Image-guided Therapy, Medical University Vienna, Vienna, Austria (J.F.); Department of Neurology & Neurosurgery, University Medical Center Utrecht, Utrecht, the Netherlands (P.A.J.T.R.); and Department of Neurologic Surgery, Hôpital Lariboisière, Paris, France (E.M.)
| | - Barbara Kiesel
- From the Department of Radiation Oncology, The Netherlands Cancer Institute, Plesmanlaan 121, 1066 CX Amsterdam, the Netherlands (R.S.E., M.v.H., M.G.W.); Department of Radiology and Nuclear Medicine, Amsterdam UMC, Location Vrije Universiteit Amsterdam, Amsterdam, the Netherlands (M.V., F.B., H.V., J.C.d.M.); Neurosurgical Center Amsterdam, Amsterdam UMC, Location Vrije Universiteit Amsterdam, Amsterdam, the Netherlands (D.M.J.M., P.C.D.W.H.); Institutes of Neurology & Healthcare Engineering, University College London, London, England (F.B.); Faculty of Biology, Medicine & Health, Division of Cancer Sciences, University of Manchester and Christie NHS Trust, Manchester, England (M.v.H.); Neurosurgical Oncology Unit, Department of Oncology and Hemato-Oncology, Università degli Studi di Milano, Humanitas Research Hospital, IRCCS, Milan, Italy (L.B., M.C.N., M.R., T.S.); Department of Neurologic Surgery, University of California–San Francisco, San Francisco, Calif (M.S.B., S.H.J.); Department of Neurosurgery, Medical University Vienna, Vienna, Austria (B.K., G.W.); Department of Biomedical Imaging and Image-guided Therapy, Medical University Vienna, Vienna, Austria (J.F.); Department of Neurology & Neurosurgery, University Medical Center Utrecht, Utrecht, the Netherlands (P.A.J.T.R.); and Department of Neurologic Surgery, Hôpital Lariboisière, Paris, France (E.M.)
| | - Georg Widhalm
- From the Department of Radiation Oncology, The Netherlands Cancer Institute, Plesmanlaan 121, 1066 CX Amsterdam, the Netherlands (R.S.E., M.v.H., M.G.W.); Department of Radiology and Nuclear Medicine, Amsterdam UMC, Location Vrije Universiteit Amsterdam, Amsterdam, the Netherlands (M.V., F.B., H.V., J.C.d.M.); Neurosurgical Center Amsterdam, Amsterdam UMC, Location Vrije Universiteit Amsterdam, Amsterdam, the Netherlands (D.M.J.M., P.C.D.W.H.); Institutes of Neurology & Healthcare Engineering, University College London, London, England (F.B.); Faculty of Biology, Medicine & Health, Division of Cancer Sciences, University of Manchester and Christie NHS Trust, Manchester, England (M.v.H.); Neurosurgical Oncology Unit, Department of Oncology and Hemato-Oncology, Università degli Studi di Milano, Humanitas Research Hospital, IRCCS, Milan, Italy (L.B., M.C.N., M.R., T.S.); Department of Neurologic Surgery, University of California–San Francisco, San Francisco, Calif (M.S.B., S.H.J.); Department of Neurosurgery, Medical University Vienna, Vienna, Austria (B.K., G.W.); Department of Biomedical Imaging and Image-guided Therapy, Medical University Vienna, Vienna, Austria (J.F.); Department of Neurology & Neurosurgery, University Medical Center Utrecht, Utrecht, the Netherlands (P.A.J.T.R.); and Department of Neurologic Surgery, Hôpital Lariboisière, Paris, France (E.M.)
| | - Julia Furtner
- From the Department of Radiation Oncology, The Netherlands Cancer Institute, Plesmanlaan 121, 1066 CX Amsterdam, the Netherlands (R.S.E., M.v.H., M.G.W.); Department of Radiology and Nuclear Medicine, Amsterdam UMC, Location Vrije Universiteit Amsterdam, Amsterdam, the Netherlands (M.V., F.B., H.V., J.C.d.M.); Neurosurgical Center Amsterdam, Amsterdam UMC, Location Vrije Universiteit Amsterdam, Amsterdam, the Netherlands (D.M.J.M., P.C.D.W.H.); Institutes of Neurology & Healthcare Engineering, University College London, London, England (F.B.); Faculty of Biology, Medicine & Health, Division of Cancer Sciences, University of Manchester and Christie NHS Trust, Manchester, England (M.v.H.); Neurosurgical Oncology Unit, Department of Oncology and Hemato-Oncology, Università degli Studi di Milano, Humanitas Research Hospital, IRCCS, Milan, Italy (L.B., M.C.N., M.R., T.S.); Department of Neurologic Surgery, University of California–San Francisco, San Francisco, Calif (M.S.B., S.H.J.); Department of Neurosurgery, Medical University Vienna, Vienna, Austria (B.K., G.W.); Department of Biomedical Imaging and Image-guided Therapy, Medical University Vienna, Vienna, Austria (J.F.); Department of Neurology & Neurosurgery, University Medical Center Utrecht, Utrecht, the Netherlands (P.A.J.T.R.); and Department of Neurologic Surgery, Hôpital Lariboisière, Paris, France (E.M.)
| | - Pierre A. J. T. Robe
- From the Department of Radiation Oncology, The Netherlands Cancer Institute, Plesmanlaan 121, 1066 CX Amsterdam, the Netherlands (R.S.E., M.v.H., M.G.W.); Department of Radiology and Nuclear Medicine, Amsterdam UMC, Location Vrije Universiteit Amsterdam, Amsterdam, the Netherlands (M.V., F.B., H.V., J.C.d.M.); Neurosurgical Center Amsterdam, Amsterdam UMC, Location Vrije Universiteit Amsterdam, Amsterdam, the Netherlands (D.M.J.M., P.C.D.W.H.); Institutes of Neurology & Healthcare Engineering, University College London, London, England (F.B.); Faculty of Biology, Medicine & Health, Division of Cancer Sciences, University of Manchester and Christie NHS Trust, Manchester, England (M.v.H.); Neurosurgical Oncology Unit, Department of Oncology and Hemato-Oncology, Università degli Studi di Milano, Humanitas Research Hospital, IRCCS, Milan, Italy (L.B., M.C.N., M.R., T.S.); Department of Neurologic Surgery, University of California–San Francisco, San Francisco, Calif (M.S.B., S.H.J.); Department of Neurosurgery, Medical University Vienna, Vienna, Austria (B.K., G.W.); Department of Biomedical Imaging and Image-guided Therapy, Medical University Vienna, Vienna, Austria (J.F.); Department of Neurology & Neurosurgery, University Medical Center Utrecht, Utrecht, the Netherlands (P.A.J.T.R.); and Department of Neurologic Surgery, Hôpital Lariboisière, Paris, France (E.M.)
| | - Emmanuel Mandonnet
- From the Department of Radiation Oncology, The Netherlands Cancer Institute, Plesmanlaan 121, 1066 CX Amsterdam, the Netherlands (R.S.E., M.v.H., M.G.W.); Department of Radiology and Nuclear Medicine, Amsterdam UMC, Location Vrije Universiteit Amsterdam, Amsterdam, the Netherlands (M.V., F.B., H.V., J.C.d.M.); Neurosurgical Center Amsterdam, Amsterdam UMC, Location Vrije Universiteit Amsterdam, Amsterdam, the Netherlands (D.M.J.M., P.C.D.W.H.); Institutes of Neurology & Healthcare Engineering, University College London, London, England (F.B.); Faculty of Biology, Medicine & Health, Division of Cancer Sciences, University of Manchester and Christie NHS Trust, Manchester, England (M.v.H.); Neurosurgical Oncology Unit, Department of Oncology and Hemato-Oncology, Università degli Studi di Milano, Humanitas Research Hospital, IRCCS, Milan, Italy (L.B., M.C.N., M.R., T.S.); Department of Neurologic Surgery, University of California–San Francisco, San Francisco, Calif (M.S.B., S.H.J.); Department of Neurosurgery, Medical University Vienna, Vienna, Austria (B.K., G.W.); Department of Biomedical Imaging and Image-guided Therapy, Medical University Vienna, Vienna, Austria (J.F.); Department of Neurology & Neurosurgery, University Medical Center Utrecht, Utrecht, the Netherlands (P.A.J.T.R.); and Department of Neurologic Surgery, Hôpital Lariboisière, Paris, France (E.M.)
| | - Philip C. De Witt Hamer
- From the Department of Radiation Oncology, The Netherlands Cancer Institute, Plesmanlaan 121, 1066 CX Amsterdam, the Netherlands (R.S.E., M.v.H., M.G.W.); Department of Radiology and Nuclear Medicine, Amsterdam UMC, Location Vrije Universiteit Amsterdam, Amsterdam, the Netherlands (M.V., F.B., H.V., J.C.d.M.); Neurosurgical Center Amsterdam, Amsterdam UMC, Location Vrije Universiteit Amsterdam, Amsterdam, the Netherlands (D.M.J.M., P.C.D.W.H.); Institutes of Neurology & Healthcare Engineering, University College London, London, England (F.B.); Faculty of Biology, Medicine & Health, Division of Cancer Sciences, University of Manchester and Christie NHS Trust, Manchester, England (M.v.H.); Neurosurgical Oncology Unit, Department of Oncology and Hemato-Oncology, Università degli Studi di Milano, Humanitas Research Hospital, IRCCS, Milan, Italy (L.B., M.C.N., M.R., T.S.); Department of Neurologic Surgery, University of California–San Francisco, San Francisco, Calif (M.S.B., S.H.J.); Department of Neurosurgery, Medical University Vienna, Vienna, Austria (B.K., G.W.); Department of Biomedical Imaging and Image-guided Therapy, Medical University Vienna, Vienna, Austria (J.F.); Department of Neurology & Neurosurgery, University Medical Center Utrecht, Utrecht, the Netherlands (P.A.J.T.R.); and Department of Neurologic Surgery, Hôpital Lariboisière, Paris, France (E.M.)
| | - Jan C. de Munck
- From the Department of Radiation Oncology, The Netherlands Cancer Institute, Plesmanlaan 121, 1066 CX Amsterdam, the Netherlands (R.S.E., M.v.H., M.G.W.); Department of Radiology and Nuclear Medicine, Amsterdam UMC, Location Vrije Universiteit Amsterdam, Amsterdam, the Netherlands (M.V., F.B., H.V., J.C.d.M.); Neurosurgical Center Amsterdam, Amsterdam UMC, Location Vrije Universiteit Amsterdam, Amsterdam, the Netherlands (D.M.J.M., P.C.D.W.H.); Institutes of Neurology & Healthcare Engineering, University College London, London, England (F.B.); Faculty of Biology, Medicine & Health, Division of Cancer Sciences, University of Manchester and Christie NHS Trust, Manchester, England (M.v.H.); Neurosurgical Oncology Unit, Department of Oncology and Hemato-Oncology, Università degli Studi di Milano, Humanitas Research Hospital, IRCCS, Milan, Italy (L.B., M.C.N., M.R., T.S.); Department of Neurologic Surgery, University of California–San Francisco, San Francisco, Calif (M.S.B., S.H.J.); Department of Neurosurgery, Medical University Vienna, Vienna, Austria (B.K., G.W.); Department of Biomedical Imaging and Image-guided Therapy, Medical University Vienna, Vienna, Austria (J.F.); Department of Neurology & Neurosurgery, University Medical Center Utrecht, Utrecht, the Netherlands (P.A.J.T.R.); and Department of Neurologic Surgery, Hôpital Lariboisière, Paris, France (E.M.)
| | - Marnix G. Witte
- From the Department of Radiation Oncology, The Netherlands Cancer Institute, Plesmanlaan 121, 1066 CX Amsterdam, the Netherlands (R.S.E., M.v.H., M.G.W.); Department of Radiology and Nuclear Medicine, Amsterdam UMC, Location Vrije Universiteit Amsterdam, Amsterdam, the Netherlands (M.V., F.B., H.V., J.C.d.M.); Neurosurgical Center Amsterdam, Amsterdam UMC, Location Vrije Universiteit Amsterdam, Amsterdam, the Netherlands (D.M.J.M., P.C.D.W.H.); Institutes of Neurology & Healthcare Engineering, University College London, London, England (F.B.); Faculty of Biology, Medicine & Health, Division of Cancer Sciences, University of Manchester and Christie NHS Trust, Manchester, England (M.v.H.); Neurosurgical Oncology Unit, Department of Oncology and Hemato-Oncology, Università degli Studi di Milano, Humanitas Research Hospital, IRCCS, Milan, Italy (L.B., M.C.N., M.R., T.S.); Department of Neurologic Surgery, University of California–San Francisco, San Francisco, Calif (M.S.B., S.H.J.); Department of Neurosurgery, Medical University Vienna, Vienna, Austria (B.K., G.W.); Department of Biomedical Imaging and Image-guided Therapy, Medical University Vienna, Vienna, Austria (J.F.); Department of Neurology & Neurosurgery, University Medical Center Utrecht, Utrecht, the Netherlands (P.A.J.T.R.); and Department of Neurologic Surgery, Hôpital Lariboisière, Paris, France (E.M.)
| |
Collapse
|
40
|
Sun M, Deng Y, Li M, Jiang H, Huang H, Liao W, Liu Y, Yang J, Li Y. Extraction and Analysis of Blue Steel Roofs Information Based on CNN Using Gaofen-2 Imageries. SENSORS 2020; 20:s20164655. [PMID: 32824822 PMCID: PMC7474430 DOI: 10.3390/s20164655] [Citation(s) in RCA: 4] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 07/29/2020] [Revised: 08/11/2020] [Accepted: 08/16/2020] [Indexed: 11/16/2022]
Abstract
Blue steel roofs are advantageous for their low cost, durability, and ease of installation, and are widely used in industrial areas. Accurate and rapid mapping of blue steel roofs is important for the preliminary assessment of inefficient industrial areas and is one of the key elements for quantifying environmental issues such as urban heat islands. Here, the DeeplabV3+ semantic segmentation neural network, applied to GaoFen-2 images, was used to analyze the quantity and spatial distribution of blue steel roofs in the Nanhai district of Foshan (including the towns of Shishan, Guicheng, Dali, and Lishui), an important manufacturing base of China. We found that: (1) DeeplabV3+ performs well, with an overall accuracy of 92%, higher than that of maximum likelihood classification; (2) blue steel roofs were unevenly distributed across the whole study area but evenly distributed at the town scale; and (3) a strong positive correlation was observed between blue steel roof area and gross industrial output. These results can be used not only to detect inefficient industrial areas for regional planning but also to provide fundamental data for studies of urban environmental issues.
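The "overall accuracy" quoted in this abstract is the simplest segmentation metric: the fraction of pixels whose predicted class agrees with the reference labels. A minimal sketch (the function name and toy labels are illustrative, not from the paper):

```python
import numpy as np

def overall_accuracy(pred, truth):
    """Fraction of pixels whose predicted class matches the reference label."""
    pred, truth = np.asarray(pred), np.asarray(truth)
    return float((pred == truth).mean())

# toy example: 3 of 4 pixels classified correctly
print(overall_accuracy([1, 0, 1, 1], [1, 0, 0, 1]))  # 0.75
```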
Affiliation(s)
- Meiwei Sun
- College of Geographical Science, Harbin Normal University, Harbin 150025, China; (M.S.); (W.L.)
- Key Laboratory of Remote Sensing and GIS Application in Guangdong Province, Public Laboratory of Geospatial Information Technology and Application in Guangdong Province, Guangzhou Institute of Geography, Guangzhou 510070, China; (Y.D.); (H.J.); (H.H.); (Y.L.); (J.Y.); (Y.L.)
| | - Yingbin Deng
- Key Laboratory of Remote Sensing and GIS Application in Guangdong Province, Public Laboratory of Geospatial Information Technology and Application in Guangdong Province, Guangzhou Institute of Geography, Guangzhou 510070, China; (Y.D.); (H.J.); (H.H.); (Y.L.); (J.Y.); (Y.L.)
- Southern Marine Science and Engineering Guangdong Laboratory (Guangzhou), Guangzhou 511458, China
| | - Miao Li
- College of Geographical Science, Harbin Normal University, Harbin 150025, China; (M.S.); (W.L.)
- Correspondence:
| | - Hao Jiang
- Key Laboratory of Remote Sensing and GIS Application in Guangdong Province, Public Laboratory of Geospatial Information Technology and Application in Guangdong Province, Guangzhou Institute of Geography, Guangzhou 510070, China; (Y.D.); (H.J.); (H.H.); (Y.L.); (J.Y.); (Y.L.)
- Southern Marine Science and Engineering Guangdong Laboratory (Guangzhou), Guangzhou 511458, China
| | - Haoling Huang
- Key Laboratory of Remote Sensing and GIS Application in Guangdong Province, Public Laboratory of Geospatial Information Technology and Application in Guangdong Province, Guangzhou Institute of Geography, Guangzhou 510070, China; (Y.D.); (H.J.); (H.H.); (Y.L.); (J.Y.); (Y.L.)
- Southern Marine Science and Engineering Guangdong Laboratory (Guangzhou), Guangzhou 511458, China
| | - Wenyue Liao
- College of Geographical Science, Harbin Normal University, Harbin 150025, China; (M.S.); (W.L.)
- Key Laboratory of Remote Sensing and GIS Application in Guangdong Province, Public Laboratory of Geospatial Information Technology and Application in Guangdong Province, Guangzhou Institute of Geography, Guangzhou 510070, China; (Y.D.); (H.J.); (H.H.); (Y.L.); (J.Y.); (Y.L.)
| | - Yangxiaoyue Liu
- Key Laboratory of Remote Sensing and GIS Application in Guangdong Province, Public Laboratory of Geospatial Information Technology and Application in Guangdong Province, Guangzhou Institute of Geography, Guangzhou 510070, China; (Y.D.); (H.J.); (H.H.); (Y.L.); (J.Y.); (Y.L.)
- Southern Marine Science and Engineering Guangdong Laboratory (Guangzhou), Guangzhou 511458, China
| | - Ji Yang
- Key Laboratory of Remote Sensing and GIS Application in Guangdong Province, Public Laboratory of Geospatial Information Technology and Application in Guangdong Province, Guangzhou Institute of Geography, Guangzhou 510070, China; (Y.D.); (H.J.); (H.H.); (Y.L.); (J.Y.); (Y.L.)
- Southern Marine Science and Engineering Guangdong Laboratory (Guangzhou), Guangzhou 511458, China
| | - Yong Li
- Key Laboratory of Remote Sensing and GIS Application in Guangdong Province, Public Laboratory of Geospatial Information Technology and Application in Guangdong Province, Guangzhou Institute of Geography, Guangzhou 510070, China; (Y.D.); (H.J.); (H.H.); (Y.L.); (J.Y.); (Y.L.)
- Southern Marine Science and Engineering Guangdong Laboratory (Guangzhou), Guangzhou 511458, China
| |
|
41
|
Kloenne M, Niehaus S, Lampe L, Merola A, Reinelt J, Roeder I, Scherf N. Domain-specific cues improve robustness of deep learning-based segmentation of CT volumes. Sci Rep 2020; 10:10712. [PMID: 32612129 PMCID: PMC7329868 DOI: 10.1038/s41598-020-67544-y] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/05/2020] [Accepted: 06/04/2020] [Indexed: 11/08/2022] Open
Abstract
Machine learning has considerably improved medical image analysis in recent years. Although data-driven approaches are intrinsically adaptive and thus generic, they often do not perform the same way on data from different imaging modalities. In particular, computed tomography (CT) data pose many challenges to medical image segmentation based on convolutional neural networks (CNNs), mostly due to the broad dynamic range of intensities and the varying number of recorded slices in CT volumes. In this paper, we address these issues with a framework that adds domain-specific data preprocessing and augmentation to state-of-the-art CNN architectures. Our major focus is to stabilise prediction performance across samples, a mandatory requirement for use in automated and semi-automated workflows in the clinical environment. To validate the architecture-independent effects of our approach, we compare a neural architecture based on dilated convolutions for parallel multi-scale processing (a modified Mixed-Scale Dense Network: MS-D Net) to traditional scaling operations (a modified U-Net). Finally, we show that an ensemble model combines the strengths of the individual methods. Our framework is simple to integrate into existing deep learning pipelines for CT analysis. It performs well on a range of tasks such as liver and kidney segmentation, without significant differences in prediction performance on strongly differing volume sizes and varying slice thicknesses. Our framework is thus an essential step towards robust segmentation of unknown real-world samples.
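A standard way to handle the broad Hounsfield-unit dynamic range this abstract mentions is to clip the volume to a task-specific intensity window and rescale it before feeding it to a CNN. The sketch below illustrates that common preprocessing step; the window bounds are illustrative assumptions, not the paper's actual values:

```python
import numpy as np

def preprocess_ct(volume_hu, window=(-200.0, 300.0)):
    """Clip a CT volume (in Hounsfield units) to an intensity window and
    rescale it to [0, 1], taming CT's broad dynamic range for a CNN.
    The window bounds here are illustrative, not the paper's settings."""
    lo, hi = window
    vol = np.clip(volume_hu.astype(np.float32), lo, hi)
    return (vol - lo) / (hi - lo)

# toy 2x2x2 "volume" in Hounsfield units
vol = np.array([[[-1000.0, -200.0], [0.0, 300.0]],
                [[50.0, 1000.0], [-500.0, 150.0]]])
out = preprocess_ct(vol)
print(out.min(), out.max())  # 0.0 1.0
```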
Affiliation(s)
- Marie Kloenne
- AICURA medical, Bessemerstrasse 22, 12103, Berlin, Germany
- Technische Fakultät, Universität Bielefeld, Universitätsstrasse 25, 33615, Bielefeld, Germany
| | - Sebastian Niehaus
- AICURA medical, Bessemerstrasse 22, 12103, Berlin, Germany
- Institute for Medical Informatics and Biometry, Carl Gustav Carus Faculty of Medicine, Technische Universität Dresden, Fetscherstrasse 74, 01307, Dresden, Germany
| | - Leonie Lampe
- AICURA medical, Bessemerstrasse 22, 12103, Berlin, Germany
| | - Alberto Merola
- AICURA medical, Bessemerstrasse 22, 12103, Berlin, Germany
| | - Janis Reinelt
- AICURA medical, Bessemerstrasse 22, 12103, Berlin, Germany
| | - Ingo Roeder
- Institute for Medical Informatics and Biometry, Carl Gustav Carus Faculty of Medicine, Technische Universität Dresden, Fetscherstrasse 74, 01307, Dresden, Germany
- National Center of Tumor Diseases (NCT) Partner Site Dresden, Fetscherstrasse 74, 01307, Dresden, Germany
| | - Nico Scherf
- Institute for Medical Informatics and Biometry, Carl Gustav Carus Faculty of Medicine, Technische Universität Dresden, Fetscherstrasse 74, 01307, Dresden, Germany.
- Max Planck Institute for Human Cognitive and Brain Sciences, Stephanstrasse 1a, 04103, Leipzig, Germany.
| |
|
42
|
Automated Quantification of Photoreceptor alteration in macular disease using Optical Coherence Tomography and Deep Learning. Sci Rep 2020; 10:5619. [PMID: 32221349 PMCID: PMC7101374 DOI: 10.1038/s41598-020-62329-9] [Citation(s) in RCA: 17] [Impact Index Per Article: 4.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/17/2019] [Accepted: 03/03/2020] [Indexed: 02/03/2023] Open
Abstract
Diabetic macular edema (DME) and retinal vein occlusion (RVO) are macular diseases in which central photoreceptors are affected by pathological accumulation of fluid. Optical coherence tomography allows visual assessment and evaluation of photoreceptor integrity, whose alteration has been observed to be an important biomarker of both diseases. However, manual quantification of this layered structure is challenging, tedious, and time-consuming. In this paper we introduce a deep learning approach for automatically segmenting and characterising photoreceptor alteration. The photoreceptor layer is segmented using an ensemble of four different convolutional neural networks. En-face representations of the layer thickness are produced to characterize the photoreceptors. The pixel-wise standard deviation of the score maps produced by the individual models is also taken to indicate areas of photoreceptor abnormality or ambiguous results. Experimental results showed that our ensemble is able to produce results on par with a human expert, outperforming each of its constituent models. No statistically significant differences were observed between mean thickness estimates obtained from automated and manually generated annotations. Therefore, our model is able to reliably quantify photoreceptors, which can be used to improve prognosis and management of macular diseases.
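The ensemble strategy in this abstract, averaging the models' score maps for the segmentation while using their pixel-wise standard deviation as a disagreement signal, can be sketched in a few lines. Shapes, the 0.5 threshold, and the function name are illustrative assumptions, not the paper's implementation:

```python
import numpy as np

def ensemble_segmentation(score_maps, threshold=0.5):
    """Fuse per-model probability maps by pixel-wise averaging, and take the
    pixel-wise standard deviation across models as a simple map of
    disagreement (possible abnormality or ambiguity)."""
    stack = np.stack(score_maps, axis=0)   # (n_models, H, W)
    mean_map = stack.mean(axis=0)          # fused probability map
    uncertainty = stack.std(axis=0)        # disagreement between models
    return (mean_map >= threshold).astype(np.uint8), uncertainty

# two toy 2x2 probability maps standing in for the four CNNs
maps = [np.array([[0.9, 0.2], [0.6, 0.1]]),
        np.array([[0.8, 0.4], [0.4, 0.0]])]
seg, unc = ensemble_segmentation(maps)
print(seg)  # [[1 0]
            #  [1 0]]
```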
|
43
|
Estienne T, Lerousseau M, Vakalopoulou M, Alvarez Andres E, Battistella E, Carré A, Chandra S, Christodoulidis S, Sahasrabudhe M, Sun R, Robert C, Talbot H, Paragios N, Deutsch E. Deep Learning-Based Concurrent Brain Registration and Tumor Segmentation. Front Comput Neurosci 2020; 14:17. [PMID: 32265680 PMCID: PMC7100603 DOI: 10.3389/fncom.2020.00017] [Citation(s) in RCA: 17] [Impact Index Per Article: 4.3] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/03/2019] [Accepted: 02/11/2020] [Indexed: 01/30/2023] Open
Abstract
Image registration and segmentation are two of the most studied problems in medical image analysis. Deep learning algorithms have recently gained a lot of attention due to their success and state-of-the-art results in a variety of problems and communities. In this paper, we propose a novel, efficient, multi-task algorithm that addresses image registration and brain tumor segmentation jointly. Our method exploits the dependencies between these tasks through a natural coupling of their interdependencies during inference. In particular, the similarity constraints are relaxed within the tumor regions using an efficient and relatively simple formulation. We evaluated the performance of our formulation both quantitatively and qualitatively for the registration and segmentation problems on two publicly available datasets (BraTS 2018 and OASIS 3), reporting results competitive with other recent state-of-the-art methods. Moreover, our proposed framework achieves significant improvement (p < 0.005) in registration performance inside the tumor regions, providing a generic method that does not require any predefined conditions (e.g., absence of abnormalities) for the volumes to be registered. Our implementation is publicly available online at https://github.com/TheoEst/joint_registration_tumor_segmentation.
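One minimal way to picture "relaxing the similarity constraints within the tumor regions" is a registration loss in which voxels inside a tumor mask are down-weighted, so intensity mismatch there does not penalize the deformation. The MSE criterion and weighting scheme below are illustrative assumptions for this sketch, not the paper's exact loss:

```python
import numpy as np

def masked_similarity_loss(fixed, warped, tumor_mask, tumor_weight=0.0):
    """Weighted mean-squared intensity difference between a fixed image and a
    warped moving image, where voxels inside the tumor mask are down-weighted
    (weight 0 ignores them entirely)."""
    weights = np.where(tumor_mask > 0, tumor_weight, 1.0)
    sq_err = (fixed - warped) ** 2
    return float((weights * sq_err).sum() / weights.sum())

fixed = np.array([[1.0, 2.0], [3.0, 4.0]])
warped = np.array([[1.0, 0.0], [3.0, 4.0]])
mask = np.array([[0, 1], [0, 0]])
# the only intensity mismatch lies inside the tumor mask, so it is ignored
print(masked_similarity_loss(fixed, warped, mask))  # 0.0
```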
Affiliation(s)
- Théo Estienne
- Gustave Roussy-CentraleSupélec-TheraPanacea Center of Artificial Intelligence in Radiation Therapy and Oncology, Gustave Roussy Cancer Campus, Villejuif, France
- Université Paris-Saclay, Institut Gustave Roussy, Inserm, Molecular Radiotherapy and Innovative Therapeutics, Villejuif, France
- Gustave Roussy Cancer Campus, Department of Radiation Oncology, Villejuif, France
- Université Paris-Saclay, CentraleSupélec, Mathématiques et Informatique pour la Complexité et les Systèmes, Gif-sur-Yvette, France
- Marvin Lerousseau
- Gustave Roussy-CentraleSupélec-TheraPanacea Center of Artificial Intelligence in Radiation Therapy and Oncology, Gustave Roussy Cancer Campus, Villejuif, France
- Université Paris-Saclay, Institut Gustave Roussy, Inserm, Molecular Radiotherapy and Innovative Therapeutics, Villejuif, France
- Gustave Roussy Cancer Campus, Department of Radiation Oncology, Villejuif, France
- Université Paris-Saclay, CentraleSupélec, Inria, Centre de Vision Numérique, Gif-sur-Yvette, France
- Maria Vakalopoulou
- Gustave Roussy-CentraleSupélec-TheraPanacea Center of Artificial Intelligence in Radiation Therapy and Oncology, Gustave Roussy Cancer Campus, Villejuif, France
- Université Paris-Saclay, CentraleSupélec, Mathématiques et Informatique pour la Complexité et les Systèmes, Gif-sur-Yvette, France
- Université Paris-Saclay, CentraleSupélec, Inria, Centre de Vision Numérique, Gif-sur-Yvette, France
- Emilie Alvarez Andres
- Gustave Roussy-CentraleSupélec-TheraPanacea Center of Artificial Intelligence in Radiation Therapy and Oncology, Gustave Roussy Cancer Campus, Villejuif, France
- Université Paris-Saclay, Institut Gustave Roussy, Inserm, Molecular Radiotherapy and Innovative Therapeutics, Villejuif, France
- Gustave Roussy Cancer Campus, Department of Radiation Oncology, Villejuif, France
- Enzo Battistella
- Gustave Roussy-CentraleSupélec-TheraPanacea Center of Artificial Intelligence in Radiation Therapy and Oncology, Gustave Roussy Cancer Campus, Villejuif, France
- Université Paris-Saclay, Institut Gustave Roussy, Inserm, Molecular Radiotherapy and Innovative Therapeutics, Villejuif, France
- Gustave Roussy Cancer Campus, Department of Radiation Oncology, Villejuif, France
- Université Paris-Saclay, CentraleSupélec, Mathématiques et Informatique pour la Complexité et les Systèmes, Gif-sur-Yvette, France
- Alexandre Carré
- Gustave Roussy-CentraleSupélec-TheraPanacea Center of Artificial Intelligence in Radiation Therapy and Oncology, Gustave Roussy Cancer Campus, Villejuif, France
- Université Paris-Saclay, Institut Gustave Roussy, Inserm, Molecular Radiotherapy and Innovative Therapeutics, Villejuif, France
- Gustave Roussy Cancer Campus, Department of Radiation Oncology, Villejuif, France
- Siddhartha Chandra
- Université Paris-Saclay, CentraleSupélec, Inria, Centre de Vision Numérique, Gif-sur-Yvette, France
- Stergios Christodoulidis
- Université Paris-Saclay, Institut Gustave Roussy, Inserm, Predictive Biomarkers and Novel Therapeutic Strategies in Oncology, Villejuif, France
- Mihir Sahasrabudhe
- Université Paris-Saclay, CentraleSupélec, Inria, Centre de Vision Numérique, Gif-sur-Yvette, France
- Roger Sun
- Gustave Roussy-CentraleSupélec-TheraPanacea Center of Artificial Intelligence in Radiation Therapy and Oncology, Gustave Roussy Cancer Campus, Villejuif, France
- Université Paris-Saclay, Institut Gustave Roussy, Inserm, Molecular Radiotherapy and Innovative Therapeutics, Villejuif, France
- Gustave Roussy Cancer Campus, Department of Radiation Oncology, Villejuif, France
- Université Paris-Saclay, CentraleSupélec, Inria, Centre de Vision Numérique, Gif-sur-Yvette, France
- Charlotte Robert
- Gustave Roussy-CentraleSupélec-TheraPanacea Center of Artificial Intelligence in Radiation Therapy and Oncology, Gustave Roussy Cancer Campus, Villejuif, France
- Université Paris-Saclay, Institut Gustave Roussy, Inserm, Molecular Radiotherapy and Innovative Therapeutics, Villejuif, France
- Gustave Roussy Cancer Campus, Department of Radiation Oncology, Villejuif, France
- Hugues Talbot
- Université Paris-Saclay, CentraleSupélec, Inria, Centre de Vision Numérique, Gif-sur-Yvette, France
- Nikos Paragios
- Gustave Roussy-CentraleSupélec-TheraPanacea Center of Artificial Intelligence in Radiation Therapy and Oncology, Gustave Roussy Cancer Campus, Villejuif, France
- Eric Deutsch
- Gustave Roussy-CentraleSupélec-TheraPanacea Center of Artificial Intelligence in Radiation Therapy and Oncology, Gustave Roussy Cancer Campus, Villejuif, France
- Université Paris-Saclay, Institut Gustave Roussy, Inserm, Molecular Radiotherapy and Innovative Therapeutics, Villejuif, France
- Gustave Roussy Cancer Campus, Department of Radiation Oncology, Villejuif, France
44. Fehling MK, Grosch F, Schuster ME, Schick B, Lohscheller J. Fully automatic segmentation of glottis and vocal folds in endoscopic laryngeal high-speed videos using a deep Convolutional LSTM Network. PLoS One 2020; 15:e0227791. [PMID: 32040514] [PMCID: PMC7010264] [DOI: 10.1371/journal.pone.0227791]
Abstract
The objective investigation of the dynamic properties of vocal fold vibrations demands the recording and further quantitative analysis of laryngeal high-speed video (HSV). Quantification of the vocal fold vibration patterns requires, as a first step, the segmentation of the glottal area within each video frame, from which the vibrating edges of the vocal folds are usually derived. Consequently, the outcome of any further vibration analysis depends on the quality of this initial segmentation process. In this work we propose, for the first time, a procedure to fully automatically segment not only the time-varying glottal area but also the vocal fold tissue directly from laryngeal HSV using a deep Convolutional Neural Network (CNN) approach. Eighteen different CNN configurations were trained and evaluated on a total of 13,000 HSV frames obtained from 56 healthy and 74 pathologic subjects. The segmentation quality of the best performing CNN model, which uses Long Short-Term Memory (LSTM) cells to also take the temporal context into account, was investigated in depth on 15 test video sequences comprising 100 consecutive images each. As performance measures, the Dice Coefficient (DC) as well as the precisions of four anatomical landmark positions were used. Over all test data, a mean DC of 0.85 was obtained for the glottis, and 0.91 and 0.90 for the right and left vocal fold (VF), respectively. The grand average precision of the identified landmarks amounts to 2.2 pixels and is in the same range as comparable manual expert segmentations, which can be regarded as the gold standard. The method proposed here requires no user interaction and overcomes the limitations of current semiautomatic or computationally expensive approaches. It thus also allows for the analysis of long HSV sequences and holds the promise to facilitate the objective analysis of vocal fold vibrations in clinical routine. The dataset used here, including the ground truth, will be provided freely to all scientific groups to allow quantitative benchmarking of segmentation approaches in the future.
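The Dice Coefficient reported above can be computed directly from a predicted and a reference binary mask (a minimal sketch; the function name and the empty-mask convention are our own):

```python
import numpy as np

def dice_coefficient(pred, truth):
    """Dice Coefficient between two binary masks:
    2 * |pred AND truth| / (|pred| + |truth|)."""
    pred = np.asarray(pred, dtype=bool)
    truth = np.asarray(truth, dtype=bool)
    intersection = np.logical_and(pred, truth).sum()
    total = pred.sum() + truth.sum()
    if total == 0:
        return 1.0  # both masks empty: perfect agreement by convention
    return 2.0 * intersection / total
```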
Affiliation(s)
- Mona Kirstin Fehling
- Department of Computer Science, Trier University of Applied Sciences, Schneidershof, Trier, Germany
- Fabian Grosch
- Department of Computer Science, Trier University of Applied Sciences, Schneidershof, Trier, Germany
- Maria Elke Schuster
- Department of Otorhinolaryngology and Head and Neck Surgery, University of Munich, Campus Grosshadern, München, Germany
- Bernhard Schick
- Department of Otorhinolaryngology, Saarland University Hospital, Homburg/Saar, Germany
- Jörg Lohscheller
- Department of Computer Science, Trier University of Applied Sciences, Schneidershof, Trier, Germany
45. Morris ED, Ghanem AI, Dong M, Pantelic MV, Walker EM, Glide-Hurst CK. Cardiac substructure segmentation with deep learning for improved cardiac sparing. Med Phys 2020; 47:576-586. [PMID: 31794054] [PMCID: PMC7282198] [DOI: 10.1002/mp.13940]
Abstract
PURPOSE Radiation dose to cardiac substructures is related to radiation-induced heart disease. However, substructures are not considered in radiation therapy planning (RTP) due to poor visualization on CT. Therefore, we developed a novel deep learning (DL) pipeline leveraging MRI's soft tissue contrast coupled with CT for state-of-the-art cardiac substructure segmentation requiring a single, non-contrast CT input. MATERIALS/METHODS Thirty-two left-sided whole-breast cancer patients underwent cardiac T2 MRI and CT-simulation. A rigid cardiac-confined MR/CT registration enabled ground truth delineations of 12 substructures (chambers, great vessels (GVs), coronary arteries (CAs), etc.). Paired MRI/CT data (25 patients) were placed into separate image channels to train a three-dimensional (3D) neural network using the entire 3D image. Deep supervision and a Dice-weighted multi-class loss function were applied. Results were assessed pre/post augmentation and post-processing (3D conditional random field (CRF)). Results for 11 test CTs (seven unique patients) were compared to ground truth and a multi-atlas method (MA) via Dice similarity coefficient (DSC), mean distance to agreement (MDA), and Wilcoxon signed-ranks tests. Three physicians evaluated clinical acceptance via consensus scoring (5-point scale). RESULTS The model stabilized in ~19 h (200 epochs, training error <0.001). Augmentation and CRF increased DSC 5.0 ± 7.9% and 1.2 ± 2.5%, across substructures, respectively. DL provided accurate segmentations for chambers (DSC = 0.88 ± 0.03), GVs (DSC = 0.85 ± 0.03), and pulmonary veins (DSC = 0.77 ± 0.04). Combined DSC for CAs was 0.50 ± 0.14. MDA across substructures was <2.0 mm (GV MDA = 1.24 ± 0.31 mm). No substructures had statistical volume differences (P > 0.05) to ground truth. In four cases, DL yielded left main CA contours, whereas MA segmentation failed, and provided improved consensus scores in 44/60 comparisons to MA. 
DL provided clinically acceptable segmentations for 3/4 chambers in all graded patients. DL contour generation took ~14 s per patient. CONCLUSIONS These promising results suggest that DL offers major efficiency and accuracy gains for cardiac substructure segmentation, with high potential for rapid implementation into RTP for improved cardiac sparing.
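The Dice-weighted multi-class loss mentioned in the methods is not spelled out in the abstract; a common soft-Dice formulation, shown here as an assumed stand-in, averages one Dice term per class so small structures such as the coronary arteries are not swamped by large chambers:

```python
import numpy as np

def soft_dice_loss(probs, onehot, eps=1e-6):
    """Multi-class soft Dice loss: one soft-Dice term per class (rows),
    averaged, so rare classes weigh as much as frequent ones.
    probs, onehot: arrays of shape (num_classes, num_voxels)."""
    probs = np.asarray(probs, dtype=float)
    onehot = np.asarray(onehot, dtype=float)
    inter = (probs * onehot).sum(axis=1)
    denom = probs.sum(axis=1) + onehot.sum(axis=1)
    dice = (2.0 * inter + eps) / (denom + eps)
    return float(1.0 - dice.mean())
```

A perfect prediction drives every per-class Dice term to 1 and the loss to 0.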
Affiliation(s)
- Eric D. Morris
- Department of Radiation Oncology, Henry Ford Cancer Institute, Detroit, MI, USA
- Department of Radiation Oncology, Wayne State University School of Medicine, Detroit, MI, USA
- Ahmed I. Ghanem
- Department of Radiation Oncology, Henry Ford Cancer Institute, Detroit, MI, USA
- Department of Clinical Oncology, Alexandria University, Alexandria, Egypt
- Ming Dong
- Department of Computer Science, Wayne State University, Detroit, MI, USA
- Milan V. Pantelic
- Department of Radiology, Henry Ford Cancer Institute, Detroit, MI, USA
- Eleanor M. Walker
- Department of Radiation Oncology, Henry Ford Cancer Institute, Detroit, MI, USA
- Carri K. Glide-Hurst
- Department of Radiation Oncology, Henry Ford Cancer Institute, Detroit, MI, USA
- Department of Radiation Oncology, Wayne State University School of Medicine, Detroit, MI, USA
46. Banerjee S, Mitra S. Novel Volumetric Sub-region Segmentation in Brain Tumors. Front Comput Neurosci 2020; 14:3. [PMID: 32038216] [PMCID: PMC6993215] [DOI: 10.3389/fncom.2020.00003]
Abstract
A novel deep learning based model called Multi-Planar Spatial Convolutional Neural Network (MPS-CNN) is proposed for effective, automated segmentation of different sub-regions, viz. peritumoral edema (ED), necrotic core (NCR), and enhancing and non-enhancing tumor core (ET/NET), from multi-modal MR images of the brain. An encoder-decoder type CNN model is designed for pixel-wise segmentation of the tumor along three anatomical planes (axial, sagittal, and coronal) at the slice level. These are then combined, by incorporating a consensus fusion strategy with a fully connected Conditional Random Field (CRF) based post-refinement, to produce the final volumetric segmentation of the tumor and its constituent sub-regions. Concepts such as spatial-pooling and unpooling are used to preserve the spatial locations of the edge pixels, for reducing segmentation error around the boundaries. A new aggregated loss function is also developed for effectively handling data imbalance. The MPS-CNN is trained and validated on the Multimodal Brain Tumor Segmentation Challenge (BraTS) 2018 dataset. The Dice scores obtained on the validation set for whole tumor (WT: NCR/NET + ET + ED), tumor core (TC: NCR/NET + ET), and enhancing tumor (ET) are 0.90216, 0.87247, and 0.82445, respectively. The proposed MPS-CNN is found to perform the best (based on leaderboard scores) for the ET and TC segmentation tasks, in terms of both quantitative measures (viz. Dice and Hausdorff). For WT segmentation it achieved the second highest accuracy, with a score only 1% below that of the best performing method.
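The consensus fusion of the three planar predictions is described only at a high level; a per-voxel majority vote, shown here as one plausible instantiation (the helper name is ours), captures the idea:

```python
import numpy as np

def consensus_fusion(axial, sagittal, coronal):
    """Fuse binary predictions from the three anatomical planes by
    majority vote: a voxel is foreground if at least two planes agree."""
    stack = np.stack([axial, sagittal, coronal]).astype(int)
    return (stack.sum(axis=0) >= 2).astype(np.uint8)
```

In the paper this fused volume is then refined further by the fully connected CRF.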
Affiliation(s)
- Subhashis Banerjee
- Machine Intelligence Unit, Indian Statistical Institute, Kolkata, India
- Department of CSE, University of Calcutta, Kolkata, India
- Sushmita Mitra
- Machine Intelligence Unit, Indian Statistical Institute, Kolkata, India
47. Liu Z, Liu X, Xiao B, Wang S, Miao Z, Sun Y, Zhang F. Segmentation of organs-at-risk in cervical cancer CT images with a convolutional neural network. Phys Med 2020; 69:184-191. [PMID: 31918371] [DOI: 10.1016/j.ejmp.2019.12.008]
Abstract
PURPOSE We introduced and evaluated an end-to-end organs-at-risk (OARs) segmentation model that can provide accurate and consistent OARs segmentation results in much less time. METHODS We collected Computed Tomography (CT) scans of 105 patients diagnosed with locally advanced cervical cancer and treated with radiotherapy in one hospital. Seven organs, including the bladder, bone marrow, left femoral head, right femoral head, rectum, small intestine, and spinal cord, were defined as OARs. The contours of the OARs, previously delineated manually by the patient's radiation oncologist and confirmed before radiotherapy by a professional committee consisting of eight experienced oncologists, were used as the ground truth masks. A multi-class segmentation model based on U-Net was designed to fulfil the OARs segmentation task. The Dice Similarity Coefficient (DSC) and 95th-percentile Hausdorff Distance (HD) were used as quantitative evaluation metrics. RESULTS The mean DSC values of the proposed method are 0.924, 0.854, 0.906, 0.900, 0.791, 0.833 and 0.827 for the bladder, bone marrow, left femoral head, right femoral head, rectum, small intestine, and spinal cord, respectively. The mean HD values are 5.098, 1.993, 1.390, 1.435, 5.949, 5.281 and 3.269 for the above OARs, respectively. CONCLUSIONS Our proposed method can help reduce the inter-observer and intra-observer variability of manual OARs delineation and lessen oncologists' efforts. The experimental results demonstrate that our model outperforms the benchmark U-Net model, and the oncologists' evaluations show that the segmentation results are highly acceptable for use in radiation therapy planning.
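The 95th-percentile Hausdorff Distance used as the second metric can be sketched for two contour point sets (an illustrative implementation; real pipelines typically compute it on surface voxels extracted from the masks):

```python
import numpy as np

def hd95(points_a, points_b):
    """95th-percentile symmetric Hausdorff distance between two point
    sets (e.g., surface voxel coordinates of two contours)."""
    a = np.asarray(points_a, dtype=float)
    b = np.asarray(points_b, dtype=float)
    d = np.linalg.norm(a[:, None, :] - b[None, :, :], axis=-1)
    d_ab = d.min(axis=1)   # each point of A to its nearest point of B
    d_ba = d.min(axis=0)   # each point of B to its nearest point of A
    return float(max(np.percentile(d_ab, 95), np.percentile(d_ba, 95)))
```

Taking the 95th percentile instead of the maximum makes the metric robust to single outlier points on either contour.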
Affiliation(s)
- Zhikai Liu
- Department of Radiation Oncology, Peking Union Medical College Hospital, Chinese Academy of Medical Sciences & Peking Union Medical College, Beijing 100730, China
- Xia Liu
- Department of Radiation Oncology, Peking Union Medical College Hospital, Chinese Academy of Medical Sciences & Peking Union Medical College, Beijing 100730, China
- Bin Xiao
- MedMind Technology Co., Ltd., Beijing 100080, China
- Shaobin Wang
- MedMind Technology Co., Ltd., Beijing 100080, China
- Zheng Miao
- Department of Radiation Oncology, Peking Union Medical College Hospital, Chinese Academy of Medical Sciences & Peking Union Medical College, Beijing 100730, China
- Yuliang Sun
- Department of Radiation Oncology, Peking Union Medical College Hospital, Chinese Academy of Medical Sciences & Peking Union Medical College, Beijing 100730, China
- Fuquan Zhang
- Department of Radiation Oncology, Peking Union Medical College Hospital, Chinese Academy of Medical Sciences & Peking Union Medical College, Beijing 100730, China
48. Two-Stage Cascaded U-Net: 1st Place Solution to BraTS Challenge 2019 Segmentation Task. Brainlesion: Glioma, Multiple Sclerosis, Stroke and Traumatic Brain Injuries 2020. [DOI: 10.1007/978-3-030-46640-4_22]
49. Sreerangappa M, Suresh M, Jayadevappa D. Segmentation of Brain Tumor and Performance Evaluation Using Spatial FCM and Level Set Evolution. Open Biomed Eng J 2019. [DOI: 10.2174/1874120701913010134]
Abstract
Background:
In recent years, brain tumors have been one of the major causes of death in human beings. The survival rate can be increased if the tumor is diagnosed accurately at an early stage. Medical image segmentation therefore remains a challenging task in computer-guided medical procedures in hospitals. The main objective of the segmentation process is to obtain the object of interest from the given image so that it can be represented in a meaningful way for further analysis.
Methods:
To improve the segmentation accuracy, an efficient segmentation method which combines a spatial fuzzy c-means and level sets is proposed in this paper.
Results:
The experiments are conducted using the BrainWeb and DICOM databases. After pre-processing of an MR image, a spatial FCM algorithm is applied. The SFCM utilizes spatial data from the neighbourhood of each pixel to represent clusters. Finally, these clusters are segmented using a level set active contour model to obtain the tumor boundary. The performance of the proposed algorithm is evaluated using various performance metrics.
Conclusion:
In this technique, wavelets and spatial FCM are applied before the brain tumor is segmented by level sets. The qualitative results show more accurate detection of the tumor boundary and a better convergence rate of the contour compared to other segmentation techniques. The proposed segmentation framework is also compared with two recently developed automatic segmentation techniques. The quantitative results of the proposed method summarize the improvements in segmentation accuracy, sensitivity, and specificity.
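The clustering step can be illustrated with a plain fuzzy c-means update (a sketch only: the spatial neighbourhood term that distinguishes SFCM, and the wavelet pre-processing, are omitted; the function name and parameter defaults are ours):

```python
import numpy as np

def fcm(x, c=2, m=2.0, iters=50, seed=0):
    """Plain fuzzy c-means on a 1-D intensity vector. (The SFCM variant
    additionally smooths each pixel's memberships with those of its
    spatial neighbours; that term is omitted here.)"""
    rng = np.random.default_rng(seed)
    x = np.asarray(x, dtype=float)
    u = rng.random((c, x.size))
    u /= u.sum(axis=0)                      # memberships sum to 1 per pixel
    for _ in range(iters):
        um = u ** m
        centers = um @ x / um.sum(axis=1)   # fuzzily weighted cluster centres
        d = np.abs(x[None, :] - centers[:, None]) + 1e-12
        u = d ** (-2.0 / (m - 1))           # standard FCM membership update
        u /= u.sum(axis=0)
    return centers, u
```

The resulting memberships can then be thresholded, or handed to a level set model as in the paper, to evolve the tumor contour.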
50. Pereira S, Pinto A, Amorim J, Ribeiro A, Alves V, Silva CA. Adaptive Feature Recombination and Recalibration for Semantic Segmentation With Fully Convolutional Networks. IEEE Trans Med Imaging 2019; 38:2914-2925. [PMID: 31135354] [DOI: 10.1109/tmi.2019.2918096]
Abstract
Fully convolutional networks have been achieving remarkable results in image semantic segmentation while being efficient. Such efficiency results from the capability of segmenting several voxels in a single forward pass, so there is a direct spatial correspondence between a unit in a feature map and the voxel at the same location. In a convolutional layer, the kernel spans all channels and extracts information from them. We observe that linear recombination of feature maps, by increasing the number of channels followed by compression, may enhance their discriminative power. Moreover, not all feature maps have the same relevance for the classes being predicted. In order to learn the inter-channel relationships and recalibrate the channels to suppress the less relevant ones, squeeze-and-excitation blocks were proposed in the context of image classification with convolutional neural networks. However, this is not well adapted for segmentation with fully convolutional networks, since they segment several objects simultaneously, hence a feature map may contain relevant information only in some locations. In this paper, we propose recombination of features and a spatially adaptive recalibration block adapted for semantic segmentation with fully convolutional networks: the SegSE block. Feature maps are recalibrated by considering the cross-channel information together with spatial relevance. The experimental results indicate that recombination and recalibration improve the results of a competitive baseline and generalize across three different problems: brain tumor segmentation, stroke penumbra estimation, and ischemic stroke lesion outcome prediction. The obtained results are competitive with or outperform the state of the art in the three applications.
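For contrast with the proposed spatially adaptive SegSE block, the original channel-wise squeeze-and-excitation recalibration that the paper builds on can be sketched as follows (shapes and weight names are illustrative assumptions):

```python
import numpy as np

def squeeze_excite(features, w1, w2):
    """Channel squeeze-and-excitation: global-average-pool each channel,
    pass the channel descriptor through a two-layer bottleneck, and
    rescale the channels by the resulting sigmoid gates.
    features: (C, H, W); w1: (C//r, C); w2: (C, C//r)."""
    z = features.mean(axis=(1, 2))          # squeeze: one scalar per channel
    s = np.maximum(w1 @ z, 0.0)             # excitation bottleneck, ReLU
    s = 1.0 / (1.0 + np.exp(-(w2 @ s)))     # per-channel gates in (0, 1)
    return features * s[:, None, None]      # recalibrate channels
```

Because the gate is a single scalar per channel, every spatial location is scaled identically; the SegSE block's contribution is to make this recalibration vary spatially.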