1
Zhou R, Wang J, Xia G, Xing J, Shen H, Shen X. Cascade Residual Multiscale Convolution and Mamba-Structured UNet for Advanced Brain Tumor Image Segmentation. Entropy (Basel) 2024; 26:385. [PMID: 38785634 PMCID: PMC11120374 DOI: 10.3390/e26050385] [Received: 03/27/2024] [Revised: 04/21/2024] [Accepted: 04/29/2024] [Indexed: 05/25/2024]
Abstract
In brain imaging segmentation, precise tumor delineation is crucial for diagnosis and treatment planning. Traditional approaches include convolutional neural networks (CNNs), which struggle with processing sequential data, and transformer models, which face limitations in maintaining computational efficiency on large-scale data. This study introduces MambaBTS: a model that synergizes the strengths of CNNs and transformers, is inspired by the Mamba architecture, and integrates cascade residual multi-scale convolutional kernels. The model employs a mixed loss function that blends Dice loss with cross-entropy to refine segmentation accuracy effectively. This novel approach reduces computational complexity, enhances the receptive field, and demonstrates superior performance in accurately segmenting brain tumors in MRI images. Experiments on the MICCAI BraTS 2019 dataset show that MambaBTS achieves Dice coefficients of 0.8450 for the whole tumor (WT), 0.8606 for the tumor core (TC), and 0.7796 for the enhancing tumor (ET), and outperforms existing models in terms of accuracy, computational efficiency, and parameter efficiency. These results underscore the model's potential to offer a balanced, efficient, and effective segmentation method, overcoming the constraints of existing models and promising significant improvements in clinical diagnostics and planning.
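The mixed loss described in the abstract, a weighted blend of soft Dice loss and cross-entropy, can be sketched in a few lines. The equal weighting and the smoothing constant below are illustrative assumptions, not values taken from the paper:

```python
import math

def mixed_loss(pred, target, w_dice=0.5, eps=1e-6):
    """Blend of soft Dice loss and binary cross-entropy.

    `pred` holds per-voxel foreground probabilities in (0, 1) and
    `target` holds binary ground-truth labels. `w_dice` and `eps`
    are illustrative choices, not taken from the paper.
    """
    # Soft Dice: 2|P∩T| / (|P| + |T|), smoothed to avoid division by zero.
    inter = sum(p * t for p, t in zip(pred, target))
    dice = (2 * inter + eps) / (sum(pred) + sum(target) + eps)
    dice_loss = 1.0 - dice
    # Mean binary cross-entropy over voxels.
    ce = -sum(t * math.log(p) + (1 - t) * math.log(1 - p)
              for p, t in zip(pred, target)) / len(pred)
    return w_dice * dice_loss + (1 - w_dice) * ce
```

A confident, correct prediction yields a loss near zero, while an uncertain or wrong one is penalized by both terms at once.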
Affiliation(s)
- Rui Zhou: School of Zhang Jian, Nantong University, Nantong 226019, China
- Ju Wang: School of Information Science and Technology, Nantong University, Nantong 226019, China
- Guijiang Xia: School of Zhang Jian, Nantong University, Nantong 226019, China
- Jingyang Xing: School of Zhang Jian, Nantong University, Nantong 226019, China
- Hongming Shen: School of Microelectronics and School of Integrated Circuits, Nantong University, Nantong 226019, China
- Xiaoyan Shen: School of Information Science and Technology, Nantong University, Nantong 226019, China; Nantong Research Institute for Advanced Communication Technologies, Nantong University, Nantong 226019, China
2
Burrows L, Patel J, Islim AI, Jenkinson MD, Mills SJ, Chen K. A semi-automatic segmentation method for meningioma developed using a variational approach model. Neuroradiol J 2024; 37:199-205. [PMID: 38146866 DOI: 10.1177/19714009231224442] [Indexed: 12/27/2023]
Abstract
BACKGROUND Meningioma is the commonest primary brain tumour. Volumetric post-contrast magnetic resonance imaging (MRI) is recognised as the gold standard for delineation of meningioma volume but is hindered by manual processing times. We aimed to investigate the utility of a model-based variational approach in segmenting meningioma. METHODS A database of patients with a meningioma (2007-2015) was queried for patients with a contrast-enhanced volumetric MRI who had consented to a research tissue biobank. Manual segmentation by a neuroradiologist was performed, and the results were compared to the mathematical model using a battery of tests including the Sørensen-Dice coefficient (DICE) and the Jaccard index (JACCARD). A publicly available meningioma dataset (708 segmented T1 contrast-enhanced slices) was also used to test the reliability of the model. RESULTS 49 meningioma cases were included. The most common meningioma location was the convexity (n = 15, 30.6%). The mathematical model segmented all but one incidental meningioma, which failed due to the lack of contrast uptake. The median meningioma volume by manual segmentation was 19.0 cm3 (IQR 4.9-31.2); by the mathematical model it was 16.9 cm3 (IQR 4.6-28.34). The mean DICE score was 0.90 (SD = 0.04) and the mean JACCARD index was 0.82 (SD = 0.07). For the publicly available dataset, the mean DICE and JACCARD scores were 0.90 (SD = 0.06) and 0.82 (SD = 0.10), respectively. CONCLUSIONS Segmentation of meningioma volume using the proposed mathematical model was possible with accurate results. Applying this model to contrast-enhanced volumetric imaging may help reduce the workload of neuroradiologists as the number of meningioma diagnoses increases.
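The DICE and JACCARD overlap measures reported above are straightforward to compute from flattened binary masks; a minimal sketch:

```python
def dice_jaccard(mask_a, mask_b):
    """Sørensen-Dice coefficient and Jaccard index for two binary
    masks given as flattened 0/1 sequences. Returns (dice, jaccard)."""
    inter = sum(a & b for a, b in zip(mask_a, mask_b))
    size_a, size_b = sum(mask_a), sum(mask_b)
    union = size_a + size_b - inter
    # Both measures are 1.0 when the masks agree exactly (including
    # the degenerate case of two empty masks).
    dice = 2 * inter / (size_a + size_b) if (size_a + size_b) else 1.0
    jacc = inter / union if union else 1.0
    return dice, jacc
```

The two are monotonically related (DICE = 2J / (1 + J)), which is why studies such as this one report consistent pairs like 0.90 and 0.82.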
Affiliation(s)
- Liam Burrows: Department of Mathematical Sciences and Centre for Mathematical Imaging Techniques, University of Liverpool, UK
- Jay Patel: Department of Neuroradiology, The Walton Centre NHS Foundation Trust, UK
- Abdurrahman I Islim: Geoffrey Jefferson Brain Research Centre, The Manchester Academic Health Science Centre, Northern Care Alliance NHS Group, University of Manchester, UK; Department of Neurosurgery, Manchester Centre for Clinical Neurosciences, Salford Royal Hospital, Northern Care Alliance NHS Foundation Trust, UK
- Michael D Jenkinson: Department of Neurosurgery, The Walton Centre NHS Foundation Trust, UK; Department of Pharmacology and Therapeutics, Institute of Systems, Molecular and Integrative Biology, University of Liverpool, UK
- Samantha J Mills: Department of Neuroradiology, The Walton Centre NHS Foundation Trust, UK; Department of Pharmacology and Therapeutics, Institute of Systems, Molecular and Integrative Biology, University of Liverpool, UK
- Ke Chen: Department of Mathematical Sciences and Centre for Mathematical Imaging Techniques, University of Liverpool, UK; Department of Mathematics and Statistics, University of Strathclyde, UK
3
Boehringer AS, Sanaat A, Arabi H, Zaidi H. An active learning approach to train a deep learning algorithm for tumor segmentation from brain MR images. Insights Imaging 2023; 14:141. [PMID: 37620554 PMCID: PMC10449747 DOI: 10.1186/s13244-023-01487-6] [Received: 03/24/2023] [Accepted: 07/22/2023] [Indexed: 08/26/2023]
Abstract
PURPOSE This study assesses the performance of active learning techniques for training a brain MRI glioma segmentation model. METHODS The publicly available training dataset provided for the 2021 RSNA-ASNR-MICCAI Brain Tumor Segmentation (BraTS) Challenge was used in this study, consisting of 1251 multi-institutional, multi-parametric MR images. Post-contrast T1, T2, and T2 FLAIR images as well as ground truth manual segmentations were used as input for the model. The data were split into a training set of 1151 cases and a testing set of 100 cases, with the testing set remaining constant throughout. Deep convolutional neural network segmentation models were trained using the NiftyNet platform. To test the viability of active learning in training a segmentation model, an initial reference model was trained using all 1151 training cases, followed by two additional models using only 575 cases and 100 cases, respectively. The predicted segmentations of these two additional models on the remaining training cases were then appended to the training dataset for additional training. RESULTS An active learning approach to manual segmentation can yield comparable model performance for segmentation of brain gliomas (0.906 reference Dice score vs 0.868 active learning Dice score) while requiring manual annotation for only 28.6% of the data. CONCLUSION Active learning applied to model training can drastically reduce the time and labor spent on preparation of ground truth training data. CRITICAL RELEVANCE STATEMENT Active learning concepts were applied to deep learning-assisted segmentation of brain gliomas from MR images to assess their viability in reducing the required amount of manually annotated ground truth data in model training. KEY POINTS • This study assesses the performance of active learning techniques to train a brain MRI glioma segmentation model. • The active learning approach to manual segmentation can lead to comparable model performance for segmentation of brain gliomas. • Active learning applied to model training can drastically reduce the time and labor spent on preparation of ground truth training data.
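The core active-learning idea, train on a small labeled set, query the cases the model is least certain about, retrain, can be sketched on a toy 1-D problem. The threshold "model" and the `oracle` function below are invented stand-ins for the segmentation network and the human annotator:

```python
def train(labeled):
    # Toy "model": a 1-D threshold at the midpoint between class means.
    pos = [x for x, y in labeled if y == 1]
    neg = [x for x, y in labeled if y == 0]
    return (sum(pos) / len(pos) + sum(neg) / len(neg)) / 2

def oracle(x):
    # Stand-in for the human annotator: the true class boundary is 0.5.
    return 1 if x >= 0.5 else 0

def active_learning(pool, seed, rounds):
    labeled = [(x, oracle(x)) for x in seed]
    pool = [x for x in pool if x not in seed]
    for _ in range(rounds):
        threshold = train(labeled)
        # Uncertainty sampling: query the point closest to the
        # current decision boundary, then add its label.
        x = min(pool, key=lambda p: abs(p - threshold))
        pool.remove(x)
        labeled.append((x, oracle(x)))
    return train(labeled)
```

With only a handful of queried labels, the learned threshold homes in on the true boundary, mirroring the paper's finding that a fraction of the annotations can suffice.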
Affiliation(s)
- Andrew S Boehringer: Division of Nuclear Medicine and Molecular Imaging, Geneva University Hospital, CH-1205, Geneva, Switzerland
- Amirhossein Sanaat: Division of Nuclear Medicine and Molecular Imaging, Geneva University Hospital, CH-1205, Geneva, Switzerland
- Hossein Arabi: Division of Nuclear Medicine and Molecular Imaging, Geneva University Hospital, CH-1205, Geneva, Switzerland
- Habib Zaidi: Division of Nuclear Medicine and Molecular Imaging, Geneva University Hospital, CH-1205, Geneva, Switzerland; Geneva University Neurocenter, University of Geneva, CH-1211, Geneva, Switzerland; Department of Nuclear Medicine and Molecular Imaging, University of Groningen, Groningen, Netherlands; Department of Nuclear Medicine, University of Southern Denmark, Odense, Denmark
4
Paudyal R, Shah AD, Akin O, Do RKG, Konar AS, Hatzoglou V, Mahmood U, Lee N, Wong RJ, Banerjee S, Shin J, Veeraraghavan H, Shukla-Dave A. Artificial Intelligence in CT and MR Imaging for Oncological Applications. Cancers (Basel) 2023; 15:2573. [PMID: 37174039 PMCID: PMC10177423 DOI: 10.3390/cancers15092573] [Received: 01/31/2023] [Revised: 04/13/2023] [Accepted: 04/17/2023] [Indexed: 05/15/2023]
Abstract
Cancer care increasingly relies on imaging for patient management. The two most common cross-sectional imaging modalities in oncology are computed tomography (CT) and magnetic resonance imaging (MRI), which provide high-resolution anatomic and physiological imaging. This article summarizes recent applications of rapidly advancing artificial intelligence (AI) in CT and MRI oncological imaging and addresses, with examples, the benefits and challenges of the resulting opportunities. Major challenges remain, such as how best to integrate AI developments into clinical radiology practice and how to rigorously assess the accuracy and reliability of quantitative CT and MR imaging data for clinical utility and research integrity in oncology. Such challenges necessitate an evaluation of the robustness of imaging biomarkers to be included in AI developments, a culture of data sharing, and the cooperation of knowledgeable academics with vendor scientists and companies operating in radiology and oncology fields. Herein, we illustrate a few of these challenges and solutions using novel methods for synthesizing different contrast-modality images, auto-segmentation, and image reconstruction, with examples from lung CT as well as abdomen, pelvis, and head and neck MRI. The imaging community must embrace the need for quantitative CT and MRI metrics beyond lesion size measurement. AI methods for the extraction and longitudinal tracking of imaging metrics from registered lesions and for understanding the tumor environment will be invaluable for interpreting disease status and treatment efficacy. This is an exciting time to work together to move the imaging field forward with narrow AI-specific tasks. New AI developments using CT and MRI datasets will be used to improve the personalized management of cancer patients.
Affiliation(s)
- Ramesh Paudyal: Department of Medical Physics, Memorial Sloan Kettering Cancer Center, New York City, NY 10065, USA
- Akash D Shah: Department of Radiology, Memorial Sloan Kettering Cancer Center, New York City, NY 10065, USA
- Oguz Akin: Department of Radiology, Memorial Sloan Kettering Cancer Center, New York City, NY 10065, USA
- Richard K G Do: Department of Radiology, Memorial Sloan Kettering Cancer Center, New York City, NY 10065, USA
- Amaresha Shridhar Konar: Department of Medical Physics, Memorial Sloan Kettering Cancer Center, New York City, NY 10065, USA
- Vaios Hatzoglou: Department of Radiology, Memorial Sloan Kettering Cancer Center, New York City, NY 10065, USA
- Usman Mahmood: Department of Medical Physics, Memorial Sloan Kettering Cancer Center, New York City, NY 10065, USA
- Nancy Lee: Department of Radiation Oncology, Memorial Sloan Kettering Cancer Center, New York City, NY 10065, USA
- Richard J Wong: Head and Neck Service, Department of Surgery, Memorial Sloan Kettering Cancer Center, New York City, NY 10065, USA
- Harini Veeraraghavan: Department of Medical Physics, Memorial Sloan Kettering Cancer Center, New York City, NY 10065, USA
- Amita Shukla-Dave: Department of Medical Physics and Department of Radiology, Memorial Sloan Kettering Cancer Center, New York City, NY 10065, USA
5
A survey of deep learning for MRI brain tumor segmentation methods: Trends, challenges, and future directions. Health and Technology 2023. [DOI: 10.1007/s12553-023-00737-3] [Indexed: 02/28/2023]
6
Prabhudesai S, Hauth J, Guo D, Rao A, Banovic N, Huan X. Lowering the computational barrier: Partially Bayesian neural networks for transparency in medical imaging AI. Frontiers in Computer Science 2023. [DOI: 10.3389/fcomp.2023.1071174] [Indexed: 02/17/2023]
Abstract
Deep Neural Networks (DNNs) can provide clinicians with fast and accurate predictions that are highly valuable for high-stakes medical decision-making, such as in brain tumor segmentation and treatment planning. However, these models largely lack transparency about the uncertainty in their predictions, potentially giving clinicians a false sense of reliability that may lead to grave consequences in patient care. Growing calls for Transparent and Responsible AI have promoted Uncertainty Quantification (UQ) to capture and communicate uncertainty in a systematic and principled manner. However, traditional Bayesian UQ methods remain prohibitively costly for large, million-parameter tumor segmentation DNNs such as the U-Net. In this work, we discuss a computationally efficient UQ approach via partially Bayesian neural networks (pBNNs). In a pBNN, only a single layer, strategically selected through gradient-based sensitivity analysis, is targeted for Bayesian inference. We illustrate the effectiveness of the pBNN in capturing the full uncertainty for a 7.8-million-parameter U-Net. We also demonstrate how practitioners and model developers can use the pBNN's predictions to better understand the model's capabilities and behavior.
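The gradient-based sensitivity analysis used to pick the single Bayesian layer can be illustrated on a toy two-parameter model: rank layers by the magnitude of the loss gradient with respect to their weights and keep the most sensitive one. Finite differences stand in for backpropagated gradients here, and all names are hypothetical:

```python
def loss(params, x=1.0, target=2.0):
    # Tiny two-"layer" model: y = w2 * (w1 * x), squared-error loss.
    w1, w2 = params["layer1"], params["layer2"]
    return (w2 * (w1 * x) - target) ** 2

def sensitivity(params, name, h=1e-5):
    # Finite-difference |d loss / d w|, a stand-in for the
    # backpropagated gradient magnitude of a real network layer.
    bumped = dict(params)
    bumped[name] = params[name] + h
    return abs(loss(bumped) - loss(params)) / h

def most_sensitive_layer(params):
    # The layer whose weights most strongly influence the loss is the
    # candidate for Bayesian treatment; all others stay deterministic.
    return max(params, key=lambda n: sensitivity(params, n))
```

In a real pBNN the same ranking would be computed over whole layers of a U-Net, not single scalars.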
7
Balaha HM, Hassan AES. A variate brain tumor segmentation, optimization, and recognition framework. Artif Intell Rev 2022. [DOI: 10.1007/s10462-022-10337-8] [Indexed: 12/16/2022]
8
Beeche C, Singh JP, Leader JK, Gezer S, Oruwari AP, Dansingani KK, Chhablani J, Pu J. Super U-Net: a modularized generalizable architecture. Pattern Recognition 2022; 128:108669. [PMID: 35528144 PMCID: PMC9070860 DOI: 10.1016/j.patcog.2022.108669] [Indexed: 06/14/2023]
Abstract
OBJECTIVE To develop and validate a novel convolutional neural network (CNN), termed "Super U-Net," for medical image segmentation. METHODS Super U-Net integrates a dynamic receptive field module and a fusion upsampling module into the classical U-Net architecture. The model was developed and tested to segment retinal vessels, gastrointestinal (GI) polyps, and skin lesions on several image types (i.e., fundus, endoscopic, and dermoscopic images). We also trained and tested the traditional U-Net architecture, seven U-Net variants, and two non-U-Net segmentation architectures. K-fold cross-validation was used to evaluate performance. The performance metrics included Dice similarity coefficient (DSC), accuracy, positive predictive value (PPV), and sensitivity. RESULTS Super U-Net achieved average DSCs of 0.808±0.021, 0.752±0.019, 0.804±0.239, and 0.877±0.135 for segmenting retinal vessels, pediatric retinal vessels, GI polyps, and skin lesions, respectively. Super U-Net consistently outperformed U-Net, the seven U-Net variants, and the two non-U-Net segmentation architectures (p < 0.05). CONCLUSION Dynamic receptive fields and fusion upsampling can significantly improve image segmentation performance.
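The K-fold cross-validation used for evaluation above partitions the data so that every sample is tested exactly once; a minimal sketch:

```python
def k_fold_splits(items, k):
    """Yield (train, test) partitions for k-fold cross-validation.

    Each item lands in the test partition of exactly one fold and in
    the train partition of the remaining k-1 folds.
    """
    folds = [items[i::k] for i in range(k)]  # round-robin assignment
    for i in range(k):
        test = folds[i]
        train = [x for j, fold in enumerate(folds) if j != i for x in fold]
        yield train, test
```

The per-fold metrics (e.g. DSC) are then averaged to give scores like those reported in the RESULTS section.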
Affiliation(s)
- Cameron Beeche: Department of Radiology, University of Pittsburgh, Pittsburgh, PA 15213, USA
- Jatin P Singh: Department of Radiology, University of Pittsburgh, Pittsburgh, PA 15213, USA
- Joseph K Leader: Department of Radiology, University of Pittsburgh, Pittsburgh, PA 15213, USA
- Sinem Gezer: Department of Radiology, University of Pittsburgh, Pittsburgh, PA 15213, USA
- Amechi P Oruwari: Department of Radiology, University of Pittsburgh, Pittsburgh, PA 15213, USA
- Kunal K Dansingani: Department of Ophthalmology, University of Pittsburgh, Pittsburgh, PA 15213, USA
- Jay Chhablani: Department of Ophthalmology, University of Pittsburgh, Pittsburgh, PA 15213, USA
- Jiantao Pu: Department of Radiology and Department of Bioengineering, University of Pittsburgh, Pittsburgh, PA 15213, USA
9
A Hybrid Deep Learning Model for Brain Tumour Classification. Entropy (Basel) 2022; 24:799. [PMID: 35741521 PMCID: PMC9222774 DOI: 10.3390/e24060799] [Received: 05/08/2022] [Revised: 06/03/2022] [Accepted: 06/04/2022] [Indexed: 11/16/2022]
Abstract
A brain tumour is a major cause of death in humans and the tenth most common type of tumour, affecting people of all ages. However, if detected early, it is one of the most treatable types of tumour. Brain tumours are classified using biopsy, which is not usually performed before definitive brain surgery. An image classification technique for tumour diseases is therefore important for accelerating the treatment process and avoiding surgery and errors from manual diagnosis by radiologists. Advances in technology and machine learning (ML) can assist radiologists in tumour diagnostics using magnetic resonance imaging (MRI) images without invasive procedures. This work introduced a new hybrid CNN-based architecture to classify three brain tumour types from MRI images. The suggested method uses hybrid deep learning classification based on CNN with two approaches. The first combines a pre-trained GoogLeNet model for feature extraction with an SVM for pattern classification. The second integrates a fine-tuned GoogLeNet with a softmax classifier. The proposed approach was evaluated using MRI brain images containing a total of 1426 glioma images, 708 meningioma images, 930 pituitary tumour images, and 396 normal brain images. The reported results showed that an accuracy of 93.1% was achieved with the fine-tuned GoogLeNet model, whereas the synergy of GoogLeNet as a feature extractor with an SVM classifier improved recognition accuracy to 98.1%.
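The first hybrid above (a frozen feature extractor feeding a separate classifier) can be sketched with invented stand-ins: a two-number "feature extractor" replaces GoogLeNet and a nearest-centroid rule replaces the SVM, purely to show the shape of the pipeline:

```python
def extract_features(image):
    # Stub for a pre-trained CNN feature extractor (e.g. pooled
    # activations); here just mean intensity and a crude edge measure.
    mean = sum(image) / len(image)
    edges = sum(abs(a - b) for a, b in zip(image, image[1:])) / (len(image) - 1)
    return (mean, edges)

def fit_centroids(samples):
    # samples: list of (image, label). One centroid per class in
    # feature space; a nearest-centroid rule stands in for the SVM.
    by_label = {}
    for image, label in samples:
        by_label.setdefault(label, []).append(extract_features(image))
    return {label: tuple(sum(col) / len(col) for col in zip(*feats))
            for label, feats in by_label.items()}

def classify(image, centroids):
    f = extract_features(image)
    return min(centroids, key=lambda c: sum((a - b) ** 2
                                            for a, b in zip(f, centroids[c])))
```

The point of the hybrid is exactly this split: the extractor is trained once on a large corpus, while only the lightweight classifier is fit to the task at hand.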
10
Raman AG, Jones C, Weiss CR. Machine Learning for Hepatocellular Carcinoma Segmentation at MRI: Radiology In Training. Radiology 2022; 304:509-515. [PMID: 35536132 DOI: 10.1148/radiol.212386] [Indexed: 11/11/2022]
Abstract
A 68-year-old woman with a history of hepatocellular carcinoma underwent conventional transarterial chemoembolization. Manual tumor segmentation on images, which can be used to assess disease progression, is time-consuming and may suffer from interobserver reliability issues. The authors present a how-to guide for developing machine learning algorithms for fully automatic segmentation of hepatocellular carcinoma and other tumors for lesion tracking over time.
Affiliation(s)
- Alex G Raman, Craig Jones, Clifford R Weiss: From the Western University of Health Sciences, College of Osteopathic Medicine of the Pacific, 309 E 2nd St, Pomona, CA 91766 (A.G.R.); Department of Computer Science, Malone Center for Engineering in Healthcare, Johns Hopkins University, Baltimore, MD (C.J.); and Department of Radiology and Radiologic Science, Division of Interventional Radiology, Johns Hopkins Hospital, Baltimore, MD (C.R.W.)
11
Chen S, Zhao S, Lan Q. Residual Block Based Nested U-Type Architecture for Multi-Modal Brain Tumor Image Segmentation. Front Neurosci 2022; 16:832824. [PMID: 35356052 PMCID: PMC8959850 DOI: 10.3389/fnins.2022.832824] [Received: 12/10/2021] [Accepted: 02/03/2022] [Indexed: 11/26/2022]
Abstract
Multi-modal magnetic resonance imaging (MRI) segmentation of brain tumors has been a hot topic in brain tumor processing research in recent years because it can make full use of the feature information of the different modalities in MRI images, allowing tumors to be segmented more effectively. In this article, a convolutional neural network (CNN) is used as a tool to improve the efficiency and effectiveness of segmentation. On this basis, Dense-ResUNet, a multi-modal MRI image segmentation model for brain tumors, is created. Dense-ResUNet consists of a series of nested dense convolutional blocks and a U-Net-shaped model with residual connections. The nested dense convolutional blocks bridge the semantic disparity between the feature maps of the encoder and decoder before fusion and make full use of features at different levels. The residual blocks and skip connections preserve pixel-level information and alleviate the degradation problem of traditional deep CNNs. The experimental results show that Dense-ResUNet can effectively extract brain tumors and has great clinical research and application value.
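The residual connection at the heart of such blocks simply adds the input back onto a learned transformation. A minimal 1-D sketch, where plain lists stand in for feature maps and a per-channel linear map stands in for convolution:

```python
def relu(v):
    return [max(0.0, x) for x in v]

def linear(v, weight, bias):
    # Per-channel affine map, a stand-in for a convolution layer.
    return [w * x + b for x, w, b in zip(v, weight, bias)]

def residual_block(v, weight, bias):
    # Core idea of a residual block: learn a correction f(v) and add
    # it back onto the input via the identity skip connection, so the
    # block only needs to model the residual, not the whole mapping.
    return [x + y for x, y in zip(v, relu(linear(v, weight, bias)))]
```

With all weights at zero the block is exactly the identity, which is why residual networks remain trainable at depths where plain CNNs degrade.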
Affiliation(s)
- Sirui Chen: School of Software Engineering, Tongji University, Shanghai, China
- Shengjie Zhao (corresponding author): School of Software Engineering, Tongji University, Shanghai, China
- Quan Lan: Department of Neurology, First Affiliated Hospital of Xiamen University, Xiamen, China
12
Xia T, Kumar A, Fulham M, Feng D, Wang Y, Kim EY, Jung Y, Kim J. Fused feature signatures to probe tumour radiogenomics relationships. Sci Rep 2022; 12:2173. [PMID: 35140267 PMCID: PMC8828715 DOI: 10.1038/s41598-022-06085-y] [Received: 06/18/2021] [Accepted: 01/14/2022] [Indexed: 11/09/2022]
Abstract
Radiogenomics relationships (RRs) research aims to identify statistically significant correlations between medical image features and molecular characteristics from the analysis of tissue samples. Previous radiogenomics studies mainly relied on a single category of image feature extraction techniques (ETs): (i) handcrafted ETs that encompass visual imaging characteristics curated from the knowledge of human experts, and (ii) deep ETs that quantify abstract-level imaging characteristics from large data. Prior studies therefore failed to leverage the complementary information that is accessible by fusing the ETs. In this study, we propose a fused feature signature (FFSig): a selection of image features from handcrafted and deep ETs (e.g., transfer learning and fine-tuning of deep learning models). We evaluated the FFSig's ability to represent RRs better than individual ET approaches using two public datasets: the first, used to build the FFSig, comprised 89 patients with non-small cell lung cancer (NSCLC) with gene expression data and CT images of the thorax and upper abdomen for each patient; the second NSCLC dataset, comprising 117 patients with CT images and RNA-Seq data, was used as the validation set. Our results show that the FFSig encoded complementary imaging characteristics of tumours and identified more RRs with a broader range of genes related to important biological functions such as tumourigenesis. We suggest that the FFSig has the potential to identify important RRs that may assist cancer diagnosis and treatment in the future.
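FFSig-style fusion amounts to concatenating the handcrafted and deep feature vectors per case and then selecting a subset. The variance-based selector below is an illustrative stand-in for the paper's selection procedure, not its actual method:

```python
def fuse_features(handcrafted, deep):
    # Fusion step: concatenate the two feature families per case.
    return handcrafted + deep

def select_by_variance(feature_matrix, top_k):
    # Illustrative selection: keep the top-k features by variance
    # across cases (near-constant features carry little signal).
    n = len(feature_matrix)
    cols = list(zip(*feature_matrix))
    def var(col):
        m = sum(col) / n
        return sum((x - m) ** 2 for x in col) / n
    ranked = sorted(range(len(cols)), key=lambda i: var(cols[i]), reverse=True)
    keep = sorted(ranked[:top_k])  # preserve original feature order
    return [[row[i] for i in keep] for row in feature_matrix]
```

The selected signature is then what gets correlated against the gene expression data to surface candidate RRs.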
Affiliation(s)
- Tian Xia: School of Computer Science, Faculty of Engineering, The University of Sydney, Sydney, NSW, 2006, Australia
- Ashnil Kumar: School of Biomedical Engineering, Faculty of Engineering, The University of Sydney, Sydney, NSW, 2006, Australia
- Michael Fulham: Department of Molecular Imaging, Royal Prince Alfred Hospital, Camperdown, NSW, 2050, Australia
- Dagan Feng: School of Computer Science, Faculty of Engineering, The University of Sydney, Sydney, NSW, 2006, Australia
- Yue Wang: Department of Electrical and Computer Engineering, Virginia Polytechnic Institute and State University, Arlington, VA, 22203, USA
- Eun Young Kim: Department of Radiology, Gil Medical Center, Gachon University College of Medicine, Incheon, Republic of Korea
- Younhyun Jung: School of Computing, Gachon University, Seongnam, Republic of Korea
- Jinman Kim: School of Computer Science, Faculty of Engineering, The University of Sydney, Sydney, NSW, 2006, Australia
13
MTANS: Multi-Scale Mean Teacher Combined Adversarial Network with Shape-Aware Embedding for Semi-Supervised Brain Lesion Segmentation. Neuroimage 2021; 244:118568. [PMID: 34508895 DOI: 10.1016/j.neuroimage.2021.118568] [Received: 07/18/2021] [Accepted: 09/07/2021] [Indexed: 11/23/2022]
Abstract
The annotation of brain lesion images is a key step in the clinical diagnosis and treatment of a wide spectrum of brain diseases. In recent years, segmentation methods based on deep learning have gained unprecedented popularity, leveraging large amounts of data with high-quality voxel-level annotations. However, given the limited time clinicians can devote to the cumbersome task of manual image segmentation, semi-supervised medical image segmentation methods present an alternative solution, as they require only a few labeled samples for training. In this paper, we propose a novel semi-supervised segmentation framework that combines an improved mean teacher with an adversarial network. Specifically, our framework consists of (i) a student model and a teacher model for segmenting the target and generating the signed distance maps of object surfaces, and (ii) a discriminator network for extracting hierarchical features and distinguishing the signed distance maps of labeled and unlabeled data. In addition, based on two different adversarial learning processes, a multi-scale feature consistency loss derived from the student and teacher models is proposed, and a shape-aware embedding scheme is integrated into the framework. We evaluated the proposed method on the public brain lesion datasets from ISBI 2015, ISLES 2015, and BRATS 2018 for multiple sclerosis lesion, ischemic stroke lesion, and brain tumor segmentation, respectively. Experiments demonstrate that our method can effectively leverage unlabeled data while outperforming the supervised baseline and other state-of-the-art semi-supervised methods trained with the same labeled data. The proposed framework is suitable for joint training on limited labeled data and additional unlabeled data, and is expected to reduce the effort of obtaining annotated images.
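The mean-teacher component keeps the teacher's weights as an exponential moving average (EMA) of the student's across training steps; a minimal sketch, with flat weight lists standing in for network parameters:

```python
def ema_update(teacher, student, alpha=0.99):
    """Mean-teacher weight update: the teacher is an exponential
    moving average of successive student weights; `alpha` is the
    decay rate (an illustrative default, not the paper's value)."""
    return [alpha * t + (1 - alpha) * s for t, s in zip(teacher, student)]
```

Because the teacher averages over many student states, its predictions are smoother, which is what makes it a useful consistency target for the unlabeled data.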
14
Huang S, Cheng Z, Lai L, Zheng W, He M, Li J, Zeng T, Huang X, Yang X. Integrating multiple MRI sequences for pelvic organs segmentation via the attention mechanism. Med Phys 2021; 48:7930-7945. [PMID: 34658035 DOI: 10.1002/mp.15285] [Received: 03/10/2021] [Revised: 08/30/2021] [Accepted: 09/14/2021] [Indexed: 11/06/2022]
Abstract
PURPOSE To create a network which fully utilizes multi-sequence MRI and compares favorably with manual human contouring. METHODS We retrospectively collected 89 MRI studies of the pelvic cavity from patients with prostate cancer and cervical cancer. The dataset contained 89 samples from 87 patients with a total of 84 valid samples. MRI was performed with T1-weighted (T1), T2-weighted (T2), and Enhanced Dixon T1-weighted (T1DIXONC) sequences. There were two cohorts. The training cohort contained 55 samples and the testing cohort contained 29 samples. The MRI images in the training cohort contained contouring data from radiotherapist α. The MRI images in the testing cohort contained contouring data from radiotherapist α and contouring data from another radiotherapist: radiotherapist β. The training cohort was used to optimize the convolution neural networks, which included the attention mechanism through the proposed activation module and the blended module into multiple MRI sequences, to perform autodelineation. The testing cohort was used to assess the networks' autodelineation performance. The contoured organs at risk (OAR) were the anal canal, bladder, rectum, femoral head (L), and femoral head (R). RESULTS We compared our proposed network with UNet and FuseUNet using our dataset. When T1 was the main sequence, we input three sequences to segment five organs and evaluated the results using four metrics: the DSC (Dice similarity coefficient), the JSC (Jaccard similarity coefficient), the ASD (average mean distance), and the 95% HD (robust Hausdorff distance). The proposed network achieved improved results compared with the baselines among all metrics. The DSC were 0.834±0.029, 0.818±0.037, and 0.808±0.050 for our proposed network, FuseUNet, and UNet, respectively. The 95% HD were 7.256±2.748 mm, 8.404±3.297 mm, and 8.951±4.798 mm for our proposed network, FuseUNet, and UNet, respectively. 
Our proposed network also performed better on the JSC and ASD metrics. CONCLUSION Our proposed activation module and blended module significantly improved the performance of FuseUNet for multi-sequence MRI segmentation. Our proposed network integrated multiple MRI sequences efficiently and autosegmented OARs rapidly and accurately. We also found that three-sequence fusion (T1-T1DIXONC-T2) was superior to two-sequence fusion (T1-T2 and T1-T1DIXONC). We infer that the more MRI sequences are fused, the better the automatic segmentation results.
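For readers unfamiliar with the two overlap metrics cited in this abstract (DSC and JSC), both reduce to simple set arithmetic on binary masks. A minimal illustrative sketch, not code from the paper (function names are ours):

```python
import numpy as np

def dice_coefficient(pred: np.ndarray, truth: np.ndarray) -> float:
    """DSC = 2|P ∩ T| / (|P| + |T|) for two binary masks."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    intersection = np.logical_and(pred, truth).sum()
    denom = pred.sum() + truth.sum()
    return 2.0 * intersection / denom if denom else 1.0

def jaccard_coefficient(pred: np.ndarray, truth: np.ndarray) -> float:
    """JSC = |P ∩ T| / |P ∪ T| (intersection over union)."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    union = np.logical_or(pred, truth).sum()
    return np.logical_and(pred, truth).sum() / union if union else 1.0
```

The two metrics are monotonically related (JSC = DSC / (2 − DSC)), so they rank methods identically; the surface-distance metrics (ASD, 95% HD) add complementary boundary information.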
Collapse
Affiliation(s)
- Sijuan Huang
- Department of Radiation Oncology, Sun Yat-sen University Cancer Center, State Key Laboratory of Oncology in South China, Collaborative Innovation Center for Cancer Medicine, Guangdong Key Laboratory of Nasopharyngeal Carcinoma Diagnosis and Therapy, Guangzhou, Guangdong, 510060, China
| | - Zesen Cheng
- Department of Radiation Oncology, Sun Yat-sen University Cancer Center, State Key Laboratory of Oncology in South China, Collaborative Innovation Center for Cancer Medicine, Guangdong Key Laboratory of Nasopharyngeal Carcinoma Diagnosis and Therapy, Guangzhou, Guangdong, 510060, China.,School of Electronic and Information Engineering, South China University of Technology, Guangzhou, Guangdong, 510641, China
| | - Lijuan Lai
- School of Electronic and Information Engineering, South China University of Technology, Guangzhou, Guangdong, 510641, China
| | - Wanjia Zheng
- Department of Radiation Oncology, Sun Yat-sen University Cancer Center, State Key Laboratory of Oncology in South China, Collaborative Innovation Center for Cancer Medicine, Guangdong Key Laboratory of Nasopharyngeal Carcinoma Diagnosis and Therapy, Guangzhou, Guangdong, 510060, China.,Department of Radiation Oncology, Southern Theater Air Force Hospital of the People's Liberation Army, Guangzhou, Guangdong, 510050, China
| | - Mengxue He
- Department of Radiation Oncology, Sun Yat-sen University Cancer Center, State Key Laboratory of Oncology in South China, Collaborative Innovation Center for Cancer Medicine, Guangdong Key Laboratory of Nasopharyngeal Carcinoma Diagnosis and Therapy, Guangzhou, Guangdong, 510060, China
| | - Junyun Li
- Department of Radiation Oncology, Sun Yat-sen University Cancer Center, State Key Laboratory of Oncology in South China, Collaborative Innovation Center for Cancer Medicine, Guangdong Key Laboratory of Nasopharyngeal Carcinoma Diagnosis and Therapy, Guangzhou, Guangdong, 510060, China
| | - Tianyu Zeng
- Department of Radiation Oncology, Sun Yat-sen University Cancer Center, State Key Laboratory of Oncology in South China, Collaborative Innovation Center for Cancer Medicine, Guangdong Key Laboratory of Nasopharyngeal Carcinoma Diagnosis and Therapy, Guangzhou, Guangdong, 510060, China.,School of Electronic and Information Engineering, South China University of Technology, Guangzhou, Guangdong, 510641, China
| | - Xiaoyan Huang
- Department of Radiation Oncology, Sun Yat-sen University Cancer Center, State Key Laboratory of Oncology in South China, Collaborative Innovation Center for Cancer Medicine, Guangdong Key Laboratory of Nasopharyngeal Carcinoma Diagnosis and Therapy, Guangzhou, Guangdong, 510060, China
| | - Xin Yang
- Department of Radiation Oncology, Sun Yat-sen University Cancer Center, State Key Laboratory of Oncology in South China, Collaborative Innovation Center for Cancer Medicine, Guangdong Key Laboratory of Nasopharyngeal Carcinoma Diagnosis and Therapy, Guangzhou, Guangdong, 510060, China
| |
Collapse
|
15
|
Huang D, Wang M, Zhang L, Li H, Ye M, Li A. Learning rich features with hybrid loss for brain tumor segmentation. BMC Med Inform Decis Mak 2021; 21:63. [PMID: 34330265 PMCID: PMC8323198 DOI: 10.1186/s12911-021-01431-y] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/24/2021] [Accepted: 02/09/2021] [Indexed: 11/10/2022] Open
Abstract
Background Accurate segmentation of the tumor region in MRI images is important for brain tumor diagnosis and radiotherapy planning. At present, manual segmentation is widely adopted in clinical practice, and there is a strong need for an automatic, objective system to alleviate the workload of radiologists. Methods We propose a parallel multi-scale feature fusing architecture to generate rich feature representations for accurate brain tumor segmentation. It comprises two parts: (1) a Feature Extraction Network (FEN) for brain tumor feature extraction at different levels and (2) a Multi-scale Feature Fusing Network (MSFFN) that merges features at all scales in a parallel manner. In addition, we use two hybrid loss functions to optimize the proposed network against the class imbalance issue. Results We validate our method on BRATS 2015, achieving Dice scores of 0.86, 0.73, and 0.61 for the three tumor regions (complete, core, and enhancing), with a model parameter size of only 6.3 MB. Without any post-processing operations, our method still outperforms published state-of-the-art methods on the segmentation of complete tumor regions and obtains competitive performance on the other two regions. Conclusions The proposed parallel structure can effectively fuse multi-level features to generate rich feature representations for high-resolution results. Moreover, the hybrid loss functions can alleviate the class imbalance issue and guide the training process. The proposed method can also be used in other medical segmentation tasks.
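The abstract does not spell out its exact loss formulation, but the usual form of such a hybrid loss combines a soft Dice loss (robust to class imbalance) with cross-entropy (smooth per-pixel gradients). A hedged sketch of that standard combination in NumPy for clarity (a training framework would use its autograd equivalents; the weighting parameter is our illustrative assumption):

```python
import numpy as np

def hybrid_loss(probs: np.ndarray, truth: np.ndarray,
                ce_weight: float = 0.5, eps: float = 1e-8) -> float:
    """Weighted sum of soft Dice loss and binary cross-entropy.

    probs: predicted foreground probabilities; truth: binary ground-truth mask.
    """
    probs = probs.ravel().astype(float)
    truth = truth.ravel().astype(float)
    # Soft Dice: 2|P·T| / (|P| + |T|); loss is its complement
    dice = (2.0 * (probs * truth).sum() + eps) / (probs.sum() + truth.sum() + eps)
    dice_loss = 1.0 - dice
    # Binary cross-entropy, clipped for numerical stability
    p = np.clip(probs, eps, 1 - eps)
    ce = -(truth * np.log(p) + (1 - truth) * np.log(1 - p)).mean()
    return ce_weight * ce + (1 - ce_weight) * dice_loss
```

The Dice term counteracts the foreground/background imbalance typical of tumor masks, while the cross-entropy term keeps gradients well-behaved early in training.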
Collapse
Affiliation(s)
- Daobin Huang
- School of Information Science and Technology, and Centers for Biomedical Engineering, University of Science and Technology of China, Hefei, 230027, China.,School of Medical Information, Wannan Medical College, Wuhu, 241002, China.,Research Center of Health Big Data Mining and Applications, Wannan Medical College, Wuhu, 241002, China
| | - Minghui Wang
- School of Information Science and Technology, and Centers for Biomedical Engineering, University of Science and Technology of China, Hefei, 230027, China
| | - Ling Zhang
- Department of Biochemistry, Wannan Medical College, Wuhu, 241002, China
| | - Haichun Li
- School of Information Science and Technology, and Centers for Biomedical Engineering, University of Science and Technology of China, Hefei, 230027, China
| | - Minquan Ye
- School of Information Science and Technology, and Centers for Biomedical Engineering, University of Science and Technology of China, Hefei, 230027, China. .,Research Center of Health Big Data Mining and Applications, Wannan Medical College, Wuhu, 241002, China.
| | - Ao Li
- School of Information Science and Technology, and Centers for Biomedical Engineering, University of Science and Technology of China, Hefei, 230027, China.
| |
Collapse
|
16
|
Joint image and feature adaptative attention-aware networks for cross-modality semantic segmentation. Neural Comput Appl 2021. [DOI: 10.1007/s00521-021-06064-w] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/21/2022]
|
17
|
|
18
|
Application of deep learning for automatic segmentation of brain tumors on magnetic resonance imaging: a heuristic approach in the clinical scenario. Neuroradiology 2021; 63:1253-1262. [PMID: 33501512 DOI: 10.1007/s00234-021-02649-3] [Citation(s) in RCA: 25] [Impact Index Per Article: 8.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/16/2020] [Accepted: 01/14/2021] [Indexed: 01/23/2023]
Abstract
PURPOSE Accurate brain tumor segmentation on magnetic resonance imaging (MRI) has wide-ranging applications such as radiosurgery planning. Advances in artificial intelligence, especially deep learning (DL), allow the development of automatic segmentation that overcomes labor-intensive and operator-dependent manual segmentation. We aimed to evaluate the accuracy of the top-performing DL model from the 2018 Brain Tumor Segmentation (BraTS) challenge, the impact of missing MRI sequences, and whether a model trained on gliomas can accurately segment other brain tumor types. METHODS We trained the model using the Medical Decathlon dataset, applied it to the BraTS 2019 glioma dataset, and developed additional models using individual and multimodal MRI sequences. The Dice score was calculated to assess the model's accuracy against ground truth labels by neuroradiologists on the BraTS dataset. The model was then applied to a local dataset of 105 brain tumors, on which its performance was qualitatively evaluated. RESULTS The DL model using pre- and post-gadolinium contrast T1 and T2 FLAIR sequences performed best, with Dice scores of 0.878 for whole tumor, 0.732 for tumor core, and 0.699 for active tumor. Lack of T1 or T2 sequences did not significantly degrade performance, but FLAIR and T1C were important contributors. All segmentations performed by the model on the local dataset, including non-glioma cases, were considered accurate by a pool of specialists. CONCLUSION The DL model could use available MRI sequences to optimize glioma segmentation and adopt transfer learning to segment non-glioma tumors, thereby serving as a useful tool for improving treatment planning and personalized surveillance of patients.
Collapse
|
19
|
Mostafiz R, Uddin MS, Alam NA, Hasan MM, Rahman MM. MRI-based brain tumor detection using the fusion of histogram oriented gradients and neural features. EVOLUTIONARY INTELLIGENCE 2021. [DOI: 10.1007/s12065-020-00550-1] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/22/2022]
|
20
|
Chen X, Lian C, Wang L, Deng H, Kuang T, Fung S, Gateno J, Yap PT, Xia JJ, Shen D. Anatomy-Regularized Representation Learning for Cross-Modality Medical Image Segmentation. IEEE TRANSACTIONS ON MEDICAL IMAGING 2021; 40:274-285. [PMID: 32956048 PMCID: PMC8120796 DOI: 10.1109/tmi.2020.3025133] [Citation(s) in RCA: 4] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 05/15/2023]
Abstract
An increasing number of studies are leveraging unsupervised cross-modality synthesis to mitigate the limited label problem in training medical image segmentation models. They typically transfer ground truth annotations from a label-rich imaging modality to a label-lacking imaging modality, under an assumption that different modalities share the same anatomical structure information. However, since these methods commonly use voxel/pixel-wise cycle-consistency to regularize the mappings between modalities, high-level semantic information is not necessarily preserved. In this paper, we propose a novel anatomy-regularized representation learning approach for segmentation-oriented cross-modality image synthesis. It learns a common feature encoding across different modalities to form a shared latent space, where 1) the input and its synthesis present consistent anatomical structure information, and 2) the transformation between two images in one domain is preserved by their syntheses in another domain. We applied our method to the tasks of cross-modality skull segmentation and cardiac substructure segmentation. Experimental results demonstrate the superiority of our method in comparison with state-of-the-art cross-modality medical image segmentation methods.
Collapse
|
21
|
Sun J, Peng Y, Guo Y, Li D. Segmentation of the multimodal brain tumor image used the multi-pathway architecture method based on 3D FCN. Neurocomputing 2021. [DOI: 10.1016/j.neucom.2020.10.031] [Citation(s) in RCA: 21] [Impact Index Per Article: 7.0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/23/2022]
|
22
|
Mitchell JR, Kamnitsas K, Singleton KW, Whitmire SA, Clark-Swanson KR, Ranjbar S, Rickertsen CR, Johnston SK, Egan KM, Rollison DE, Arrington J, Krecke KN, Passe TJ, Verdoorn JT, Nagelschneider AA, Carr CM, Port JD, Patton A, Campeau NG, Liebo GB, Eckel LJ, Wood CP, Hunt CH, Vibhute P, Nelson KD, Hoxworth JM, Patel AC, Chong BW, Ross JS, Boxerman JL, Vogelbaum MA, Hu LS, Glocker B, Swanson KR. Deep neural network to locate and segment brain tumors outperformed the expert technicians who created the training data. J Med Imaging (Bellingham) 2020; 7:055501. [PMID: 33102623 PMCID: PMC7567400 DOI: 10.1117/1.jmi.7.5.055501] [Citation(s) in RCA: 4] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/08/2020] [Accepted: 09/21/2020] [Indexed: 11/17/2022] Open
Abstract
Purpose: Deep learning (DL) algorithms have shown promising results for brain tumor segmentation in MRI. However, validation is required prior to routine clinical use. We report the first randomized and blinded comparison of DL and trained technician segmentations. Approach: We compiled a multi-institutional database of 741 pretreatment MRI exams. Each contained a postcontrast T1-weighted exam, a T2-weighted fluid-attenuated inversion recovery exam, and at least one technician-derived tumor segmentation. The database included 729 unique patients (470 males and 259 females). Of these exams, 641 were used for training the DL system, and 100 were reserved for testing. We developed a platform to enable qualitative, blinded, controlled assessment of lesion segmentations made by technicians and the DL method. On this platform, 20 neuroradiologists performed 400 side-by-side comparisons of segmentations on 100 test cases. They scored each segmentation between 0 (poor) and 10 (perfect). Agreement between segmentations from technicians and the DL method was also evaluated quantitatively using the Dice coefficient, which produces values between 0 (no overlap) and 1 (perfect overlap). Results: The neuroradiologists gave technician and DL segmentations mean scores of 6.97 and 7.31, respectively (p<0.00007). The DL method achieved a mean Dice coefficient of 0.87 on the test cases. Conclusions: This was the first objective comparison of automated and human segmentation using a blinded controlled assessment study. Our DL system learned to outperform its “human teachers” and produced output that was better, on average, than its training data.
Collapse
Affiliation(s)
- Joseph Ross Mitchell
- H. Lee Moffitt Cancer Center and Research Institute, Department of Machine Learning, Tampa, Florida, United States
| | | | - Kyle W Singleton
- Mayo Clinic, Mathematical NeuroOncology Lab, Phoenix, Arizona, United States
| | - Scott A Whitmire
- Mayo Clinic, Mathematical NeuroOncology Lab, Phoenix, Arizona, United States
| | | | - Sara Ranjbar
- Mayo Clinic, Mathematical NeuroOncology Lab, Phoenix, Arizona, United States
| | | | - Sandra K Johnston
- Mayo Clinic, Mathematical NeuroOncology Lab, Phoenix, Arizona, United States.,University of Washington, Department of Radiology, Seattle, Washington, United States
| | - Kathleen M Egan
- H. Lee Moffitt Cancer Center and Research Institute, Department of Cancer Epidemiology, Tampa, Florida, United States
| | - Dana E Rollison
- H. Lee Moffitt Cancer Center and Research Institute, Department of Cancer Epidemiology, Tampa, Florida, United States
| | - John Arrington
- H. Lee Moffitt Cancer Center and Research Institute, Department of Diagnostic Imaging and Interventional Radiology, Tampa, Florida, United States
| | - Karl N Krecke
- Mayo Clinic, Department of Radiology, Rochester, Minnesota, United States
| | - Theodore J Passe
- Mayo Clinic, Department of Radiology, Rochester, Minnesota, United States
| | - Jared T Verdoorn
- Mayo Clinic, Department of Radiology, Rochester, Minnesota, United States
| | | | - Carrie M Carr
- Mayo Clinic, Department of Radiology, Rochester, Minnesota, United States
| | - John D Port
- Mayo Clinic, Department of Radiology, Rochester, Minnesota, United States
| | - Alice Patton
- Mayo Clinic, Department of Radiology, Rochester, Minnesota, United States
| | - Norbert G Campeau
- Mayo Clinic, Department of Radiology, Rochester, Minnesota, United States
| | - Greta B Liebo
- Mayo Clinic, Department of Radiology, Rochester, Minnesota, United States
| | - Laurence J Eckel
- Mayo Clinic, Department of Radiology, Rochester, Minnesota, United States
| | - Christopher P Wood
- Mayo Clinic, Department of Radiology, Rochester, Minnesota, United States
| | - Christopher H Hunt
- Mayo Clinic, Department of Radiology, Rochester, Minnesota, United States
| | - Prasanna Vibhute
- Mayo Clinic, Department of Radiology, Rochester, Minnesota, United States
| | - Kent D Nelson
- Mayo Clinic, Department of Radiology, Rochester, Minnesota, United States
| | - Joseph M Hoxworth
- Mayo Clinic, Department of Radiology, Rochester, Minnesota, United States
| | - Ameet C Patel
- Mayo Clinic, Department of Radiology, Rochester, Minnesota, United States
| | - Brian W Chong
- Mayo Clinic, Department of Radiology, Rochester, Minnesota, United States
| | - Jeffrey S Ross
- Mayo Clinic, Department of Radiology, Rochester, Minnesota, United States
| | - Jerrold L Boxerman
- Rhode Island Hospital and Alpert Medical School of Brown University, Department of Diagnostic Imaging, Providence, Rhode Island, United States
| | - Michael A Vogelbaum
- H. Lee Moffitt Cancer Center and Research Institute, Department of Neurosurgery, Tampa, Florida, United States
| | - Leland S Hu
- Mayo Clinic, Mathematical NeuroOncology Lab, Phoenix, Arizona, United States.,Mayo Clinic, Department of Radiology, Rochester, Minnesota, United States
| | - Ben Glocker
- Imperial College, Biomedical Image Analysis Group, London, United Kingdom
| | - Kristin R Swanson
- Mayo Clinic, Mathematical NeuroOncology Lab, Phoenix, Arizona, United States.,Mayo Clinic, Department of Neurosurgery, Phoenix, Arizona, United States
| |
Collapse
|
23
|
Shirly S, Ramesh K. Review on 2D and 3D MRI Image Segmentation Techniques. Curr Med Imaging 2020; 15:150-160. [PMID: 31975661 DOI: 10.2174/1573405613666171123160609] [Citation(s) in RCA: 10] [Impact Index Per Article: 2.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/12/2017] [Revised: 10/23/2017] [Accepted: 11/14/2017] [Indexed: 11/22/2022]
Abstract
BACKGROUND Magnetic Resonance Imaging is most widely used for early diagnosis of abnormalities in human organs. Due to technical advancements in digital image processing, automatic computer-aided medical image segmentation has been widely used in medical diagnostics. DISCUSSION Image segmentation is an image processing technique used for extracting image features and for searching and mining medical image records for better and more accurate medical diagnostics. Commonly used segmentation techniques are threshold-based, clustering-based, edge-based, region-based, atlas-based, and artificial neural network-based image segmentation. CONCLUSION This survey aims at providing insight into different 2-dimensional and 3-dimensional MRI image segmentation techniques and at facilitating better understanding for people who are new to this field. This comparative study summarizes the benefits and limitations of the various segmentation techniques.
Collapse
Affiliation(s)
- S Shirly
- Department of Computer Applications, Anna University Regional-Campus, Tirunelveli, Tamil Nadu, India
| | - K Ramesh
- Department of Computer Applications, Anna University Regional-Campus, Tirunelveli, Tamil Nadu, India
| |
Collapse
|
24
|
Multimodal MRI Brain Tumor Image Segmentation Using Sparse Subspace Clustering Algorithm. COMPUTATIONAL AND MATHEMATICAL METHODS IN MEDICINE 2020; 2020:8620403. [PMID: 32714431 PMCID: PMC7355351 DOI: 10.1155/2020/8620403] [Citation(s) in RCA: 4] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 04/22/2020] [Revised: 05/24/2020] [Accepted: 06/08/2020] [Indexed: 11/17/2022]
Abstract
Brain tumors are among the deadliest diseases, with a high mortality rate. The shape and size of a tumor are random during the growth process. Brain tumor segmentation is an assisted diagnosis technology that separates different brain tumor structures, such as edema, active tumor, and tumor necrosis tissue, from normal brain tissue. Magnetic resonance imaging (MRI) has the advantages of no radiation impact on the human body, good imaging of structural tissues, and the ability to realize tomographic imaging in any orientation. Therefore, doctors often use MRI brain tumor images to analyze and process brain tumors. In these images, the tumor structure is characterized only by grayscale changes, and images obtained with different equipment and under different conditions may also differ. This makes it difficult for traditional image segmentation methods to handle brain tumor images well. Because traditional single-mode MRI brain tumor images contain incomplete brain tumor information, segmenting single-mode images is unlikely to meet clinical needs. In this paper, a sparse subspace clustering (SSC) algorithm is introduced to process the diagnosis of multimodal MRI brain tumor images. In the absence of added noise, the proposed algorithm has advantages over traditional methods; compared with the top 15 entries in the BraTS 2015 competition, its accuracy is not much different, ranking stably between 10th and 15th. To verify the noise resistance of the proposed algorithm, this paper adds 5%, 10%, 15%, and 20% Gaussian noise to the test images. Experimental results show that the proposed algorithm has better noise immunity than a comparable algorithm.
Collapse
|
25
|
Moawad AW, Fuentes D, Khalaf AM, Blair KJ, Szklaruk J, Qayyum A, Hazle JD, Elsayes KM. Feasibility of Automated Volumetric Assessment of Large Hepatocellular Carcinomas' Responses to Transarterial Chemoembolization. Front Oncol 2020; 10:572. [PMID: 32457831 PMCID: PMC7221016 DOI: 10.3389/fonc.2020.00572] [Citation(s) in RCA: 4] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/28/2019] [Accepted: 03/30/2020] [Indexed: 12/13/2022] Open
Abstract
Background: Hepatocellular carcinoma (HCC) is the most common liver malignancy and the leading cause of death in patients with cirrhosis. Various treatments for HCC are available, including transarterial chemoembolization (TACE), which is the commonest intervention performed in HCC. Radiologic tumor response following TACE is an important prognostic factor for patients with HCC. We hypothesized that, for large HCC tumors, assessment of treatment response made with automated volumetric response evaluation criteria in solid tumors (RECIST) might correlate with the assessment made with the more time- and labor-intensive unidimensional modified RECIST (mRECIST) and manual volumetric RECIST (M-vRECIST) criteria. Accordingly, we undertook this retrospective study to compare automated volumetric RECIST (A-vRECIST) with M-vRECIST and mRECIST for the assessment of large HCC tumors' responses to TACE. Methods: We selected 42 pairs of contrast-enhanced computed tomography (CT) images of large HCCs. Images were taken before and after TACE, and in each of the images the HCC was segmented using both a manual contouring tool and a convolutional neural network. Three experienced radiologists assessed tumor response to TACE using mRECIST criteria. The intra-class correlation coefficient was used to assess inter-reader reliability of the mRECIST measurements, while the Pearson correlation coefficient was used to assess correlation between the volumetric and mRECIST measurements. Results: Volumetric tumor assessment using automated and manual segmentation tools showed good correlation with mRECIST measurements. For A-vRECIST and M-vRECIST, respectively, r = 0.597 vs. 0.622 in the baseline studies, 0.648 vs. 0.748 in the follow-up studies, and 0.774 vs. 0.766 in the response assessment (P < 0.001 for all). The A-vRECIST evaluation showed high correlation with the M-vRECIST evaluation (r = 0.967, 0.937, and 0.826 in baseline studies, follow-up studies, and response assessment, respectively; P < 0.001 for all). Conclusion: Volumetric RECIST measurements are likely to provide an early marker for TACE monitoring, and automated measurements made with a convolutional neural network may be good substitutes for manual volumetric measurements.
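At its core, the volumetric measurement this study compares against unidimensional mRECIST is a voxel count scaled by voxel size, with response expressed as a relative volume change between baseline and follow-up. An illustrative sketch under that assumption (function names and spacing defaults are ours, not from the study):

```python
import numpy as np

def tumor_volume_ml(mask: np.ndarray,
                    voxel_spacing_mm=(1.0, 1.0, 1.0)) -> float:
    """Tumor volume in millilitres from a binary 3D segmentation mask."""
    voxel_volume_mm3 = float(np.prod(voxel_spacing_mm))
    # 1 mL = 1000 mm^3; count segmented voxels and scale by voxel size
    return mask.astype(bool).sum() * voxel_volume_mm3 / 1000.0

def volume_change_percent(vol_before: float, vol_after: float) -> float:
    """Percent change in tumor volume between baseline and follow-up."""
    return 100.0 * (vol_after - vol_before) / vol_before
```

Whether the mask comes from manual contouring (M-vRECIST) or a convolutional neural network (A-vRECIST) changes only how the mask is produced, not this downstream arithmetic.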
Collapse
Affiliation(s)
- Ahmed W. Moawad
- Imaging Physics Department, University of Texas MD Anderson Cancer Center, Houston, TX, United States
| | - David Fuentes
- Imaging Physics Department, University of Texas MD Anderson Cancer Center, Houston, TX, United States
| | - Ahmed M. Khalaf
- Diagnostic Radiology Department, University of Texas MD Anderson Cancer Center, Houston, TX, United States
| | - Katherine J. Blair
- Diagnostic Radiology Department, University of Texas MD Anderson Cancer Center, Houston, TX, United States
| | - Janio Szklaruk
- Diagnostic Radiology Department, University of Texas MD Anderson Cancer Center, Houston, TX, United States
| | - Aliya Qayyum
- Diagnostic Radiology Department, University of Texas MD Anderson Cancer Center, Houston, TX, United States
| | - John D. Hazle
- Imaging Physics Department, University of Texas MD Anderson Cancer Center, Houston, TX, United States
| | - Khaled M. Elsayes
- Diagnostic Radiology Department, University of Texas MD Anderson Cancer Center, Houston, TX, United States
| |
Collapse
|
26
|
Ben Naceur M, Akil M, Saouli R, Kachouri R. Fully automatic brain tumor segmentation with deep learning-based selective attention using overlapping patches and multi-class weighted cross-entropy. Med Image Anal 2020; 63:101692. [PMID: 32417714 DOI: 10.1016/j.media.2020.101692] [Citation(s) in RCA: 38] [Impact Index Per Article: 9.5] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/01/2019] [Revised: 03/18/2020] [Accepted: 03/19/2020] [Indexed: 02/08/2023]
Abstract
In this paper, we present a new deep convolutional neural network (CNN) dedicated to fully automatic segmentation of high- and low-grade glioblastoma brain tumors. The proposed CNN model is inspired by the occipito-temporal pathway, whose selective-attention function uses different receptive field sizes in successive layers to pick out the crucial objects in a scene. Using this selective-attention technique to develop the CNN model helps maximize the extraction of relevant features from MRI images. We also address two further issues: class imbalance, and the spatial relationship among image patches. For the first issue, we propose two steps: equal sampling of image patches and an experimental analysis of the effect of a weighted cross-entropy loss function on the segmentation results. For the second issue, we study the effect of overlapping patches versus adjacent patches; overlapping patches yield better segmentation results because they introduce global context alongside the local features of the image patches, compared to conventional adjacent patches. Our experimental results are reported on the BRATS-2018 dataset, where our end-to-end deep learning model achieved state-of-the-art performance. The median Dice scores of our fully automatic segmentation model are 0.90, 0.83, and 0.83 for the whole tumor, tumor core, and enhancing tumor, respectively, compared to radiologists' Dice scores, which are in the range 74%-85%. Moreover, our proposed CNN model is not only computationally efficient at inference time but can also segment the whole brain in 12 seconds on average. Finally, the proposed deep learning model provides accurate and reliable segmentation results, making it suitable for adoption in research and as part of different clinical settings.
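The class-imbalance remedy mentioned above, a multi-class weighted cross-entropy, assigns each class a weight so that rare tumor classes contribute more per pixel than the abundant background. A minimal NumPy sketch of that standard loss (the paper's exact weighting scheme is not specified here, so the signature and normalization are our illustrative assumptions):

```python
import numpy as np

def weighted_cross_entropy(logits: np.ndarray, labels: np.ndarray,
                           class_weights) -> float:
    """Weighted multi-class cross-entropy over N pixels.

    logits: (N, C) raw scores; labels: (N,) integer class ids in [0, C);
    class_weights: length-C sequence, one weight per class.
    """
    # Numerically stable softmax (shift each row by its maximum)
    z = logits - logits.max(axis=1, keepdims=True)
    probs = np.exp(z) / np.exp(z).sum(axis=1, keepdims=True)
    # Probability assigned to the true class of each pixel
    picked = probs[np.arange(labels.shape[0]), labels]
    # Per-pixel weight looked up from the true class
    w = np.asarray(class_weights, dtype=float)[labels]
    # Weight-normalized mean negative log-likelihood
    return float((w * -np.log(picked)).sum() / w.sum())
```

In a real pipeline the weights are often set inversely proportional to class frequency in the training set, which is one common way to realize the equal-contribution idea the abstract describes.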
Collapse
Affiliation(s)
- Mostefa Ben Naceur
- Gaspard Monge Computer Science Laboratory, Univ Gustave Eiffel, CNRS, ESIEE Paris, F-77454 Marne-la-Vallée, France; Smart Computer Sciences Laboratory, Computer Sciences Department, Exact.Sc, and SNL, University of Biskra, Algeria.
| | - Mohamed Akil
- Gaspard Monge Computer Science Laboratory, Univ Gustave Eiffel, CNRS, ESIEE Paris, F-77454 Marne-la-Vallée, France.
| | - Rachida Saouli
- Smart Computer Sciences Laboratory, Computer Sciences Department, Exact.Sc, and SNL, University of Biskra, Algeria.
| | - Rostom Kachouri
- Gaspard Monge Computer Science Laboratory, Univ Gustave Eiffel, CNRS, ESIEE Paris, F-77454 Marne-la-Vallée, France.
| |
Collapse
|
27
|
El-Torky DMS, Al-Berry MN, Salem MAM, Roushdy MI. 3D Visualization of Brain Tumors Using MR Images: A Survey. Curr Med Imaging 2020; 15:353-361. [PMID: 31989903 DOI: 10.2174/1573405614666180111142055] [Citation(s) in RCA: 7] [Impact Index Per Article: 1.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/14/2017] [Revised: 01/02/2018] [Accepted: 01/02/2018] [Indexed: 11/22/2022]
Abstract
BACKGROUND Three-dimensional visualization of brain tumors is very useful in both the diagnosis and treatment stages of brain cancer. DISCUSSION It helps the oncologist/neurosurgeon make the best decision regarding radiotherapy and/or surgical resection techniques. 3D visualization involves two main steps: tumor segmentation and 3D modeling. CONCLUSION In this article, we illustrate the most widely used segmentation and 3D modeling techniques for brain tumor visualization. We also survey the public databases available for evaluating the mentioned techniques.
Collapse
Affiliation(s)
| | - Maryam Nabil Al-Berry
- Department of Basic Sciences, Faculty of Computers and Information Science, Ain Shams University, Cairo, Egypt
| | - Mohammed Abdel-Megeed Salem
- Department of Basic Sciences, Faculty of Computers and Information Science, Ain Shams University, Cairo, Egypt
| | - Mohamed Ismail Roushdy
- Department of Basic Sciences, Faculty of Computers and Information Science, Ain Shams University, Cairo, Egypt
| |
Collapse
|
28
|
Sreerangappa M, Suresh M, Jayadevappa D. Segmentation of Brain Tumor and Performance Evaluation Using Spatial FCM and Level Set Evolution. Open Biomed Eng J 2019. [DOI: 10.2174/1874120701913010134] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.4] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/22/2022] Open
Abstract
Background:
In recent years, brain tumors have been one of the major causes of death in human beings. The survival rate can be increased if the tumor is diagnosed accurately at an early stage. Hence, medical image segmentation is always a challenging task in computer-guided medical procedures in hospitals. The main objective of the segmentation process is to obtain the object of interest from the given image so that it can be represented in a meaningful way for further analysis.
Methods:
To improve the segmentation accuracy, an efficient segmentation method which combines a spatial fuzzy c-means and level sets is proposed in this paper.
Results:
The experiments were conducted using the BrainWeb and DICOM databases. After pre-processing of an MR image, a spatial FCM algorithm is applied. The SFCM utilizes spatial data from the neighbourhood of each pixel to form clusters. Finally, these clusters are segmented using a level set active contour model to obtain the tumor boundary. The performance of the proposed algorithm is evaluated using various performance metrics.
Conclusion:
In this technique, wavelets and spatial FCM are applied before the brain tumor is segmented with level sets. The qualitative results show more accurate detection of the tumor boundary and a better convergence rate of the contour compared to other segmentation techniques. The proposed segmentation framework is also compared with two recently developed automatic segmentation techniques. The quantitative results summarize the improvements in segmentation accuracy, sensitivity, and specificity.
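The accuracy, sensitivity, and specificity figures reported here (and by several later entries in this list) reduce to simple counts over the predicted and ground-truth binary masks. A minimal sketch, with an illustrative function name and toy masks:

```python
def overlap_metrics(pred, truth):
    """Dice, sensitivity, and specificity from two flat binary masks."""
    tp = sum(p and t for p, t in zip(pred, truth))             # true positives
    tn = sum((not p) and (not t) for p, t in zip(pred, truth)) # true negatives
    fp = sum(p and (not t) for p, t in zip(pred, truth))       # false positives
    fn = sum((not p) and t for p, t in zip(pred, truth))       # false negatives
    dice = 2 * tp / (2 * tp + fp + fn) if (tp + fp + fn) else 1.0
    sensitivity = tp / (tp + fn) if (tp + fn) else 1.0
    specificity = tn / (tn + fp) if (tn + fp) else 1.0
    return dice, sensitivity, specificity

# toy masks: predicted tumor pixels vs. manually delineated ground truth
pred  = [0, 1, 1, 1, 0, 0, 1, 0]
truth = [0, 1, 1, 0, 0, 0, 1, 1]
dice, sens, spec = overlap_metrics(pred, truth)
```

In practice these counts run over all voxels of the 3-D volume; the one-dimensional lists above merely keep the arithmetic visible.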
29
Three-Dimensional Visualisation of Skeletal Cavities. ADVANCES IN EXPERIMENTAL MEDICINE AND BIOLOGY 2019; 1171:73-83. [PMID: 31823241 DOI: 10.1007/978-3-030-24281-7_7] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Subscribe] [Scholar Register] [Indexed: 10/25/2022]
Abstract
Bones contain spaces within them. The extraction and analysis of these cavities are crucial in the study of bone tissue function and can reveal pathologies or past traumatic events. The use of medical imaging techniques allows non-invasive visualisation of skeletal cavities, opening a new frontier in medical inspection and diagnosis. Here, we report the application of a new mesh-based approach for the isolation of skeletal cavities of different sizes and geometrical structures. We apply the mesh-based approach to extract (i) the main virtual cavities inside the human skull, (ii) a complete human endocast, (iii) the inner vasculature of the malleus bone and (iv) the medullary cavity of a human femur. The detailed description of the mesh-based isolation method and its pioneering application to four different case studies show the potential of this approach in medical visualisation.
30
31
A Multi-class Image Classifier for Assisting in Tumor Detection of Brain Using Deep Convolutional Neural Network. ACTA ACUST UNITED AC 2019. [DOI: 10.1007/978-981-13-8969-6_6] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.2] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 03/05/2023]
32
Sun R, Wang K, Guo L, Yang C, Chen J, Ti Y, Sa Y. A potential field segmentation based method for tumor segmentation on multi-parametric MRI of glioma cancer patients. BMC Med Imaging 2019; 19:48. [PMID: 31208349 PMCID: PMC6580466 DOI: 10.1186/s12880-019-0348-y] [Citation(s) in RCA: 8] [Impact Index Per Article: 1.6] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/20/2018] [Accepted: 06/09/2019] [Indexed: 01/02/2023] Open
Abstract
Background Accurate segmentation of brain tumors is vital for gross tumor volume (GTV) definition in radiotherapy. Functional MR images such as apparent diffusion coefficient (ADC) and fractional anisotropy (FA) maps can provide more comprehensive information for sensitive detection of the GTV. We combine anatomical and functional MRI for accurate, semi-automatic segmentation of GTVs and improved clinical efficiency. Methods Four MR image sets, T1-weighted contrast-enhanced (T1C), T2-weighted (T2), ADC and FA images of 5 glioma patients, were acquired and registered. A new potential field segmentation (PFS) method was proposed based on the concept of the potential field in physics. For T1C, T2 and ADC images, global potential field segmentation (global-PFS) was used on a user-defined region of interest (ROI) for rough segmentation, followed by morphological processing for accurate delineation of the GTV. For FA images, white matter (WM) was removed using local potential field segmentation (local-PFS), and tumor extent was then delineated with region growing and morphological methods. The individual segmentations of the multi-parametric images were ensembled into a fused segmentation, taken as the final GTV. GTVs were compared with manually delineated ground truth and evaluated with the segmentation quality measure (Q), Dice similarity coefficient (DSC), sensitivity and specificity. Results Experiments on the five patients' data showed mean values of Q, DSC, sensitivity and specificity of 0.80 (±0.07), 0.88 (±0.04), 0.92 (±0.01) and 0.88 (±0.05), respectively. Applying global-PFS to ROIs of T1C, T2 and ADC images avoids interference from the skull and other non-tumor areas. Similarly, local-PFS on FA images reduces time complexity compared with applying global-PFS to the whole image set.
Conclusions Efficient and semi-automatic segmentation of the GTV can be achieved with the new method. Combination of anatomical and functional MR images has the potential to provide new methods and ideas for target definition in radiotherapy.
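The abstract describes the potential field only by analogy with physics. One toy reading (our assumption, not the authors' exact formulation) treats each pixel as a charge proportional to its intensity, so a bright, compact tumor region accumulates high potential that can be thresholded for a rough segmentation:

```python
import math

def potential_field(image):
    """Toy potential map: every pixel acts as a 'charge' equal to its
    intensity, contributing intensity / distance to every other pixel.
    This pixels-as-charges form is an illustrative reading of the
    paper's concept, not the published algorithm."""
    h, w = len(image), len(image[0])
    field = [[0.0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            for j in range(h):
                for i in range(w):
                    if (i, j) == (x, y):
                        field[y][x] += image[j][i]  # self contribution
                    else:
                        d = math.hypot(x - i, y - j)
                        field[y][x] += image[j][i] / d
    return field

# bright 'tumor' blob in a dark ROI
roi = [[0, 0, 0, 0],
       [0, 9, 9, 0],
       [0, 9, 9, 0],
       [0, 0, 0, 0]]
field = potential_field(roi)
# rough segmentation: keep pixels whose potential exceeds the ROI mean
mean_v = sum(sum(row) for row in field) / (len(field) * len(field[0]))
mask = [[1 if v > mean_v else 0 for v in row] for row in field]
```

Restricting the computation to a user-defined ROI, as the paper does, also keeps this O(n²)-per-pixel sum tractable.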
Affiliation(s)
- Ranran Sun
- Department of Biomedical Engineering, Tianjin University, 92 Weijin Road, Tianjin, 300072, China
- Keqiang Wang
- Department of Biomedical Engineering, Tianjin University, 92 Weijin Road, Tianjin, 300072, China; Department of Radiotherapy, Tianjin Medical University General Hospital, Tianjin, 300052, China
- Lu Guo
- Department of Biomedical Engineering, Tianjin University, 92 Weijin Road, Tianjin, 300072, China
- Chengwen Yang
- Department of Biomedical Engineering, Tianjin University, 92 Weijin Road, Tianjin, 300072, China; Department of Radiation Oncology, Tianjin Cancer Hospital, Tianjin, 300060, China
- Jie Chen
- Department of Radiation Oncology, Tianjin Cancer Hospital, Tianjin, 300060, China
- Yalin Ti
- Global Research Organization, GE Healthcare, Shanghai, 201203, China
- Yu Sa
- Department of Biomedical Engineering, Tianjin University, 92 Weijin Road, Tianjin, 300072, China
33
Zulkoffli Z, Shariff TA. Detection of Brain Tumor and Extraction of Features in MRI Images Using K-means Clustering and Morphological Operations. 2019 IEEE INTERNATIONAL CONFERENCE ON AUTOMATIC CONTROL AND INTELLIGENT SYSTEMS (I2CACIS) 2019. [DOI: 10.1109/i2cacis.2019.8825094] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.2] [Reference Citation Analysis] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 09/01/2023]
34
A Clinical Support System for Brain Tumor Classification Using Soft Computing Techniques. J Med Syst 2019; 43:144. [DOI: 10.1007/s10916-019-1266-9] [Citation(s) in RCA: 11] [Impact Index Per Article: 2.2] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/24/2019] [Accepted: 03/28/2019] [Indexed: 10/27/2022]
35
Towards Reinforced Brain Tumor Segmentation on MRI Images Based on Temperature Changes on Pathologic Area. Int J Biomed Imaging 2019; 2019:1758948. [PMID: 30941165 PMCID: PMC6421017 DOI: 10.1155/2019/1758948] [Citation(s) in RCA: 26] [Impact Index Per Article: 5.2] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/13/2018] [Revised: 01/31/2019] [Accepted: 02/06/2019] [Indexed: 11/17/2022] Open
Abstract
Brain tumor segmentation is the process of separating the tumor from normal brain tissues; in clinical routine, it provides useful information for diagnosis and treatment planning. However, it is still a challenging task due to the irregular forms and confusing boundaries of tumors. Tumor cells thermally represent a heat source; their temperature is high compared to normal brain cells. The main aim of the present paper is to demonstrate that thermal information of brain tumors can be used to reduce the false positive and false negative results of segmentation performed on MRI images. The Pennes bioheat equation was solved numerically using the finite difference method to simulate the temperature distribution in the brain; Gaussian noise of ±2% was added to the simulated temperatures. A Canny edge detector was used to detect tumor contours from the calculated thermal map, as the calculated temperature showed a large gradient at tumor contours. The proposed method is compared to the Chan–Vese level set segmentation method applied to T1 contrast-enhanced and FLAIR MRI images of brains containing tumors with ground truth. The method was tested on four different phantom patients, considering different tumor volumes and locations, and on 50 synthetic patients taken from BraTS 2012 and BraTS 2013. The results in all patients showed significant improvement over level set segmentation, with on average 0.8% of the tumor area and 2.48% of healthy tissue differentiated using thermal images alone. We conclude that tumor contour delineation based on tumor temperature changes can be exploited to reinforce and enhance segmentation algorithms in MRI diagnostics.
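The physical idea, diffusion plus perfusion pulling tissue back toward arterial temperature, with an extra metabolic source inside the tumor, can be sketched with a 1-D explicit finite-difference relaxation. All coefficients and grid sizes below are illustrative placeholders, not the paper's physiological values:

```python
def simulate_temperature(n=41, tumor=range(17, 24), steps=5000):
    """1-D explicit finite-difference relaxation of a simplified
    Pennes-style balance: diffusion, blood perfusion pulling tissue
    back to arterial temperature, and metabolic heat inside the tumor."""
    t_a = 37.0                       # arterial / baseline temperature (deg C)
    alpha, w, q = 0.25, 0.02, 0.05   # diffusion, perfusion, tumor source terms
    temp = [t_a] * n
    for _ in range(steps):
        new = temp[:]
        for i in range(1, n - 1):
            diffusion = alpha * (temp[i - 1] - 2 * temp[i] + temp[i + 1])
            perfusion = w * (t_a - temp[i])
            source = q if i in tumor else 0.0
            new[i] = temp[i] + diffusion + perfusion + source
        temp = new
    return temp

temp = simulate_temperature()
# the tumor contour sits where the temperature gradient is largest,
# which is what the edge-detection step exploits on the thermal map
grad = [abs(temp[i + 1] - temp[i]) for i in range(len(temp) - 1)]
left_edge = max(range(len(grad) // 2), key=lambda i: grad[i])
```

The paper works on 2-D/3-D maps with a Canny detector; the 1-D gradient argmax above only illustrates why the contour shows up as a temperature edge.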
36
Schlegl T, Seeböck P, Waldstein SM, Langs G, Schmidt-Erfurth U. f-AnoGAN: Fast unsupervised anomaly detection with generative adversarial networks. Med Image Anal 2019; 54:30-44. [PMID: 30831356 DOI: 10.1016/j.media.2019.01.010] [Citation(s) in RCA: 198] [Impact Index Per Article: 39.6] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/05/2018] [Revised: 11/24/2018] [Accepted: 01/30/2019] [Indexed: 01/11/2023]
Abstract
Obtaining expert labels in clinical imaging is difficult since exhaustive annotation is time-consuming. Furthermore, not all possibly relevant markers may be known and sufficiently well described a priori to even guide annotation. While supervised learning yields good results if expert-labeled training data are available, the visual variability, and thus the vocabulary of findings we can detect and exploit, is limited to the annotated lesions. Here, we present fast AnoGAN (f-AnoGAN), a generative adversarial network (GAN) based unsupervised learning approach capable of identifying anomalous images and image segments that can serve as imaging biomarker candidates. We build a generative model of healthy training data, and propose and evaluate a fast technique for mapping new data to the GAN's latent space. The mapping is based on a trained encoder, and anomalies are detected via a combined anomaly score built from the building blocks of the trained model, comprising a discriminator feature residual error and an image reconstruction error. In experiments on optical coherence tomography data, we compare the proposed method with alternative approaches and provide comprehensive empirical evidence that f-AnoGAN outperforms them, yielding high anomaly detection accuracy. In addition, a visual Turing test with two retina experts showed that the generated images are indistinguishable from real normal retinal OCT images. The f-AnoGAN code is available at https://github.com/tSchlegl/f-AnoGAN.
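The combined score has a simple shape: a reconstruction error in image space plus a weighted discriminator-feature residual. A sketch with toy stand-in models (the lambdas below are illustrative placeholders, not trained networks, and `kappa` is the weighting term):

```python
def anomaly_score(x, generator, encoder, disc_features, kappa=1.0):
    """f-AnoGAN-style combined score: image reconstruction error plus a
    weighted discriminator feature residual. The model arguments are
    stand-ins for the trained networks described in the paper."""
    x_hat = generator(encoder(x))
    image_error = sum((a - b) ** 2 for a, b in zip(x, x_hat))
    feature_error = sum(
        (a - b) ** 2 for a, b in zip(disc_features(x), disc_features(x_hat)))
    return image_error + kappa * feature_error

# toy stand-ins: 'healthy' two-pixel images lie on the manifold y = x, and
# the generator can only produce such images, so anomalous (off-manifold)
# inputs reconstruct poorly and receive a high score
encoder = lambda x: (x[0] + x[1]) / 2     # map image to a latent code
generator = lambda z: [z, z]              # decode latent code to an image
disc_features = lambda x: [x[0] - x[1]]   # feature sensitive to the anomaly

score_healthy = anomaly_score([2.0, 2.0], generator, encoder, disc_features)
score_anomaly = anomaly_score([2.0, 6.0], generator, encoder, disc_features)
```

Because the generator never saw anomalies during training, a large score flags the input as lying off the healthy-data manifold, which is the core of the paper's unsupervised detection idea.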
Affiliation(s)
- Thomas Schlegl
- Computational Imaging Research Lab, Department of Biomedical Imaging and Image-guided Therapy, Medical University of Vienna, Vienna, Austria; Christian Doppler Laboratory for Ophthalmic Image Analysis, Department of Ophthalmology and Optometry, Medical University Vienna, Austria. https://www.github.com/tSchlegl/f-AnoGAN
- Philipp Seeböck
- Computational Imaging Research Lab, Department of Biomedical Imaging and Image-guided Therapy, Medical University of Vienna, Vienna, Austria; Christian Doppler Laboratory for Ophthalmic Image Analysis, Department of Ophthalmology and Optometry, Medical University Vienna, Austria
- Sebastian M Waldstein
- Christian Doppler Laboratory for Ophthalmic Image Analysis, Department of Ophthalmology and Optometry, Medical University Vienna, Austria
- Georg Langs
- Computational Imaging Research Lab, Department of Biomedical Imaging and Image-guided Therapy, Medical University of Vienna, Vienna, Austria. http://www.cir.meduniwien.ac.at
- Ursula Schmidt-Erfurth
- Christian Doppler Laboratory for Ophthalmic Image Analysis, Department of Ophthalmology and Optometry, Medical University Vienna, Austria
37
Jung F, Kazemifar S, Bartha R, Rajakumar N. Semiautomated Assessment of the Anterior Cingulate Cortex in Alzheimer's Disease. J Neuroimaging 2019; 29:376-382. [PMID: 30640412 DOI: 10.1111/jon.12598] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.6] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/30/2018] [Revised: 12/14/2018] [Accepted: 01/07/2019] [Indexed: 10/27/2022] Open
Abstract
BACKGROUND AND PURPOSE The anterior cingulate cortex (ACC) is involved in several cognitive processes, including executive function. Degenerative changes of the ACC are consistently seen in Alzheimer's disease (AD). However, volumetric changes specific to the ACC in AD are not clear because of the difficulty in segmenting this region. The objectives of the current study were to develop a precise, high-throughput approach for measuring ACC volumes and to correlate ACC volume with cognitive function in AD. METHODS Structural T1-weighted magnetic resonance images of AD patients (n = 47) and age-matched controls (n = 47) at baseline and at 24 months were obtained from the Alzheimer's disease neuroimaging initiative (ADNI) database and studied using a custom-designed semiautomated segmentation protocol. RESULTS ACC volumes obtained using the semiautomated protocol were highly correlated with values obtained from manual segmentation (r = .98), and the semiautomated protocol was considerably faster. When comparing AD and control subjects, no significant differences were observed in baseline ACC volumes or in the change in ACC volumes over 24 months using the two segmentation methods. Moreover, the change in ACC volume over 24 months did not correlate with the change in mini-mental state examination scores. CONCLUSIONS Our results indicate that the proposed semiautomated segmentation protocol is reliable for determining ACC volume in neurodegenerative conditions including AD.
Affiliation(s)
- Flora Jung
- Department of Physiology, Western University, London, ON, Canada
- Samaneh Kazemifar
- Department of Medical Biophysics, Western University, London, ON, Canada
- Robert Bartha
- Department of Medical Biophysics, Western University, London, ON, Canada
38
Naceur MB, Saouli R, Akil M, Kachouri R. Fully Automatic Brain Tumor Segmentation using End-To-End Incremental Deep Neural Networks in MRI images. COMPUTER METHODS AND PROGRAMS IN BIOMEDICINE 2018; 166:39-49. [PMID: 30415717 DOI: 10.1016/j.cmpb.2018.09.007] [Citation(s) in RCA: 67] [Impact Index Per Article: 11.2] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 06/20/2018] [Revised: 09/16/2018] [Accepted: 09/18/2018] [Indexed: 06/09/2023]
Abstract
BACKGROUND AND OBJECTIVE Efficient brain tumor segmentation in multi-sequence MR images, obtained as early as possible, enables early clinical diagnosis, treatment and follow-up. The aim of this study is to develop a new deep learning model for the segmentation of brain tumors. The proposed models are used to segment glioblastomas (both high and low grade), which vary in size, shape and contrast and can appear anywhere in the brain. METHODS In this paper, we propose three end-to-end incremental deep convolutional neural network models for fully automatic brain tumor segmentation. Our proposed models differ from other CNN-based models that follow a trial-and-error process without any guided approach to selecting suitable hyper-parameters. Moreover, we adopt ensemble learning to design a more efficient model. To address the difficulty of training CNN models, we propose a new training strategy that takes into account the most influential hyper-parameters by bounding and setting a roof on them to accelerate training. RESULTS Our experimental results are reported on the BraTS 2017 dataset. The proposed deep learning models achieve state-of-the-art performance without any post-processing operations, reaching an average Dice score of 0.88 over the complete tumor region. Moreover, an efficient design with GPU implementation allows our three deep learning models to produce brain segmentation results in 20.87 s on average. CONCLUSIONS The proposed deep learning models are effective for the segmentation of brain tumors and obtain highly accurate results. Moreover, they could help physician experts reduce diagnosis time.
Affiliation(s)
- Mostefa Ben Naceur
- Smart Computer Sciences Laboratory, Department of Computer Sciences, University of Biskra, Biskra, Algeria; Gaspard Monge Computer Science Laboratory, ESIEE-Paris, University Paris-Est Marne-la-Vallée, France
- Rachida Saouli
- Smart Computer Sciences Laboratory, Department of Computer Sciences, University of Biskra, Biskra, Algeria
- Mohamed Akil
- Gaspard Monge Computer Science Laboratory, ESIEE-Paris, University Paris-Est Marne-la-Vallée, France
- Rostom Kachouri
- Gaspard Monge Computer Science Laboratory, ESIEE-Paris, University Paris-Est Marne-la-Vallée, France
39
Ibrahim RW, Hasan AM, Jalab HA. A new deformable model based on fractional Wright energy function for tumor segmentation of volumetric brain MRI scans. COMPUTER METHODS AND PROGRAMS IN BIOMEDICINE 2018; 163:21-28. [PMID: 30119853 DOI: 10.1016/j.cmpb.2018.05.031] [Citation(s) in RCA: 6] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 02/11/2018] [Revised: 05/17/2018] [Accepted: 05/24/2018] [Indexed: 06/08/2023]
Abstract
BACKGROUND AND OBJECTIVES MRI brain tumor segmentation is challenging due to variations in the size, shape, location and intensity features of tumors. Active contours have been applied in MRI segmentation due to their ability to produce regions with boundaries. The main difficulty in active contour segmentation is boundary tracking, which is controlled by minimizing an energy function. Hence, this study proposes a novel fractional Wright function (FWF) as an energy-minimization technique to improve the performance of the active contour without edges method. METHOD In this study, we implement the FWF as an energy-minimization function to replace the standard gradient-descent minimization in the Chan-Vese segmentation technique. The proposed FWF is used to find the boundaries of an object by controlling the inside and outside values of the contour. Objective evaluation is used to distinguish the differences between the processed segmented images and the ground truth using a set of statistical parameters: true positive, true negative, false positive and false negative. RESULTS The FWF energy minimization was successfully implemented on the BraTS 2013 image dataset. The achieved overall average sensitivity of the brain tumor segmentation was 94.8 ± 4.7%. CONCLUSIONS The results demonstrate that the proposed FWF method minimized the energy function better than the gradient-descent method used in the original three-dimensional active contour without edges (3DACWE) method.
Affiliation(s)
- Rabha W Ibrahim
- Faculty of Computer Science and Information Technology, University of Malaya, Kuala Lumpur 50603, Malaysia
- Ali M Hasan
- College of Medicine, Al-Nahrain University, Baghdad, Iraq
- Hamid A Jalab
- Faculty of Computer Science and Information Technology, University of Malaya, Kuala Lumpur 50603, Malaysia
40
A New Optimized Thresholding Method Using Ant Colony Algorithm for MR Brain Image Segmentation. J Digit Imaging 2018; 32:162-174. [PMID: 30091112 DOI: 10.1007/s10278-018-0111-x] [Citation(s) in RCA: 20] [Impact Index Per Article: 3.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/28/2022] Open
Abstract
Image segmentation is considered one of the most fundamental tasks in image processing applications. Segmentation of magnetic resonance (MR) brain images is also an important pre-processing step, since many neural disorders are associated with changes in brain volume. As a result, brain image segmentation can be considered an essential step toward automated diagnosis or interpretation of regions of interest, which can help surgical planning, analysis of brain volume changes in different tissue types, and identification of neural disorders. In many neural disorders, such as Alzheimer's disease and epilepsy, determining the volume of different brain tissues (i.e., white matter, gray matter, and cerebrospinal fluid) has proven effective in quantifying disease. The traditional way of segmenting brain images relies on a medical expert manually determining the boundaries of the regions of interest. It may seem that manual segmentation of MR brain images by an expert is the first and best choice; however, this method has proved time-consuming and challenging. Hence, numerous MR brain image segmentation methods with different degrees of complexity and accuracy have been introduced recently. Our work proposes an optimized thresholding method for segmentation of MR brain images using the biologically inspired ant colony algorithm, in which textural features are adopted as heuristic information. In addition, post-processing image enhancement based on homogeneity is performed to achieve better performance. The empirical results on axial T1-weighted MR brain images demonstrate accuracy competitive with traditional meta-heuristic methods, K-means, and expectation maximization.
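The ant colony here searches for gray-level thresholds; its pheromone machinery and textural heuristics are beyond a short sketch, but the kind of objective such a search can maximize, an Otsu-style between-class variance, is compact. The exhaustive scan below is our stand-in for the colony, and the toy histogram is illustrative:

```python
def between_class_variance(pixels, t):
    """Otsu-style objective: weighted squared distance between the mean
    intensities of the two classes produced by threshold t."""
    low = [p for p in pixels if p <= t]
    high = [p for p in pixels if p > t]
    if not low or not high:
        return 0.0   # degenerate split: everything on one side
    w0, w1 = len(low) / len(pixels), len(high) / len(pixels)
    m0, m1 = sum(low) / len(low), sum(high) / len(high)
    return w0 * w1 * (m0 - m1) ** 2

# bimodal toy intensities: dark background vs. bright tissue
pixels = [10, 12, 11, 13, 10, 200, 205, 198, 202]
# exhaustive scan over gray levels stands in for the ant colony search
best_t = max(range(256), key=lambda t: between_class_variance(pixels, t))
segmented = [int(p > best_t) for p in pixels]
```

A meta-heuristic such as ant colony optimization pays off when the search space grows (multi-level thresholds, texture-aware objectives), where exhaustive scanning becomes infeasible.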
41
Roy S, Maji P. An accurate and robust skull stripping method for 3-D magnetic resonance brain images. Magn Reson Imaging 2018; 54:46-57. [PMID: 30076947 DOI: 10.1016/j.mri.2018.07.014] [Citation(s) in RCA: 8] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/23/2018] [Revised: 07/03/2018] [Accepted: 07/27/2018] [Indexed: 01/18/2023]
Abstract
Segmentation of the brain region from an MR volume is an essential prerequisite for any automatic medical image processing application, as it increases both the speed and accuracy of diagnosis manifold. Due to material heterogeneity and the resolution limitations of imaging devices, MR images exhibit graded tissue intensity within the brain region, along with a blurring effect at the brain surface. In spite of these artifacts, all tissues of the brain region in an MR image are perceived as hanging together within the brain. In this regard, this paper introduces an accurate and robust skull stripping algorithm, termed ARoSi, based on a novel concept of rough-fuzzy connectedness introduced here. In the proposed method, the connectedness of a voxel to the brain region is determined by its degree of belongingness to the brain region as well as its degree of adjacency to the brain. Moreover, the ARoSi algorithm considers the local spatial information of the voxel of interest, which reduces the effect of noise and, in turn, helps to improve performance. Finally, the performance of ARoSi, along with a comparison with other state-of-the-art algorithms, is demonstrated on T1-weighted 3-D brain MR volumes obtained from four different data sets. The experiments show that the performance of ARoSi is consistent across all four data sets, including diseased ones, and that it achieves the highest mean Dice coefficient, 0.951, over all volumes among six existing brain extraction methods.
Affiliation(s)
- Shaswati Roy
- Biomedical Imaging and Bioinformatics Lab, Machine Intelligence Unit, Indian Statistical Institute, 203 B. T. Road, Kolkata 700108, West Bengal, India
- Pradipta Maji
- Biomedical Imaging and Bioinformatics Lab, Machine Intelligence Unit, Indian Statistical Institute, 203 B. T. Road, Kolkata 700108, West Bengal, India
42
Binaghi E, Pedoia V, Balbi S. Meningioma and peritumoral edema segmentation of preoperative MRI brain scans. COMPUTER METHODS IN BIOMECHANICS AND BIOMEDICAL ENGINEERING: IMAGING & VISUALIZATION 2018. [DOI: 10.1080/21681163.2016.1250108] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 10/20/2022]
Affiliation(s)
- Elisabetta Binaghi
- Dipartimento di Scienze Teoriche e Applicate, Università degli Studi dell’Insubria, Varese, Italy
- Valentina Pedoia
- Department of Radiology and Biomedical Imaging, University of California, San Francisco, CA, USA
- Sergio Balbi
- Dipartimento di Biotecnologie e Scienze della Vita, Università degli Studi dell’Insubria, Varese, Italy
43
Albi A, Meola A, Zhang F, Kahali P, Rigolo L, Tax CMW, Ciris PA, Essayed WI, Unadkat P, Norton I, Rathi Y, Olubiyi O, Golby AJ, O'Donnell LJ. Image Registration to Compensate for EPI Distortion in Patients with Brain Tumors: An Evaluation of Tract-Specific Effects. J Neuroimaging 2018; 28:173-182. [PMID: 29319208 PMCID: PMC5844838 DOI: 10.1111/jon.12485] [Citation(s) in RCA: 12] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/16/2017] [Revised: 10/07/2017] [Accepted: 10/23/2017] [Indexed: 12/21/2022] Open
Abstract
BACKGROUND AND PURPOSE Diffusion magnetic resonance imaging (dMRI) provides preoperative maps of neurosurgical patients' white matter tracts, but these maps suffer from echo-planar imaging (EPI) distortions caused by magnetic field inhomogeneities. In clinical neurosurgical planning, these distortions are generally not corrected and thus contribute to the uncertainty of fiber tracking. Multiple image processing pipelines have been proposed for image-registration-based EPI distortion correction in healthy subjects. In this article, we perform the first comparison of such pipelines in neurosurgical patient data. METHODS Five pipelines were tested in a retrospective clinical dMRI dataset of 9 patients with brain tumors. Pipelines differed in the choice of fixed and moving images and the similarity metric for image registration. Distortions were measured in two important tracts for neurosurgery, the arcuate fasciculus and corticospinal tracts. RESULTS Significant differences in distortion estimates were found across processing pipelines. The most successful pipeline used dMRI baseline and T2-weighted images as inputs for distortion correction. This pipeline gave the most consistent distortion estimates across image resolutions and brain hemispheres. CONCLUSIONS Quantitative results of mean tract distortions on the order of 1-2 mm are in line with other recent studies, supporting the potential need for distortion correction in neurosurgical planning. Novel results include significantly higher distortion estimates in the tumor hemisphere and greater effect of image resolution choice on results in the tumor hemisphere. Overall, this study demonstrates possible pitfalls and indicates that care should be taken when implementing EPI distortion correction in clinical settings.
Affiliation(s)
- Angela Albi
- Brigham and Women's Hospital, Harvard Medical School, Boston, MA
- Center for Mind/Brain Sciences (CIMEC), University of Trento, Rovereto, Italy
- Antonio Meola
- Brigham and Women's Hospital, Harvard Medical School, Boston, MA
- Fan Zhang
- Brigham and Women's Hospital, Harvard Medical School, Boston, MA
- Pegah Kahali
- Brigham and Women's Hospital, Harvard Medical School, Boston, MA
- Laura Rigolo
- Brigham and Women's Hospital, Harvard Medical School, Boston, MA
- Chantal M W Tax
- Brigham and Women's Hospital, Harvard Medical School, Boston, MA
- Image Sciences Institute, University Medical Center Utrecht, Utrecht, Netherlands
- Pelin Aksit Ciris
- Brigham and Women's Hospital, Harvard Medical School, Boston, MA
- Department of Biomedical Engineering, Akdeniz University, Antalya, Turkey
- Walid I Essayed
- Brigham and Women's Hospital, Harvard Medical School, Boston, MA
- Prashin Unadkat
- Brigham and Women's Hospital, Harvard Medical School, Boston, MA
- Isaiah Norton
- Brigham and Women's Hospital, Harvard Medical School, Boston, MA
- Yogesh Rathi
- Brigham and Women's Hospital, Harvard Medical School, Boston, MA
- Olutayo Olubiyi
- Brigham and Women's Hospital, Harvard Medical School, Boston, MA
44
Ben Abdallah M, Blonski M, Wantz-Mézières S, Gaudeau Y, Taillandier L, Moureaux JM. Relevance of two manual tumour volume estimation methods for diffuse low-grade gliomas. Healthc Technol Lett 2018. [PMID: 29515811 PMCID: PMC5830888 DOI: 10.1049/htl.2017.0013] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/03/2022] Open
Abstract
Management of diffuse low-grade glioma (DLGG) relies extensively on tumour volume estimation from MRI datasets. Two methods are currently used clinically to define this volume: the commonly used three-diameters solution and the more rarely used software-based volume reconstruction from manual segmentations. The authors first studied inter-practitioner variability of software-based manual segmentations on DLGG MRI datasets. A panel of 13 experts of various specialties and years of experience delineated 12 DLGG MRI scans. A statistical analysis of the segmented tumour volumes and pixels indicated that the individual practitioner, years of experience and specialty seem to have no significant impact on the segmentation of DLGGs. This is an interesting result, as it had not yet been demonstrated, and it encourages cross-disciplinary collaboration. Their second study investigated the impact of the three-diameters method and of the software-based volume reconstruction from manual segmentations on estimated tumour volume, relying on the same dataset and on a participant from the first study. Comparing the average tumour volumes acquired by software reconstruction from manual segmentations with those obtained by the three-diameters method, the authors found no statistically significant difference between the two approaches. These results apply to non-operated, easily delineable DLGGs and are particularly interesting for time-consuming CUBE MRIs. Nonetheless, the three-diameters method has limitations in estimating tumour volumes for resected DLGGs, for which the software-based manual segmentation method becomes more appropriate.
Affiliation(s)
- Marie Blonski
- Centre de Recherche en Automatique de Nancy (CRAN), Nancy, France; Neuro-Oncology Unit, Nancy University Hospital, Nancy, France
- Yann Gaudeau
- Centre de Recherche en Automatique de Nancy (CRAN), Nancy, France; Université de Strasbourg, Strasbourg, France
- Luc Taillandier
- Centre de Recherche en Automatique de Nancy (CRAN), Nancy, France; Neuro-Oncology Unit, Nancy University Hospital, Nancy, France
|
45
|
Automatic Brain Tumor Segmentation Using Cascaded Anisotropic Convolutional Neural Networks. BRAINLESION: GLIOMA, MULTIPLE SCLEROSIS, STROKE AND TRAUMATIC BRAIN INJURIES 2018. [DOI: 10.1007/978-3-319-75238-9_16] [Citation(s) in RCA: 183] [Impact Index Per Article: 30.5] [Reference Citation Analysis] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 01/02/2023]
|
46
|
Noninvasive Grading of Glioma Tumor Using Magnetic Resonance Imaging with Convolutional Neural Networks. APPLIED SCIENCES-BASEL 2017. [DOI: 10.3390/app8010027] [Citation(s) in RCA: 63] [Impact Index Per Article: 9.0] [Reference Citation Analysis] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 12/13/2022]
|
47
|
Ural B. A Computer-Based Brain Tumor Detection Approach with Advanced Image Processing and Probabilistic Neural Network Methods. J Med Biol Eng 2017. [DOI: 10.1007/s40846-017-0353-y] [Citation(s) in RCA: 11] [Impact Index Per Article: 1.6] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/29/2022]
|
48
|
Velasco-Annis C, Akhondi-Asl A, Stamm A, Warfield SK. Reproducibility of Brain MRI Segmentation Algorithms: Empirical Comparison of Local MAP PSTAPLE, FreeSurfer, and FSL-FIRST. J Neuroimaging 2017; 28:162-172. [PMID: 29134725 DOI: 10.1111/jon.12483] [Citation(s) in RCA: 32] [Impact Index Per Article: 4.6] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/22/2017] [Revised: 10/06/2017] [Accepted: 10/16/2017] [Indexed: 11/28/2022] Open
Abstract
BACKGROUND AND PURPOSE Segmentation of human brain structures is crucial for the volumetric quantification of brain disease. Advances in algorithmic approaches have led to automated techniques that save time compared to interactive methods. Recently, the utility and accuracy of template library fusion algorithms, such as Local MAP PSTAPLE (PSTAPLE), have been demonstrated, but there is little guidance regarding their reproducibility compared to single-template-based algorithms such as FreeSurfer and FSL-FIRST. METHODS Eight repeated magnetic resonance imaging scans of 20 subjects were segmented using FreeSurfer, FSL-FIRST, and PSTAPLE. We report the reproducibility of segmentation-derived volume measurements for brain structures and calculated sample size estimates for detecting hypothetical rates of tissue atrophy given the observed variances. RESULTS PSTAPLE had the most reproducible volume measurements for the hippocampus, putamen, thalamus, caudate, pallidum, amygdala, accumbens area, and cortical regions. FreeSurfer was most reproducible for the brainstem. PSTAPLE was the most accurate algorithm in terms of several metrics, including Dice's coefficient. The sample size estimates showed that a study using PSTAPLE would require tens to hundreds fewer subjects than the other algorithms to detect atrophy rates typically observed in brain disease. CONCLUSIONS PSTAPLE is a useful tool for automatic human brain segmentation due to its precision and accuracy, which enable detection of the effect sizes typically reported for neurological disorders with a substantially reduced sample size compared to the other tools we assessed. This enables randomized controlled trials to be executed with reduced cost and duration, in turn facilitating the assessment of new therapeutic interventions.
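The two quantities this comparison hinges on, overlap accuracy (Dice's coefficient) and scan-rescan reproducibility of derived volumes, can be sketched as below. This is a generic illustration of the metrics, not the paper's implementation; the toy 2-D masks are hypothetical.

```python
import numpy as np

def dice(a, b):
    """Dice overlap between two binary segmentations:
    2 * |A & B| / (|A| + |B|)."""
    a, b = np.asarray(a, bool), np.asarray(b, bool)
    denom = a.sum() + b.sum()
    return 2.0 * np.logical_and(a, b).sum() / denom if denom else 1.0

def volume_cv(volumes):
    """Coefficient of variation of repeated volume measurements of the
    same structure; lower values mean a more reproducible algorithm."""
    v = np.asarray(volumes, float)
    return v.std(ddof=1) / v.mean()

# Two 6x6 squares offset by one pixel: intersection is 5x5 = 25 pixels.
a = np.zeros((10, 10), bool); a[2:8, 2:8] = True
b = np.zeros((10, 10), bool); b[3:9, 3:9] = True
print(dice(a, b))  # 2*25/(36+36) ~= 0.694
```

The sample-size argument follows directly: the variance entering a power calculation for detecting an atrophy rate is driven by `volume_cv`, so a more reproducible algorithm needs fewer subjects for the same detectable effect.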
Collapse
Affiliation(s)
- Clemente Velasco-Annis
- Computational Radiology Laboratory, Department of Radiology, Boston Children's Hospital, and Harvard Medical School, Boston, MA
- Alireza Akhondi-Asl
- Computational Radiology Laboratory, Department of Radiology, Boston Children's Hospital, and Harvard Medical School, Boston, MA
- Aymeric Stamm
- Computational Radiology Laboratory, Department of Radiology, Boston Children's Hospital, and Harvard Medical School, Boston, MA
- Simon K Warfield
- Computational Radiology Laboratory, Department of Radiology, Boston Children's Hospital, and Harvard Medical School, Boston, MA
Collapse
|
49
|
|
50
|
Computer-based radiological longitudinal evaluation of meningiomas following stereotactic radiosurgery. Int J Comput Assist Radiol Surg 2017; 13:215-228. [PMID: 29032421 DOI: 10.1007/s11548-017-1673-7] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.4] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/27/2017] [Accepted: 10/01/2017] [Indexed: 10/18/2022]
Abstract
PURPOSE Stereotactic radiosurgery (SRS) is a common treatment for intracranial meningiomas. SRS is planned on a pre-therapy gadolinium-enhanced T1-weighted MRI scan (Gd-T1w MRI) in which the meningioma contours have been delineated. Serial post-therapy Gd-T1w MRI scans are then acquired for longitudinal treatment evaluation. Accurate quantification of tumor volume change is required to evaluate treatment efficacy and to decide on treatment continuation. METHOD We present a new algorithm for the automatic segmentation and volumetric assessment of meningioma in post-therapy Gd-T1w MRI scans. The inputs are the pre- and post-therapy Gd-T1w MRI scans and the meningioma delineation in the pre-therapy scan. The output is the meningioma delineation and volume in the post-therapy scan. The algorithm uses the pre-therapy scan and its meningioma delineation to initialize an extended Chan-Vese active contour method and as a strong patient-specific intensity and shape prior for segmenting the meningioma in the post-therapy scan. The algorithm is automatic, obviates the need for independent tumor localization and segmentation initialization, and applies the same tumor delineation criteria in both the pre- and post-therapy scans. RESULTS Our experimental results on retrospective pre- and post-therapy scans with a total of 32 meningiomas with volumes ranging from 0.4 to 26.5 cm³ yield a Dice coefficient of [Formula: see text]% with respect to ground-truth delineations in post-therapy scans created by two clinicians. These results indicate a high correspondence to the ground-truth delineations. CONCLUSION Our algorithm yields more reliable and accurate tumor volume change measurements than other stand-alone segmentation methods. It may be a useful tool for quantitative evaluation of meningioma prognosis after SRS.
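The longitudinal quantity at the heart of this evaluation, relative tumor volume change between the pre- and post-therapy segmentations, can be sketched as follows. This is a generic illustration of the measurement, assuming voxel-count volumes from binary masks; it is not the paper's segmentation algorithm, and the toy masks are hypothetical.

```python
import numpy as np

def volume_cm3(mask, voxel_size_mm):
    """Tumor volume from a binary mask and the physical voxel size."""
    return mask.sum() * np.prod(voxel_size_mm) / 1000.0  # mm^3 -> cm^3

def relative_volume_change(pre_mask, post_mask, voxel_size_mm):
    """Percent volume change between pre- and post-therapy
    segmentations; negative values indicate tumor shrinkage."""
    v_pre = volume_cm3(pre_mask, voxel_size_mm)
    v_post = volume_cm3(post_mask, voxel_size_mm)
    return 100.0 * (v_post - v_pre) / v_pre

# Toy example on a 1 mm grid: tumor shrinks from 8.0 to ~4.1 cm^3.
pre = np.zeros((40, 40, 40), bool);  pre[5:25, 5:25, 5:25] = True
post = np.zeros((40, 40, 40), bool); post[5:21, 5:21, 5:21] = True
change = relative_volume_change(pre, post, (1.0, 1.0, 1.0))  # -48.8 %
```

Because this measurement depends entirely on the two delineations, applying the same delineation criteria to both time points, as the algorithm does, removes a systematic source of error that independent segmentations would introduce.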
Collapse
|