1. Nishimaki K, Onda K, Ikuta K, Chotiyanonta J, Uchida Y, Mori S, Iyatomi H, Oishi K. OpenMAP-T1: A Rapid Deep-Learning Approach to Parcellate 280 Anatomical Regions to Cover the Whole Brain. Hum Brain Mapp 2024; 45:e70063. [PMID: 39523990; PMCID: PMC11551626; DOI: 10.1002/hbm.70063]
Abstract
This study introduces OpenMAP-T1, a deep-learning-based method for rapid and accurate whole-brain parcellation in T1-weighted brain MRI, which aims to overcome the limitations of conventional normalization-to-atlas-based approaches and multi-atlas label-fusion (MALF) techniques. Brain image parcellation is a fundamental process in neuroscientific and clinical research, enabling a detailed analysis of specific cerebral regions. Normalization-to-atlas-based methods have been employed for this task, but they face limitations due to variations in brain morphology, especially in pathological conditions. MALF techniques improved parcellation accuracy and robustness to variations in brain morphology, but at the cost of high computational demand and lengthy processing times. OpenMAP-T1 integrates several convolutional neural network models across six phases: preprocessing, cropping, skull-stripping, parcellation, hemisphere segmentation, and final merging. This process involves standardizing MRI images, isolating the brain tissue, and parcellating it into 280 anatomical structures that cover the whole brain, including detailed gray and white matter structures, while simplifying the parcellation process and incorporating robust training to handle various scan types and conditions. OpenMAP-T1 was validated on the Johns Hopkins University atlas library and eight openly available resources, including real-world clinical images, and demonstrated robustness across datasets with variations in scanner types, magnetic field strengths, and image processing techniques such as defacing. Compared with existing methods, OpenMAP-T1 significantly reduced the processing time per image from several hours to less than 90 s without compromising accuracy. It was particularly effective in handling images with intensity inhomogeneity and varying head positions, conditions commonly seen in clinical settings. The adaptability of OpenMAP-T1 to a wide range of MRI datasets and its robustness to various scan conditions highlight its potential as a versatile tool in neuroimaging.
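
The six-phase design described above can be read as a chain of model calls. The following is a minimal sketch of that flow only; every stage function here is a hypothetical placeholder and does not reproduce the actual OpenMAP-T1 implementation.

```python
import numpy as np

# Hypothetical stand-ins for the trained models of each phase (illustrative only).
def preprocess(vol):        return (vol - vol.mean()) / (vol.std() + 1e-6)   # intensity standardization
def crop(vol):              return vol                                        # locate and crop the head region
def skull_strip(vol):       return vol * (vol > vol.mean())                   # crude threshold placeholder for brain extraction
def parcellate(vol):        return np.zeros(vol.shape, dtype=np.int16)        # 280-label map (placeholder)
def split_hemispheres(vol): return np.zeros(vol.shape, dtype=np.int8)         # left/right mask (placeholder)
def merge(labels, hemis):   return labels                                     # combine into the final parcellation

def run_pipeline(t1_volume):
    """Chain the six phases: preprocessing, cropping, skull-stripping,
    parcellation, hemisphere segmentation, and merging."""
    x = preprocess(t1_volume)
    x = crop(x)
    x = skull_strip(x)
    labels = parcellate(x)
    hemispheres = split_hemispheres(x)
    return merge(labels, hemispheres)

parcellation = run_pipeline(np.random.rand(182, 218, 182).astype(np.float32))
```
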
Affiliation(s)
- Kei Nishimaki
  - The Russell H. Morgan Department of Radiology and Radiological Science, The Johns Hopkins University School of Medicine, Baltimore, Maryland, USA
  - Department of Applied Informatics, Graduate School of Science and Engineering, Hosei University, Tokyo, Japan
- Kengo Onda
  - The Russell H. Morgan Department of Radiology and Radiological Science, The Johns Hopkins University School of Medicine, Baltimore, Maryland, USA
- Kumpei Ikuta
  - Department of Applied Informatics, Graduate School of Science and Engineering, Hosei University, Tokyo, Japan
- Jill Chotiyanonta
  - The Russell H. Morgan Department of Radiology and Radiological Science, The Johns Hopkins University School of Medicine, Baltimore, Maryland, USA
- Yuto Uchida
  - The Russell H. Morgan Department of Radiology and Radiological Science, The Johns Hopkins University School of Medicine, Baltimore, Maryland, USA
- Susumu Mori
  - The Russell H. Morgan Department of Radiology and Radiological Science, The Johns Hopkins University School of Medicine, Baltimore, Maryland, USA
- Hitoshi Iyatomi
  - Department of Applied Informatics, Graduate School of Science and Engineering, Hosei University, Tokyo, Japan
- Kenichi Oishi
  - The Russell H. Morgan Department of Radiology and Radiological Science, The Johns Hopkins University School of Medicine, Baltimore, Maryland, USA
  - The Richman Family Precision Medicine Center of Excellence in Alzheimer's Disease, Johns Hopkins University School of Medicine, Baltimore, Maryland, USA
  - Department of Neurology, The Johns Hopkins University School of Medicine, Baltimore, Maryland, USA

2. Li L, Zhang S, Wang H, Zhang F, Dong B, Yang J, Liu X. Multi-scale modeling to investigate the effects of transcranial magnetic stimulation on morphologically-realistic neuron with depression. Cogn Neurodyn 2024; 18:3139-3156. [PMID: 39555260; PMCID: PMC11564609; DOI: 10.1007/s11571-024-10142-9]
Abstract
Transcranial magnetic stimulation (TMS) is a non-invasive neuromodulation technique that activates or inhibits neuronal activity and thereby regulates excitability. This technique has demonstrated potential in the treatment of neuropsychiatric disorders such as depression. However, the effect of TMS on neurons with different severities of depression is still unclear, limiting the development of efficient, personalized parameters for clinical application. In this study, a multi-scale computational model was developed to investigate and quantify the differences in neuronal responses to TMS at different degrees of depression. The microscale neuronal models we constructed represent the hippocampal CA1 region in rats under normal conditions and with varying severities of depression (mild, moderate, and major depressive disorder). These models were then coupled to a macroscopic model of the TMS-induced electric field (E-field) in a rat head comprising multiple tissue types. Our results demonstrate alterations in neuronal membrane potential and calcium concentration across levels of depression severity. As depression severity increases, the peak membrane potential and polarization degree of the neuronal soma and dendrites gradually decline, while the peak calcium concentration decreases and the time to peak is prolonged. Concurrently, the E-field threshold and amplification coefficient gradually rise, indicating increasing difficulty in activating neurons affected by depression. This study offers novel insights into the mechanisms of magnetic stimulation in depression treatment using multi-scale computational models. It underscores the importance of considering depression severity in treatment strategies and promises to help optimize TMS therapeutic approaches.
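
As a toy illustration of how a macroscopic field waveform can drive a microscale membrane model, the sketch below integrates a single leaky point-neuron equation with an E-field coupling term. It is a deliberate simplification of the morphologically realistic multi-compartment models used in the study, and all parameters are arbitrary assumptions.

```python
import numpy as np

def simulate_point_neuron(e_field, dt=1e-5, tau=0.01, coupling=1e-3, v_rest=-70e-3):
    """Euler integration of dV/dt = (-(V - V_rest) + coupling * E(t)) / tau,
    a point-neuron caricature of E-field-to-membrane coupling."""
    v = np.empty(len(e_field))
    v[0] = v_rest
    for i in range(1, len(e_field)):
        dv = (-(v[i - 1] - v_rest) + coupling * e_field[i - 1]) / tau
        v[i] = v[i - 1] + dt * dv
    return v

# Damped biphasic pulse standing in for a TMS-induced E-field waveform (arbitrary units).
t = np.arange(0, 5e-3, 1e-5)
pulse = np.sin(2 * np.pi * 2.5e3 * t) * np.exp(-t / 1e-3)
membrane_trace = simulate_point_neuron(pulse)
```
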
Affiliation(s)
- Licong Li
  - Key Laboratory of Digital Medical Engineering of Hebei Province, Hebei University, Baoding, China
  - College of Electronic Information Engineering, Hebei University, Baoding, China
- Shuaiyang Zhang
  - College of Electronic Information Engineering, Hebei University, Baoding, China
- Hongbo Wang
  - College of Electronic Information Engineering, Hebei University, Baoding, China
- Fukuan Zhang
  - College of Electronic Information Engineering, Hebei University, Baoding, China
- Bin Dong
  - Key Laboratory of Digital Medical Engineering of Hebei Province, Hebei University, Baoding, China
  - College of Electronic Information Engineering, Hebei University, Baoding, China
  - Affiliated Hospital of Hebei University, Baoding, China
- Jianli Yang
  - Key Laboratory of Digital Medical Engineering of Hebei Province, Hebei University, Baoding, China
  - College of Electronic Information Engineering, Hebei University, Baoding, China
- Xiuling Liu
  - Key Laboratory of Digital Medical Engineering of Hebei Province, Hebei University, Baoding, China
  - College of Electronic Information Engineering, Hebei University, Baoding, China

3. Singh S, Singh R, Kumar S, Suri A. A Narrative Review on 3-Dimensional Visualization Techniques in Neurosurgical Education, Simulation, and Planning. World Neurosurg 2024; 187:46-64. [PMID: 38580090; DOI: 10.1016/j.wneu.2024.03.134]
Abstract
BACKGROUND: High-fidelity visualization of anatomical organs is crucial for neurosurgical education, simulation, and planning. This becomes much more important for minimally invasive neurosurgical procedures. Realistic anatomical visualization can allow resident surgeons to learn visual cues and orient themselves with the complex 3-dimensional (3D) anatomy. Achieving full fidelity in 3D medical visualization is an active area of research; however, the prior reviews focus on the application area and lack the underlying technical principles. Accordingly, the present study attempts to bridge this gap by providing a narrative review of the techniques used for 3D visualization.
METHODS: We conducted a literature review on 3D medical visualization technology from 2018 to 2023 using the PubMed and Google Scholar search engines. The cross-referenced manuscripts were extensively studied to find literature that discusses technology relevant to 3D medical visualization. We also compiled and ran software applications that were accessible to us in order to better understand them.
RESULTS: We present the underlying fundamental technology used in 3D medical visualization in the context of neurosurgical education, simulation, and planning. Further, we discuss and categorize a few important applications based on the 3D visualization techniques they use.
CONCLUSIONS: The visualization of virtual human organs has not yet achieved a level of realism close to reality. This gap is largely due to the interdisciplinary nature of this research, population diversity, and validation complexities. With the advancements in computational resources and automation of 3D visualization pipelines, next-gen applications may offer enhanced medical 3D visualization fidelity.
Affiliation(s)
- Sukhraj Singh
  - Amar Nath and Shashi Khosla School of Information Technology, Indian Institute of Technology Delhi, New Delhi, India
- Ramandeep Singh
  - Department of Neurosurgery, All India Institute of Medical Sciences, New Delhi, India
- Subodh Kumar
  - Department of Computer Science and Engineering, Indian Institute of Technology Delhi, New Delhi, India
- Ashish Suri
  - Department of Neurosurgery, All India Institute of Medical Sciences, New Delhi, India

4. Sugino T, Kin T, Saito N, Nakajima Y. Improved segmentation of basal ganglia from MR images using convolutional neural network with crossover-typed skip connection. Int J Comput Assist Radiol Surg 2024; 19:433-442. [PMID: 37982960; DOI: 10.1007/s11548-023-03015-9]
Abstract
PURPOSE: Accurate and automatic segmentation of the basal ganglia from magnetic resonance (MR) images is important for the diagnosis and treatment of various brain disorders. However, basal ganglia segmentation is a challenging task because of class imbalance and the unclear boundaries among basal ganglia anatomical structures. Thus, we present an encoder-decoder convolutional neural network (CNN)-based method for improved segmentation of the basal ganglia, focusing on the skip connections that determine the segmentation performance of encoder-decoder CNNs. We also aim to reveal the effect of skip connections on the segmentation of basal ganglia structures with unclear boundaries.
METHODS: We used encoder-decoder CNNs with the following five patterns of skip connections: no skip connections, a full-resolution horizontal skip connection, horizontal skip connections, vertical skip connections, and crossover-typed skip connections (the proposed method). We compared and evaluated the performance of the CNNs in a basal ganglia segmentation experiment using T1-weighted MR brain images of 79 patients.
RESULTS: The experimental results showed that skip connections at each scale level help CNNs acquire multi-scale image features, that vertical skip connections contribute to acquiring finer image features for the segmentation of smaller anatomical structures with more blurred boundaries, and that crossover-typed skip connections, a combination of horizontal and vertical skip connections, provide better segmentation accuracy.
CONCLUSION: This paper investigated the effect of skip connections on basal ganglia segmentation and revealed that crossover-typed skip connections may be effective for improving the segmentation of basal ganglia structures affected by class imbalance and unclear boundaries.
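
To make the skip-connection vocabulary concrete, here is a small PyTorch sketch that is not the authors' network: the horizontal skip passes an encoder feature to the decoder at the same scale, while the vertical skip resamples a feature from a different scale level into the decoder; feeding the first decoder block with both is one way to read the "crossover" combination.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def conv_block(in_ch, out_ch):
    return nn.Sequential(nn.Conv2d(in_ch, out_ch, 3, padding=1), nn.ReLU(inplace=True))

class TinyCrossoverUNet(nn.Module):
    """Three-level encoder-decoder: dec1 receives the upsampled decoder path,
    a horizontal skip (enc1, same scale) and a vertical skip (enc2, one scale deeper)."""
    def __init__(self, in_ch=1, n_classes=4):
        super().__init__()
        self.enc1 = conv_block(in_ch, 16)          # full resolution
        self.enc2 = conv_block(16, 32)             # 1/2 resolution
        self.enc3 = conv_block(32, 64)             # 1/4 resolution (bottleneck)
        self.dec2 = conv_block(64 + 32, 32)        # upsampled bottleneck + horizontal skip enc2
        self.dec1 = conv_block(32 + 16 + 32, 16)   # upsampled dec2 + horizontal enc1 + vertical enc2
        self.head = nn.Conv2d(16, n_classes, 1)

    def forward(self, x):
        e1 = self.enc1(x)
        e2 = self.enc2(F.max_pool2d(e1, 2))
        e3 = self.enc3(F.max_pool2d(e2, 2))
        up2 = F.interpolate(e3, scale_factor=2, mode="nearest")
        d2 = self.dec2(torch.cat([up2, e2], dim=1))
        up1 = F.interpolate(d2, scale_factor=2, mode="nearest")
        vert = F.interpolate(e2, scale_factor=2, mode="nearest")   # vertical (cross-scale) skip
        d1 = self.dec1(torch.cat([up1, e1, vert], dim=1))
        return self.head(d1)

logits = TinyCrossoverUNet()(torch.randn(1, 1, 64, 64))   # (1, 4, 64, 64)
```
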
Affiliation(s)
- Takaaki Sugino
  - Department of Biomedical Informatics, Institute of Biomaterials and Bioengineering, Tokyo Medical and Dental University, Tokyo, Japan
- Taichi Kin
  - Department of Neurosurgery, Graduate School of Medicine, The University of Tokyo, Tokyo, Japan
- Nobuhito Saito
  - Department of Neurosurgery, Graduate School of Medicine, The University of Tokyo, Tokyo, Japan
- Yoshikazu Nakajima
  - Department of Biomedical Informatics, Institute of Biomaterials and Bioengineering, Tokyo Medical and Dental University, Tokyo, Japan

5. Kaur A, Kaur L, Singh A. GA-UNet: UNet-based framework for segmentation of 2D and 3D medical images applicable on heterogeneous datasets. Neural Comput Appl 2021. [DOI: 10.1007/s00521-021-06134-z]

6. Multi-Scale Squeeze U-SegNet with Multi Global Attention for Brain MRI Segmentation. Sensors 2021; 21:s21103363. [PMID: 34066042; PMCID: PMC8151599; DOI: 10.3390/s21103363]
Abstract
In this paper, we propose a multi-scale feature extraction with novel attention-based convolutional learning using the U-SegNet architecture to achieve segmentation of brain tissue from a magnetic resonance image (MRI). Although convolutional neural networks (CNNs) show enormous growth in medical image segmentation, there are some drawbacks with the conventional CNN models. In particular, the conventional use of encoder-decoder approaches leads to the extraction of similar low-level features multiple times, causing redundant use of information. Moreover, due to inefficient modeling of long-range dependencies, each semantic class is likely to be associated with non-accurate discriminative feature representations, resulting in low accuracy of segmentation. The proposed global attention module refines the feature extraction and improves the representational power of the convolutional neural network. Moreover, the attention-based multi-scale fusion strategy can integrate local features with their corresponding global dependencies. The integration of fire modules in both the encoder and decoder paths can significantly reduce the computational complexity owing to fewer model parameters. The proposed method was evaluated on publicly accessible datasets for brain tissue segmentation. The experimental results show that our proposed model achieves segmentation accuracies of 94.81% for cerebrospinal fluid (CSF), 95.54% for gray matter (GM), and 96.33% for white matter (WM) with a noticeably reduced number of learnable parameters. Our study shows better segmentation performance, improving the prediction accuracy by 2.5% in terms of dice similarity index while achieving a 4.5 times reduction in the number of learnable parameters compared to previously developed U-SegNet based segmentation approaches. This demonstrates that the proposed approach can achieve reliable and precise automatic segmentation of brain MRI images.
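
The fire modules mentioned above follow the SqueezeNet pattern: a narrow 1x1 "squeeze" convolution feeds parallel 1x1 and 3x3 "expand" convolutions whose outputs are concatenated, which is where most of the parameter savings come from. Below is a generic PyTorch sketch of that building block, not the paper's exact layer configuration.

```python
import torch
import torch.nn as nn

class FireModule(nn.Module):
    """SqueezeNet-style fire module: a 1x1 'squeeze' conv followed by parallel
    1x1 and 3x3 'expand' convs whose outputs are concatenated."""
    def __init__(self, in_ch, squeeze_ch, expand_ch):
        super().__init__()
        self.squeeze = nn.Conv2d(in_ch, squeeze_ch, kernel_size=1)
        self.expand1x1 = nn.Conv2d(squeeze_ch, expand_ch, kernel_size=1)
        self.expand3x3 = nn.Conv2d(squeeze_ch, expand_ch, kernel_size=3, padding=1)
        self.act = nn.ReLU(inplace=True)

    def forward(self, x):
        s = self.act(self.squeeze(x))
        return torch.cat([self.act(self.expand1x1(s)), self.act(self.expand3x3(s))], dim=1)

out = FireModule(64, squeeze_ch=16, expand_ch=32)(torch.randn(1, 64, 32, 32))  # (1, 64, 32, 32)
```
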

7.
Abstract
Even though convolutional neural networks (CNNs) are driving progress in medical image segmentation, standard models still have some drawbacks. First, the use of multi-scale approaches, i.e., encoder-decoder architectures, leads to a redundant use of information, where similar low-level features are extracted multiple times at multiple scales. Second, long-range feature dependencies are not efficiently modeled, resulting in non-optimal discriminative feature representations associated with each semantic class. In this paper we attempt to overcome these limitations with the proposed architecture, by capturing richer contextual dependencies based on the use of guided self-attention mechanisms. This approach is able to integrate local features with their corresponding global dependencies, as well as highlight interdependent channel maps in an adaptive manner. Further, the additional loss between different modules guides the attention mechanisms to neglect irrelevant information and focus on more discriminant regions of the image by emphasizing relevant feature associations. We evaluate the proposed model in the context of semantic segmentation on three different datasets: abdominal organs, cardiovascular structures and brain tumors. A series of ablation experiments support the importance of these attention modules in the proposed architecture. In addition, compared to other state-of-the-art segmentation networks our model yields better segmentation performance, increasing the accuracy of the predictions while reducing the standard deviation. This demonstrates the efficiency of our approach to generate precise and reliable automatic segmentations of medical images. Our code is made publicly available at: https://github.com/sinAshish/Multi-Scale-Attention.
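
For orientation, the sketch below shows a generic position (spatial) self-attention block of the kind this abstract describes: each location is re-weighted by its similarity to every other location and added back through a learnable residual weight. It follows the common formulation rather than the exact modules released in the linked repository.

```python
import torch
import torch.nn as nn

class PositionAttention(nn.Module):
    """Self-attention over spatial positions: every location is re-weighted by its
    similarity to all other locations, then added back to the input feature map."""
    def __init__(self, channels, reduction=8):
        super().__init__()
        self.query = nn.Conv2d(channels, channels // reduction, 1)
        self.key = nn.Conv2d(channels, channels // reduction, 1)
        self.value = nn.Conv2d(channels, channels, 1)
        self.gamma = nn.Parameter(torch.zeros(1))   # learnable residual weight

    def forward(self, x):
        b, c, h, w = x.shape
        q = self.query(x).flatten(2).transpose(1, 2)          # (B, HW, C')
        k = self.key(x).flatten(2)                            # (B, C', HW)
        attn = torch.softmax(torch.bmm(q, k), dim=-1)         # (B, HW, HW)
        v = self.value(x).flatten(2)                          # (B, C, HW)
        out = torch.bmm(v, attn.transpose(1, 2)).view(b, c, h, w)
        return self.gamma * out + x

y = PositionAttention(32)(torch.randn(1, 32, 16, 16))   # same shape as the input
```
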

8. Ermiş E, Jungo A, Poel R, Blatti-Moreno M, Meier R, Knecht U, Aebersold DM, Fix MK, Manser P, Reyes M, Herrmann E. Fully automated brain resection cavity delineation for radiation target volume definition in glioblastoma patients using deep learning. Radiat Oncol 2020; 15:100. [PMID: 32375839; PMCID: PMC7204033; DOI: 10.1186/s13014-020-01553-z]
Abstract
Background: Automated brain tumor segmentation methods are computational algorithms that yield tumor delineation from, in this case, multimodal magnetic resonance imaging (MRI). We present an automated segmentation method and its results for the resection cavity (RC) in glioblastoma multiforme (GBM) patients using deep learning (DL) technologies.
Methods: Post-operative T1w (with and without contrast), T2w, and fluid-attenuated inversion recovery MRI studies of 30 GBM patients were included. Three radiation oncologists manually delineated the RC to obtain a reference segmentation. We developed a DL cavity segmentation method, which utilizes all four MRI sequences and the reference segmentation to learn to perform RC delineations. We evaluated the segmentation method in terms of Dice coefficient (DC) and estimated volume measurements.
Results: Median DCs of the three radiation oncologists were 0.85 (interquartile range [IQR]: 0.08), 0.84 (IQR: 0.07), and 0.86 (IQR: 0.07). The DCs of the automatic segmentation compared with the three raters were 0.83 (IQR: 0.14), 0.81 (IQR: 0.12), and 0.81 (IQR: 0.13), which was significantly lower than the DC among raters (chi-square = 11.63, p = 0.04). We did not detect a statistically significant difference in the measured RC volumes between the raters and the automated method (Kruskal-Wallis test: chi-square = 1.46, p = 0.69). The main sources of error were signal inhomogeneity and similar intensity patterns between the cavity and brain tissues.
Conclusions: The proposed DL approach yields promising results for automated RC segmentation in this proof-of-concept study. Compared with human experts, the DCs are still subpar.
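
For reference, the Dice coefficient (DC) used above is twice the overlap between two binary masks divided by their total foreground; a plain NumPy version is sketched below.

```python
import numpy as np

def dice_coefficient(pred, ref):
    """Dice similarity coefficient between two binary masks (1.0 = perfect overlap)."""
    pred = pred.astype(bool)
    ref = ref.astype(bool)
    intersection = np.logical_and(pred, ref).sum()
    denom = pred.sum() + ref.sum()
    return 2.0 * intersection / denom if denom > 0 else 1.0
```
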
Affiliation(s)
- Ekin Ermiş
  - Department of Radiation Oncology, Inselspital, Bern University Hospital, and University of Bern, Freiburgstrasse, 3010, Bern, Switzerland
- Alain Jungo
  - Insel Data Science Center, Inselspital, Bern University Hospital, Bern, Switzerland
  - ARTORG Center for Biomedical Research, University of Bern, Bern, Switzerland
- Robert Poel
  - Department of Radiation Oncology, Inselspital, Bern University Hospital, and University of Bern, Freiburgstrasse, 3010, Bern, Switzerland
- Marcela Blatti-Moreno
  - Department of Radiation Oncology, Inselspital, Bern University Hospital, and University of Bern, Freiburgstrasse, 3010, Bern, Switzerland
- Raphael Meier
  - Institute for Diagnostic and Interventional Neuroradiology, Inselspital, Bern University Hospital, and University of Bern, Bern, Switzerland
- Urspeter Knecht
  - Institute for Diagnostic and Interventional Neuroradiology, Inselspital, Bern University Hospital, and University of Bern, Bern, Switzerland
- Daniel M Aebersold
  - Department of Radiation Oncology, Inselspital, Bern University Hospital, and University of Bern, Freiburgstrasse, 3010, Bern, Switzerland
- Michael K Fix
  - Division of Medical Radiation Physics and Department of Radiation Oncology, Inselspital, Bern University Hospital, and University of Bern, Bern, Switzerland
- Peter Manser
  - Division of Medical Radiation Physics and Department of Radiation Oncology, Inselspital, Bern University Hospital, and University of Bern, Bern, Switzerland
- Mauricio Reyes
  - Insel Data Science Center, Inselspital, Bern University Hospital, Bern, Switzerland
  - ARTORG Center for Biomedical Research, University of Bern, Bern, Switzerland
- Evelyn Herrmann
  - Department of Radiation Oncology, Inselspital, Bern University Hospital, and University of Bern, Freiburgstrasse, 3010, Bern, Switzerland

9. Safavian N, Batouli SAH, Oghabian MA. An automatic level set method for hippocampus segmentation in MR images. Computer Methods in Biomechanics and Biomedical Engineering: Imaging & Visualization 2019. [DOI: 10.1080/21681163.2019.1706054]
Affiliation(s)
- Nazanin Safavian
  - Neuroimaging and Analysis Group (NIAG), Tehran University of Medical Sciences, Tehran, Iran
  - Department of Biomedical Engineering, Science and Research Branch, Islamic Azad University, Tehran, Iran
- Seyed Amir Hossein Batouli
  - Neuroimaging and Analysis Group (NIAG), Tehran University of Medical Sciences, Tehran, Iran
  - Department of Neuroscience and Addiction Studies, School of Advanced Technologies in Medicine, Tehran University of Medical Sciences, Tehran, Iran
- Mohammad Ali Oghabian
  - Neuroimaging and Analysis Group (NIAG), Tehran University of Medical Sciences, Tehran, Iran
  - Medical Physics and Biomedical Engineering Department, Tehran University of Medical Sciences, Tehran, Iran

10. Mostapha M, Styner M. Role of deep learning in infant brain MRI analysis. Magn Reson Imaging 2019; 64:171-189. [PMID: 31229667; PMCID: PMC6874895; DOI: 10.1016/j.mri.2019.06.009]
Abstract
Deep learning algorithms, and in particular convolutional networks, have shown tremendous success in medical image analysis applications, though relatively few methods have been applied to infant MRI data due to numerous inherent challenges such as inhomogeneous tissue appearance across the image, considerable image intensity variability across the first year of life, and a low signal-to-noise setting. This paper presents methods addressing these challenges in two selected applications, specifically infant brain tissue segmentation at the isointense stage and presymptomatic disease prediction in neurodevelopmental disorders. Corresponding methods are reviewed and compared, and open issues are identified, namely restrictions from small data sizes, class imbalance problems, and lack of interpretability of the resulting deep learning solutions. We discuss how existing solutions can be adapted to approach these issues as well as how generative models seem to be a particularly strong contender to address them.
Affiliation(s)
- Mahmoud Mostapha
  - Department of Computer Science, University of North Carolina at Chapel Hill, Chapel Hill, NC 27599, United States of America
- Martin Styner
  - Department of Computer Science, University of North Carolina at Chapel Hill, Chapel Hill, NC 27599, United States of America
  - Neuro Image Research and Analysis Lab, Department of Psychiatry, University of North Carolina at Chapel Hill, NC 27599, United States of America

11. Deep CNN ensembles and suggestive annotations for infant brain MRI segmentation. Comput Med Imaging Graph 2019; 79:101660. [PMID: 31785402; DOI: 10.1016/j.compmedimag.2019.101660]
Abstract
Precise 3D segmentation of infant brain tissues is an essential step towards comprehensive volumetric studies and quantitative analysis of early brain development. However, computing such segmentations is very challenging, especially for 6-month infant brains, due to poor image quality, among other difficulties inherent to infant brain MRI, e.g., the isointense contrast between white and gray matter and the severe partial volume effect due to small brain sizes. This study investigates the problem with an ensemble of semi-dense fully convolutional neural networks (CNNs), which employs T1-weighted and T2-weighted MR images as input. We demonstrate that the ensemble agreement is highly correlated with the segmentation errors. Therefore, our method provides measures that can guide local user corrections. To the best of our knowledge, this work is the first ensemble of 3D CNNs for suggesting annotations within images. Our quasi-dense architecture allows the efficient propagation of gradients during training, while limiting the number of parameters, requiring one order of magnitude fewer parameters than popular medical image segmentation networks such as 3D U-Net (Çiçek et al.). We also investigated the impact that early or late fusion of multiple image modalities might have on the performance of deep architectures. We report evaluations of our method on the public data of the MICCAI iSEG-2017 Challenge on 6-month infant brain MRI segmentation, and show very competitive results among 21 teams, ranking first or second in most metrics.
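
One common way to turn ensemble agreement into an annotation-suggestion signal is to compute, per voxel, the entropy of the mean softmax output across ensemble members and flag high-entropy voxels for manual review. The NumPy sketch below illustrates this generic idea rather than the authors' exact measure.

```python
import numpy as np

def ensemble_uncertainty(probs):
    """probs: (M, C, ...) softmax outputs from M models over C classes.
    Returns the per-voxel entropy of the mean prediction (higher = more disagreement)."""
    mean_p = probs.mean(axis=0)                                    # (C, ...)
    return -(mean_p * np.log(mean_p + 1e-12)).sum(axis=0)

# 5 models, 4 classes, a 32x32 slice of random (but valid) probabilities.
probs = np.random.dirichlet(np.ones(4), size=(5, 32, 32)).transpose(0, 3, 1, 2)
uncertainty_map = ensemble_uncertainty(probs)                      # (32, 32)
```
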

12. Lee S, Stewart J, Lee Y, Myrehaug S, Sahgal A, Ruschin M, Tseng CL. Improved dosimetric accuracy with semi-automatic contour propagation of organs-at-risk in glioblastoma patients undergoing chemoradiation. J Appl Clin Med Phys 2019; 20:45-53. [PMID: 31670900; PMCID: PMC6909175; DOI: 10.1002/acm2.12758]
Abstract
Background: We study the changes in organs-at-risk (OARs) morphology as contoured on serial MRIs during chemoradiation therapy (CRT) of glioblastoma (GBM). The dosimetric implication of assuming non-deformable OAR changes and the accuracy and feasibility of semi-automatic OAR contour propagation are investigated.
Methods: Fourteen GBM patients who were treated with adjuvant CRT for GBM prospectively underwent MRIs on fractions 0 (i.e., planning), 10, 20, and 1 month post last fraction of CRT. Three sets of OAR contours, (a) manual, (b) rigidly registered (static), and (c) semi-automatically propagated, were compared using the Dice similarity coefficient (DSC) and Hausdorff distance (HD). Dosimetric impact was determined by comparing the minimum dose to the 0.03 cc receiving the highest dose (D0.03cc) on a clinically approved reference, non-adapted radiation therapy plan.
Results: The DSC between the manual contours and the static contours decreased significantly over time (fraction 10: [mean ± 1 SD] 0.78 ± 0.17, post 1 month: 0.76 ± 0.17, P = 0.02), while the HD (P = 0.74) and the difference in D0.03cc did not change significantly (P = 0.51). Using the manual contours as reference, compared to static contours, propagated contours have a significantly higher DSC (propagated: [mean ± 1 SD] 0.81 ± 0.15, static: 0.77 ± 0.17, P < 0.001), lower HD (propagated: 3.77 ± 1.8 mm, static: 3.96 ± 1.6 mm, P = 0.002), and a significantly lower absolute difference in D0.03cc (propagated: 101 ± 159 cGy, static: 136 ± 243 cGy, P = 0.019).
Conclusions: Nonrigid changes in OARs over time lead to different maximum doses than planned. By using semi-automatic OAR contour propagation, OARs are more accurately delineated on subsequent fractions, with corresponding improved accuracy of the reported dose to the OARs.
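
The two geometric scores reported above can be computed directly from binary masks; the sketch below symmetrizes SciPy's directed Hausdorff distance over the foreground voxel coordinates (it assumes both masks are non-empty). Distances are in voxel units unless the coordinates are first scaled by the voxel spacing.

```python
import numpy as np
from scipy.spatial.distance import directed_hausdorff

def symmetric_hausdorff(mask_a, mask_b):
    """Symmetric Hausdorff distance between two binary masks, in voxel units."""
    pts_a = np.argwhere(mask_a)   # coordinates of foreground voxels
    pts_b = np.argwhere(mask_b)
    d_ab = directed_hausdorff(pts_a, pts_b)[0]
    d_ba = directed_hausdorff(pts_b, pts_a)[0]
    return max(d_ab, d_ba)
```
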
Affiliation(s)
- Sangjune Lee
  - Department of Radiation Oncology, Sunnybrook Odette Cancer Centre, University of Toronto, Toronto, ON, Canada
- James Stewart
  - Department of Radiation Oncology, Sunnybrook Odette Cancer Centre, University of Toronto, Toronto, ON, Canada
- Young Lee
  - Department of Radiation Oncology, Sunnybrook Odette Cancer Centre, University of Toronto, Toronto, ON, Canada
- Sten Myrehaug
  - Department of Radiation Oncology, Sunnybrook Odette Cancer Centre, University of Toronto, Toronto, ON, Canada
- Arjun Sahgal
  - Department of Radiation Oncology, Sunnybrook Odette Cancer Centre, University of Toronto, Toronto, ON, Canada
- Mark Ruschin
  - Department of Radiation Oncology, Sunnybrook Odette Cancer Centre, University of Toronto, Toronto, ON, Canada
- Chia-Lin Tseng
  - Department of Radiation Oncology, Sunnybrook Odette Cancer Centre, University of Toronto, Toronto, ON, Canada

13. Tabrizi PR, Obeid R, Cerrolaza JJ, Penn A, Mansoor A, Linguraru MG. Automatic Segmentation of Neonatal Ventricles from Cranial Ultrasound for Prediction of Intraventricular Hemorrhage Outcome. Annual International Conference of the IEEE Engineering in Medicine and Biology Society 2019; 2018:3136-3139. [PMID: 30441059; DOI: 10.1109/embc.2018.8513097]
Abstract
Intraventricular hemorrhage (IVH) followed by post-hemorrhagic hydrocephalus (PHH) in premature neonates is one of the recognized causes of brain injury in newborns. Cranial ultrasound (CUS) is a noninvasive imaging tool that has been widely used to diagnose and monitor neonates with IVH. In our previous work, we showed the potential of quantitative morphological analysis of the lateral ventricles from early CUS to predict the PHH outcome in neonates with IVH. In this paper, we first present a new automatic method for ventricle segmentation in 2D CUS images. We detect the brain bounding box and brain midline to estimate the anatomical positions of the ventricles and correct for brain rotation. The ventricles are then segmented using a combination of fuzzy c-means, phase congruency, and active contour algorithms. Finally, we compare this fully automated approach with our previous work for the prediction of the PHH outcome on a set of 2D CUS images taken from 60 premature neonates with different IVH grades. Experimental results showed that our method could segment ventricles with an average Dice similarity coefficient of 0.8 ± 0.12. In addition, our fully automated method could predict the outcome of PHH based on the extracted ventricle regions with accuracy similar to our previous semi-automated approach (83% vs. 84%, respectively, p-value = 0.8). This method has the potential to standardize the evaluation of CUS images and can be a helpful clinical tool for early monitoring and treatment of IVH and PHH.
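
Fuzzy c-means, the first component of the segmentation combination described above, alternates between updating soft cluster memberships and intensity centroids. The bare-bones NumPy version below works on a 1-D intensity vector and is illustrative only; it omits the phase-congruency and active-contour stages.

```python
import numpy as np

def fuzzy_c_means(x, n_clusters=3, m=2.0, n_iter=50, seed=0):
    """Plain fuzzy c-means on a 1-D array of intensities (illustrative only)."""
    rng = np.random.default_rng(seed)
    x = x.reshape(-1, 1).astype(float)                 # (N, 1) feature vectors
    u = rng.random((x.shape[0], n_clusters))           # random initial memberships
    u /= u.sum(axis=1, keepdims=True)
    for _ in range(n_iter):
        um = u ** m
        centers = (um.T @ x) / um.sum(axis=0)[:, None]          # (C, 1) fuzzy centroids
        d = np.abs(x - centers.T) + 1e-9                        # (N, C) distances
        inv = d ** (-2.0 / (m - 1.0))
        u = inv / inv.sum(axis=1, keepdims=True)                # membership update
    return u, centers

memberships, centers = fuzzy_c_means(np.random.rand(1000))
```
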

14. Pagnozzi AM, Fripp J, Rose SE. Quantifying deep grey matter atrophy using automated segmentation approaches: A systematic review of structural MRI studies. Neuroimage 2019; 201:116018. [PMID: 31319182; DOI: 10.1016/j.neuroimage.2019.116018]
Abstract
The deep grey matter (DGM) nuclei of the brain play a crucial role in learning, behaviour, cognition, movement and memory. Although automated segmentation strategies can provide insight into the impact of multiple neurological conditions affecting these structures, such as Multiple Sclerosis (MS), Huntington's disease (HD), Alzheimer's disease (AD), Parkinson's disease (PD) and Cerebral Palsy (CP), there are a number of technical challenges limiting an accurate automated segmentation of the DGM. Namely, the insufficient contrast of T1 sequences to completely identify the boundaries of these structures, as well as the presence of iso-intense white matter lesions or extensive tissue loss caused by brain injury. Therefore in this systematic review, 269 eligible studies were analysed and compared to determine the optimal approaches for addressing these technical challenges. The automated approaches used among the reviewed studies fall into three broad categories, atlas-based approaches focusing on the accurate alignment of atlas priors, algorithmic approaches which utilise intensity information to a greater extent, and learning-based approaches that require an annotated training set. Studies that utilise freely available software packages such as FIRST, FreeSurfer and LesionTOADS were also eligible, and their performance compared. Overall, deep learning approaches achieved the best overall performance, however these strategies are currently hampered by the lack of large-scale annotated data. Improving model generalisability to new datasets could be achieved in future studies with data augmentation and transfer learning. Multi-atlas approaches provided the second-best performance overall, and may be utilised to construct a "silver standard" annotated training set for deep learning. To address the technical challenges, providing robustness to injury can be improved by using multiple channels, highly elastic diffeomorphic transformations such as LDDMM, and by following atlas-based approaches with an intensity driven refinement of the segmentation, which has been done with the Expectation Maximisation (EM) and level sets methods. Accounting for potential lesions should be achieved with a separate lesion segmentation approach, as in LesionTOADS. Finally, to address the issue of limited contrast, R2*, T2* and QSM sequences could be used to better highlight the DGM due to its higher iron content. Future studies could look to additionally acquire these sequences by retaining the phase information from standard structural scans, or alternatively acquiring these sequences for only a training set, allowing models to learn the "improved" segmentation from T1-sequences alone.
Affiliation(s)
- Alex M Pagnozzi
  - CSIRO Health and Biosecurity, The Australian e-Health Research Centre, Brisbane, Australia
- Jurgen Fripp
  - CSIRO Health and Biosecurity, The Australian e-Health Research Centre, Brisbane, Australia
- Stephen E Rose
  - CSIRO Health and Biosecurity, The Australian e-Health Research Centre, Brisbane, Australia

15. Lin X, Li X. Image Based Brain Segmentation: From Multi-Atlas Fusion to Deep Learning. Curr Med Imaging 2019; 15:443-452. [DOI: 10.2174/1573405614666180817125454]
Abstract
Background: This review aims to trace the development of algorithms for brain tissue and structure segmentation in MRI images.
Discussion: Starting from the results of the Grand Challenges on brain tissue and structure segmentation held at Medical Image Computing and Computer-Assisted Intervention (MICCAI), this review analyses the development of the algorithms and discusses the shift from multi-atlas label fusion to deep learning. The intrinsic characteristics of the winning algorithms of the Grand Challenges from 2012 to 2018 are analysed and their results are compared carefully.
Conclusion: Although deep learning has achieved higher rankings in the challenges, it has not yet met expectations in terms of accuracy. More effective and specialized work should be done in the future.
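
For the multi-atlas side of that comparison, the simplest label-fusion rule is a per-voxel majority vote over atlas label maps already registered to the target image; a NumPy sketch follows (practical MALF pipelines add deformable registration and weighted or statistical fusion).

```python
import numpy as np

def majority_vote_fusion(labelmaps):
    """labelmaps: (M, X, Y, Z) integer label volumes warped to the target space.
    Returns the per-voxel most frequent label (simple multi-atlas label fusion)."""
    labelmaps = np.asarray(labelmaps)
    n_labels = labelmaps.max() + 1
    # Count votes per label across atlases, then take the winning label per voxel.
    votes = np.stack([(labelmaps == k).sum(axis=0) for k in range(n_labels)], axis=0)
    return votes.argmax(axis=0)

fused = majority_vote_fusion(np.random.randint(0, 5, size=(7, 16, 16, 16)))
```
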
Affiliation(s)
- Xiangbo Lin
  - Faculty of Electronic Information and Electrical Engineering, School of Information and Communication Engineering, Dalian University of Technology, Dalian, LiaoNing Province, China
- Xiaoxi Li
  - Faculty of Electronic Information and Electrical Engineering, School of Information and Communication Engineering, Dalian University of Technology, Dalian, LiaoNing Province, China

16. Agn M, Munck Af Rosenschöld P, Puonti O, Lundemann MJ, Mancini L, Papadaki A, Thust S, Ashburner J, Law I, Van Leemput K. A modality-adaptive method for segmenting brain tumors and organs-at-risk in radiation therapy planning. Med Image Anal 2019; 54:220-237. [PMID: 30952038; PMCID: PMC6554451; DOI: 10.1016/j.media.2019.03.005]
Abstract
In this paper we present a method for simultaneously segmenting brain tumors and an extensive set of organs-at-risk for radiation therapy planning of glioblastomas. The method combines a contrast-adaptive generative model for whole-brain segmentation with a new spatial regularization model of tumor shape using convolutional restricted Boltzmann machines. We demonstrate experimentally that the method is able to adapt to image acquisitions that differ substantially from any available training data, ensuring its applicability across treatment sites; that its tumor segmentation accuracy is comparable to that of the current state of the art; and that it captures most organs-at-risk sufficiently well for radiation therapy planning purposes. The proposed method may be a valuable step towards automating the delineation of brain tumors and organs-at-risk in glioblastoma patients undergoing radiation therapy.
Affiliation(s)
- Mikael Agn
  - Department of Applied Mathematics and Computer Science, Technical University of Denmark, Denmark
- Per Munck Af Rosenschöld
  - Radiation Physics, Department of Hematology, Oncology and Radiation Physics, Skåne University Hospital, Lund, Sweden
- Oula Puonti
  - Danish Research Centre for Magnetic Resonance, Copenhagen University Hospital Hvidovre, Denmark
- Michael J Lundemann
  - Department of Oncology, Copenhagen University Hospital Rigshospitalet, Denmark
- Laura Mancini
  - Neuroradiological Academic Unit, Department of Brain Repair and Rehabilitation, UCL Institute of Neurology, University College London, UK
  - Lysholm Department of Neuroradiology, National Hospital for Neurology and Neurosurgery, UCLH NHS Foundation Trust, UK
- Anastasia Papadaki
  - Neuroradiological Academic Unit, Department of Brain Repair and Rehabilitation, UCL Institute of Neurology, University College London, UK
  - Lysholm Department of Neuroradiology, National Hospital for Neurology and Neurosurgery, UCLH NHS Foundation Trust, UK
- Steffi Thust
  - Neuroradiological Academic Unit, Department of Brain Repair and Rehabilitation, UCL Institute of Neurology, University College London, UK
  - Lysholm Department of Neuroradiology, National Hospital for Neurology and Neurosurgery, UCLH NHS Foundation Trust, UK
- John Ashburner
  - Wellcome Centre for Human Neuroimaging, UCL Institute of Neurology, University College London, UK
- Ian Law
  - Department of Clinical Physiology, Nuclear Medicine and PET, Copenhagen University Hospital Rigshospitalet, Denmark
- Koen Van Leemput
  - Department of Applied Mathematics and Computer Science, Technical University of Denmark, Denmark
  - Athinoula A. Martinos Center for Biomedical Imaging, Massachusetts General Hospital, Harvard Medical School, USA

18. Jacob J, Durand T, Feuvret L, Mazeron JJ, Delattre JY, Hoang-Xuan K, Psimaras D, Douzane H, Ribeiro M, Capelle L, Carpentier A, Ricard D, Maingon P. Cognitive impairment and morphological changes after radiation therapy in brain tumors: A review. Radiother Oncol 2018; 128:221-228. [PMID: 30041961; DOI: 10.1016/j.radonc.2018.05.027]
Abstract
Life expectancy of patients treated for brain tumors has lengthened owing to therapeutic improvements. Cognitive impairment has been described following brain radiotherapy, but the mechanisms leading to this adverse event remain mostly unknown. Technical evolutions aim at enhancing the therapeutic ratio, and sparing of healthy tissues has been improved using various approaches; however, few dose constraints have been established for the brain structures associated with cognitive functions. The aims of this literature review are to report the main brain areas involved in the cognitive adverse effects induced by radiotherapy as described in the literature, to better understand brain radiosensitivity, and to describe potential future improvements.
Affiliation(s)
- Julian Jacob
  - Sorbonne Université, Assistance Publique-Hôpitaux de Paris, Groupe Hospitalier Pitié-Salpêtrière-Charles Foix, Department of Radiation Oncology, France
  - Sorbonne Université, CNRS, Service de Santé des Armées, Cognition and Action Group, Paris, France
- Thomas Durand
  - Sorbonne Université, CNRS, Service de Santé des Armées, Cognition and Action Group, Paris, France
  - Sorbonne Université, Assistance Publique-Hôpitaux de Paris, Groupe Hospitalier Pitié-Salpêtrière-Charles Foix, Department of Neurology, France
- Loïc Feuvret
  - Sorbonne Université, Assistance Publique-Hôpitaux de Paris, Groupe Hospitalier Pitié-Salpêtrière-Charles Foix, Department of Radiation Oncology, France
- Jean-Jacques Mazeron
  - Sorbonne Université, Assistance Publique-Hôpitaux de Paris, Groupe Hospitalier Pitié-Salpêtrière-Charles Foix, Department of Radiation Oncology, France
- Jean-Yves Delattre
  - Sorbonne Université, Assistance Publique-Hôpitaux de Paris, Groupe Hospitalier Pitié-Salpêtrière-Charles Foix, Department of Neurology, France
  - Sorbonne Université, INSERM, CNRS, Assistance Publique-Hôpitaux de Paris, Institut du Cerveau et de la Moelle épinière, France
- Khê Hoang-Xuan
  - Sorbonne Université, Assistance Publique-Hôpitaux de Paris, Groupe Hospitalier Pitié-Salpêtrière-Charles Foix, Department of Neurology, France
  - Sorbonne Université, INSERM, CNRS, Assistance Publique-Hôpitaux de Paris, Institut du Cerveau et de la Moelle épinière, France
- Dimitri Psimaras
  - Sorbonne Université, Assistance Publique-Hôpitaux de Paris, Groupe Hospitalier Pitié-Salpêtrière-Charles Foix, Department of Neurology, France
  - Sorbonne Université, INSERM, CNRS, Assistance Publique-Hôpitaux de Paris, Institut du Cerveau et de la Moelle épinière, France
- Hassen Douzane
  - Sorbonne Université, Assistance Publique-Hôpitaux de Paris, Groupe Hospitalier Pitié-Salpêtrière-Charles Foix, Department of Neurology, France
- Monica Ribeiro
  - Sorbonne Université, CNRS, Service de Santé des Armées, Cognition and Action Group, Paris, France
  - Sorbonne Université, Assistance Publique-Hôpitaux de Paris, Groupe Hospitalier Pitié-Salpêtrière-Charles Foix, Department of Neurology, France
- Laurent Capelle
  - Sorbonne Université, Assistance Publique-Hôpitaux de Paris, Groupe Hospitalier Pitié-Salpêtrière-Charles Foix, Department of Neurosurgery, France
- Alexandre Carpentier
  - Sorbonne Université, INSERM, CNRS, Assistance Publique-Hôpitaux de Paris, Institut du Cerveau et de la Moelle épinière, France
  - Sorbonne Université, Assistance Publique-Hôpitaux de Paris, Groupe Hospitalier Pitié-Salpêtrière-Charles Foix, Department of Neurosurgery, France
- Damien Ricard
  - Sorbonne Université, CNRS, Service de Santé des Armées, Cognition and Action Group, Paris, France
  - Service de Santé des Armées, Hôpital d'Instruction des Armées Percy, Department of Neurology, Clamart, France
  - Service de Santé des Armées, Ecole du Val-de-Grâce, Paris, France
- Philippe Maingon
  - Sorbonne Université, Assistance Publique-Hôpitaux de Paris, Groupe Hospitalier Pitié-Salpêtrière-Charles Foix, Department of Radiation Oncology, France

19. Rapid fully automatic segmentation of subcortical brain structures by shape-constrained surface adaptation. Med Image Anal 2018; 46:146-161. [DOI: 10.1016/j.media.2018.03.001]

20. 3D fully convolutional networks for subcortical segmentation in MRI: A large-scale study. Neuroimage 2018; 170:456-470. [DOI: 10.1016/j.neuroimage.2017.04.039]

21. Internal and external validation of an ESTRO delineation guideline-dependent automated segmentation tool for loco-regional radiation therapy of early breast cancer. Radiother Oncol 2016; 121:424-430. [DOI: 10.1016/j.radonc.2016.09.005]

22. A review on brain structures segmentation in magnetic resonance imaging. Artif Intell Med 2016; 73:45-69. [DOI: 10.1016/j.artmed.2016.09.001]

23. Dolz J, Betrouni N, Quidet M, Kharroubi D, Leroy HA, Reyns N, Massoptier L, Vermandel M. Stacking denoising auto-encoders in a deep network to segment the brainstem on MRI in brain cancer patients: A clinical study. Comput Med Imaging Graph 2016; 52:8-18. [DOI: 10.1016/j.compmedimag.2016.03.003]