51. Zhuang Y, Liu H, Song E, Ma G, Xu X, Hung CC. APRNet: A 3D Anisotropic Pyramidal Reversible Network with Multi-modal Cross-Dimension Attention for Brain Tissue Segmentation in MR Images. IEEE J Biomed Health Inform 2021; 26:749-761. [PMID: 34197331] [DOI: 10.1109/jbhi.2021.3093932]
Abstract
Brain tissue segmentation in multi-modal magnetic resonance (MR) images is important for the clinical diagnosis of brain diseases. Due to blurred boundaries, low contrast, and intricate anatomical relationships between brain tissue regions, automatic brain tissue segmentation without prior knowledge is still challenging. This paper presents a novel 3D fully convolutional network (FCN) for brain tissue segmentation, called APRNet. In this network, we first propose a 3D anisotropic pyramidal convolutional reversible residual sequence (3DAPC-RRS) module to integrate intra-slice information with inter-slice information without significant memory consumption; secondly, we design a multi-modal cross-dimension attention (MCDA) module to automatically capture the effective information in each dimension of multi-modal images; then, we apply the 3DAPC-RRS and MCDA modules to a 3D FCN with multiple encoding streams and one decoding stream to form the overall architecture of APRNet. We evaluated APRNet on two benchmark challenges, namely MRBrainS13 and iSeg-2017. The experimental results show that APRNet yields state-of-the-art segmentation results on both benchmark challenge datasets and achieves the best segmentation performance on the cerebrospinal fluid region. Compared with other methods, our proposed approach exploits the complementary information of different modalities to segment brain tissue regions in both adult and infant MR images, achieving average Dice coefficients of 87.22% and 93.03% on the MRBrainS13 and iSeg-2017 testing data, respectively. The proposed method is beneficial for quantitative brain analysis in clinical studies, and our code is made publicly available.
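The average Dice coefficients quoted above can be reproduced from any pair of binary masks; the following is a minimal pure-Python sketch of the metric itself, not the authors' evaluation code:

```python
def dice(pred, truth):
    """Dice similarity coefficient between two flat binary masks:
    2*|P ∩ T| / (|P| + |T|), with the convention that two empty
    masks score 1.0 (perfect agreement)."""
    inter = sum(p == t == 1 for p, t in zip(pred, truth))
    total = sum(pred) + sum(truth)
    return 2.0 * inter / total if total else 1.0

pred  = [0, 1, 1, 1, 0, 0, 1, 0]
truth = [0, 1, 1, 0, 0, 1, 1, 0]
print(dice(pred, truth))  # 3 overlapping voxels, 4 + 4 total -> 0.75
```

In practice the metric is computed per tissue class (GM, WM, CSF) over the 3D volume and then averaged, which is how the percentages above are obtained.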
52. Bandyk MG, Gopireddy DR, Lall C, Balaji KC, Dolz J. MRI and CT bladder segmentation from classical to deep learning based approaches: Current limitations and lessons. Comput Biol Med 2021; 134:104472. [PMID: 34023696] [DOI: 10.1016/j.compbiomed.2021.104472]
Abstract
Precise determination and assessment of the extent of muscle invasion in bladder cancer (BC) guides proper risk stratification and personalized therapy selection. In this context, segmentation of both the bladder walls and the tumor is of pivotal importance, as it provides invaluable information for staging the primary tumor. Hence, multiregion segmentation using deep learning in patients presenting with symptoms of bladder tumors heralds a new level of staging accuracy and prediction of the tumor's biologic behavior. Nevertheless, despite the success of these models in other medical problems, progress in multiregion bladder segmentation, particularly in MRI and CT modalities, is still at a nascent stage, with just a handful of works tackling the multiregion scenario. Furthermore, most existing approaches systematically follow prior literature from other clinical problems, without casting doubt on the validity of these methods for bladder segmentation, which may present different challenges. Motivated by this, we provide an in-depth look at bladder cancer segmentation using deep learning models, highlighting the critical determinants of accurate differentiation of muscle-invasive disease, the current status of deep learning-based bladder segmentation, and the lessons and limitations of prior work.
Affiliation(s)
- Mark G Bandyk: Department of Urology, University of Florida, Jacksonville, FL, USA.
- Chandana Lall: Department of Radiology, University of Florida, Jacksonville, FL, USA.
- K C Balaji: Department of Urology, University of Florida, Jacksonville, FL, USA.
53. Learning U-Net Based Multi-Scale Features in Encoding-Decoding for MR Image Brain Tissue Segmentation. Sensors 2021; 21:3232. [PMID: 34067101] [PMCID: PMC8124734] [DOI: 10.3390/s21093232]
Abstract
Accurate brain tissue segmentation of MRI is vital for diagnosis, treatment planning, and monitoring of neurologic conditions. As an excellent convolutional neural network (CNN), U-Net is widely used in MR image segmentation because it usually generates high-precision features. However, the performance of U-Net is considerably restricted by the variable shapes of the segmented targets in MRI and the information loss of down-sampling and up-sampling operations. Therefore, we propose a novel network that introduces spatial- and channel-dimension-based multi-scale feature extractors into the encoding-decoding framework, which helps extract rich multi-scale features while highlighting the details of higher-level features in the encoding part, and recovers the corresponding localization to a higher-resolution layer in the decoding part. Concretely, we propose two information extractors to obtain multi-scale features: multi-branch pooling (MP) in the encoding part and multi-branch dense prediction (MDP) in the decoding part. Additionally, we design a new multi-branch output structure with MDP in the decoding part to form more accurate edge-preserving prediction maps by integrating the dense adjacent prediction features at different scales. Finally, the proposed method is tested on the MRBrainS13, IBSR18, and iSeg-2017 datasets. The proposed network segments MRI brain tissues with higher accuracy and outperforms the leading 2018 method on GM and CSF segmentation; it can therefore be a useful tool for diagnostic applications such as brain MRI segmentation.
54. Sun Y, Gao K, Wu Z, Li G, Zong X, Lei Z, Wei Y, Ma J, Yang X, Feng X, Zhao L, Le Phan T, Shin J, Zhong T, Zhang Y, Yu L, Li C, Basnet R, Ahmad MO, Swamy MNS, Ma W, Dou Q, Bui TD, Noguera CB, Landman B, Gotlib IH, Humphreys KL, Shultz S, Li L, Niu S, Lin W, Jewells V, Shen D, Li G, Wang L. Multi-Site Infant Brain Segmentation Algorithms: The iSeg-2019 Challenge. IEEE Trans Med Imaging 2021; 40:1363-1376. [PMID: 33507867] [PMCID: PMC8246057] [DOI: 10.1109/tmi.2021.3055428]
Abstract
To better understand early brain development in health and disorder, it is critical to accurately segment infant brain magnetic resonance (MR) images into white matter (WM), gray matter (GM), and cerebrospinal fluid (CSF). Deep learning-based methods have achieved state-of-the-art performance; however, one major limitation is that learning-based methods may suffer from the multi-site issue: models trained on a dataset from one site may not be applicable to datasets acquired from other sites with different imaging protocols/scanners. To promote methodological development in the community, the iSeg-2019 challenge (http://iseg2019.web.unc.edu) provides a set of 6-month-old infant subjects from multiple sites with different protocols/scanners for the participating methods. Training/validation subjects are from UNC (MAP), and testing subjects are from UNC/UMN (BCP), Stanford University, and Emory University. At the time of writing, 30 automatic segmentation methods had participated in iSeg-2019. In this article, we review the 8 top-ranked methods by detailing their pipelines/implementations, presenting experimental results, and evaluating performance across different sites in terms of whole brain, regions of interest, and gyral landmark curves. We further point out their limitations and possible directions for addressing the multi-site issue. We find that multi-site consistency is still an open issue. We hope that the multi-site dataset of iSeg-2019 and this review article will attract more researchers to address this challenging and critical issue in practice.
55. Ghosal P, Chowdhury T, Kumar A, Bhadra AK, Chakraborty J, Nandi D. MhURI: A Supervised Segmentation Approach to Leverage Salient Brain Tissues in Magnetic Resonance Images. Comput Methods Programs Biomed 2021; 200:105841. [PMID: 33221057] [PMCID: PMC9096474] [DOI: 10.1016/j.cmpb.2020.105841]
Abstract
BACKGROUND AND OBJECTIVES: Accurate segmentation of critical tissues from brain MRI is pivotal for characterization and quantitative pattern analysis of the human brain, and thereby for identifying the earliest signs of various neurodegenerative diseases. To date, it is in most cases done manually by radiologists. The overwhelming workload in some thickly populated nations may cause exhaustion and interruptions for doctors, which can pose a continuing threat to patient safety. A novel fusion method, called U-Net inception, based on 3D convolutions and transition layers is proposed to address this issue.
METHODS: A 3D deep learning method called Multi-headed U-Net with Residual Inception (MhURI), accompanied by a morphological gradient channel, is proposed for brain tissue segmentation; it incorporates the Residual Inception 2-Residual (RI2R) module as its basic building block. The model exploits the benefits of morphological pre-processing for structural enhancement of MR images. A multi-path data encoding pipeline is introduced on top of the U-Net backbone, which encapsulates initial global features and captures the information from each MRI modality.
RESULTS: The proposed model accomplishes encouraging results, performing adequately on established quality metrics when compared with state-of-the-art methods on two popular publicly available datasets.
CONCLUSION: The model is entirely automatic and able to segment gray matter (GM), white matter (WM), and cerebrospinal fluid (CSF) from brain MRI effectively and with sufficient accuracy. Hence, it may serve as a potential computer-aided diagnostic (CAD) tool for radiologists and other medical practitioners in their clinical diagnosis workflow.
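The morphological gradient channel mentioned in METHODS is, in the standard definition, the difference between a gray-scale dilation and erosion of the image; the paper applies it to 3D MR volumes, but the operation can be sketched in 1D with pure Python:

```python
def morph_gradient(signal, radius=1):
    """Gray-scale morphological gradient: dilation (window max) minus
    erosion (window min) over a sliding window of half-width `radius`,
    clipped at the signal edges. Responds strongly at intensity edges."""
    n = len(signal)
    out = []
    for i in range(n):
        lo, hi = max(0, i - radius), min(n, i + radius + 1)
        window = signal[lo:hi]
        out.append(max(window) - min(window))
    return out

# A step edge produces a band of high response around the boundary,
# which is why the gradient channel emphasizes tissue interfaces.
row = [0, 0, 0, 10, 10, 10]
print(morph_gradient(row))  # [0, 0, 10, 10, 0, 0]
```

In the paper this pre-processing serves as an extra input channel for structural enhancement; the window size and 3D structuring element used there are not specified in this abstract.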
Affiliation(s)
- Palash Ghosal: Department of Computer Science and Engineering, National Institute of Technology Durgapur-713209, West Bengal, India.
- Tamal Chowdhury: Department of Electronics and Communication Engineering, National Institute of Technology Durgapur-713209, West Bengal, India.
- Amish Kumar: Department of Computer Science and Engineering, National Institute of Technology Durgapur-713209, West Bengal, India.
- Ashok Kumar Bhadra: Department of Radiology, KPC Medical College and Hospital, Jadavpur, 700032, West Bengal, India.
- Jayasree Chakraborty: Department of Hepatopancreatobiliary Service, Memorial Sloan Kettering Cancer Center, New York, NY 10065, USA.
- Debashis Nandi: Department of Computer Science and Engineering, National Institute of Technology Durgapur-713209, West Bengal, India.
56. Ma J, He J, Yang X. Learning Geodesic Active Contours for Embedding Object Global Information in Segmentation CNNs. IEEE Trans Med Imaging 2021; 40:93-104. [PMID: 32897860] [DOI: 10.1109/tmi.2020.3022693]
Abstract
Most existing CNN-based segmentation methods rely on local appearances learned on the regular image grid, without considering object global information. This article aims to embed global geometric information about the object into a learning framework via classical geodesic active contours (GAC). We propose a level set function (LSF) regression network, supervised by the segmentation ground truth, the LSF ground truth, and geodesic active contours, that not only generates the segmentation probability map but also directly minimizes the GAC energy functional in an end-to-end manner. With the help of geodesic active contours, the segmentation contour, embedded in the level set function, can be globally driven towards the image boundary to obtain lower energy, and the geodesic constraint leads the segmentation result to have fewer outliers. Extensive experiments on four public datasets show that (1) compared with state-of-the-art (SOTA) learning active contour methods, our method achieves significantly better performance; (2) compared with recent SOTA methods designed for reducing boundary errors, our method also outperforms them with more accurate boundaries; (3) compared with SOTA methods on two popular multi-class segmentation challenge datasets, our method still obtains superior or competitive results in both organ and tumor segmentation tasks. Our study demonstrates that introducing global information via GAC can significantly improve segmentation performance, especially in reducing boundary errors and outliers, which is very useful in applications such as organ transplantation surgical planning and multi-modality image registration where boundary errors can be very harmful.
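The GAC energy is weighted by an edge indicator function, in the classical formulation g = 1/(1 + |∇I|²), which is near 1 in flat regions and drops toward 0 at boundaries, so that the contour's length term is cheap exactly on edges. The paper's exact variant may differ; a 1-D sketch of this standard construction is:

```python
def edge_indicator(image_row):
    """Classical GAC edge indicator g = 1 / (1 + |grad I|^2), computed
    on forward differences of a 1-D intensity profile. Small values of
    g mark edges, which is where the GAC energy lets the contour rest."""
    grads = [image_row[i + 1] - image_row[i] for i in range(len(image_row) - 1)]
    return [1.0 / (1.0 + gg * gg) for gg in grads]

row = [0.0, 0.1, 0.1, 2.0, 2.1]
g = edge_indicator(row)
print(g)  # minimum falls at index 2, where the intensity jumps by 1.9
```

In the full method this weighting enters the energy functional that the LSF regression network minimizes end-to-end; here it only illustrates why the contour is attracted to image boundaries.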
57. Chen J, Fang Z, Zhang G, Ling L, Li G, Zhang H, Wang L. Automatic brain extraction from 3D fetal MR image with deep learning-based multi-step framework. Comput Med Imaging Graph 2020; 88:101848. [PMID: 33385932] [DOI: 10.1016/j.compmedimag.2020.101848]
Abstract
Brain extraction is a fundamental prerequisite in fetal neuroimage analysis. Due to surrounding maternal tissues and unpredictable movement, brain extraction from fetal Magnetic Resonance (MR) images is a challenging task. In this paper, we propose a novel deep learning-based multi-step framework for brain extraction from 3D fetal MR images. In the first step, a global localization network estimates probability maps for brain candidates, and a connected-component labeling algorithm eliminates small erroneous components to accurately locate the candidate brain area. In the second step, a local refinement network is applied within the brain candidate area to obtain fine-grained probability maps. Final extraction results are derived by a fusion network from the two cascaded probability maps obtained in the previous steps. Experimental results demonstrate that our proposed method has superior performance compared with existing deep learning-based methods.
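The connected-component filtering step above (discarding small erroneous components from the thresholded probability map) can be sketched with a standard 4-connectivity labeling pass; this is a generic 2D illustration, not the authors' implementation, which operates on 3D volumes:

```python
from collections import deque

def keep_large_components(mask, min_size):
    """Label connected components of a binary 2-D mask (4-connectivity
    via BFS) and keep only those with at least `min_size` pixels,
    mimicking the removal of small erroneous brain candidates."""
    h, w = len(mask), len(mask[0])
    seen = [[False] * w for _ in range(h)]
    out = [[0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            if mask[y][x] and not seen[y][x]:
                comp, queue = [], deque([(y, x)])
                seen[y][x] = True
                while queue:
                    cy, cx = queue.popleft()
                    comp.append((cy, cx))
                    for ny, nx in ((cy - 1, cx), (cy + 1, cx), (cy, cx - 1), (cy, cx + 1)):
                        if 0 <= ny < h and 0 <= nx < w and mask[ny][nx] and not seen[ny][nx]:
                            seen[ny][nx] = True
                            queue.append((ny, nx))
                if len(comp) >= min_size:  # small specks are dropped
                    for cy, cx in comp:
                        out[cy][cx] = 1
    return out

mask = [[1, 1, 0, 0],
        [1, 1, 0, 1],   # the lone pixel at (1, 3) is an erroneous speck
        [0, 0, 0, 0]]
print(keep_large_components(mask, min_size=2))
```

The size threshold is an illustrative parameter; the paper does not state the criterion it uses to reject components.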
Affiliation(s)
- Jian Chen: School of Electronic, Electrical Engineering and Physics, Fujian University of Technology, Fuzhou, Fujian, 350118, China.
- Zhenghan Fang: Department of Radiology and BRIC, University of North Carolina at Chapel Hill, NC, 27517, USA.
- Guofu Zhang: Department of Radiology, Obstetrics and Gynecology Hospital, Fudan University, Shanghai, 200011, China.
- Lei Ling: Department of Radiology, Obstetrics and Gynecology Hospital, Fudan University, Shanghai, 200011, China.
- Gang Li: Department of Radiology and BRIC, University of North Carolina at Chapel Hill, NC, 27517, USA.
- He Zhang: Department of Radiology, Obstetrics and Gynecology Hospital, Fudan University, Shanghai, 200011, China.
- Li Wang: Department of Radiology and BRIC, University of North Carolina at Chapel Hill, NC, 27517, USA.
58. DIKA-Nets: Domain-invariant knowledge-guided attention networks for brain skull stripping of early developing macaques. Neuroimage 2020; 227:117649. [PMID: 33338616] [DOI: 10.1016/j.neuroimage.2020.117649]
Abstract
As non-human primates, macaques have a close phylogenetic relationship to human beings and have proven to be a valuable and widely used animal model in human neuroscience research. Accurate skull stripping (also known as brain extraction) of brain magnetic resonance imaging (MRI) is a crucial prerequisite in neuroimaging analysis of macaques. Most current skull stripping methods can achieve satisfactory results for human brains, but when applied to macaque brains, especially during early brain development, the results are often unsatisfactory. In fact, the early dynamic, regionally-heterogeneous development of macaque brains, accompanied by poor and age-related contrast between different anatomical structures, poses significant challenges for accurate skull stripping. To overcome these challenges, we propose a fully-automated framework that fuses age-specific intensity information and domain-invariant prior knowledge as guiding information for robust skull stripping of developing macaques from 0 to 36 months of age. Specifically, we generate a Signed Distance Map (SDM) and a Center of Gravity Distance Map (CGDM) based on the intermediate segmentation results as guidance. Instead of using local convolution, we fuse all information using the Dual Self-Attention Module (DSAM), which can capture global spatial and channel-dependent information of feature maps. To extensively evaluate performance, we adopt two relatively large, challenging MRI datasets from rhesus macaques and cynomolgus macaques, respectively, with a total of 361 scans from two different scanners with different imaging protocols. We perform cross-validation by using one dataset for training and the other for testing. Our method outperforms five popular brain extraction tools and three deep-learning-based methods on cross-source MRI datasets without any transfer learning.
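A Signed Distance Map of the kind used above assigns each voxel its distance to the nearest voxel of the opposite class, with the sign flipped inside the object. Real pipelines use a fast 3D Euclidean distance transform, and the paper's exact sign convention is not stated here; a brute-force 1D sketch of the idea is:

```python
def signed_distance_1d(mask):
    """1-D signed distance map: for each position, the distance to the
    nearest cell of the opposite class, negated inside the foreground.
    Assumes both classes are present (otherwise no boundary exists).
    Brute force O(n^2); real code uses a linear-time distance transform."""
    sdm = []
    for i, m in enumerate(mask):
        d = min(abs(i - j) for j, v in enumerate(mask) if v != m)
        sdm.append(-d if m else d)
    return sdm

print(signed_distance_1d([0, 0, 1, 1, 1, 0]))  # [2, 1, -1, -2, -1, 1]
```

Fed back as an extra input channel, such a map tells the network how deep inside (or far outside) the current segmentation each voxel lies, which is the guidance role the SDM plays in the framework.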
59. Ramanarayanan S, Murugesan B, Kalyanasundaram A, Prabhakaran S, Ram K, Patil S, Sivaprakasam M. MRI Super-Resolution using Laplacian Pyramid Convolutional Neural Networks with Isotropic Undecimated Wavelet Loss. Annu Int Conf IEEE Eng Med Biol Soc 2020; 2020:1584-1587. [PMID: 33018296] [DOI: 10.1109/embc44109.2020.9176100]
Abstract
The high spatial resolution of magnetic resonance images (MRI) provides rich structural details that facilitate accurate diagnosis and quantitative image analysis. However, the long acquisition time of MRI leads to patient discomfort and possible motion artifacts in the reconstructed image. Single Image Super-Resolution (SISR) using convolutional neural networks (CNNs) is an emerging trend in biomedical imaging, especially in magnetic resonance (MR) image analysis for image post-processing. An efficient choice of SISR architecture is required to achieve better-quality reconstruction. In addition, a robust choice of loss function, together with the domain in which the loss operates, plays an important role in enhancing fine structural details and removing blurring effects to form a high-resolution image. In this work, we propose a novel combined loss function consisting of an L1 Charbonnier loss in the image domain and a wavelet-domain loss, the Isotropic Undecimated Wavelet loss (IUW loss), to train the existing Laplacian Pyramid Super-Resolution CNN. The proposed loss function was evaluated on three MRI datasets, a privately collected knee MRI dataset and the publicly available Kirby21 brain and iSeg infant brain datasets, and on benchmark SISR datasets for natural images. Experimental analysis shows promising results, with better recovery of structure and improvements in qualitative metrics.
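The L1 Charbonnier loss mentioned above is a smooth surrogate for L1: sqrt(diff² + ε²), which behaves like |diff| for large errors but is differentiable at zero. A minimal sketch follows; the value of ε and the mean reduction are illustrative choices, not necessarily the paper's exact settings:

```python
import math

def charbonnier(pred, target, eps=1e-3):
    """L1 Charbonnier loss: mean of sqrt((p - t)^2 + eps^2).
    Unlike plain |p - t| it has no kink at zero, which stabilizes
    gradient-based training near perfect reconstruction."""
    return sum(math.sqrt((p - t) ** 2 + eps ** 2)
               for p, t in zip(pred, target)) / len(pred)

# Identical pixels contribute ~eps each; a large error contributes ~|error|.
print(charbonnier([0.2, 0.4, 0.9], [0.2, 0.4, 0.4]))
```

In the paper this image-domain term is summed with the wavelet-domain IUW term to form the combined training loss.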
61. Sun Y, Gao K, Niu S, Lin W, Li G, Wang L. Semi-supervised Transfer Learning for Infant Cerebellum Tissue Segmentation. Mach Learn Med Imaging (MLMI Workshop) 2020; 12436:663-673. [PMID: 33598664] [PMCID: PMC7885085] [DOI: 10.1007/978-3-030-59861-7_67]
Abstract
To characterize early cerebellum development, accurate segmentation of the cerebellum into white matter (WM), gray matter (GM), and cerebrospinal fluid (CSF) tissues is one of the most pivotal steps. However, due to the weak tissue contrast, extremely folded tiny structures, and severe partial volume effect, infant cerebellum tissue segmentation is especially challenging, and manual labels are hard to obtain and correct for learning-based methods. To the best of our knowledge, there is no prior work on cerebellum segmentation for infant subjects less than 24 months of age. In this work, we develop a semi-supervised transfer learning framework, guided by a confidence map, for tissue segmentation of cerebellum MR images from 24-month-old down to 6-month-old infants. Note that only 24-month-old subjects have reliable manual labels for training, owing to their high tissue contrast. Through the proposed semi-supervised transfer learning, the labels from 24-month-old subjects are gradually propagated to the 18-, 12-, and 6-month-old subjects, which have low tissue contrast. Comparison with state-of-the-art methods demonstrates the superior performance of the proposed method, especially for 6-month-old subjects.
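A confidence map can guide label propagation by retaining only high-confidence predictions as pseudo-labels for the next, lower-contrast age group. The sketch below illustrates this general idea with a hypothetical per-voxel threshold rule; the paper's actual confidence mechanism is not detailed in this abstract:

```python
def select_pseudo_labels(probs, threshold=0.9):
    """Confidence-guided pseudo-labeling (illustrative sketch): keep a
    voxel's argmax class only where the predicted probability exceeds
    `threshold`; low-confidence voxels are marked -1 and would be
    excluded from the segmentation loss during transfer."""
    labels = []
    for p in probs:  # p = per-class probabilities for one voxel
        conf = max(p)
        labels.append(p.index(conf) if conf >= threshold else -1)
    return labels

probs = [[0.95, 0.03, 0.02],   # confident -> class 0
         [0.40, 0.35, 0.25],   # ambiguous -> ignored
         [0.05, 0.02, 0.93]]   # confident -> class 2
print(select_pseudo_labels(probs))  # [0, -1, 2]
```

Iterating this selection as labels move from 24- to 18-, 12-, and 6-month-old subjects captures the "gradual propagation" described above, under the stated assumption about how confidence is used.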
Affiliation(s)
- Yue Sun, Kun Gao, Sijie Niu, Weili Lin, Gang Li, Li Wang: Department of Radiology and Biomedical Research Imaging Center, University of North Carolina at Chapel Hill, Chapel Hill, USA.
62. Pei Y, Wang L, Zhao F, Zhong T, Liao L, Shen D, Li G. Anatomy-Guided Convolutional Neural Network for Motion Correction in Fetal Brain MRI. Mach Learn Med Imaging (MLMI Workshop) 2020; 12436:384-393. [PMID: 33644782] [PMCID: PMC7912521] [DOI: 10.1007/978-3-030-59861-7_39]
Abstract
Fetal Magnetic Resonance Imaging (MRI) is challenged by fetal movements and maternal breathing. Although fast MRI sequences allow artifact-free acquisition of individual 2D slices, motion commonly occurs between slice acquisitions. Motion correction for each slice is thus very important for reconstruction of 3D fetal brain MRI, but it is highly operator-dependent and time-consuming. Approaches based on convolutional neural networks (CNNs) have achieved encouraging performance in predicting the 3D motion parameters of arbitrarily oriented 2D slices; however, they do not capitalize on important brain structural information. To address this problem, we propose a new multi-task learning framework to jointly learn the transformation parameters and tissue segmentation map of each slice, providing brain anatomical information to guide the mapping from 2D slices to 3D volumetric space in a coarse-to-fine manner. In the coarse stage, the first network learns features shared by both the regression and segmentation tasks. In the refinement stage, to fully utilize the anatomical information, distance maps constructed from the coarse segmentation are introduced to the second network. Finally, incorporating the signed distance maps to guide the regression and segmentation together improves performance in both tasks. Experimental results indicate that the proposed method achieves superior performance in reducing the motion prediction error while simultaneously obtaining satisfactory tissue segmentation results, compared with state-of-the-art methods.
Affiliation(s)
- Yuchen Pei: Institute of Image Processing and Pattern Recognition, Department of Automation, Shanghai Jiao Tong University, Shanghai, China; Department of Radiology and BRIC, University of North Carolina at Chapel Hill, Chapel Hill, USA.
- Lisheng Wang: Institute of Image Processing and Pattern Recognition, Department of Automation, Shanghai Jiao Tong University, Shanghai, China.
- Fenqiang Zhao, Tao Zhong, Lufan Liao, Dinggang Shen, Gang Li: Department of Radiology and BRIC, University of North Carolina at Chapel Hill, Chapel Hill, USA.
63. Zhao Y, Li P, Gao C, Liu Y, Chen Q, Yang F, Meng D. TSASNet: Tooth segmentation on dental panoramic X-ray images by Two-Stage Attention Segmentation Network. Knowl Based Syst 2020. [DOI: 10.1016/j.knosys.2020.106338]
64. Zöllei L, Iglesias JE, Ou Y, Grant PE, Fischl B. Infant FreeSurfer: An automated segmentation and surface extraction pipeline for T1-weighted neuroimaging data of infants 0-2 years. Neuroimage 2020; 218:116946. [PMID: 32442637] [PMCID: PMC7415702] [DOI: 10.1016/j.neuroimage.2020.116946]
Abstract
The development of automated tools for brain morphometric analysis in infants has lagged significantly behind analogous tools for adults. This gap reflects the greater challenges in this domain due to: 1) a smaller-scaled region of interest, 2) increased motion corruption, 3) regional changes in geometry due to heterochronous growth, and 4) regional variations in contrast properties corresponding to ongoing myelination and other maturation processes. Nevertheless, there is a great need for automated image-processing tools to quantify differences between infant groups and other individuals, because aberrant cortical morphometric measurements (including volume, thickness, surface area, and curvature) have been associated with neuropsychiatric, neurologic, and developmental disorders in children. In this paper we present an automated segmentation and surface extraction pipeline designed to accommodate clinical MRI studies of infant brains in a population of 0-2-year-olds. The algorithm relies on a single channel of T1-weighted MR images to achieve automated segmentation of cortical and subcortical brain areas, producing volumes of subcortical structures and surface models of the cerebral cortex. We evaluated the algorithm both qualitatively and quantitatively using manually labeled datasets, relevant comparator software solutions cited in the literature, and expert evaluations. The computational tools and atlases described in this paper will be distributed to the research community as part of the FreeSurfer image analysis package.
Affiliation(s)
- Lilla Zöllei: Laboratory for Computational Neuroimaging, Athinoula A. Martinos Center for Biomedical Imaging, Massachusetts General Hospital, Harvard Medical School, USA.
- Juan Eugenio Iglesias: Laboratory for Computational Neuroimaging, Athinoula A. Martinos Center for Biomedical Imaging, Massachusetts General Hospital, Harvard Medical School, USA; Center for Medical Image Computing, University College London, United Kingdom; Computer Science and Artificial Intelligence Laboratory, Massachusetts Institute of Technology, USA.
- Yangming Ou: Fetal-Neonatal Neuroimaging and Developmental Science Center, Boston Children's Hospital, USA.
- P Ellen Grant: Fetal-Neonatal Neuroimaging and Developmental Science Center, Boston Children's Hospital, USA.
- Bruce Fischl: Laboratory for Computational Neuroimaging, Athinoula A. Martinos Center for Biomedical Imaging, Massachusetts General Hospital, Harvard Medical School, USA; Computer Science and Artificial Intelligence Laboratory, Massachusetts Institute of Technology, USA.
65. Hu X, Guo R, Chen J, Li H, Waldmannstetter D, Zhao Y, Li B, Shi K, Menze B. Coarse-to-Fine Adversarial Networks and Zone-Based Uncertainty Analysis for NK/T-Cell Lymphoma Segmentation in CT/PET Images. IEEE J Biomed Health Inform 2020; 24:2599-2608. [DOI: 10.1109/jbhi.2020.2972694]
66. Fu H, Li F, Xu Y, Liao J, Xiong J, Shen J, Liu J, Zhang X. A Retrospective Comparison of Deep Learning to Manual Annotations for Optic Disc and Optic Cup Segmentation in Fundus Photographs. Transl Vis Sci Technol 2020; 9:33. [PMID: 32832206] [PMCID: PMC7414704] [DOI: 10.1167/tvst.9.2.33]
Abstract
Purpose: Optic disc (OD) and optic cup (OC) segmentation are fundamental for fundus image analysis. Manual annotation is time-consuming, expensive, and highly subjective, whereas an automated system is invaluable to the medical community. The aim of this study is to develop a deep learning system to segment OD and OC in fundus photographs, and to evaluate how the algorithm compares against manual annotations.
Methods: A total of 1200 fundus photographs with 120 glaucoma cases were collected. The OD and OC annotations were labeled by seven licensed ophthalmologists, and glaucoma diagnoses were based on comprehensive evaluations of the subjects' medical records. A deep learning system for OD and OC segmentation was developed. Its segmentation performance and glaucoma discrimination based on the cup-to-disc ratio (CDR) were compared against the manual annotations.
Results: The algorithm achieved an OD dice of 0.938 (95% confidence interval [CI] = 0.934–0.941), an OC dice of 0.801 (95% CI = 0.793–0.809), and a CDR mean absolute error (MAE) of 0.077 (95% CI = 0.073–0.082). For glaucoma discrimination based on CDR calculations, the algorithm obtained an area under the receiver operating characteristic curve (AUC) of 0.948 (95% CI = 0.920–0.973), with a sensitivity of 0.850 (95% CI = 0.794–0.923) and a specificity of 0.853 (95% CI = 0.798–0.918).
Conclusions: We demonstrated the potential of the deep learning system to assist ophthalmologists in OD and OC segmentation and in discriminating glaucoma from nonglaucoma subjects based on CDR calculations.
Translational Relevance: We investigated OD and OC segmentation by a deep learning system compared against manual annotations.
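The CDR used for glaucoma discrimination is conventionally the ratio of the vertical diameters of the cup and disc; a toy sketch of that calculation from binary masks follows (the study's exact measurement protocol is not specified in this abstract):

```python
def vertical_extent(mask):
    """Vertical diameter of a binary mask: the number of image rows
    containing at least one foreground pixel."""
    return sum(1 for row in mask if any(row))

def cup_to_disc_ratio(cup, disc):
    """Vertical cup-to-disc ratio (CDR): the cup's vertical diameter
    divided by the disc's. Larger CDR values indicate greater
    glaucoma risk, which is what the AUC above quantifies."""
    return vertical_extent(cup) / vertical_extent(disc)

disc = [[1, 1, 1]] * 4          # disc spans 4 rows
cup  = [[0, 0, 0],
        [0, 1, 0],
        [0, 1, 0],
        [0, 0, 0]]              # cup spans 2 rows
print(cup_to_disc_ratio(cup, disc))  # 0.5
```

The reported CDR MAE is then simply the mean absolute difference between such algorithm-derived ratios and the ones measured from the ophthalmologists' annotations.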
Affiliation(s)
- Huazhu Fu
- Inception Institute of Artificial Intelligence, Abu Dhabi, United Arab Emirates
- Fei Li
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangzhou, Guangdong, China
- Yanwu Xu
- Intelligent Healthcare Unit, Baidu, Beijing, China
- Jingan Liao
- School of Computer Science and Engineering, South China University of Technology, Guangzhou, Guangdong, China
- Jian Xiong
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangzhou, Guangdong, China
- Jianbing Shen
- Inception Institute of Artificial Intelligence, Abu Dhabi, United Arab Emirates
- Jiang Liu
- Department of Computer Science and Engineering, Southern University of Science and Technology, Shenzhen, Guangdong, China
- Cixi Institute of Biomedical Engineering, Chinese Academy of Sciences, Ningbo, Zhejiang, China
- Xiulan Zhang
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangzhou, Guangdong, China
67
Bui TD, Wang L, Lin W, Li G, Shen D. 6-Month Infant Brain MRI Segmentation Guided by 24-Month Data Using Cycle-Consistent Adversarial Networks. Proceedings. IEEE International Symposium on Biomedical Imaging 2020; 2020. [PMID: 34422223 DOI: 10.1109/isbi45749.2020.9098515] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.2] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 11/07/2022]
Abstract
Due to the extremely low intensity contrast between the white matter (WM) and the gray matter (GM) at around 6 months of age (the isointense phase), manual annotation is difficult, and hence the number of training labels is highly limited. Consequently, it is still challenging to automatically segment isointense infant brain MRI. Meanwhile, the intensity contrast in the early adult phase, such as at 24 months of age, is relatively better, and such images can be easily segmented by well-developed tools, e.g., FreeSurfer. The question is therefore how to employ these high-contrast images (such as 24-month-old images) to guide the segmentation of 6-month-old images. Motivated by this, we propose a method that exploits 24-month-old images for reliable tissue segmentation of 6-month-old images. Specifically, we design a 3D-cycleGAN-Seg architecture to generate synthetic images of the isointense phase by transferring appearances between the two time points. To guarantee tissue segmentation consistency between 6-month-old and 24-month-old images, we employ features from generated segmentations to guide the training of the generator network. To further improve the quality of the synthetic images, we propose a feature matching loss that computes the cosine distance between unpaired segmentation features of the real and fake images. Then, the transferred 24-month-old images are used to jointly train the segmentation model on the 6-month-old images. Experimental results demonstrate the superior performance of the proposed method compared with existing deep learning-based methods.
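The feature matching loss described in this abstract is, at its core, a mean cosine distance between feature vectors of real and synthetic images. A minimal NumPy sketch of that idea, assuming the segmentation features have already been pooled into `(batch, features)` vectors (the pooling and the `eps` guard are assumptions, not details from the abstract):

```python
import numpy as np

def feature_matching_loss(f_real: np.ndarray, f_fake: np.ndarray) -> float:
    """Mean cosine distance between (batch, features) vectors taken from
    real and synthetic images; lower means better-matched features."""
    eps = 1e-8  # guards against zero-norm vectors (an assumption)
    num = (f_real * f_fake).sum(axis=1)
    den = np.linalg.norm(f_real, axis=1) * np.linalg.norm(f_fake, axis=1) + eps
    return float((1.0 - num / den).mean())
```

Identical feature batches give a loss near 0, while orthogonal features give a loss of 1.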
Affiliation(s)
- Toan Duc Bui
- Department of Radiology and Biomedical Research Imaging Center, University of North Carolina at Chapel Hill, NC 27599, USA
- Li Wang
- Department of Radiology and Biomedical Research Imaging Center, University of North Carolina at Chapel Hill, NC 27599, USA
- Weili Lin
- Department of Radiology and Biomedical Research Imaging Center, University of North Carolina at Chapel Hill, NC 27599, USA
- Gang Li
- Department of Radiology and Biomedical Research Imaging Center, University of North Carolina at Chapel Hill, NC 27599, USA
- Dinggang Shen
- Department of Radiology and Biomedical Research Imaging Center, University of North Carolina at Chapel Hill, NC 27599, USA
- Department of Brain and Cognitive Engineering, Korea University, Seoul 02841, Republic of Korea
68
Ding Y, Acosta R, Enguix V, Suffren S, Ortmann J, Luck D, Dolz J, Lodygensky GA. Using Deep Convolutional Neural Networks for Neonatal Brain Image Segmentation. Front Neurosci 2020; 14:207. [PMID: 32273836 PMCID: PMC7114297 DOI: 10.3389/fnins.2020.00207] [Citation(s) in RCA: 22] [Impact Index Per Article: 4.4] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/22/2019] [Accepted: 02/25/2020] [Indexed: 12/13/2022] Open
Abstract
INTRODUCTION Deep learning neural networks are especially potent at dealing with structured data, such as images and volumes. Both modified LiviaNET and HyperDense-Net performed well at a prior competition segmenting 6-month-old infant magnetic resonance images, but neonatal cerebral tissue type identification is challenging given its uniquely inverted tissue contrasts. The current study aims to evaluate the two architectures for segmenting neonatal brain tissue types at term equivalent age. METHODS Both networks were retrained over 24 pairs of neonatal T1 and T2 data from the Developing Human Connectome Project public data set and validated on another eight pairs against ground truth. We then reported the best-performing model from training and its performance by computing the Dice similarity coefficient (DSC) for each tissue type against eight test subjects. RESULTS During the testing phase, among the segmentation approaches tested, the dual-modality HyperDense-Net achieved the best test mean DSC values, with statistically significant differences, obtaining 0.94/0.95/0.92 for the three tissue types, and took 80 h to train and 10 min to segment each brain, including preprocessing. The single-modality LiviaNET was better at processing T2-weighted images than T1-weighted images across all tissue types, achieving mean DSC values of 0.90/0.90/0.88 for gray matter, white matter, and cerebrospinal fluid, respectively, while requiring 30 h to train and 8 min to segment each brain, including preprocessing. DISCUSSION Our evaluation demonstrates that both neural networks can segment neonatal brains, achieving previously reported performance. Both networks will be continuously retrained over an increasingly larger repertoire of neonatal brain data and be made available through the Canadian Neonatal Brain Platform to better serve the neonatal brain imaging research community.
Affiliation(s)
- Yang Ding
- The Canadian Neonatal Brain Platform (CNBP), Montreal, QC, Canada
- Rolando Acosta
- The Canadian Neonatal Brain Platform (CNBP), Montreal, QC, Canada
- Vicente Enguix
- The Canadian Neonatal Brain Platform (CNBP), Montreal, QC, Canada
- Sabrina Suffren
- The Canadian Neonatal Brain Platform (CNBP), Montreal, QC, Canada
- Janosch Ortmann
- Department of Management and Technology, Université du Québec à Montréal, Montreal, QC, Canada
- David Luck
- The Canadian Neonatal Brain Platform (CNBP), Montreal, QC, Canada
- Jose Dolz
- Laboratory for Imagery, Vision and Artificial Intelligence (LIVIA), École de Technologie Supérieure, Montreal, QC, Canada
- Gregory A. Lodygensky
- Laboratory for Imagery, Vision and Artificial Intelligence (LIVIA), École de Technologie Supérieure, Montreal, QC, Canada
69
Parcellation of the neonatal cortex using Surface-based Melbourne Children's Regional Infant Brain atlases (M-CRIB-S). Sci Rep 2020; 10:4359. [PMID: 32152381 PMCID: PMC7062836 DOI: 10.1038/s41598-020-61326-2] [Citation(s) in RCA: 30] [Impact Index Per Article: 6.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/20/2019] [Accepted: 02/21/2020] [Indexed: 11/12/2022] Open
Abstract
Longitudinal studies measuring changes in cortical morphology over time are best facilitated by parcellation schemes compatible across all life stages. The Melbourne Children’s Regional Infant Brain (M-CRIB) and M-CRIB 2.0 atlases provide voxel-based parcellations of the cerebral cortex compatible with the Desikan-Killiany (DK) and the Desikan-Killiany-Tourville (DKT) cortical labelling schemes. This study introduces surface-based versions of the M-CRIB and M-CRIB 2.0 atlases, termed M-CRIB-S(DK) and M-CRIB-S(DKT), with a pipeline for automated parcellation utilizing FreeSurfer and developing Human Connectome Project (dHCP) tools. Using T2-weighted magnetic resonance images of healthy neonates (n = 58), we created average spherical templates of cortical curvature and sulcal depth. Manually labelled regions in a subset (n = 10) were encoded into the spherical template space to construct the M-CRIB-S(DK) and M-CRIB-S(DKT) atlases. Labelling accuracy was assessed using Dice overlap and boundary discrepancy measures with leave-one-out cross-validation. Cross-validated labelling accuracy was high for both atlases (average regional Dice = 0.79–0.83). Worst-case boundary discrepancy instances ranged from 9.96 to 10.22 mm, which appeared to be driven by anatomical variability in some cases. The M-CRIB-S atlas data and automatic pipeline allow extraction of neonatal cortical surfaces labelled according to the DK or DKT parcellation schemes.
70
Karimi D, Salcudean SE. Reducing the Hausdorff Distance in Medical Image Segmentation With Convolutional Neural Networks. IEEE TRANSACTIONS ON MEDICAL IMAGING 2020; 39:499-513. [PMID: 31329113 DOI: 10.1109/tmi.2019.2930068] [Citation(s) in RCA: 152] [Impact Index Per Article: 30.4] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/10/2023]
Abstract
The Hausdorff Distance (HD) is widely used in evaluating medical image segmentation methods. However, existing segmentation methods do not attempt to reduce HD directly. In this paper, we present novel loss functions for training convolutional neural network (CNN)-based segmentation methods with the goal of reducing HD directly. We propose three methods to estimate HD from the segmentation probability map produced by a CNN. One method makes use of the distance transform of the segmentation boundary. Another method is based on applying morphological erosion on the difference between the true and estimated segmentation maps. The third method works by applying circular/spherical convolution kernels of different radii on the segmentation probability maps. Based on these three methods for estimating HD, we suggest three loss functions that can be used for training to reduce HD. We use these loss functions to train CNNs for segmentation of the prostate, liver, and pancreas in ultrasound, magnetic resonance, and computed tomography images and compare the results with commonly used loss functions. Our results show that the proposed loss functions can lead to approximately 18-45% reduction in HD without degrading other segmentation performance criteria such as the Dice similarity coefficient. The proposed loss functions can be used for training medical image segmentation methods in order to reduce large segmentation errors.
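For reference, the quantity this entry targets can be written down directly from its definition. The brute-force NumPy sketch below computes the symmetric HD between two hard binary masks; the paper's contribution is differentiable estimates of this quantity (via distance transforms, erosion, or circular kernels) usable as losses on probability maps, which this sketch does not attempt to reproduce.

```python
import numpy as np

def hausdorff(a: np.ndarray, b: np.ndarray) -> float:
    """Symmetric Hausdorff distance between two non-empty binary masks:
    the largest distance from a foreground point of one mask to the
    nearest foreground point of the other."""
    pa, pb = np.argwhere(a), np.argwhere(b)
    # Pairwise Euclidean distances between all foreground coordinates.
    d = np.sqrt(((pa[:, None, :] - pb[None, :, :]) ** 2).sum(axis=-1))
    return float(max(d.min(axis=1).max(), d.min(axis=0).max()))
```

The pairwise-distance matrix makes this O(|a|·|b|); distance transforms, as in the paper's first estimation method, avoid that cost.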
71
Deep neural network for automatic characterization of lesions on 68Ga-PSMA-11 PET/CT. Eur J Nucl Med Mol Imaging 2019; 47:603-613. [PMID: 31813050 DOI: 10.1007/s00259-019-04606-y] [Citation(s) in RCA: 60] [Impact Index Per Article: 10.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/21/2019] [Accepted: 11/07/2019] [Indexed: 12/24/2022]
Abstract
PURPOSE This study proposes an automated prostate cancer (PC) lesion characterization method based on a deep neural network to determine tumor burden on 68Ga-PSMA-11 PET/CT, to potentially facilitate the optimization of PSMA-directed radionuclide therapy. METHODS We collected 68Ga-PSMA-11 PET/CT images from 193 patients with metastatic PC at three medical centers. For proof of concept, we focused on the detection of pelvic bone and lymph node lesions. A deep neural network (a triple-combining 2.5D U-Net) was developed for the automated characterization of these lesions. The proposed method simultaneously extracts features from the axial, coronal, and sagittal planes, which mimics the workflow of physicians and reduces computational and memory requirements. RESULTS Among all the labeled lesions, the network achieved 99% precision, 99% recall, and an F1 score of 99% on bone lesion detection, and 94% precision, 89% recall, and an F1 score of 92% on lymph node lesion detection. The segmentation accuracy was lower than the detection accuracy. The performance of the network was correlated with the amount of training data. CONCLUSION We developed a deep neural network to automatically characterize PC lesions on 68Ga-PSMA-11 PET/CT. The preliminary test within the pelvic area confirms the potential of deep learning methods. Increasing the amount of training data should further enhance the performance of the proposed method and may ultimately allow whole-body assessments.
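The detection metrics reported in this entry follow the standard definitions; as a reminder, from true-positive, false-positive, and false-negative counts:

```python
def detection_scores(tp: int, fp: int, fn: int) -> tuple:
    """Precision, recall, and F1 score from detection counts."""
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    return precision, recall, f1
```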
72
Deep CNN ensembles and suggestive annotations for infant brain MRI segmentation. Comput Med Imaging Graph 2019; 79:101660. [PMID: 31785402 DOI: 10.1016/j.compmedimag.2019.101660] [Citation(s) in RCA: 45] [Impact Index Per Article: 7.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/01/2019] [Revised: 08/30/2019] [Accepted: 09/24/2019] [Indexed: 01/02/2023]
Abstract
Precise 3D segmentation of infant brain tissues is an essential step towards comprehensive volumetric studies and quantitative analysis of early brain development. However, computing such segmentations is very challenging, especially for the 6-month infant brain, due to poor image quality, among other difficulties inherent to infant brain MRI, e.g., the isointense contrast between white and gray matter and the severe partial volume effect due to small brain sizes. This study investigates the problem with an ensemble of semi-dense fully convolutional neural networks (CNNs), which employs T1-weighted and T2-weighted MR images as input. We demonstrate that the ensemble agreement is highly correlated with the segmentation errors. Therefore, our method provides measures that can guide local user corrections. To the best of our knowledge, this work is the first ensemble of 3D CNNs for suggesting annotations within images. Our semi-dense architecture allows the efficient propagation of gradients during training, while limiting the number of parameters, requiring one order of magnitude fewer parameters than popular medical image segmentation networks such as 3D U-Net (Çiçek et al.). We also investigated the impact that early or late fusion of multiple image modalities might have on the performance of deep architectures. We report evaluations of our method on the public data of the MICCAI iSEG-2017 Challenge on 6-month infant brain MRI segmentation, and show very competitive results among 21 teams, ranking first or second in most metrics.
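One simple way to turn ensemble agreement into the kind of correction-guidance signal this abstract describes is a per-voxel variance map over the members' probability maps. The abstract does not specify the exact agreement measure, so variance here is an assumption for illustration.

```python
import numpy as np

def disagreement_map(member_probs) -> np.ndarray:
    """Per-voxel ensemble disagreement: the variance across the members'
    probability maps (stacked along axis 0). High-variance voxels mark
    locations where members disagree, i.e. candidates for manual review."""
    return np.asarray(member_probs).var(axis=0)
```

A unanimous ensemble yields zero everywhere; a 50/50 split between hard 0 and 1 predictions yields the maximum variance of 0.25.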
73
Bui TD, Wang L, Chen J, Lin W, Li G, Shen D. Multi-task Learning for Neonatal Brain Segmentation Using 3D Dense-Unet with Dense Attention Guided by Geodesic Distance. Domain Adaptation and Representation Transfer and Medical Image Learning with Less Labels and Imperfect Data: First MICCAI Workshop, DART 2019, and First International Workshop, MIL3ID 2019, Shenzhen, Held in Conjunction with MICCAI 20... 2019; 11795:243-251. [PMID: 32090208 PMCID: PMC7034948 DOI: 10.1007/978-3-030-33391-1_28] [Citation(s) in RCA: 8] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 12/13/2022]
Abstract
The deep convolutional neural network has achieved outstanding performance on neonatal brain MRI tissue segmentation. However, it may fail to produce reasonable results on unseen datasets whose imaging appearance distributions differ from the training data. The main reason is that deep learning models tend to fit the training dataset well but generalize poorly to unseen datasets. To address this problem, we propose a multi-task learning method that simultaneously learns both tissue segmentation and geodesic distance regression to regularize a shared encoder network. Furthermore, a dense attention gate is explored to force the network to learn rich contextual information. Using three neonatal brain datasets acquired with different imaging protocols on different scanners, our experimental results demonstrate the superior performance of the proposed method over existing deep learning-based methods on unseen datasets.
Affiliation(s)
- Toan Duc Bui
- Department of Radiology and Biomedical Research Imaging Center, University of North Carolina at Chapel Hill, Chapel Hill, NC, USA
- Li Wang
- Department of Radiology and Biomedical Research Imaging Center, University of North Carolina at Chapel Hill, Chapel Hill, NC, USA
- Jian Chen
- School of Information Science and Engineering, Fujian University of Technology, Fuzhou 350118, China
- Weili Lin
- Department of Radiology and Biomedical Research Imaging Center, University of North Carolina at Chapel Hill, Chapel Hill, NC, USA
- Gang Li
- Department of Radiology and Biomedical Research Imaging Center, University of North Carolina at Chapel Hill, Chapel Hill, NC, USA
- Dinggang Shen
- Department of Radiology and Biomedical Research Imaging Center, University of North Carolina at Chapel Hill, Chapel Hill, NC, USA
- Department of Brain and Cognitive Engineering, Korea University, Seoul 02841, Republic of Korea
74
Bui TD, Shin J, Moon T. Skip-connected 3D DenseNet for volumetric infant brain MRI segmentation. Biomed Signal Process Control 2019. [DOI: 10.1016/j.bspc.2019.101613] [Citation(s) in RCA: 7] [Impact Index Per Article: 1.2] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/26/2022]
75
Zhou T, Ruan S, Canu S. A review: Deep learning for medical image segmentation using multi-modality fusion. ARRAY 2019. [DOI: 10.1016/j.array.2019.100004] [Citation(s) in RCA: 198] [Impact Index Per Article: 33.0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/24/2022] Open
76
Yi X, Walia E, Babyn P. Generative adversarial network in medical imaging: A review. Med Image Anal 2019; 58:101552. [PMID: 31521965 DOI: 10.1016/j.media.2019.101552] [Citation(s) in RCA: 597] [Impact Index Per Article: 99.5] [Reference Citation Analysis] [Abstract] [Key Words] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/18/2018] [Revised: 08/23/2019] [Accepted: 08/30/2019] [Indexed: 01/30/2023]
Abstract
Generative adversarial networks have gained a lot of attention in the computer vision community due to their capability of data generation without explicitly modelling the probability density function. The adversarial loss brought by the discriminator provides a clever way of incorporating unlabeled samples into training and imposing higher order consistency. This has proven to be useful in many cases, such as domain adaptation, data augmentation, and image-to-image translation. These properties have attracted researchers in the medical imaging community, and we have seen rapid adoption in many traditional and novel applications, such as image reconstruction, segmentation, detection, classification, and cross-modality synthesis. Based on our observations, this trend will continue and we therefore conducted a review of recent advances in medical imaging using the adversarial training scheme with the hope of benefiting researchers interested in this technique.
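As background for the adversarial training scheme this review surveys, the discriminator and (non-saturating) generator objectives of a vanilla GAN can be sketched as below. This is one common form of the adversarial loss, not the formulation of any specific method in the review.

```python
import numpy as np

def gan_losses(d_real: np.ndarray, d_fake: np.ndarray, eps: float = 1e-8):
    """Discriminator and non-saturating generator losses of a vanilla GAN.
    d_real/d_fake are discriminator outputs in (0, 1) for real and
    generated samples; eps avoids log(0)."""
    d_loss = -np.mean(np.log(d_real + eps) + np.log(1.0 - d_fake + eps))
    g_loss = -np.mean(np.log(d_fake + eps))  # generator pushes d_fake -> 1
    return float(d_loss), float(g_loss)
```

When the discriminator is fooled (d_fake near 1), the generator's loss falls while the discriminator's rises, which is the tension that drives training.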
Affiliation(s)
- Xin Yi
- Department of Medical Imaging, University of Saskatchewan, 103 Hospital Dr, Saskatoon, SK S7N 0W8, Canada.
- Ekta Walia
- Department of Medical Imaging, University of Saskatchewan, 103 Hospital Dr, Saskatoon, SK S7N 0W8, Canada
- Philips Canada, 281 Hillmount Road, Markham, ON L6C 2S3, Canada
- Paul Babyn
- Department of Medical Imaging, University of Saskatchewan, 103 Hospital Dr, Saskatoon, SK S7N 0W8, Canada
77
Huang C, Tian J, Yuan C, Zeng P, He X, Chen H, Huang Y, Huang B. Fully Automated Segmentation of Lower Extremity Deep Vein Thrombosis Using Convolutional Neural Network. BIOMED RESEARCH INTERNATIONAL 2019; 2019:3401683. [PMID: 31281832 PMCID: PMC6590596 DOI: 10.1155/2019/3401683] [Citation(s) in RCA: 9] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 02/15/2019] [Revised: 05/07/2019] [Accepted: 05/26/2019] [Indexed: 11/17/2022]
Abstract
OBJECTIVE Deep vein thrombosis (DVT) is a disease caused by abnormal blood clots in deep veins. Accurate segmentation of DVT is important to facilitate diagnosis and treatment. In the current study, we proposed a fully automatic method of DVT delineation based on deep learning (DL) and contrast-enhanced magnetic resonance imaging (CE-MRI) images. METHODS 58 patients (25 males; 28–96 years old) with newly diagnosed lower extremity DVT were recruited. CE-MRI was acquired on a 1.5 T system. The ground truth (GT) of DVT lesions was manually contoured. A DL network with an encoder-decoder architecture was designed for DVT segmentation. An 8-fold cross-validation strategy was applied for training and testing. The Dice similarity coefficient (DSC) was adopted to evaluate the network's performance. RESULTS It took about 1.5 s for our CNN model to perform the segmentation task on a slice of MRI image. The mean DSC of the 58 patients was 0.74 ± 0.17 and the median DSC was 0.79. Compared with other DL models, our CNN model achieved better performance in DVT segmentation (0.74 ± 0.17 versus 0.66 ± 0.15, 0.55 ± 0.20, and 0.57 ± 0.22). CONCLUSION Our proposed DL method was effective and fast for fully automatic segmentation of lower extremity DVT.
Affiliation(s)
- Chen Huang
- Department of Radiology, Guangzhou Panyu Central Hospital, Guangzhou, China
- Medical Imaging Institute of Panyu, Guangzhou, China
- Junru Tian
- School of Biomedical Engineering, Health Science Center, Shenzhen University, Shenzhen, China
- Shenzhen University Clinical Research Center for Neurological Diseases, Shenzhen, China
- Chenglang Yuan
- School of Biomedical Engineering, Health Science Center, Shenzhen University, Shenzhen, China
- Shenzhen University Clinical Research Center for Neurological Diseases, Shenzhen, China
- Ping Zeng
- School of Biomedical Engineering, Health Science Center, Shenzhen University, Shenzhen, China
- Shenzhen University Clinical Research Center for Neurological Diseases, Shenzhen, China
- Xueping He
- Department of Radiology, Guangzhou Panyu Central Hospital, Guangzhou, China
- Medical Imaging Institute of Panyu, Guangzhou, China
- Hanwei Chen
- Department of Radiology, Guangzhou Panyu Central Hospital, Guangzhou, China
- Medical Imaging Institute of Panyu, Guangzhou, China
- Yi Huang
- Department of Radiology, Guangzhou Panyu Central Hospital, Guangzhou, China
- Medical Imaging Institute of Panyu, Guangzhou, China
- Bingsheng Huang
- School of Biomedical Engineering, Health Science Center, Shenzhen University, Shenzhen, China
- Shenzhen University Clinical Research Center for Neurological Diseases, Shenzhen, China