1. Xiao J, Zheng W, Wang W, Xia Q, Yan Z, Guo Q, Wang X, Nie S, Zhang S. Slice2Mesh: 3D Surface Reconstruction From Sparse Slices of Images for the Left Ventricle. IEEE Trans Med Imaging 2025; 44:1541-1555. [PMID: 40030525] [DOI: 10.1109/tmi.2024.3514869]
Abstract
Cine MRI is a widely used technique for evaluating left ventricular function and motion, as it captures temporal information. However, owing to its limited spatial resolution, cine MRI provides only a few sparse scans at regular positions and orientations, which makes it challenging to reconstruct the dense 3D cardiac structures needed to understand cardiac structure and motion in a dynamic 3D manner. In this study, we propose a novel learning-based 3D cardiac surface reconstruction method, Slice2Mesh, which directly predicts accurate, high-fidelity 3D meshes from sparse slices of cine MRI images under partial supervision from sparse contour points. Slice2Mesh uses a 2D U-Net to extract image features and a graph convolutional network to predict deformations from an initial template to various 3D surfaces, enabling it to produce topology-consistent meshes that better characterize and analyze cardiac movement. We also introduce an As-Rigid-As-Possible energy term in the deformation loss to preserve the intrinsic structure of the predefined template and produce realistic left ventricular shapes. We evaluated our method on 150 clinical test samples and achieved an average chamfer distance of 3.621 mm, outperforming traditional methods by approximately 2.5 mm. We also applied our method to produce 4D surface meshes from cine MRI sequences and used a simple SVM model on these 4D heart meshes to identify subjects with myocardial infarction, achieving a classification sensitivity of 91.8% on 99 test subjects (including 49 abnormal patients), which suggests great potential for clinical use.
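The chamfer distance reported here measures agreement between the predicted mesh and a reference surface via sampled point sets; a minimal NumPy sketch of one common symmetric variant (the paper's exact averaging convention is not specified in the abstract, so this is illustrative only):

```python
import numpy as np

def chamfer_distance(a, b):
    """Symmetric chamfer distance between point sets a (N, 3) and b (M, 3):
    mean nearest-neighbour distance from a to b plus from b to a."""
    d = np.linalg.norm(a[:, None, :] - b[None, :, :], axis=-1)  # (N, M) pairwise distances
    return d.min(axis=1).mean() + d.min(axis=0).mean()
```

In practice the point sets would be vertices sampled from the predicted and ground-truth LV surfaces.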
2. Yalcinkaya DM, Youssef K, Heydari B, Simonetti O, Dharmakumar R, Raman S, Sharif B. Temporal Uncertainty Localization to Enable Human-in-the-Loop Analysis of Dynamic Contrast-Enhanced Cardiac MRI Datasets. Med Image Comput Comput Assist Interv (MICCAI) 2023; 14222:453-462. [PMID: 38204763] [PMCID: PMC10775176] [DOI: 10.1007/978-3-031-43898-1_44]
Abstract
Dynamic contrast-enhanced (DCE) cardiac magnetic resonance imaging (CMRI) is a widely used modality for diagnosing myocardial blood flow (perfusion) abnormalities. During a typical free-breathing DCE-CMRI scan, close to 300 time-resolved images of myocardial perfusion are acquired at various contrast "wash-in/out" phases. Manual segmentation of myocardial contours in each time frame of a DCE image series can be tedious and time-consuming, particularly when non-rigid motion correction has failed or is unavailable. While deep neural networks (DNNs) have shown promise for analyzing DCE-CMRI datasets, a "dynamic quality control" (dQC) technique for reliably detecting failed segmentations is lacking. Here we propose a new space-time uncertainty metric as a dQC tool for DNN-based segmentation of free-breathing DCE-CMRI datasets, validate the proposed metric on an external dataset, and establish a human-in-the-loop framework to improve the segmentation results. In the proposed approach, the top 10% most uncertain segmentations, as detected by our dQC tool, are referred to a human expert for refinement. This approach resulted in a significant increase in the Dice score (p < 0.001) and a notable decrease in the number of images with failed segmentation (16.2% to 11.3%), whereas the alternative approach of randomly selecting the same number of segmentations for human referral did not achieve any significant improvement. Our results suggest that the proposed dQC framework can accurately identify poor-quality segmentations and may enable efficient DNN-based analysis of DCE-CMRI in a human-in-the-loop pipeline for clinical interpretation and reporting of dynamic CMRI datasets.
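The referral step described above reduces to ranking frames by the dQC uncertainty score and flagging the top decile for expert review; a sketch of that selection logic (the uncertainty metric itself is the paper's contribution and is not reproduced here):

```python
import numpy as np

def select_for_referral(uncertainty_scores, fraction=0.10):
    """Return indices of the most uncertain segmentations (top `fraction`)
    to refer to a human expert, human-in-the-loop style."""
    scores = np.asarray(uncertainty_scores, dtype=float)
    k = max(1, int(round(fraction * len(scores))))
    return np.argsort(scores)[::-1][:k]  # indices of the k highest scores
```

The remaining (1 - fraction) of segmentations would be accepted as-is, which is what makes the pipeline efficient relative to full manual review.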
Affiliation(s)
- Dilek M Yalcinkaya
  - Laboratory for Translational Imaging of Microcirculation, Indiana University School of Medicine (IUSM), Indianapolis, IN, USA
  - Elmore Family School of Electrical and Computer Engineering, Purdue University, West Lafayette, IN, USA
- Khalid Youssef
  - Laboratory for Translational Imaging of Microcirculation, Indiana University School of Medicine (IUSM), Indianapolis, IN, USA
  - Krannert Cardiovascular Research Center, IUSM/IU Health Cardiovascular Institute, Indianapolis, IN, USA
- Bobak Heydari
  - Stephenson Cardiac Imaging Centre, University of Calgary, Alberta, Canada
- Orlando Simonetti
  - Department of Internal Medicine, Division of Cardiovascular Medicine, Davis Heart and Lung Research Institute, The Ohio State University, Columbus, OH, USA
- Rohan Dharmakumar
  - Krannert Cardiovascular Research Center, IUSM/IU Health Cardiovascular Institute, Indianapolis, IN, USA
  - Weldon School of Biomedical Engineering, Purdue University, West Lafayette, IN, USA
- Subha Raman
  - Krannert Cardiovascular Research Center, IUSM/IU Health Cardiovascular Institute, Indianapolis, IN, USA
  - Weldon School of Biomedical Engineering, Purdue University, West Lafayette, IN, USA
- Behzad Sharif
  - Laboratory for Translational Imaging of Microcirculation, Indiana University School of Medicine (IUSM), Indianapolis, IN, USA
  - Krannert Cardiovascular Research Center, IUSM/IU Health Cardiovascular Institute, Indianapolis, IN, USA
  - Weldon School of Biomedical Engineering, Purdue University, West Lafayette, IN, USA
3. Li L, Ding W, Huang L, Zhuang X, Grau V. Multi-modality cardiac image computing: A survey. Med Image Anal 2023; 88:102869. [PMID: 37384950] [DOI: 10.1016/j.media.2023.102869]
Abstract
Multi-modality cardiac imaging plays a key role in the management of patients with cardiovascular diseases. It allows a combination of complementary anatomical, morphological, and functional information, increases diagnostic accuracy, and improves the efficacy of cardiovascular interventions and clinical outcomes. Fully automated processing and quantitative analysis of multi-modality cardiac images could have a direct impact on clinical research and evidence-based patient management. However, these require overcoming significant challenges, including inter-modality misalignment and finding optimal methods to integrate information from different modalities. This paper aims to provide a comprehensive review of multi-modality imaging in cardiology, the computing methods, the validation strategies, the related clinical workflows, and future perspectives. For the computing methodologies, we focus on three tasks, i.e., registration, fusion, and segmentation, which generally involve multi-modality imaging data, either combining information from different modalities or transferring information across modalities. The review highlights that multi-modality cardiac imaging data have the potential for wide applicability in the clinic, such as trans-aortic valve implantation guidance, myocardial viability assessment, and catheter ablation therapy and its patient selection. Nevertheless, many challenges remain unsolved, such as missing modalities, modality selection, the combination of imaging and non-imaging data, and the uniform analysis and representation of different modalities. Work also remains in defining how well-developed techniques fit into clinical workflows and how much additional, relevant information they introduce. These problems are likely to remain an active field of research, and the questions they raise will need to be answered in the future.
Affiliation(s)
- Lei Li
  - Department of Engineering Science, University of Oxford, Oxford, UK
- Wangbin Ding
  - College of Physics and Information Engineering, Fuzhou University, Fuzhou, China
- Liqin Huang
  - College of Physics and Information Engineering, Fuzhou University, Fuzhou, China
- Xiahai Zhuang
  - School of Data Science, Fudan University, Shanghai, China
- Vicente Grau
  - Department of Engineering Science, University of Oxford, Oxford, UK
4. Yang Y, Shah Z, Jacob AJ, Hair J, Chitiboi T, Passerini T, Yerly J, Di Sopra L, Piccini D, Hosseini Z, Sharma P, Sahu A, Stuber M, Oshinski JN. Deep learning-based left ventricular segmentation demonstrates improved performance on respiratory motion-resolved whole-heart reconstructions. Front Radiol 2023; 3:1144004. [PMID: 37492382] [PMCID: PMC10365088] [DOI: 10.3389/fradi.2023.1144004]
Abstract
Introduction: Deep learning (DL)-based segmentation has gained popularity for routine cardiac magnetic resonance (CMR) image analysis, in particular for delineation of left ventricular (LV) borders for LV volume determination. Free-breathing, self-navigated, whole-heart CMR exams provide high-resolution, isotropic coverage of the heart for assessment of cardiac anatomy, including LV volume. The combination of whole-heart free-breathing CMR and DL-based LV segmentation has the potential to streamline the acquisition and analysis of clinical CMR exams. The purpose of this study was to compare the performance of a DL-based automatic LV segmentation network, trained primarily on computed tomography (CT) images, on two whole-heart CMR reconstruction methods: (1) an in-line respiratory motion-corrected (Mcorr) reconstruction and (2) an off-line, compressed-sensing-based, multi-volume respiratory motion-resolved (Mres) reconstruction. Given that Mres images were shown in previous studies to have greater image quality than Mcorr images, we hypothesized that LV volumes segmented from Mres images are closer to the expert manually traced left ventricular endocardial border than those from Mcorr images.
Methods: This retrospective study used 15 patients who underwent clinically indicated 1.5 T CMR exams with a prototype ECG-gated 3D radial phyllotaxis balanced steady-state free precession (bSSFP) sequence. For each reconstruction method, the absolute volume difference (AVD) between the automatically and manually segmented LV volumes was the primary quantity used to investigate whether 3D DL-based LV segmentation generalized better on Mcorr or Mres 3D whole-heart images. Additionally, we assessed the 3D Dice similarity coefficient between the manual and automatic LV masks of each reconstructed 3D whole-heart image and the sharpness of the LV myocardium-blood pool interface. A two-tailed paired Student's t-test (alpha = 0.05) was used to test significance.
Results & Discussion: The AVD for the respiratory Mres reconstruction was lower than for the respiratory Mcorr reconstruction: 7.73 ± 6.54 ml vs. 20.0 ± 22.4 ml, respectively (n = 15, p = 0.03). The 3D Dice coefficient between the DL-segmented and manually segmented masks was higher for Mres images than for Mcorr images: 0.90 ± 0.02 vs. 0.87 ± 0.03, respectively (p = 0.02). Sharpness on Mres images was higher than on Mcorr images: 0.15 ± 0.05 vs. 0.12 ± 0.04, respectively (p = 0.014, n = 15).
Conclusion: The DL-based 3D automatic LV segmentation network, trained on CT images and fine-tuned on MR images, generalized better on Mres images than on Mcorr images for quantifying LV volumes.
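The two headline metrics in this study, absolute volume difference (AVD) and the 3D Dice coefficient, are straightforward to compute from binary masks; a minimal sketch (the per-voxel volume is assumed known from the image header):

```python
import numpy as np

def dice_coefficient(mask_a, mask_b):
    """3D Dice similarity coefficient between two boolean masks."""
    a, b = np.asarray(mask_a, bool), np.asarray(mask_b, bool)
    inter = np.logical_and(a, b).sum()
    return 2.0 * inter / (a.sum() + b.sum())

def absolute_volume_difference(mask_a, mask_b, voxel_volume_ml):
    """Absolute LV volume difference in ml between two segmentations."""
    va = int(np.asarray(mask_a, bool).sum())
    vb = int(np.asarray(mask_b, bool).sum())
    return abs(va - vb) * voxel_volume_ml
```

Both would be evaluated between the DL-generated mask and the expert-traced mask for each reconstruction.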
Affiliation(s)
- Yitong Yang
  - Wallace H. Coulter Department of Biomedical Engineering, Emory University and the Georgia Institute of Technology, Atlanta, GA, United States
- Zahraw Shah
  - Wallace H. Coulter Department of Biomedical Engineering, Emory University and the Georgia Institute of Technology, Atlanta, GA, United States
- Athira J. Jacob
  - Digital Technology and Innovation, Siemens Medical Solutions USA, Princeton, NJ, United States
- Jackson Hair
  - Wallace H. Coulter Department of Biomedical Engineering, Emory University and the Georgia Institute of Technology, Atlanta, GA, United States
- Teodora Chitiboi
  - Digital Technology and Innovation, Siemens Medical Solutions USA, Princeton, NJ, United States
- Tiziano Passerini
  - Digital Technology and Innovation, Siemens Medical Solutions USA, Princeton, NJ, United States
- Jerome Yerly
  - Diagnostic and Interventional Radiology, Lausanne University Hospital, Lausanne, Switzerland
- Lorenzo Di Sopra
  - Diagnostic and Interventional Radiology, Lausanne University Hospital, Lausanne, Switzerland
- Davide Piccini
  - Advanced Clinical Imaging Technology, Siemens Healthcare AG, Lausanne, Switzerland
- Zahra Hosseini
  - MR R&D Collaboration, Siemens Medical Solutions USA, Atlanta, GA, United States
- Puneet Sharma
  - Digital Technology and Innovation, Siemens Medical Solutions USA, Princeton, NJ, United States
- Anurag Sahu
  - MR R&D Collaboration, Siemens Medical Solutions USA, Atlanta, GA, United States
- Matthias Stuber
  - Diagnostic and Interventional Radiology, Lausanne University Hospital, Lausanne, Switzerland
- John N. Oshinski
  - Wallace H. Coulter Department of Biomedical Engineering, Emory University and the Georgia Institute of Technology, Atlanta, GA, United States
  - Department of Radiology & Imaging Science, Emory University School of Medicine, Atlanta, GA, United States
5. Ribeiro MAO, Nunes FLS. Left ventricle segmentation combining deep learning and deformable models with anatomical constraints. J Biomed Inform 2023; 142:104366. [PMID: 37086958] [DOI: 10.1016/j.jbi.2023.104366]
Abstract
Segmentation of the left ventricle in cardiac magnetic resonance imaging is a key step for calculating diagnostic biomarkers. Because manual segmentation requires substantial effort from the expert, many automatic segmentation methods have been proposed, among which deep learning networks have obtained remarkable performance. However, one of the main limitations of these approaches is that they can produce segmentations containing anatomical errors. To avoid this limitation, we propose a new fully automatic left ventricle segmentation method combining deep learning and deformable models. We propose a new level set energy formulation that includes exam-specific information estimated from the deep learning segmentation, together with shape constraints. The method is part of a pipeline containing pre-processing steps and a failure-correction post-processing step. Experiments were conducted with the public Sunnybrook and ACDC datasets and a private dataset. Results suggest that the method is competitive, produces anatomically consistent segmentations, generalizes well, and is often able to estimate biomarkers close to the expert's.
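The coupling between the network and the deformable model can be pictured as a level set whose external force is derived from the deep learning probability map. The following is only a toy illustration of that idea under simplifying assumptions (uniform force, no curvature or shape terms, which the paper's actual energy does include):

```python
import numpy as np

def evolve_level_set(phi, prob_map, steps=50, dt=0.5, lam=1.0):
    """Toy level-set evolution driven by a DL probability map: where
    prob_map > 0.5 the front is pushed outward (foreground), where it is
    below 0.5 the front is pulled inward. Segmentation is phi > 0."""
    phi = np.asarray(phi, float).copy()
    for _ in range(steps):
        force = lam * (prob_map - 0.5)  # external, exam-specific force
        phi += dt * force               # gradient-descent-style update
    return phi
```

A real implementation would add regularization (curvature) and the anatomical shape constraints that distinguish this method.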
Affiliation(s)
- Matheus A O Ribeiro
  - University of São Paulo, Rua Arlindo Bettio, 1000, Vila Guaraciaba, São Paulo, 01000-000, São Paulo, Brazil
- Fátima L S Nunes
  - University of São Paulo, Rua Arlindo Bettio, 1000, Vila Guaraciaba, São Paulo, 01000-000, São Paulo, Brazil
6. Gill SK, Karwath A, Uh HW, Cardoso VR, Gu Z, Barsky A, Slater L, Acharjee A, Duan J, Dall'Olio L, el Bouhaddani S, Chernbumroong S, Stanbury M, Haynes S, Asselbergs FW, Grobbee DE, Eijkemans MJC, Gkoutos GV, Kotecha D. Artificial intelligence to enhance clinical value across the spectrum of cardiovascular healthcare. Eur Heart J 2023; 44:713-725. [PMID: 36629285] [PMCID: PMC9976986] [DOI: 10.1093/eurheartj/ehac758]
Abstract
Artificial intelligence (AI) is increasingly being utilized in healthcare. This article provides clinicians and researchers with a step-wise foundation for high-value AI that can be applied to a variety of data modalities. The aim is to improve the transparency and application of AI methods, with the potential to benefit patients in routine cardiovascular care. Following a clear research hypothesis, an AI-based workflow begins with data selection and pre-processing prior to analysis, with the type of data (structured, semi-structured, or unstructured) determining which pre-processing steps and machine-learning algorithms are required. Algorithmic and data validation should be performed to ensure the robustness of the chosen methodology, followed by an objective evaluation of performance. Seven case studies are provided to highlight the wide variety of data modalities and clinical questions that can benefit from modern AI techniques, with a focus on applying them to cardiovascular disease management. Despite the growing use of AI, further education of healthcare workers, researchers, and the public is needed to aid understanding of how AI works and to close the existing gap in knowledge. In addition, issues regarding data access, sharing, and security must be addressed to ensure full engagement by patients and the public. The application of AI within healthcare provides an opportunity for clinicians to deliver a more personalized approach to medical care by accounting for confounders, interactions, and the rising prevalence of multi-morbidity.
Affiliation(s)
- Simrat K Gill
  - Institute of Cardiovascular Sciences, University of Birmingham, Vincent Drive, B15 2TT Birmingham, UK
  - Health Data Research UK Midlands, University Hospitals Birmingham NHS Foundation Trust, Birmingham, UK
- Andreas Karwath
  - Health Data Research UK Midlands, University Hospitals Birmingham NHS Foundation Trust, Birmingham, UK
  - Institute of Cancer and Genomic Sciences, University of Birmingham, Vincent Drive, B15 2TT Birmingham, UK
- Hae-Won Uh
  - Julius Center for Health Sciences and Primary Care, University Medical Centre Utrecht, Utrecht, The Netherlands
- Victor Roth Cardoso
  - Institute of Cardiovascular Sciences, University of Birmingham, Vincent Drive, B15 2TT Birmingham, UK
  - Health Data Research UK Midlands, University Hospitals Birmingham NHS Foundation Trust, Birmingham, UK
  - Institute of Cancer and Genomic Sciences, University of Birmingham, Vincent Drive, B15 2TT Birmingham, UK
- Zhujie Gu
  - Julius Center for Health Sciences and Primary Care, University Medical Centre Utrecht, Utrecht, The Netherlands
- Andrey Barsky
  - Health Data Research UK Midlands, University Hospitals Birmingham NHS Foundation Trust, Birmingham, UK
  - Institute of Cancer and Genomic Sciences, University of Birmingham, Vincent Drive, B15 2TT Birmingham, UK
- Luke Slater
  - Health Data Research UK Midlands, University Hospitals Birmingham NHS Foundation Trust, Birmingham, UK
  - Institute of Cancer and Genomic Sciences, University of Birmingham, Vincent Drive, B15 2TT Birmingham, UK
- Animesh Acharjee
  - Health Data Research UK Midlands, University Hospitals Birmingham NHS Foundation Trust, Birmingham, UK
  - Institute of Cancer and Genomic Sciences, University of Birmingham, Vincent Drive, B15 2TT Birmingham, UK
- Jinming Duan
  - School of Computer Science, University of Birmingham, Birmingham, UK
  - Alan Turing Institute, London, UK
- Lorenzo Dall'Olio
  - Department of Physics and Astronomy, University of Bologna, Bologna, Italy
- Said el Bouhaddani
  - Julius Center for Health Sciences and Primary Care, University Medical Centre Utrecht, Utrecht, The Netherlands
- Saisakul Chernbumroong
  - Health Data Research UK Midlands, University Hospitals Birmingham NHS Foundation Trust, Birmingham, UK
  - Institute of Cancer and Genomic Sciences, University of Birmingham, Vincent Drive, B15 2TT Birmingham, UK
- Folkert W Asselbergs
  - Amsterdam University Medical Center, Department of Cardiology, University of Amsterdam, Amsterdam, The Netherlands
  - Health Data Research UK and Institute of Health Informatics, University College London, London, UK
- Diederick E Grobbee
  - Julius Center for Health Sciences and Primary Care, University Medical Centre Utrecht, Utrecht, The Netherlands
- Marinus J C Eijkemans
  - Julius Center for Health Sciences and Primary Care, University Medical Centre Utrecht, Utrecht, The Netherlands
- Georgios V Gkoutos
  - Health Data Research UK Midlands, University Hospitals Birmingham NHS Foundation Trust, Birmingham, UK
  - Institute of Cancer and Genomic Sciences, University of Birmingham, Vincent Drive, B15 2TT Birmingham, UK
- Dipak Kotecha
  - Institute of Cardiovascular Sciences, University of Birmingham, Vincent Drive, B15 2TT Birmingham, UK
  - Health Data Research UK Midlands, University Hospitals Birmingham NHS Foundation Trust, Birmingham, UK
  - Department of Cardiology, Division Heart and Lungs, University Medical Center Utrecht, Utrecht University, Utrecht, The Netherlands
7. Su C, Ma J, Zhou Y, Li P, Tang Z. Res-DUnet: A small-region attentioned model for cardiac MRI-based right ventricular segmentation. Appl Soft Comput 2023. [DOI: 10.1016/j.asoc.2023.110060]
8. The Use of Digital Coronary Phantoms for the Validation of Arterial Geometry Reconstruction and Computation of Virtual FFR. Fluids 2022. [DOI: 10.3390/fluids7060201]
Abstract
We present computational fluid dynamics (CFD) results of virtual fractional flow reserve (vFFR) calculations performed on reconstructed arterial geometries derived from a digital phantom (DP). The DP provides a convenient and parsimonious description of the main vessels of the left and right coronary arterial trees which, crucially, is CFD-compatible. Using our DP, we investigate the reconstruction error in what we deem the most relevant way: by evaluating the change in the computed value of vFFR that results from varying, within representative clinical bounds, the selection of the virtual angiogram pair (defined by its viewing angles) used to segment the artery, together with the eccentricity and severity of the stenosis, and thereby the luminal boundary of the CFD simulation. The DP is used to quantify reconstruction and computed haemodynamic error within the VIRTUheart™ software suite; however, our method and the associated digital phantom tool are readily transferable to equivalent, clinically oriented workflows. While we conclude that error within the VIRTUheart™ workflow is suitably controlled, the principal outcomes of the work reported here are the demonstration and provision of a practical tool, along with an exemplar methodology, for evaluating error in a coronary segmentation process.
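vFFR itself is the familiar pressure ratio, evaluated from the simulated pressure field rather than a pressure wire, and the study's error measure amounts to tracking how that ratio changes across alternative reconstructions of the same vessel. A sketch under those assumptions (the paper's exact error metric may differ):

```python
import numpy as np

def virtual_ffr(p_distal, p_aortic):
    """vFFR: mean CFD pressure distal to the stenosis over mean aortic
    (proximal) pressure. Values below ~0.80 are commonly read as
    haemodynamically significant."""
    return p_distal / p_aortic

def reconstruction_error_in_ffr(vffr_by_view_pair):
    """Spread of computed vFFR across reconstructions of the same vessel
    from different angiogram view pairs: one simple way to express the
    reconstruction-induced error the phantom study quantifies."""
    v = np.asarray(vffr_by_view_pair, float)
    return v.max() - v.min()
```

With a digital phantom the ground-truth geometry is known exactly, so this spread isolates segmentation/reconstruction error from physiological variability.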
9. Huang K, Xu L, Zhu Y, Meng P. A U-snake based deep learning network for right ventricle segmentation. Med Phys 2022; 49:3900-3913. [PMID: 35302251] [DOI: 10.1002/mp.15613]
Abstract
PURPOSE: Ventricular segmentation is of great importance for monitoring heart condition. However, manual segmentation is time-consuming, cumbersome, and subjective, and many segmentation methods perform poorly because of the complex structure and uncertain shape of the right ventricle, so we use deep learning to achieve automatic segmentation. METHODS: This paper proposes a method named the U-Snake network, based on improvements to Deep Snake combined with a level set, to segment the right ventricle in MR images. U-Snake aggregates the information of each receptive field, learned by circular convolutions with multiple different dilation rates. We also add a Dice loss function and transfer the result of U-Snake to the level set, further enhancing small-object segmentation. Our method is tested on the test1 and test2 datasets of the Right Ventricular Segmentation Challenge (RVSC), which shows its effectiveness. RESULTS: The experiments show good results on the RVSC. The highest segmentation accuracy on right ventricular test set 2 reached a Dice coefficient of 0.911, and the segmentation speed reached 5 fps. CONCLUSIONS: Our method, a new deep learning network named U-Snake, surpasses previous ventricular segmentation methods based on mathematical theory as well as other classical deep learning methods such as Residual U-Net, Inception CNN, and Dilated CNN. However, it can only be used as an auxiliary tool rather than a replacement for human work.
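The Dice loss mentioned in the methods is a standard overlap-based training objective; a minimal soft-Dice sketch (U-Snake's full loss also includes its contour-deformation terms, which are not shown here):

```python
import numpy as np

def dice_loss(pred, target, eps=1e-6):
    """Soft Dice loss: 1 - Dice overlap between a predicted probability map
    and a binary target mask. Minimizing it maximizes overlap, which helps
    with small structures such as the right ventricle."""
    pred, target = np.asarray(pred, float), np.asarray(target, float)
    inter = (pred * target).sum()
    return 1.0 - (2.0 * inter + eps) / (pred.sum() + target.sum() + eps)
```

Unlike per-pixel cross-entropy, the loss is computed over the whole mask, so a small foreground region still contributes a full-strength gradient.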
Affiliation(s)
- Kaiwen Huang
  - The School of Optical-Electrical & Computer Engineering, University of Shanghai for Science & Technology, Shanghai, 200093, China
- Lei Xu
  - The School of Optical-Electrical & Computer Engineering, University of Shanghai for Science & Technology, Shanghai, 200093, China
- Yingliang Zhu
  - The School of Optical-Electrical & Computer Engineering, University of Shanghai for Science & Technology, Shanghai, 200093, China
- Penghui Meng
  - The School of Optical-Electrical & Computer Engineering, University of Shanghai for Science & Technology, Shanghai, 200093, China
10. Visual and structural feature combination in an interactive machine learning system for medical image segmentation. Machine Learning with Applications 2022. [DOI: 10.1016/j.mlwa.2022.100294]
11. Ding W, Li L, Zhuang X, Huang L. Cross-Modality Multi-Atlas Segmentation Using Deep Neural Networks. IEEE J Biomed Health Inform 2022; 26:3104-3115. [PMID: 35130178] [DOI: 10.1109/jbhi.2022.3149114]
Abstract
Multi-atlas segmentation (MAS) is a promising framework for medical image segmentation. Generally, MAS methods register multiple atlases, i.e., medical images with corresponding labels, to a target image; the transformed atlas labels can then be combined to generate the target segmentation via label fusion schemes. Many conventional MAS methods employ atlases from the same modality as the target image. However, in many clinical applications the number of atlases with the same modality may be limited, or such atlases may be missing altogether. Moreover, conventional MAS methods suffer from the computational burden of the registration and label fusion procedures. In this work, we design a novel cross-modality MAS framework that uses atlases available in one modality to segment a target image from another modality. To boost the computational efficiency of the framework, both image registration and label fusion are achieved by well-designed deep neural networks. For atlas-to-target image registration, we propose a bi-directional registration network (BiRegNet), which can efficiently align images from different modalities. For label fusion, we design a similarity estimation network (SimNet), which estimates the fusion weight of each atlas by measuring its similarity to the target image. SimNet can learn multi-scale information for similarity estimation to improve label fusion performance. The proposed framework was evaluated on left ventricle and liver segmentation tasks using the MM-WHS and CHAOS datasets, respectively. The results show that the framework is effective for cross-modality MAS in both registration and label fusion. The code will be released publicly on https://github.com/NanYoMy/cmmas once the manuscript is accepted.
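The label fusion step described here reduces to a similarity-weighted vote over the registered atlas labels. A sketch using scalar per-atlas weights as stand-ins for SimNet's estimates (the actual network predicts learned, possibly spatially varying weights):

```python
import numpy as np

def fuse_labels(atlas_labels, similarity_weights):
    """Weighted label fusion: combine K registered binary atlas label maps
    using per-atlas similarity weights, then threshold the weighted vote.
    Shapes: atlas_labels (K, ...), similarity_weights (K,)."""
    labels = np.asarray(atlas_labels, float)
    w = np.asarray(similarity_weights, float)
    w = w / w.sum()                          # normalize weights to sum to 1
    vote = np.tensordot(w, labels, axes=1)   # weighted average per voxel
    return (vote >= 0.5).astype(np.uint8)
```

Atlases judged more similar to the target thus dominate the final segmentation, which is the intuition behind learning the weights with SimNet.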
12. Ciyamala Kushbu S, Inbamalar TM. Making Semi-Automatic Segmentation Method to be Automatic Using Deep Learning for Biventricular Segmentation. J Med Imaging Health Inform 2022. [DOI: 10.1166/jmihi.2022.3927]
Abstract
Ventricular Segmentation or Delineation of Cardiac Magnetic Resonance Imaging (CMRI) is significant in obtaining the cardiac contractile function, which in turn is taken as input for diagnosing Cardio Vascular Diseases (CVD). Many automatic and semi-automatic methods were evolved to
meet the constraints of diagnosing CVDs. Among these, semi-automatic methods require user intervention to delineate the ventricles, which is time-consuming and introduces intra- and inter-observer variability, as with manual delineation. Most researchers therefore recommend automatic methods to address this problem. We propose a Saliency-based Active Contour U-Net (SACU-Net) for automatic bi-ventricular segmentation, which is found to surpass existing state-of-the-art methods in closeness to the gold standard. Our algorithm uses three schemes, namely 1. a saliency detection scheme for region of interest (ROI) localization, to concentrate only on the object of interest; 2. a dropout-embedded U-Net for initial contour evolution, which performs initial segmentation; and 3. a local-global-based regional active contour (LGRAC) to fine-tune the result and avoid leaking and merging of ventricles during delineation. We used three datasets, namely the Automated Cardiac Diagnosis Challenge (ACDC) of MICCAI 2017, the Right Ventricular Segmentation Challenge (RVSC) of MICCAI 2012, and the Sunnybrook (SB) dataset of MICCAI 2009, to test the adaptability of our algorithm across different scanner resolutions and protocols. 100 and 50 CMRI images of ACDC were used for training and testing, respectively, yielding average Dice coefficient (DC) metrics of 0.963, 0.934, and 0.948 for the left ventricular cavity (LVC), left ventricular myocardium (LVM), and right ventricular cavity (RVC), respectively. 32 and 16 CMRI images of RVSC were used for training and testing, respectively, yielding an average DC metric of 0.95 for the RVC. 30 and 15 CMRI images of SB were used for training and testing, respectively, yielding average DC metrics of 0.96 and 0.97 for the LVC and LVM, respectively. Hausdorff distance (HD) metrics were also calculated to measure how closely the delineated ventricles approach the gold standard. These results show that the proposed SACU-Net segments the ventricles of CMRI more robustly than previous methods.
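As an aside on the metrics reported above, the Dice coefficient (DC) and Hausdorff distance (HD) between a predicted mask and a gold-standard mask can be computed as follows. This is a minimal NumPy sketch for illustration, not the authors' implementation:

```python
import numpy as np

def dice_coefficient(pred: np.ndarray, truth: np.ndarray) -> float:
    """Dice coefficient DC = 2|A intersect B| / (|A| + |B|) of two binary masks."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    denom = pred.sum() + truth.sum()
    if denom == 0:
        return 1.0  # both masks empty: treat as perfect agreement
    return 2.0 * np.logical_and(pred, truth).sum() / denom

def hausdorff_distance(a: np.ndarray, b: np.ndarray) -> float:
    """Symmetric Hausdorff distance between foreground points of two masks."""
    pa, pb = np.argwhere(a), np.argwhere(b)
    # brute-force pairwise Euclidean distances between the two point sets
    d = np.linalg.norm(pa[:, None, :] - pb[None, :, :], axis=-1)
    return float(max(d.min(axis=1).max(), d.min(axis=0).max()))
```

The brute-force pairwise distance in `hausdorff_distance` is fine for small masks; distance-transform-based routines scale better on full 3D volumes.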
Affiliation(s)
- S. Ciyamala Kushbu
- Department of Information and Communication Engineering, Anna University, Chennai 25, Tamilnadu, India
- T. M. Inbamalar
- Department of Electronics and Communication Engineering, R.M.K. Engineering College, Tiruvallur 601206, Tamilnadu, India
13
Li J, Udupa JK, Odhner D, Tong Y, Torigian DA. SOMA: Subject-, object-, and modality-adapted precision atlas approach for automatic anatomy recognition and delineation in medical images. Med Phys 2021; 48:7806-7825. [PMID: 34668207 PMCID: PMC8678400 DOI: 10.1002/mp.15308] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/15/2021] [Revised: 09/12/2021] [Accepted: 09/29/2021] [Indexed: 11/06/2022] Open
Abstract
PURPOSE In the multi-atlas segmentation (MAS) method, an atlas set large enough to cover the complete spectrum of population patterns of the target object will benefit segmentation quality. However, the difficulty of obtaining and generating such a large set of atlases and the computational burden required in the segmentation procedure make this approach impractical. In this paper, we propose a method called SOMA to select subject-, object-, and modality-adapted precision atlases for automatic anatomy recognition in medical images with pathology, following the idea that different regions of the target object in a novel image can be recognized by different atlases with regionally best similarity, so that effective atlases need be neither globally similar to the target subject nor similar to the target object overall. METHODS The SOMA method consists of three main components: atlas building, object recognition, and object delineation. Considering the computational complexity, we utilize an all-to-template strategy to align all images to the same image space, belonging to a root image determined by a minimum spanning tree (MST) strategy over a subset of radiologically near-normal images. The object recognition process is composed of two stages: rough recognition and refined recognition. In rough recognition, subimage matching is conducted between the test image and each image of the whole atlas set, and only the atlas corresponding to the best-matched subimage contributes to the recognition map regionally. The frequency of best matches for each atlas is recorded by a counter, and the atlases with the highest frequencies are selected as the precision atlases. In refined recognition, only the precision atlases are examined, and the subimage matching is conducted with a nonlocal search to further increase the accuracy of boundary matching.
Delineation is based on a U-net-based deep learning network, where the original gray scale image together with the fuzzy map from refined recognition compose a two-channel input to the network, and the output is a segmentation map of the target object. RESULTS Experiments are conducted on computed tomography (CT) images with different qualities in two body regions - head and neck (H&N) and thorax, from 298 subjects with nine objects and 241 subjects with six objects, respectively. Most objects achieve a localization error within two voxels after refined recognition, with marked improvement in localization accuracy from rough to refined recognition of 0.6-3 mm in H&N and 0.8-4.9 mm in thorax, and also in delineation accuracy (Dice coefficient) from refined recognition to delineation of 0.01-0.11 in H&N and 0.01-0.18 in thorax. CONCLUSIONS The SOMA method shows high accuracy and robustness in anatomy recognition and delineation. The improvements from rough to refined recognition and further to delineation, as well as immunity of recognition accuracy to varying image and object qualities, demonstrate the core principles of SOMA where segmentation accuracy increases with precision atlases and gradually refined object matching.
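The rough-recognition voting described above — each region of the test image votes for its best-matching atlas, and the most frequently chosen atlases become the precision atlases — can be sketched as follows. This is an illustrative reconstruction under an assumed input (a precomputed atlas-by-region similarity matrix), not the SOMA code:

```python
import numpy as np

def select_precision_atlases(similarity: np.ndarray, k: int) -> np.ndarray:
    """similarity[i, r]: match score of atlas i on region r of the test image.
    For each region, only the best-matching atlas scores a vote; the k atlases
    with the most votes are returned as the precision atlases."""
    votes = np.bincount(similarity.argmax(axis=0), minlength=similarity.shape[0])
    return np.argsort(votes)[::-1][:k]
```

In the paper the "regions" are matched subimages and the scores come from image similarity; here the matrix simply stands in for that matching step.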
Affiliation(s)
- Jieyu Li
- Institute of Image Processing and Pattern Recognition, Department of Automation, Shanghai Jiao Tong University, Shanghai, China
- Medical Image Processing Group, Department of Radiology, University of Pennsylvania, Philadelphia, Pennsylvania, USA
- Jayaram K. Udupa
- Medical Image Processing Group, Department of Radiology, University of Pennsylvania, Philadelphia, Pennsylvania, USA
- Dewey Odhner
- Medical Image Processing Group, Department of Radiology, University of Pennsylvania, Philadelphia, Pennsylvania, USA
- Yubing Tong
- Medical Image Processing Group, Department of Radiology, University of Pennsylvania, Philadelphia, Pennsylvania, USA
- Drew A. Torigian
- Medical Image Processing Group, Department of Radiology, University of Pennsylvania, Philadelphia, Pennsylvania, USA
14
Anatomical knowledge based level set segmentation of cardiac ventricles from MRI. Magn Reson Imaging 2021; 86:135-148. [PMID: 34710558 DOI: 10.1016/j.mri.2021.10.005] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/15/2021] [Revised: 10/02/2021] [Accepted: 10/10/2021] [Indexed: 11/23/2022]
Abstract
This paper presents a novel level set framework for segmentation of the cardiac left ventricle (LV) and right ventricle (RV) from magnetic resonance images based on anatomical structures of the heart. We first propose a level set approach to recover the endocardium and epicardium of the LV by using a bi-layer level set (BILLS) formulation, in which the endocardium and epicardium are represented by the 0-level set and k-level set of a level set function. Furthermore, the recovery of the LV endocardium and epicardium is achieved by a level set evolution process, called convexity preserving bi-layer level set (CP-BILLS). During the CP-BILLS evolution, the 0-level set and k-level set simultaneously evolve and move toward the true endocardium and epicardium under the guidance of image information, as well as the impact of the convexity preserving mechanism. To eliminate the manual selection of the k-level, we develop an algorithm for automatic selection of an optimal k-level. As a result, the obtained endocardial and epicardial contours are convex and consistent with the anatomy of cardiac ventricles. For segmentation of the whole ventricle, we extend this method to the segmentation of the RV and the myocardium of both ventricles by using a convex shape decomposition (CSD) structure of cardiac ventricles based on anatomical knowledge. Experimental results demonstrate promising performance of our method. Compared with some traditional methods, our method exhibits superior performance in terms of segmentation accuracy and algorithm stability. Our method is comparable with the state-of-the-art deep-learning-based method in terms of segmentation accuracy and algorithm stability, but it requires neither training nor manual segmentation of training data.
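The bi-layer idea above — one level set function whose 0-level and k-level sets play the roles of endocardium and epicardium — can be illustrated with a toy signed-distance function. A sketch for intuition only, not the paper's CP-BILLS evolution:

```python
import numpy as np

def bilayer_masks(phi: np.ndarray, k: float):
    """One level set function phi encodes two nested contours: {phi = 0}
    (endocardium) and {phi = k} (epicardium). Returns the cavity (phi < 0)
    and the myocardial ring between the two contours (0 <= phi < k)."""
    cavity = phi < 0
    myocardium = (phi >= 0) & (phi < k)
    return cavity, myocardium

# Toy phi: signed distance to a circle of radius 10 on a 64x64 grid, so
# {phi = 0} is that circle and {phi = k} a concentric circle of radius 10 + k.
y, x = np.mgrid[0:64, 0:64]
phi = np.hypot(x - 32, y - 32) - 10.0
cavity, myo = bilayer_masks(phi, k=5.0)
```

Because both contours come from a single function, they evolve together and can never cross, which is what keeps the two boundaries nested.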
15
Kaur J, Kaur P. Outbreak COVID-19 in Medical Image Processing Using Deep Learning: A State-of-the-Art Review. ARCHIVES OF COMPUTATIONAL METHODS IN ENGINEERING : STATE OF THE ART REVIEWS 2021; 29:2351-2382. [PMID: 34690493 PMCID: PMC8525064 DOI: 10.1007/s11831-021-09667-7] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Figures] [Subscribe] [Scholar Register] [Received: 03/11/2021] [Accepted: 10/01/2021] [Indexed: 06/13/2023]
Abstract
Since December 2019, the outbreak of coronavirus disease (COVID-19) has caused many deaths and affected every aspect of individual health. COVID-19 has been designated a pandemic by the World Health Organization. The circumstances placed serious strain on every country worldwide, particularly on health systems and their time-critical responses. The number of positive COVID-19 cases increased globally every day. The quantity of available diagnostic kits is limited because of complications in detecting the presence of the illness. Fast and correct diagnosis of COVID-19 is a timely requirement for preventing and controlling the pandemic through suitable isolation and medical treatment. The significance of the present work is to outline deep learning techniques with medical imaging, covering outbreak prediction, indicators of virus transmission, detection and treatment aspects, and vaccine availability with remedy research. Abundant medical imaging resources, such as X-rays, computed tomography scans, and magnetic resonance imaging, allow deep learning to provide high-quality methods to fight the COVID-19 pandemic. The review presents a comprehensive view of deep learning and its related applications in healthcare over the past decade. Finally, some issues and challenges in controlling the health crisis and outbreaks are introduced. Progress in technology has contributed to improving individuals' lives. The problems faced by radiologists during medical imaging, and deep learning approaches for diagnosing COVID-19 infections, are also discussed.
Affiliation(s)
- Jaspreet Kaur
- Department of Computer Engineering & Technology, Guru Nanak Dev University, Amritsar, Punjab, India
- Prabhpreet Kaur
- Department of Computer Engineering & Technology, Guru Nanak Dev University, Amritsar, Punjab, India
16
Mamalakis M, Garg P, Nelson T, Lee J, Wild JM, Clayton RH. MA-SOCRATIS: An automatic pipeline for robust segmentation of the left ventricle and scar. Comput Med Imaging Graph 2021; 93:101982. [PMID: 34481237 DOI: 10.1016/j.compmedimag.2021.101982] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/02/2021] [Revised: 08/19/2021] [Accepted: 08/23/2021] [Indexed: 11/18/2022]
Abstract
Multi-atlas segmentation of cardiac regions and total infarct scar (MA-SOCRATIS) is an unsupervised automatic pipeline to segment left ventricular myocardium and scar from late gadolinium enhanced MR images (LGE-MRI) of the heart. We implement two different pipelines for myocardial and scar segmentation from short-axis LGE-MRI. Myocardial segmentation has two steps: initial segmentation and re-estimation. The initial segmentation step makes a first estimate of myocardium boundaries by using multi-atlas segmentation techniques. The re-estimation step refines the myocardial segmentation by a combination of k-means clustering and a geometric median shape variation technique. An active contour technique determines the unhealthy and healthy myocardial wall. The scar segmentation pipeline is a combination of a Rician-Gaussian mixture model and full width at half maximum (FWHM) thresholding, to determine the intensity pixels in scar regions. Following this step, a watershed method with an automatic seed-points framework segments the final scar region. MA-SOCRATIS was evaluated using two different datasets. In both datasets, ground truths were based on manual segmentation of short-axis images from LGE-MRI scans. The first dataset included 40 patients from the MS-CMRSeg 2019 challenge dataset (STACOM at MICCAI 2019). The second is a collection of 20 patients with scar regions that are challenging to segment. MA-SOCRATIS achieved robust and accurate performance in automatic segmentation of myocardium and scar regions in both cohorts, without the need for training or tuning, compared with state-of-the-art techniques (myocardium segmentation: 81.9% and 70% average Dice value against intra-observer and inter-observer ground truths, respectively; scar segmentation: 70.5% and 70.5% average Dice value against intra-observer and inter-observer ground truths, respectively).
Affiliation(s)
- Michail Mamalakis
- Insigneo Institute for In-Silico Medicine, University of Sheffield, Sheffield, UK; Department of Computer Science, University of Sheffield, Regent Court, Sheffield S1 4DP, UK
- Pankaj Garg
- Department of Cardiology, Sheffield Teaching Hospitals NHS Trust, Sheffield S5 7AU, UK
- Tom Nelson
- Department of Cardiology, Sheffield Teaching Hospitals NHS Trust, Sheffield S5 7AU, UK
- Justin Lee
- Department of Cardiology, Sheffield Teaching Hospitals NHS Trust, Sheffield S5 7AU, UK
- Jim M Wild
- Insigneo Institute for In-Silico Medicine, University of Sheffield, Sheffield, UK; Polaris, Imaging Sciences, Department of Infection, Immunity and Cardiovascular Disease, University of Sheffield, Sheffield, UK
- Richard H Clayton
- Insigneo Institute for In-Silico Medicine, University of Sheffield, Sheffield, UK; Department of Computer Science, University of Sheffield, Regent Court, Sheffield S1 4DP, UK
17
Du X, Xu X, Liu H, Li S. TSU-net: Two-stage multi-scale cascade and multi-field fusion U-net for right ventricular segmentation. Comput Med Imaging Graph 2021; 93:101971. [PMID: 34482121 DOI: 10.1016/j.compmedimag.2021.101971] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/18/2021] [Revised: 07/12/2021] [Accepted: 08/06/2021] [Indexed: 01/21/2023]
Abstract
Accurate segmentation of the right ventricle from cardiac magnetic resonance images (MRI) is a critical step in cardiac function analysis and disease diagnosis. It is still an open problem due to difficulties such as the large variety of object sizes and ill-defined borders. In this paper, we present a TSU-net network that captures deeper features and targets of different sizes in the right ventricle with multi-scale cascade and multi-field fusion. TSU-net mainly contains two major components: the Dilated-Convolution Block (DB) and the Multi-Layer-Pool Block (MB). DB extracts and aggregates multi-scale features of the right ventricle. MB mainly relies on multiple effective fields of view to detect objects of different sizes and fill in boundary features. Different from previous networks, we use DB and MB to replace the convolution layers in the encoder, so that we can gather multi-scale information on the right ventricle, detect targets of different sizes, and fill in boundary information in each encoding layer. In addition, in the decoder, we use DB to replace the convolution layer, so that we can aggregate the multi-scale features of the right ventricle in each decoding layer. Furthermore, the two-stage U-net structure is used to further improve the utilization of DB and MB through a two-layer encoding/decoding structure. Our method is validated on RVSC, a public right ventricle dataset. The results demonstrate that TSU-net achieved an average Dice coefficient of 0.86 on the endocardium and 0.90 on the epicardium, thereby outperforming other models. It effectively assists doctors in diagnosing disease and promotes the development of medical image analysis. In addition, we also provide an intuitive explanation of our network, which fully explains MB's and TSU-net's ability to detect targets of different sizes and fill in boundary features.
Affiliation(s)
- Xiuquan Du
- Key Laboratory of Intelligent Computing and Signal Processing, Ministry of Education, Anhui University, Hefei, Anhui, China; School of Computer Science and Technology, Anhui University, Hefei, Anhui, China
- Xiaofei Xu
- School of Computer Science and Technology, Anhui University, Hefei, Anhui, China
- Heng Liu
- Department of Gastroenterology, The First Affiliated Hospital of Anhui Medical University, Hefei, Anhui, China
- Shuo Li
- Department of Medical Imaging, Western University, London, ON, Canada
18
Tang Y, Zheng Y, Chen X, Wang W, Guo Q, Shu J, Wu J, Su S. Identifying Periampullary Regions in MRI Images Using Deep Learning. Front Oncol 2021; 11:674579. [PMID: 34123843 PMCID: PMC8193851 DOI: 10.3389/fonc.2021.674579] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/02/2021] [Accepted: 04/20/2021] [Indexed: 12/15/2022] Open
Abstract
Background Development and validation of a deep learning method to automatically segment the peri-ampullary (PA) region in magnetic resonance imaging (MRI) images. Methods A group of patients with or without periampullary carcinoma (PAC) was included. The PA regions were manually annotated in MRI images by experts. Patients were randomly divided into one training set, one validation set, and one test set. Deep learning methods were developed to automatically segment the PA region in MRI images. The segmentation performance of the methods was compared in the validation set. The model with the highest intersection over union (IoU) was evaluated in the test set. Results The deep learning algorithm achieved optimal accuracies in the segmentation of the PA regions in both T1 and T2 MRI images. The value of the IoU was 0.68, 0.68, and 0.64 for T1, T2, and the combination of T1 and T2 images, respectively. Conclusions The deep learning algorithm is promising, with accuracy concordant with manual human assessment in segmentation of the PA region in MRI images. This automated non-invasive method helps clinicians to identify and locate the PA region using preoperative MRI scanning.
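The intersection over union (IoU, or Jaccard index) used above to compare segmentation models is computed directly from binary masks; a minimal sketch:

```python
import numpy as np

def iou(pred: np.ndarray, truth: np.ndarray) -> float:
    """Intersection over union (Jaccard index) between two binary masks."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    union = np.logical_or(pred, truth).sum()
    if union == 0:
        return 1.0  # both masks empty: treat as perfect agreement
    return np.logical_and(pred, truth).sum() / union
```

IoU is stricter than the Dice coefficient (IoU = DC / (2 - DC)), so an IoU of 0.68 corresponds to a Dice value of about 0.81.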
Affiliation(s)
- Yong Tang
- School of Computer Science and Engineering, University of Electronic Science and Technology of China, Chengdu, China
- Yingjun Zheng
- Department of General Surgery (Hepatobiliary Surgery), The Affiliated Hospital of Southwest Medical University, Luzhou, China
- Xinpei Chen
- Department of Hepatobiliary Surgery, Deyang People's Hospital, Deyang, China
- Weijia Wang
- School of Information and Software Engineering, University of Electronic Science and Technology of China, Chengdu, China
- Qingxi Guo
- Department of Pathology, The Affiliated Hospital of Southwest Medical University, Luzhou, China
- Jian Shu
- Department of Radiology, The Affiliated Hospital of Southwest Medical University, Luzhou, China
- Jiali Wu
- Department of Anesthesiology, The Affiliated Hospital of Southwest Medical University, Luzhou, China
- Song Su
- Department of General Surgery (Hepatobiliary Surgery), The Affiliated Hospital of Southwest Medical University, Luzhou, China
19
Tang Y, Gao R, Lee HH, Han S, Chen Y, Gao D, Nath V, Bermudez C, Savona MR, Abramson RG, Bao S, Lyu I, Huo Y, Landman BA. High-resolution 3D abdominal segmentation with random patch network fusion. Med Image Anal 2021; 69:101894. [PMID: 33421919 PMCID: PMC9087814 DOI: 10.1016/j.media.2020.101894] [Citation(s) in RCA: 21] [Impact Index Per Article: 5.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/20/2020] [Revised: 11/04/2020] [Accepted: 11/05/2020] [Indexed: 02/07/2023]
Abstract
Deep learning for three-dimensional (3D) abdominal organ segmentation on high-resolution computed tomography (CT) is a challenging topic, in part due to the limited memory provided by graphics processing units (GPUs) and the large number of parameters in 3D fully convolutional networks (FCNs). Two prevalent strategies, lower resolution with a wider field of view and higher resolution with a limited field of view, have been explored with varying degrees of success. In this paper, we propose a novel patch-based network with random spatial initialization and statistical fusion on overlapping regions of interest (ROIs). We evaluate the proposed approach using three datasets consisting of 260 subjects with varying numbers of manual labels. Compared with the canonical "coarse-to-fine" baseline methods, the proposed method increases the performance on multi-organ segmentation from 0.799 to 0.856 in terms of mean DSC score (p-value < 0.01 with paired t-test). The effect of different numbers of patches is evaluated by increasing the depth of coverage (expected number of patches evaluated per voxel). In addition, our method outperforms other state-of-the-art methods in abdominal organ segmentation. In conclusion, the approach provides a memory-conservative framework to enable 3D segmentation on high-resolution CT. The approach is compatible with many base network structures, without substantially increasing the complexity during inference. Given a CT scan at high resolution, a low-resolution section (left panel) is trained with multi-channel segmentation. The low-resolution part contains down-sampling and normalization in order to preserve the complete spatial information. Interpolation and random patch sampling (mid panel) are employed to collect patches. The high-dimensional probability maps are acquired (right panel) from integration of all patches over fields of view.
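The random patch sampling with statistical (mean) fusion on overlapping ROIs described above can be sketched as below; `predict` stands in for the trained patch network, and all names and defaults are illustrative, not the authors' API:

```python
import numpy as np

def fuse_random_patches(volume, predict, patch=(32, 32, 32), n_patches=64, rng=None):
    """Evaluate `predict` on randomly placed patches and mean-fuse the
    overlapping probabilities; `counts` records per-voxel coverage depth."""
    rng = rng or np.random.default_rng(0)
    prob = np.zeros(volume.shape, dtype=float)
    counts = np.zeros(volume.shape, dtype=float)
    for _ in range(n_patches):
        # random corner so the patch lies fully inside the volume
        corner = [rng.integers(0, s - p + 1) for s, p in zip(volume.shape, patch)]
        sl = tuple(slice(c, c + p) for c, p in zip(corner, patch))
        prob[sl] += predict(volume[sl])
        counts[sl] += 1
    fused = np.divide(prob, counts, out=np.zeros_like(prob), where=counts > 0)
    return fused, counts
```

Raising `n_patches` increases the expected coverage depth per voxel, which is the knob the paper evaluates; only each patch (not the whole volume) ever needs to fit in GPU memory.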
Affiliation(s)
- Yucheng Tang
- Dept. of Electrical Engineering and Computer Science, Vanderbilt University, Nashville, TN 37235, USA
- Riqiang Gao
- Dept. of Electrical Engineering and Computer Science, Vanderbilt University, Nashville, TN 37235, USA
- Ho Hin Lee
- Dept. of Electrical Engineering and Computer Science, Vanderbilt University, Nashville, TN 37235, USA
- Dashan Gao
- 12 Sigma Technologies, San Diego, CA 92130, USA
- Vishwesh Nath
- Dept. of Electrical Engineering and Computer Science, Vanderbilt University, Nashville, TN 37235, USA
- Camilo Bermudez
- Dept. of Biomedical Engineering, Vanderbilt University, Nashville, TN 37235, USA
- Michael R Savona
- Radiology, Vanderbilt University Medical Center, Nashville, TN 37235, USA
- Richard G Abramson
- Radiology, Vanderbilt University Medical Center, Nashville, TN 37235, USA
- Shunxing Bao
- Dept. of Electrical Engineering and Computer Science, Vanderbilt University, Nashville, TN 37235, USA
- Ilwoo Lyu
- Dept. of Electrical Engineering and Computer Science, Vanderbilt University, Nashville, TN 37235, USA
- Yuankai Huo
- Dept. of Electrical Engineering and Computer Science, Vanderbilt University, Nashville, TN 37235, USA
- Bennett A Landman
- Dept. of Electrical Engineering and Computer Science, Vanderbilt University, Nashville, TN 37235, USA; Dept. of Biomedical Engineering, Vanderbilt University, Nashville, TN 37235, USA; Radiology, Vanderbilt University Medical Center, Nashville, TN 37235, USA
20
Wang Y, Zhang J, Cui H, Zhang Y, Xia Y. View adaptive learning for pancreas segmentation. Biomed Signal Process Control 2021. [DOI: 10.1016/j.bspc.2020.102347] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/22/2022]
21
Ding Z, Niethammer M. VOTENET++: REGISTRATION REFINEMENT FOR MULTI-ATLAS SEGMENTATION. PROCEEDINGS. IEEE INTERNATIONAL SYMPOSIUM ON BIOMEDICAL IMAGING 2021; 2021:275-279. [PMID: 39247161 PMCID: PMC11378331 DOI: 10.1109/isbi48211.2021.9434031] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 09/10/2024]
Abstract
Multi-atlas segmentation (MAS) is a popular image segmentation technique for medical images. In this work, we improve the performance of MAS by correcting registration errors before label fusion. Specifically, we use a volumetric displacement field to refine registrations based on image anatomical appearance and predicted labels. We show the influence of the initial spatial alignment as well as the beneficial effect of using label information for MAS performance. Experiments demonstrate that the proposed refinement approach improves MAS performance on a 3D magnetic resonance dataset of the knee.
Affiliation(s)
- Zhipeng Ding
- Department of Computer Science, UNC Chapel Hill, USA
22
Shi X, Li C. Convexity preserving level set for left ventricle segmentation. Magn Reson Imaging 2021; 78:109-118. [PMID: 33592247 DOI: 10.1016/j.mri.2021.02.003] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/16/2020] [Revised: 01/14/2021] [Accepted: 02/03/2021] [Indexed: 11/28/2022]
Abstract
In clinical applications of cardiac left ventricle (LV) segmentation, the segmented LV is desired to include the cavity, trabeculae, and papillary muscles, which form a convex shape. However, the intensities of trabeculae and papillary muscles are similar to myocardium. Consequently, segmentation algorithms may easily misclassify trabeculae and papillary muscles as myocardium. In this paper, we propose a level set method with a convexity preserving mechanism to ensure the convexity of the segmented LV. In the proposed level set method, the curvature of the level set contours is used to control their convexity, such that the level set contour is finally deformed as a convex shape. The experimental results and the comparison with other level set methods show the advantage of our method in terms of segmentation accuracy. Compared with the state-of-the-art methods using deep-learning, our method is able to achieve comparable segmentation accuracy without the need for training, while the deep-learning based method requires a large set of training data and high-quality manual segmentation. Therefore, our method can be conveniently used in situation where training data and their manual segmentation are not available.
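The convexity-preserving mechanism above rests on the contour curvature kappa = div(grad(phi)/|grad(phi)|): a 2D region is convex when kappa is non-negative along its boundary contour. A finite-difference sketch of this curvature (for intuition only, not the paper's level set evolution scheme):

```python
import numpy as np

def level_set_curvature(phi: np.ndarray, eps: float = 1e-8) -> np.ndarray:
    """Curvature kappa = div(grad(phi) / |grad(phi)|) of the level set contours
    of phi, via central finite differences. Non-negative kappa along the
    0-level contour means the enclosed region is convex."""
    gy, gx = np.gradient(phi)                 # gradient components (rows, cols)
    norm = np.hypot(gx, gy) + eps             # avoid division by zero
    nyy, _ = np.gradient(gy / norm)           # d(n_y)/dy
    _, nxx = np.gradient(gx / norm)           # d(n_x)/dx
    return nxx + nyy
```

For a disk of radius r the analytic curvature of the boundary is 1/r, so the sketch can be sanity-checked against a signed-distance function of a circle.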
Affiliation(s)
- Xue Shi
- University of Electronic Science and Technology of China, Chengdu 611731, China
- Chunming Li
- University of Electronic Science and Technology of China, Chengdu 611731, China
23
Luo Y, Xu L, Qi L. A cascaded FC-DenseNet and level set method (FCDL) for fully automatic segmentation of the right ventricle in cardiac MRI. Med Biol Eng Comput 2021; 59:561-574. [PMID: 33559862 DOI: 10.1007/s11517-020-02305-7] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/15/2019] [Accepted: 12/24/2020] [Indexed: 10/22/2022]
Abstract
Accurate segmentation of the right ventricle (RV) from cardiac magnetic resonance imaging (MRI) images is an essential step in estimating clinical indices such as stroke volume and ejection fraction. Recently, image segmentation methods based on fully convolutional neural networks (FCN) have drawn much attention and shown promising results. In this paper, a new fully automatic RV segmentation method combining the FC-DenseNet and the level set method (FCDL) is proposed. The FC-DenseNet is efficiently trained end-to-end, using RV images and ground truth masks to make a per-pixel semantic inference. As a result, probability images are produced, followed by the level set method, which is responsible for smoothing and converging contours to improve accuracy. Notably, the level set method requires only 4 iterations, owing to the semantic segmentation produced by the FC-DenseNet for the RV. Finally, a multi-object detection algorithm is applied to locate the RV. Experimental results (45 cases in total: 15 for training, 30 for testing) show that the FCDL method outperforms the U-net + level set (UL) and level set methods on the same dataset, and the cardiac functional parameters are computed robustly by the FCDL method. The results validate the FCDL method as an efficient and satisfactory approach to RV segmentation.
Affiliation(s)
- Yang Luo
- College of Medicine and Biological Information Engineering, Northeastern University, Shenyang, 110016, China; Anshan Normal University, Anshan, 114005, Liaoning, China
- Lisheng Xu
- College of Medicine and Biological Information Engineering, Northeastern University, Shenyang, 110016, China; Key Laboratory of Medical Image Computing, Ministry of Education, Shenyang, 110819, China; Neusoft Research of Intelligent Healthcare Technology, Co. Ltd., Shenyang, 110169, China
- Lin Qi
- College of Medicine and Biological Information Engineering, Northeastern University, Shenyang, 110016, China
24
Bhattacharya S, Reddy Maddikunta PK, Pham QV, Gadekallu TR, Krishnan S SR, Chowdhary CL, Alazab M, Jalil Piran M. Deep learning and medical image processing for coronavirus (COVID-19) pandemic: A survey. SUSTAINABLE CITIES AND SOCIETY 2021; 65:102589. [PMID: 33169099 PMCID: PMC7642729 DOI: 10.1016/j.scs.2020.102589] [Citation(s) in RCA: 168] [Impact Index Per Article: 42.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 05/02/2023]
Abstract
Since December 2019, the coronavirus disease (COVID-19) outbreak has caused many deaths and affected all sectors of human life. With the gradual progression of time, COVID-19 was declared an outbreak by the World Health Organization (WHO), which has imposed a heavy burden on almost all countries, especially ones with weaker health systems and slower responses. In the field of healthcare, deep learning has been implemented in many applications, e.g., diabetic retinopathy detection, lung nodule classification, fetal localization, and thyroid diagnosis. Numerous sources of medical images (e.g., X-ray, CT, and MRI) make deep learning a great technique to combat the COVID-19 outbreak. Motivated by this fact, a large number of research works were proposed and developed in the initial months of 2020. In this paper, we first focus on summarizing the state-of-the-art research works related to deep learning applications for COVID-19 medical image processing. Then, we provide an overview of deep learning and its applications to healthcare found in the last decade. Next, three use cases in China, Korea, and Canada are presented to show deep learning applications for COVID-19 medical image processing. Finally, we discuss several challenges and issues related to deep learning implementations for COVID-19 medical image processing, which are expected to drive further studies in controlling the outbreak and the crisis, resulting in smart healthy cities.
Affiliation(s)
- Sweta Bhattacharya
- School of Information Technology and Engineering, Vellore Institute of Technology, Vellore, Tamil Nadu, India
- Quoc-Viet Pham
- Research Institute of Computer, Information and Communication, Pusan National University, Busan 46241, Republic of Korea
- Thippa Reddy Gadekallu
- School of Information Technology and Engineering, Vellore Institute of Technology, Vellore, Tamil Nadu, India
- Siva Rama Krishnan S
- School of Information Technology and Engineering, Vellore Institute of Technology, Vellore, Tamil Nadu, India
- Chiranji Lal Chowdhary
- School of Information Technology and Engineering, Vellore Institute of Technology, Vellore, Tamil Nadu, India
- Mamoun Alazab
- College of Engineering, IT & Environment, Charles Darwin University, Australia
- Md Jalil Piran
- Department of Computer Science and Engineering, Sejong University, 05006, Seoul, Republic of Korea
25
Grossiord E, Risser L, Kanoun S, Aziza R, Chiron H, Ysebaert L, Malgouyres F, Ken S. Semi-automatic segmentation of whole-body images in longitudinal studies. Biomed Phys Eng Express 2021; 7. [DOI: 10.1088/2057-1976/abce16] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/18/2020] [Accepted: 11/26/2020] [Indexed: 11/12/2022]
Abstract
We propose a semi-automatic segmentation pipeline designed for longitudinal studies of structures with large anatomical variability, where expert interactions are required for relevant segmentations. Our pipeline builds on the regularized Fast Marching (rFM) segmentation approach of Risser et al (2018). It consists of transporting baseline multi-label FM seeds onto follow-up images, selecting the relevant ones, and finally performing the rFM approach. It produced more robust and faster results than manual clinical segmentation. Our method was evaluated on 3D synthetic images and patients’ whole-body MRI. It allowed robust and flexible handling of longitudinal organ deformations while considerably reducing manual interventions.
|
26
|
Lee M, Kim J, EY Kim R, Kim HG, Oh SW, Lee MK, Wang SM, Kim NY, Kang DW, Rieu Z, Yong JH, Kim D, Lim HK. Split-Attention U-Net: A Fully Convolutional Network for Robust Multi-Label Segmentation from Brain MRI. Brain Sci 2020; 10:E974. [PMID: 33322640 PMCID: PMC7764312 DOI: 10.3390/brainsci10120974] [Citation(s) in RCA: 31] [Impact Index Per Article: 6.2] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/23/2020] [Revised: 11/30/2020] [Accepted: 12/07/2020] [Indexed: 02/03/2023] Open
Abstract
Multi-label brain segmentation from brain magnetic resonance imaging (MRI) provides valuable structural information for most neurological analyses. Due to the complexity of brain segmentation algorithms, it can delay the delivery of neuroimaging findings. Therefore, we introduce Split-Attention U-Net (SAU-Net), a convolutional neural network with skip pathways and a split-attention module that segments brain MRI scans. The proposed architecture employs split-attention blocks, skip pathways with pyramid levels, and evolving normalization layers. For efficient training, we performed pre-training and fine-tuning with the original and manually modified FreeSurfer labels, respectively. This learning strategy enables the involvement of heterogeneous neuroimaging data in training without the need for many manual annotations. Using nine evaluation datasets, we demonstrated that SAU-Net achieved segmentation accuracy and reliability surpassing those of state-of-the-art methods. We believe that SAU-Net has excellent potential due to its robustness to neuroanatomical variability, which would enable almost instantaneous access to accurate neuroimaging biomarkers, and its swift processing runtime compared to the other methods investigated.
Affiliation(s)
- Minho Lee
- Research Institute, NEUROPHET Inc., Seoul 06247, Korea; (M.L.); (R.E.K.); (Z.R.); (J.H.Y.)
- JeeYoung Kim
- Department of Radiology, Eunpyeong St. Mary’s Hospital, College of Medicine, The Catholic University of Korea, Seoul 03312, Korea; (J.K.); (H.G.K.); (S.W.O.)
- Regina EY Kim
- Research Institute, NEUROPHET Inc., Seoul 06247, Korea; (M.L.); (R.E.K.); (Z.R.); (J.H.Y.)
- Institute of Human Genomic Study, College of Medicine, Korea University, Ansan 15355, Korea
- Department of Psychiatry, University of Iowa, Iowa City, IA 52242, USA
- Hyun Gi Kim
- Department of Radiology, Eunpyeong St. Mary’s Hospital, College of Medicine, The Catholic University of Korea, Seoul 03312, Korea; (J.K.); (H.G.K.); (S.W.O.)
- Se Won Oh
- Department of Radiology, Eunpyeong St. Mary’s Hospital, College of Medicine, The Catholic University of Korea, Seoul 03312, Korea; (J.K.); (H.G.K.); (S.W.O.)
- Min Kyoung Lee
- Department of Radiology, Yeouido St. Mary’s Hospital, College of Medicine, The Catholic University of Korea, Seoul 07345, Korea
- Sheng-Min Wang
- Department of Psychiatry, Yeouido St. Mary’s Hospital, College of Medicine, The Catholic University of Korea, Seoul 07345, Korea; (S.-M.W.); (N.-Y.K.)
- Nak-Young Kim
- Department of Psychiatry, Yeouido St. Mary’s Hospital, College of Medicine, The Catholic University of Korea, Seoul 07345, Korea; (S.-M.W.); (N.-Y.K.)
- Dong Woo Kang
- Department of Psychiatry, Seoul St. Mary’s Hospital, College of Medicine, The Catholic University of Korea, Seoul 06591, Korea
- ZunHyan Rieu
- Research Institute, NEUROPHET Inc., Seoul 06247, Korea; (M.L.); (R.E.K.); (Z.R.); (J.H.Y.)
- Jung Hyun Yong
- Research Institute, NEUROPHET Inc., Seoul 06247, Korea; (M.L.); (R.E.K.); (Z.R.); (J.H.Y.)
- Donghyeon Kim
- Research Institute, NEUROPHET Inc., Seoul 06247, Korea; (M.L.); (R.E.K.); (Z.R.); (J.H.Y.)
- Hyun Kook Lim
- Department of Psychiatry, Yeouido St. Mary’s Hospital, College of Medicine, The Catholic University of Korea, Seoul 07345, Korea; (S.-M.W.); (N.-Y.K.)
|
27
|
Abstract
Segmentation of medical images using multiple atlases has recently gained immense attention due to its augmented robustness against variability across different subjects. These atlas-based methods typically comprise three steps: atlas selection, image registration, and finally label fusion. Image registration is one of the core steps in this process, whose accuracy directly affects the final labeling performance. However, due to inter-subject anatomical variations, registration errors are inevitable. The aim of this paper is to develop a deep learning-based confidence estimation method to alleviate the potential effects of registration errors. We first propose a fully convolutional network (FCN) with residual connections to learn the relationship between an image patch pair (i.e., patches from the target subject and the atlas) and the related label confidence patch. With the obtained label confidence patch, we can identify potential errors in the warped atlas labels and correct them. Then, we use two label fusion methods to fuse the corrected atlas labels. The proposed methods are validated on a publicly available dataset for hippocampus segmentation. Experimental results demonstrate that our proposed methods outperform state-of-the-art segmentation methods.
Affiliation(s)
- Hancan Zhu
- School of Mathematics Physics and Information, Shaoxing University, Shaoxing, 312000, China
- Ehsan Adeli
- Department of Psychiatry and Behavioral Sciences, Stanford University, Stanford, 94305, CA, USA
- Feng Shi
- Shanghai United Imaging Intelligence Co., Ltd., Shanghai, China
- Dinggang Shen
- Department of Radiology and Biomedical Research Imaging Center, University of North Carolina at Chapel Hill, Chapel Hill, 27599, North Carolina, USA
- Department of Brain and Cognitive Engineering, Korea University, Seoul, 02841, Republic of Korea
|
28
|
Habijan M, Babin D, Galić I, Leventić H, Romić K, Velicki L, Pižurica A. Overview of the Whole Heart and Heart Chamber Segmentation Methods. Cardiovasc Eng Technol 2020; 11:725-747. [DOI: 10.1007/s13239-020-00494-8] [Citation(s) in RCA: 6] [Impact Index Per Article: 1.2] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 07/02/2020] [Accepted: 10/06/2020] [Indexed: 12/13/2022]
|
29
|
Robust 2D Otsu's Algorithm for Uneven Illumination Image Segmentation. COMPUTATIONAL INTELLIGENCE AND NEUROSCIENCE 2020; 2020:5047976. [PMID: 32849864 PMCID: PMC7439172 DOI: 10.1155/2020/5047976] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 01/22/2020] [Revised: 07/02/2020] [Accepted: 07/22/2020] [Indexed: 11/17/2022]
Abstract
Otsu's algorithm is one of the most well-known methods for automatic image thresholding. The 2D Otsu method is more robust than the 1D method; however, it still has limitations on images corrupted by salt-and-pepper noise and on unevenly illuminated images. To alleviate these limitations and improve overall performance, we propose an improved 2D Otsu algorithm that increases robustness to salt-and-pepper noise, together with an adaptive energy-based image partition technique for uneven-illumination image segmentation. Based on the partition method, two schemes for automatic thresholding are adopted to find the best segmentation result. Experiments are conducted on both synthetic and real-world uneven-illumination images as well as real-world regular-illumination cell images. The original 2D Otsu method, MAOTSU_2D, and two recent 1D Otsu methods (Cao's method and DVE) are included for comparison. Both qualitative and quantitative evaluations verify the effectiveness of the proposed method. Results show that the proposed method is more robust to salt-and-pepper noise and achieves better segmentation results on uneven-illumination images in general, without compromising its performance on regular-illumination images. For a test group of seven real-world uneven-illumination images, the proposed method lowered the ME value by 15% and increased the DSC value by 10%.
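For reference, the between-class-variance criterion that these Otsu variants build on can be sketched as the classic 1D implementation below; the paper's improved 2D variant, which adds a neighborhood-mean dimension and noise handling, is not reproduced here, and this sketch is only illustrative:

```python
import numpy as np

def otsu_threshold(image, bins=256):
    """Classic 1D Otsu: pick the threshold maximizing the
    between-class variance of the gray-level histogram."""
    hist, edges = np.histogram(np.asarray(image).ravel(), bins=bins)
    p = hist.astype(float) / hist.sum()        # gray-level probabilities
    centers = (edges[:-1] + edges[1:]) / 2.0   # bin centers = candidate thresholds
    omega = np.cumsum(p)                       # probability of class 0 (below t)
    mu = np.cumsum(p * centers)                # cumulative mean of class 0
    mu_t = mu[-1]                              # global mean
    # Between-class variance for every candidate threshold; 0/0 at the
    # extremes is mapped to 0 so it never wins the argmax.
    with np.errstate(divide="ignore", invalid="ignore"):
        sigma_b = (mu_t * omega - mu) ** 2 / (omega * (1.0 - omega))
    sigma_b = np.nan_to_num(sigma_b)
    return centers[np.argmax(sigma_b)]
```

On a bimodal image the returned threshold falls between the two modes; `bins` trades resolution against sensitivity to histogram noise.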
|
30
|
Dynamically constructed network with error correction for accurate ventricle volume estimation. Med Image Anal 2020; 64:101723. [DOI: 10.1016/j.media.2020.101723] [Citation(s) in RCA: 7] [Impact Index Per Article: 1.4] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/25/2019] [Revised: 05/07/2020] [Accepted: 05/08/2020] [Indexed: 11/20/2022]
|
31
|
Wang X, Zhai S, Niu Y. Left ventricle landmark localization and identification in cardiac MRI by deep metric learning-assisted CNN regression. Neurocomputing 2020. [DOI: 10.1016/j.neucom.2020.02.069] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/16/2022]
|
32
|
Sun L, Shao W, Zhang D, Liu M. Anatomical Attention Guided Deep Networks for ROI Segmentation of Brain MR Images. IEEE TRANSACTIONS ON MEDICAL IMAGING 2020; 39:2000-2012. [PMID: 31899417 DOI: 10.1109/tmi.2019.2962792] [Citation(s) in RCA: 33] [Impact Index Per Article: 6.6] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/10/2023]
Abstract
Brain region-of-interest (ROI) segmentation based on structural magnetic resonance imaging (MRI) scans is an essential step for many computer-aided medical image analysis applications. Due to low intensity contrast around ROI boundaries and large inter-subject variance, effectively segmenting brain ROIs from structural MR images remains a challenging task. Even though several deep learning methods for brain MR image segmentation have been developed, most of them do not incorporate shape priors to take advantage of the regularity of brain structures, leading to sub-optimal performance. To address this issue, we propose an anatomical attention guided deep learning framework for brain ROI segmentation of structural MR images, containing two subnetworks. The first is a segmentation subnetwork, used to simultaneously extract discriminative image representations and segment ROIs for each input MR image. The second is an anatomical attention subnetwork, designed to capture the anatomical structure information of the brain from a set of labeled atlases. To utilize the anatomical attention knowledge learned from atlases, we develop an anatomical gate architecture to fuse feature maps derived from a set of atlas label maps with those from the to-be-segmented image for brain ROI segmentation. In this way, the anatomical prior learned from atlases can be explicitly employed to guide the segmentation process and improve performance. Within this framework, we develop two anatomical attention guided segmentation models, denoted anatomical gated fully convolutional network (AG-FCN) and anatomical gated U-Net (AG-UNet), respectively. Experimental results on both the ADNI and LONI-LPBA40 datasets suggest that the proposed AG-FCN and AG-UNet achieve superior performance in ROI segmentation of brain MR images compared with several state-of-the-art methods.
|
33
|
Luo G, Dong S, Wang W, Wang K, Cao S, Tam C, Zhang H, Howey J, Ohorodnyk P, Li S. Commensal correlation network between segmentation and direct area estimation for bi-ventricle quantification. Med Image Anal 2020; 59:101591. [DOI: 10.1016/j.media.2019.101591] [Citation(s) in RCA: 10] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/09/2019] [Revised: 08/25/2019] [Accepted: 10/21/2019] [Indexed: 10/25/2022]
|
34
|
Sun L, Shao W, Wang M, Zhang D, Liu M. High-order Feature Learning for Multi-atlas based Label Fusion: Application to Brain Segmentation with MRI. IEEE TRANSACTIONS ON IMAGE PROCESSING : A PUBLICATION OF THE IEEE SIGNAL PROCESSING SOCIETY 2019; 29:2702-2713. [PMID: 31725379 DOI: 10.1109/tip.2019.2952079] [Citation(s) in RCA: 12] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/10/2023]
Abstract
Multi-atlas based segmentation methods have shown their effectiveness in brain regions-of-interest (ROIs) segmentation by propagating labels from multiple atlases to a target image based on the similarity between patches in the target image and multiple atlas images. Most existing multi-atlas based methods use image intensity features to calculate the similarity between a pair of image patches for label fusion. However, low-level image intensity features alone cannot adequately characterize the complex appearance patterns (e.g., the high-order relationship between voxels within a patch) of brain magnetic resonance (MR) images. To address this issue, this paper develops a high-order feature learning framework for multi-atlas based label fusion, where high-order features of image patches are extracted and fused for segmenting ROIs of structural brain MR images. Specifically, an unsupervised feature learning method (i.e., the means-covariances restricted Boltzmann machine, mcRBM) is employed to learn high-order features (i.e., mean and covariance features) of patches in brain MR images. Then, a group-fused sparsity dictionary learning method is proposed to jointly calculate the voting weights for label fusion, based on the learned high-order features and the original image intensity features. The proposed method is compared with several state-of-the-art label fusion methods on the ADNI, NIREP and LONI-LPBA40 datasets. The Dice ratios achieved by our method are 88.30%, 88.83%, 79.54% and 81.02% on the left and right hippocampus across the ADNI, NIREP and LONI-LPBA40 datasets, respectively, while the best Dice ratios yielded by the other methods are 86.51%, 87.39%, 78.48% and 79.65% on these datasets, respectively.
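The "Dice ratio" used for evaluation here (and in several surrounding entries) is the standard overlap metric between a predicted segmentation and a ground-truth label map; a minimal single-label sketch:

```python
import numpy as np

def dice_ratio(seg, gt, label=1):
    """Dice similarity coefficient for one label:
    2 * |S ∩ G| / (|S| + |G|), in [0, 1]."""
    s = np.asarray(seg) == label
    g = np.asarray(gt) == label
    denom = s.sum() + g.sum()
    # Convention: if the label is absent from both maps, overlap is perfect.
    return 1.0 if denom == 0 else 2.0 * np.logical_and(s, g).sum() / denom
```

Multi-label scores such as those reported above are simply this quantity computed per structure and then tabulated or averaged.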
|
35
|
Lee MCH, Petersen K, Pawlowski N, Glocker B, Schaap M. TeTrIS: Template Transformer Networks for Image Segmentation With Shape Priors. IEEE TRANSACTIONS ON MEDICAL IMAGING 2019; 38:2596-2606. [PMID: 30908196 DOI: 10.1109/tmi.2019.2905990] [Citation(s) in RCA: 42] [Impact Index Per Article: 7.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/09/2023]
Abstract
In this paper, we introduce and compare different approaches for incorporating shape prior information into neural network-based image segmentation. Specifically, we introduce the concept of template transformer networks, where a shape template is deformed to match the underlying structure of interest through an end-to-end trained spatial transformer network. This has the advantage of explicitly enforcing shape priors and is free of discretization artifacts, since it provides a soft partial-volume segmentation. We also introduce a simple yet effective way of incorporating priors into state-of-the-art pixel-wise binary classification methods such as fully convolutional networks and U-Net: the template shape is given as an additional input channel, and incorporating this information significantly reduces false positives. We report results on synthetic data and sub-voxel segmentation of coronary lumen structures in cardiac computed tomography, showing the benefit of incorporating priors in neural network-based image segmentation.
|
36
|
Abstract
OBJECTIVE. The recent advancement of deep learning techniques has profoundly impacted research on quantitative cardiac MRI analysis. The purpose of this article is to introduce the concept of deep learning, review its current applications on quantitative cardiac MRI, and discuss its limitations and challenges. CONCLUSION. Deep learning has shown state-of-the-art performance on quantitative analysis of multiple cardiac MRI sequences and holds great promise for future use in clinical practice and scientific research.
|
37
|
VoteNet: A Deep Learning Label Fusion Method for Multi-Atlas Segmentation. MEDICAL IMAGE COMPUTING AND COMPUTER-ASSISTED INTERVENTION : MICCAI ... INTERNATIONAL CONFERENCE ON MEDICAL IMAGE COMPUTING AND COMPUTER-ASSISTED INTERVENTION 2019; 11766:202-210. [PMID: 36108312 DOI: 10.1007/978-3-030-32248-9_23] [Citation(s) in RCA: 11] [Impact Index Per Article: 1.8] [Reference Citation Analysis] [Abstract] [Subscribe] [Scholar Register] [Indexed: 10/25/2022]
Abstract
Deep learning (DL) approaches are state-of-the-art for many medical image segmentation tasks. They offer a number of advantages: they can be trained for specific tasks, computations are fast at test time, and segmentation quality is typically high. In contrast, previously popular multi-atlas segmentation (MAS) methods are relatively slow (as they rely on costly registrations), and even though sophisticated label fusion strategies have been proposed, DL approaches generally outperform MAS. In this work, we propose a DL-based label fusion strategy (VoteNet) that locally selects a set of reliable atlases whose labels are then fused via plurality voting. Experiments on 3D brain MRI data show that, by selecting a good initial atlas set, MAS with VoteNet significantly outperforms a number of other label fusion strategies as well as a direct DL segmentation approach. We also provide an experimental analysis of the upper performance bound achievable by our method. While unlikely to be achievable in practice, this bound suggests room for further performance improvements. Lastly, to address the runtime disadvantage of standard MAS, all our results make use of a fast DL registration approach.
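The plurality-voting fusion step described above can be sketched as follows. The `reliability` mask stands in for the per-voxel atlas selection that VoteNet's network learns; names and shapes here are illustrative assumptions, not the paper's code:

```python
import numpy as np

def plurality_vote(atlas_labels, reliability=None):
    """Fuse registered atlas label maps by per-voxel plurality voting.

    atlas_labels : (n_atlases, *spatial) integer arrays, already warped
        to the target image.
    reliability : optional (n_atlases, *spatial) boolean mask marking
        where each atlas is trusted (the role of the learned selection
        network in the paper; hypothetical interface).
    """
    atlas_labels = np.asarray(atlas_labels)
    n_labels = int(atlas_labels.max()) + 1
    votes = np.zeros((n_labels,) + atlas_labels.shape[1:], dtype=np.int64)
    for i in range(atlas_labels.shape[0]):
        # Weight is 0/1: unreliable atlases are excluded at that voxel.
        w = np.ones(atlas_labels.shape[1:], dtype=np.int64)
        if reliability is not None:
            w = np.asarray(reliability[i], dtype=np.int64)
        for lab in range(n_labels):
            votes[lab] += w * (atlas_labels[i] == lab)
    # Ties resolve to the smallest label (argmax returns the first maximum).
    return votes.argmax(axis=0)
```

With `reliability=None` this degrades to plain majority voting, the classical MAS baseline the paper improves on.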
|
38
|
Duan J, Bello G, Schlemper J, Bai W, Dawes TJW, Biffi C, de Marvao A, Doumoud G, O'Regan DP, Rueckert D. Automatic 3D Bi-Ventricular Segmentation of Cardiac Images by a Shape-Refined Multi- Task Deep Learning Approach. IEEE TRANSACTIONS ON MEDICAL IMAGING 2019; 38:2151-2164. [PMID: 30676949 PMCID: PMC6728160 DOI: 10.1109/tmi.2019.2894322] [Citation(s) in RCA: 104] [Impact Index Per Article: 17.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 05/18/2023]
Abstract
Deep learning approaches have achieved state-of-the-art performance in cardiac magnetic resonance (CMR) image segmentation. However, most approaches have focused on learning image intensity features for segmentation, whereas the incorporation of anatomical shape priors has received less attention. In this paper, we combine a multi-task deep learning approach with atlas propagation to develop a shape-refined bi-ventricular segmentation pipeline for short-axis CMR volumetric images. The pipeline first employs a fully convolutional network (FCN) that learns segmentation and landmark localization tasks simultaneously. The architecture of the proposed FCN uses a 2.5D representation, thus combining the computational advantage of 2D FCNs and the capability of addressing 3D spatial consistency without compromising segmentation accuracy. Moreover, a refinement step is designed to explicitly impose shape prior knowledge and improve segmentation quality. This step is effective for overcoming image artifacts (e.g., due to different breath-hold positions and large slice thickness), which preclude the creation of anatomically meaningful 3D cardiac shapes. The pipeline is fully automated, due to the network's ability to infer landmarks, which are then used downstream to initialize atlas propagation. We validate the pipeline on 1831 healthy subjects and 649 subjects with pulmonary hypertension. Extensive numerical experiments on the two datasets demonstrate that our proposed method is robust and capable of producing accurate, high-resolution, and anatomically smooth bi-ventricular 3D models, despite the presence of artifacts in input CMR volumes.
|
39
|
Niu Y, Qin L, Wang X. Structured graph regularized shape prior and cross-entropy induced active contour model for myocardium segmentation in CTA images. Neurocomputing 2019. [DOI: 10.1016/j.neucom.2019.04.052] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.2] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 02/06/2023]
|
40
|
Huo Y, Xu Z, Xiong Y, Aboud K, Parvathaneni P, Bao S, Bermudez C, Resnick SM, Cutting LE, Landman BA. 3D whole brain segmentation using spatially localized atlas network tiles. Neuroimage 2019; 194:105-119. [PMID: 30910724 PMCID: PMC6536356 DOI: 10.1016/j.neuroimage.2019.03.041] [Citation(s) in RCA: 168] [Impact Index Per Article: 28.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/09/2018] [Revised: 02/23/2019] [Accepted: 03/19/2019] [Indexed: 01/18/2023] Open
Abstract
Detailed whole brain segmentation is an essential quantitative technique in medical image analysis, which provides a non-invasive way of measuring brain regions from clinically acquired structural magnetic resonance imaging (MRI). Recently, deep convolutional neural networks (CNN) have been applied to whole brain segmentation. However, restricted by current GPU memory, 2D-based methods, downsampling-based 3D CNN methods, and patch-based high-resolution 3D CNN methods have been the de facto standard solutions. 3D patch-based high-resolution methods typically yield superior performance among CNN approaches on detailed whole brain segmentation (>100 labels); however, their performance is still commonly inferior to state-of-the-art multi-atlas segmentation (MAS) methods due to the following challenges: (1) a single network is typically used to learn both spatial and contextual information for the patches, and (2) limited manually traced whole brain volumes (typically fewer than 50) are available for training a network. In this work, we propose the spatially localized atlas network tiles (SLANT) method, which distributes multiple independent 3D fully convolutional networks (FCN) for high-resolution whole brain segmentation. To address the first challenge, multiple spatially distributed networks are used in the SLANT method, in which each network learns contextual information for a fixed spatial location. To address the second challenge, auxiliary labels on 5111 initially unlabeled scans were created by multi-atlas segmentation for training. Since the method integrates multiple traditional medical image processing methods with deep learning, we developed a containerized pipeline to deploy the end-to-end solution. The proposed method achieved superior performance compared with multi-atlas segmentation methods while reducing the computational time from >30 h to 15 min. The method has been made available in open source (https://github.com/MASILab/SLANTbrainSeg).
Affiliation(s)
- Yuankai Huo
- Department of Electrical Engineering and Computer Science, Vanderbilt University, Nashville, TN, USA.
- Zhoubing Xu
- Department of Electrical Engineering and Computer Science, Vanderbilt University, Nashville, TN, USA
- Yunxi Xiong
- Department of Electrical Engineering and Computer Science, Vanderbilt University, Nashville, TN, USA
- Katherine Aboud
- Department of Special Education, Vanderbilt University, Nashville, TN, USA
- Prasanna Parvathaneni
- Department of Electrical Engineering and Computer Science, Vanderbilt University, Nashville, TN, USA
- Shunxing Bao
- Department of Electrical Engineering and Computer Science, Vanderbilt University, Nashville, TN, USA
- Camilo Bermudez
- Department of Biomedical Engineering, Vanderbilt University, Nashville, TN, USA
- Susan M Resnick
- Laboratory of Behavioral Neuroscience, National Institute on Aging, Baltimore, MD, USA
- Laurie E Cutting
- Department of Special Education, Vanderbilt University, Nashville, TN, USA; Department of Psychology, Vanderbilt University, Nashville, TN, USA; Department of Pediatrics, Vanderbilt University, Nashville, TN, USA; Radiology and Radiological Sciences, Vanderbilt University, Nashville, TN, USA
- Bennett A Landman
- Department of Electrical Engineering and Computer Science, Vanderbilt University, Nashville, TN, USA; Department of Biomedical Engineering, Vanderbilt University, Nashville, TN, USA; Radiology and Radiological Sciences, Vanderbilt University, Nashville, TN, USA; Institute of Imaging Science, Vanderbilt University, Nashville, TN, USA
|
41
|
Automatic Labeling of MR Brain Images Through the Hashing Retrieval Based Atlas Forest. J Med Syst 2019; 43:241. [PMID: 31227923 DOI: 10.1007/s10916-019-1385-3] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/10/2019] [Accepted: 06/10/2019] [Indexed: 10/26/2022]
Abstract
The multi-atlas method is one of the most efficient and common automatic labeling methods; it uses the prior information provided by expert-labeled images to guide the labeling of the target. However, most multi-atlas-based methods depend on registration, which may not provide correct information during label propagation. To address this issue, we designed a new automatic labeling method based on a hashing-retrieval-based atlas forest. The proposed method propagates labels without registration to reduce errors, and constructs a target-oriented learning model to integrate information across the atlases. The method introduces a coarse classification strategy to preprocess the dataset, which retains the integrity of the dataset while reducing computing time. Furthermore, the method considers each voxel in the atlas as a sample and encodes these samples with hashing for fast sample retrieval. In the labeling stage, the method selects suitable samples through hashing learning and trains atlas forests by integrating information from the dataset. The trained model is then used to predict the labels of the target. Experimental results on two datasets illustrate that the proposed method is promising for the automatic labeling of MR brain images.
|
42
|
Roja Ramani D, Ranjani SS. An Efficient Melanoma Diagnosis Approach Using Integrated HMF Multi-Atlas Map Based Segmentation. J Med Syst 2019; 43:225. [PMID: 31190229 DOI: 10.1007/s10916-019-1315-4] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/10/2019] [Accepted: 04/25/2019] [Indexed: 10/26/2022]
Abstract
Melanoma is a life-threatening disease when it grows beyond the corium layer of the skin. Among skin cancer patients, mortality rates are highest for melanoma. The cost of treating advanced melanoma is very high, and the survival rate is low. Numerous computerized dermoscopy systems have been developed based on combinations of shape, texture and color features to facilitate early diagnosis of melanoma. The availability and cost of dermoscopic imaging systems remain an issue. To mitigate this issue, this paper presents an integrated segmentation and three-dimensional (3D) feature extraction approach for the accurate diagnosis of melanoma. A multi-atlas method is applied for image segmentation. The patch-based label fusion model is expressed in a Bayesian framework to improve segmentation accuracy. A depth map is obtained from the two-dimensional (2D) dermoscopic image to reconstruct the 3D skin lesion, represented as structure tensors. 3D shape features, including relative depth features, are obtained. Streaks are significant morphological features of melanoma in the radial growth phase. The proposed method yields higher segmentation accuracy, sensitivity and specificity, and a lower cost function, than existing segmentation techniques and classifiers.
Affiliation(s)
- D Roja Ramani
- Department of Information Technology, Sethu Institute of Technology, Virudhunagar, India.
- S Siva Ranjani
- Department of Computer Science and Engineering, Sethu Institute of Technology, Virudhunagar, India
|
43
|
Sun L, Zu C, Shao W, Guang J, Zhang D, Liu M. Reliability-based robust multi-atlas label fusion for brain MRI segmentation. Artif Intell Med 2019; 96:12-24. [DOI: 10.1016/j.artmed.2019.03.004] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/04/2017] [Revised: 03/04/2019] [Accepted: 03/05/2019] [Indexed: 10/27/2022]
|
44
|
Alansary A, Oktay O, Li Y, Folgoc LL, Hou B, Vaillant G, Kamnitsas K, Vlontzos A, Glocker B, Kainz B, Rueckert D. Evaluating reinforcement learning agents for anatomical landmark detection. Med Image Anal 2019; 53:156-164. [PMID: 30784956 PMCID: PMC7610752 DOI: 10.1016/j.media.2019.02.007] [Citation(s) in RCA: 73] [Impact Index Per Article: 12.2] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/15/2018] [Revised: 02/01/2019] [Accepted: 02/12/2019] [Indexed: 11/29/2022]
Abstract
Automatic detection of anatomical landmarks is an important step for a wide range of applications in medical image analysis. Manual annotation of landmarks is a tedious task and prone to observer errors. In this paper, we evaluate novel deep reinforcement learning (RL) strategies to train agents that can precisely and robustly localize target landmarks in medical scans. An artificial RL agent learns to identify the optimal path to the landmark by interacting with an environment, in our case 3D images. Furthermore, we investigate the use of fixed- and multi-scale search strategies with novel hierarchical action steps in a coarse-to-fine manner. Several deep Q-network (DQN) architectures are evaluated for detecting multiple landmarks using three different medical imaging datasets: fetal head ultrasound (US), adult brain and cardiac magnetic resonance imaging (MRI). The performance of our agents surpasses state-of-the-art supervised and RL methods. Our experiments also show that multi-scale search strategies perform significantly better than fixed-scale agents in images with a large field of view and noisy background, such as cardiac MRI. Moreover, the novel hierarchical steps can significantly speed up the search process by a factor of 4-5.
Affiliation(s)
- Amir Alansary, Biomedical Image Analysis Group (BioMedIA), Imperial College London, London, UK
- Ozan Oktay, Biomedical Image Analysis Group (BioMedIA), Imperial College London, London, UK
- Yuanwei Li, Biomedical Image Analysis Group (BioMedIA), Imperial College London, London, UK
- Loic Le Folgoc, Biomedical Image Analysis Group (BioMedIA), Imperial College London, London, UK
- Benjamin Hou, Biomedical Image Analysis Group (BioMedIA), Imperial College London, London, UK
- Ghislain Vaillant, Biomedical Image Analysis Group (BioMedIA), Imperial College London, London, UK
- Athanasios Vlontzos, Biomedical Image Analysis Group (BioMedIA), Imperial College London, London, UK
- Ben Glocker, Biomedical Image Analysis Group (BioMedIA), Imperial College London, London, UK
- Bernhard Kainz, Biomedical Image Analysis Group (BioMedIA), Imperial College London, London, UK
- Daniel Rueckert, Biomedical Image Analysis Group (BioMedIA), Imperial College London, London, UK
45
Li J, Yu ZL, Gu Z, Liu H, Li Y. Dilated-Inception Net: Multi-Scale Feature Aggregation for Cardiac Right Ventricle Segmentation. IEEE Trans Biomed Eng 2019; 66:3499-3508. [PMID: 30932820 DOI: 10.1109/tbme.2019.2906667] [Citation(s) in RCA: 26] [Impact Index Per Article: 4.3] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/10/2022]
Abstract
Segmentation of the cardiac ventricles from magnetic resonance images is important for cardiac disease diagnosis, progression assessment, and monitoring of cardiac conditions. Manual segmentation is so time-consuming, tedious, and subjective that automated segmentation methods are highly desired in practice. However, conventional segmentation methods perform poorly on the cardiac ventricles, especially the right ventricle. Compared with the left ventricle, whose shape is a simple thick-walled circle, the structure of the right ventricle is more complex owing to its ambiguous boundary, irregular cavity, and variable crescent shape. Hence, effective feature extractors and segmentation models are needed. In this paper, we propose a dilated-inception net (DIN) to extract and aggregate multi-scale features for right ventricle segmentation. DIN outperforms many state-of-the-art models on the benchmark database of the right ventricle segmentation challenge. In addition, the experimental results indicate that the proposed model has the potential to reach expert-level performance in right ventricular epicardium segmentation. More importantly, DIN behaves similarly to a clinical expert, with high correlation coefficients on four clinical cardiac indices. Therefore, the proposed DIN is promising for automated cardiac right ventricle segmentation in clinical applications.
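The dilation at the heart of DIN can be illustrated in one dimension: spacing the kernel taps apart widens the receptive field without adding parameters. A minimal numpy sketch (`dilated_conv1d` is an illustrative name, not the paper's implementation):

```python
import numpy as np

def dilated_conv1d(x, kernel, dilation):
    """1-D dilated convolution, no padding: kernel taps are spaced
    `dilation` samples apart, widening the receptive field without
    adding parameters."""
    k = len(kernel)
    span = (k - 1) * dilation + 1          # receptive field of one output
    return np.array([
        sum(kernel[j] * x[i + j * dilation] for j in range(k))
        for i in range(len(x) - span + 1)
    ])

x = np.arange(10, dtype=float)
ident = np.array([0.0, 1.0, 0.0])            # identity kernel (centre tap only)
print(dilated_conv1d(x, ident, dilation=1))  # span 3 -> picks out x[1..8]
print(dilated_conv1d(x, ident, dilation=3))  # same kernel, span 7 -> x[3..6]
```

An inception-style block, as in DIN, would run several such branches with different dilations in parallel and concatenate the results.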
46
Robinson R, Valindria VV, Bai W, Oktay O, Kainz B, Suzuki H, Sanghvi MM, Aung N, Paiva JM, Zemrak F, Fung K, Lukaschuk E, Lee AM, Carapella V, Kim YJ, Piechnik SK, Neubauer S, Petersen SE, Page C, Matthews PM, Rueckert D, Glocker B. Automated quality control in image segmentation: application to the UK Biobank cardiovascular magnetic resonance imaging study. J Cardiovasc Magn Reson 2019; 21:18. [PMID: 30866968 PMCID: PMC6416857 DOI: 10.1186/s12968-019-0523-x] [Citation(s) in RCA: 56] [Impact Index Per Article: 9.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/07/2018] [Accepted: 02/04/2019] [Indexed: 02/08/2023] Open
Abstract
BACKGROUND The trend towards large-scale studies including population imaging poses new challenges in terms of quality control (QC). This is a particular issue when automatic processing tools such as image segmentation methods are employed to derive quantitative measures or biomarkers for further analyses. Manual inspection and visual QC of each segmentation result is not feasible at large scale. However, it is important to be able to automatically detect when a segmentation method fails, in order to avoid the inclusion of wrong measurements in subsequent analyses, which could otherwise lead to incorrect conclusions. METHODS To overcome this challenge, we explore an approach for predicting segmentation quality based on Reverse Classification Accuracy, which enables us to discriminate between successful and failed segmentations on a per-case basis. We validate this approach on a new, large-scale, manually annotated set of 4800 cardiovascular magnetic resonance (CMR) scans. We then apply our method to a large cohort of 7250 CMR scans on which we have performed manual QC. RESULTS We report results for predicting segmentation quality metrics, including the Dice Similarity Coefficient (DSC) and surface-distance measures. As initial validation, we present data for 400 scans demonstrating 99% accuracy for classifying low- and high-quality segmentations using the predicted DSC scores. As further validation we show high correlation between real and predicted scores and 95% classification accuracy on 4800 scans for which manual segmentations were available. We mimic real-world application of the method on 7250 CMR scans, where we show good agreement between predicted quality metrics and manual visual QC scores. CONCLUSIONS We show that Reverse Classification Accuracy has the potential for accurate and fully automatic segmentation QC on a per-case basis in the context of large-scale population imaging, as in the UK Biobank Imaging Study.
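The core RCA idea — scoring a segmentation with no ground truth by testing how well it transfers to a reference database — can be sketched minimally. Here the prediction acts as a single-atlas "reverse classifier" propagated by an identity transform, whereas the actual method registers the images first; all names are illustrative:

```python
import numpy as np

def dice(a, b):
    """Dice similarity coefficient between two binary masks."""
    inter = np.logical_and(a, b).sum()
    return 2.0 * inter / (a.sum() + b.sum())

def rca_predict_quality(pred_seg, reference_gts):
    """Reverse Classification Accuracy, heavily simplified: treat the
    prediction as a single-atlas 'reverse classifier', propagate it to
    every reference case (identity transform here; the real method
    registers the images), and take the best Dice against the reference
    ground truths as the predicted quality of the original segmentation."""
    return max(dice(pred_seg, gt) for gt in reference_gts)

# Synthetic check: an accurate segmentation scores high, a failed one low.
gt = np.zeros((32, 32)); gt[8:24, 8:24] = 1              # reference ground truth
good_pred = gt.copy()                                     # accurate prediction
bad_pred = np.zeros_like(gt); bad_pred[0:16, 0:16] = 1    # badly shifted
print(rca_predict_quality(good_pred, [gt]),
      rca_predict_quality(bad_pred, [gt]))  # -> 1.0 0.25
```

The intuition carries over: a failed segmentation makes a poor atlas, so it cannot reproduce any reference ground truth well, and its predicted quality drops.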
Affiliation(s)
- Robert Robinson, Biomedical Image Analysis Group, Department of Computing, Imperial College London, Queen’s Gate, London, SW7 2AZ UK
- Vanya V. Valindria, Biomedical Image Analysis Group, Department of Computing, Imperial College London, Queen’s Gate, London, SW7 2AZ UK
- Wenjia Bai, Biomedical Image Analysis Group, Department of Computing, Imperial College London, Queen’s Gate, London, SW7 2AZ UK
- Ozan Oktay, Biomedical Image Analysis Group, Department of Computing, Imperial College London, Queen’s Gate, London, SW7 2AZ UK
- Bernhard Kainz, Biomedical Image Analysis Group, Department of Computing, Imperial College London, Queen’s Gate, London, SW7 2AZ UK
- Hideaki Suzuki, Division of Brain Sciences, Dept. of Medicine, Imperial College London, Queen’s Gate, London, SW7 2AZ UK
- Mihir M. Sanghvi, William Harvey Research Institute, NIHR Barts Biomedical Research Centre, Queen Mary University of London, Charterhouse Square, London, EC1M 6BQ UK; Barts Heart Centre, Barts Health NHS Trust, West Smithfield, London, EC1A 7BE UK
- Nay Aung, William Harvey Research Institute, NIHR Barts Biomedical Research Centre, Queen Mary University of London, Charterhouse Square, London, EC1M 6BQ UK; Barts Heart Centre, Barts Health NHS Trust, West Smithfield, London, EC1A 7BE UK
- José Miguel Paiva, William Harvey Research Institute, NIHR Barts Biomedical Research Centre, Queen Mary University of London, Charterhouse Square, London, EC1M 6BQ UK
- Filip Zemrak, William Harvey Research Institute, NIHR Barts Biomedical Research Centre, Queen Mary University of London, Charterhouse Square, London, EC1M 6BQ UK; Barts Heart Centre, Barts Health NHS Trust, West Smithfield, London, EC1A 7BE UK
- Kenneth Fung, William Harvey Research Institute, NIHR Barts Biomedical Research Centre, Queen Mary University of London, Charterhouse Square, London, EC1M 6BQ UK; Barts Heart Centre, Barts Health NHS Trust, West Smithfield, London, EC1A 7BE UK
- Elena Lukaschuk, Division of Cardiovascular Medicine, Radcliffe Department of Medicine, University of Oxford, Oxford, OX3 9DU UK
- Aaron M. Lee, William Harvey Research Institute, NIHR Barts Biomedical Research Centre, Queen Mary University of London, Charterhouse Square, London, EC1M 6BQ UK; Barts Heart Centre, Barts Health NHS Trust, West Smithfield, London, EC1A 7BE UK
- Valentina Carapella, Division of Cardiovascular Medicine, Radcliffe Department of Medicine, University of Oxford, Oxford, OX3 9DU UK
- Young Jin Kim, Division of Cardiovascular Medicine, Radcliffe Department of Medicine, University of Oxford, Oxford, OX3 9DU UK; Department of Radiology, Severance Hospital, Yonsei University College of Medicine, Seoul, South Korea
- Stefan K. Piechnik, Division of Cardiovascular Medicine, Radcliffe Department of Medicine, University of Oxford, Oxford, OX3 9DU UK
- Stefan Neubauer, Division of Cardiovascular Medicine, Radcliffe Department of Medicine, University of Oxford, Oxford, OX3 9DU UK
- Steffen E. Petersen, William Harvey Research Institute, NIHR Barts Biomedical Research Centre, Queen Mary University of London, Charterhouse Square, London, EC1M 6BQ UK; Barts Heart Centre, Barts Health NHS Trust, West Smithfield, London, EC1A 7BE UK
- Chris Page, GlaxoSmithKline Research and Development, Stockley Park, Uxbridge, UB11 1BT UK
- Paul M. Matthews, Division of Brain Sciences, Dept. of Medicine, Imperial College London, Queen’s Gate, London, SW7 2AZ UK; UK Dementia Research Institute, Imperial College London, Queen’s Drive, London, SW7 2AZ UK
- Daniel Rueckert, Biomedical Image Analysis Group, Department of Computing, Imperial College London, Queen’s Gate, London, SW7 2AZ UK
- Ben Glocker, Biomedical Image Analysis Group, Department of Computing, Imperial College London, Queen’s Gate, London, SW7 2AZ UK
47
Torrado-Carvajal A, Eryaman Y, Turk EA, Herraiz JL, Hernandez-Tamames JA, Adalsteinsson E, Wald LL, Malpica N. Computer-Vision Techniques for Water-Fat Separation in Ultra High-Field MRI Local Specific Absorption Rate Estimation. IEEE Trans Biomed Eng 2019; 66:768-774. [PMID: 30010546 DOI: 10.1109/tbme.2018.2856501] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 02/05/2023]
Abstract
OBJECTIVE The purpose of this paper is to prove that computer-vision techniques allow synthesizing water-fat separation maps for local specific absorption rate (SAR) estimation when patient-specific water-fat images are not available. METHODS We obtained ground-truth head models by using patient-specific water-fat images. We obtained two different label-fusion water-fat models by generating a water-fat multi-atlas and applying the STAPLE and local-MAP-STAPLE label-fusion methods. We also obtained patch-based water-fat models by applying a local group-wise weighted combination of the multi-atlas. Electromagnetic (EM) simulations were performed, and B1+ magnitude and 10 g-averaged SAR maps were generated. RESULTS We found that the local approaches provide a high Dice overlap (72.6 ± 10.2% fat and 91.6 ± 1.5% water in local-MAP-STAPLE, and 68.8 ± 8.2% fat and 91.1 ± 1.0% water in patch-based), low Hausdorff distances (18.6 ± 7.7 mm fat and 7.4 ± 11.2 mm water in local-MAP-STAPLE, and 16.4 ± 8.5 mm fat and 7.2 ± 11.8 mm water in patch-based) and a low error in volume estimation (15.6 ± 34.4% fat and 5.6 ± 4.1% water in local-MAP-STAPLE, and 14.0 ± 17.7% fat and 4.7 ± 2.8% water in patch-based). The positions of the peak 10 g-averaged local SAR hotspots were the same for every model. CONCLUSION We have created patient-specific head models using three different computer-vision-based water-fat separation approaches and compared the predictions of B1+ field and SAR distributions generated by simulating these models. Our results prove that a computer-vision approach can be used for patient-specific water-fat separation, and utilized for local SAR estimation in high-field MRI. SIGNIFICANCE Computer-vision approaches can be used for patient-specific water-fat separation and for patient-specific local SAR estimation when water-fat images of the patient are not available.
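The label-fusion step can be illustrated with plain majority voting, a simple stand-in for the STAPLE-family estimators the paper actually uses (function and variable names are illustrative):

```python
import numpy as np

def fuse_labels(atlas_labels):
    """Multi-atlas label fusion by per-voxel majority vote -- a simple
    stand-in for the STAPLE-family estimators used in the paper."""
    stacked = np.stack(atlas_labels)        # (n_atlases, ...) integer labels
    labels = np.unique(stacked)
    votes = np.array([(stacked == l).sum(axis=0) for l in labels])
    return labels[votes.argmax(axis=0)]     # ties go to the smaller label

# Three toy water(1)/fat(2) label maps; the fused map follows the majority.
a1 = np.array([[1, 1], [2, 2]])
a2 = np.array([[1, 2], [2, 2]])
a3 = np.array([[1, 1], [1, 2]])
print(fuse_labels([a1, a2, a3]))  # -> [[1 1]
                                  #     [2 2]]
```

STAPLE replaces the equal votes with per-atlas reliability weights estimated by expectation-maximization, and the local variants estimate those weights per region.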
48
Wang Y, Zhang Y, Xuan W, Kao E, Cao P, Tian B, Ordovas K, Saloner D, Liu J. Fully automatic segmentation of 4D MRI for cardiac functional measurements. Med Phys 2018; 46:180-189. [PMID: 30352129 DOI: 10.1002/mp.13245] [Citation(s) in RCA: 18] [Impact Index Per Article: 2.6] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/14/2018] [Revised: 09/10/2018] [Accepted: 09/12/2018] [Indexed: 11/05/2022] Open
Abstract
PURPOSE Segmentation of cardiac medical images, an important step in measuring cardiac function, is usually performed either manually or semi-automatically. Fully automatic segmentation of the left ventricle (LV), the right ventricle (RV), and the myocardium in three-dimensional (3D) magnetic resonance (MR) images throughout the entire cardiac cycle (four-dimensional, 4D) remains challenging. This study proposes a deformable model-based segmentation methodology for efficiently segmenting 4D (3D + t) cardiac MR images. METHODS The proposed methodology first used the Hough transform and the local Gaussian distribution (LGD) method to segment the LV endocardial contours from cardiac MR images. Following this, a novel level set-based shape prior method was applied to generate the LV epicardial contours and the RV boundary. RESULTS This automatic image segmentation approach was applied to studies of 17 subjects. The results demonstrated that the proposed method was efficient compared to manual segmentation, achieving a segmentation accuracy with average Dice values of 88.62 ± 5.47%, 87.35 ± 7.26%, and 82.63 ± 6.22% for the LV endocardial, LV epicardial, and RV contours, respectively. CONCLUSIONS We have presented a method for accurate LV and RV segmentation. Compared to three existing methods, the proposed method can successfully segment the LV and yields the highest Dice value. This makes it an option for clinical assessment of the volume, size, and thickness of the ventricles.
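The first stage, locating the roughly circular LV endocardium with the Hough transform, reduces to circle voting: every edge pixel votes for candidate centres at each radius. A minimal sketch on a synthetic edge image (the paper's pipeline adds the LGD model and level sets on top; all names here are illustrative):

```python
import numpy as np

def hough_circle(edge_img, radii):
    """Minimal Hough circle transform: every edge pixel votes for all
    candidate centres at each radius; the best-voted (cx, cy, r) wins."""
    h, w = edge_img.shape
    ys, xs = np.nonzero(edge_img)
    thetas = np.linspace(0, 2 * np.pi, 64, endpoint=False)
    best, best_votes = None, -1
    for r in radii:
        acc = np.zeros((h, w), dtype=int)
        cy = np.rint(ys[:, None] - r * np.sin(thetas)).astype(int).ravel()
        cx = np.rint(xs[:, None] - r * np.cos(thetas)).astype(int).ravel()
        ok = (cy >= 0) & (cy < h) & (cx >= 0) & (cx < w)
        np.add.at(acc, (cy[ok], cx[ok]), 1)    # accumulate the votes
        if acc.max() > best_votes:
            best_votes = acc.max()
            y0, x0 = np.unravel_index(acc.argmax(), acc.shape)
            best = (x0, y0, r)
    return best

# Synthetic "LV endocardium": a ring of edge pixels centred at (20, 24), r = 9.
img = np.zeros((48, 48))
t = np.linspace(0, 2 * np.pi, 200)
img[np.rint(24 + 9 * np.sin(t)).astype(int),
    np.rint(20 + 9 * np.cos(t)).astype(int)] = 1
found = hough_circle(img, radii=[7, 8, 9, 10, 11])
print(found)  # (cx, cy, r) close to (20, 24, 9)
```

The detected circle then seeds the LGD-based contour refinement described in the abstract.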
Affiliation(s)
- Yan Wang, Department of Radiology, University of California San Francisco, San Francisco, CA, 94121, USA
- Yue Zhang, Department of Surgery, University of California San Francisco, San Francisco, CA, 94121, USA; Veteran Affairs Medical Center, San Francisco, CA, 94121, USA
- Wanling Xuan, The Ohio State University Wexner Medical Center, Columbus, Ohio, 43210, USA
- Evan Kao, Department of Radiology, University of California San Francisco, San Francisco, CA, 94121, USA; University of California Berkeley, Berkeley, CA, 94720, USA
- Peng Cao, Department of Radiology, University of California San Francisco, San Francisco, CA, 94107, USA
- Bing Tian, Department of Radiology, Changhai Hospital, Shanghai, 200433, China
- Karen Ordovas, Department of Radiology, University of California San Francisco, San Francisco, CA, 94121, USA
- David Saloner, Department of Radiology, University of California San Francisco, San Francisco, CA, 94121, USA; Department of Surgery, University of California San Francisco, San Francisco, CA, 94121, USA
- Jing Liu, Department of Radiology, University of California San Francisco, San Francisco, CA, 94108, USA
49
Dawes TJW, de Marvao A, Shi W, Rueckert D, Cook SA, O'Regan DP. Identifying the optimal regional predictor of right ventricular global function: a high-resolution three-dimensional cardiac magnetic resonance study. Anaesthesia 2018; 74:312-320. [PMID: 30427059 PMCID: PMC6767156 DOI: 10.1111/anae.14494] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.1] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Accepted: 10/05/2018] [Indexed: 12/17/2022]
Abstract
Right ventricular (RV) function has prognostic value in acute, chronic and peri-operative disease, although the complex RV contractile pattern makes rapid assessment difficult. Several two-dimensional (2D) regional measures estimate RV function, however the optimal measure is not known. High-resolution three-dimensional (3D) cardiac magnetic resonance cine imaging was acquired in 300 healthy volunteers and a computational model of RV motion created. Points where regional function was significantly associated with global function were identified and a 2D, optimised single-point marker (SPM-O) of global function developed. This marker was prospectively compared with tricuspid annular plane systolic excursion (TAPSE), septum-freewall displacement (SFD) and their fractional change (TAPSE-F, SFD-F) in a test cohort of 300 patients in the prediction of RV ejection fraction. RV ejection fraction was significantly associated with systolic function in a contiguous 7.3 cm2 patch of the basal RV freewall combining transverse (38%), longitudinal (35%) and circumferential (27%) contraction and coinciding with the four-chamber view. In the test cohort, all single-point surrogates correlated with RV ejection fraction (p < 0.010), but correlation (R) was higher for SPM-O (R = 0.44, p < 0.001) than TAPSE (R = 0.24, p < 0.001) and SFD (R = 0.22, p < 0.001), and non-significantly higher than TAPSE-F (R = 0.40, p < 0.001) and SFD-F (R = 0.43, p < 0.001). SPM-O explained more of the observed variance in RV ejection fraction (19%) and predicted it more accurately than any other 2D marker (median error 2.8 ml vs 3.6 ml, p < 0.001). We conclude that systolic motion of the basal RV freewall predicts global function more accurately than other 2D estimators. However, no markers summarise 3D contractile patterns, limiting their predictive accuracy.
Affiliation(s)
- T J W Dawes, National Heart and Lung Institute, Imperial College London, London, UK
- A de Marvao, Medical Research Council London Institute of Medical Sciences, Faculty of Medicine, Imperial College London, London, UK
- W Shi, Department of Computing, Faculty of Engineering, Imperial College London, London, UK
- D Rueckert, Department of Computing, Faculty of Engineering, Imperial College London, London, UK
- S A Cook, Department of Clinical and Molecular Cardiology, Medical Research Council London Institute of Medical Sciences, Faculty of Medicine, Imperial College London, London, UK; Department of Cardiology, National Heart Centre Singapore, Singapore and Duke-NUS Graduate Medical School, Singapore
- D P O'Regan, Medical Research Council London Institute of Medical Sciences, Faculty of Medicine, Imperial College London, London, UK
50
An atlas-based multimodal registration method for 2D images with discrepancy structures. Med Biol Eng Comput 2018; 56:2151-2161. [PMID: 29862470 DOI: 10.1007/s11517-018-1808-1] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.1] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/18/2017] [Accepted: 02/16/2018] [Indexed: 10/14/2022]
Abstract
An atlas-based multimodal registration method for 2D images with discrepancy structures is proposed in this paper. An atlas is used to complement the discrepancy structure information in multimodal medical images. The scheme includes three steps: floating image to atlas registration, atlas to reference image registration, and field-based deformation. To evaluate the performance, a frame model, a brain model, and clinical images were employed in registration experiments. We measured the registration performance by the sum of squared intensity differences. The results indicate that this method is robust and performs better than direct registration for multimodal images with discrepancy structures. We conclude that the proposed method is suitable for multimodal images with discrepancy structures. Graphical Abstract: schematic diagram of the atlas-based multimodal registration method.
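The three-step scheme amounts to composing two deformation fields, floating-to-atlas and atlas-to-reference. A minimal nearest-neighbour sketch of that composition (names are illustrative, and real pipelines interpolate rather than round):

```python
import numpy as np

def compose(disp_ab, disp_bc):
    """Compose two dense 2-D displacement fields: map each point x through
    A->B (x + disp_ab), then sample disp_bc at the displaced position
    (nearest neighbour here; real pipelines interpolate)."""
    h, w = disp_ab.shape[:2]
    ys, xs = np.mgrid[0:h, 0:w]
    mid_y = np.clip(np.rint(ys + disp_ab[..., 0]).astype(int), 0, h - 1)
    mid_x = np.clip(np.rint(xs + disp_ab[..., 1]).astype(int), 0, w - 1)
    return disp_ab + disp_bc[mid_y, mid_x]

# Two constant shifts compose into their sum.
ab = np.full((8, 8, 2), 1.0)   # floating -> atlas: shift every point by (1, 1)
bc = np.full((8, 8, 2), 2.0)   # atlas -> reference: shift by (2, 2)
composed = compose(ab, bc)
print(composed[0, 0])  # -> [3. 3.]
```

Routing both modalities through the atlas in this way is what lets the method sidestep a direct, structure-mismatched multimodal registration.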