1. Torres HR, Oliveira B, Fritze A, Birdir C, Rudiger M, Fonseca JC, Morais P, Vilaca JL. Deep-DM: Deep-Driven Deformable Model for 3D Image Segmentation Using Limited Data. IEEE J Biomed Health Inform 2024; 28:7287-7299. PMID: 39110559. DOI: 10.1109/jbhi.2024.3440171.
Abstract
Objective: Medical image segmentation is essential for several clinical tasks, including diagnosis, surgical and treatment planning, and image-guided interventions. Deep learning (DL) methods have become the state of the art for several image segmentation scenarios. However, effectively training a DL model requires a large, well-annotated dataset, which is usually difficult to obtain in clinical practice, especially for 3D images.
Methods: In this paper, we propose Deep-DM, a learning-guided deformable model framework for 3D medical image segmentation using limited training data. In the proposed method, an energy function is learned by a convolutional neural network (CNN) and integrated into an explicit deformable model to drive the evolution of an initial surface towards the object to segment. Specifically, the learning-based energy function is iteratively retrieved from localized anatomical representations of the image, which contain the image information around the evolving surface at each iteration. By focusing on localized regions of interest, this representation excludes irrelevant image information, facilitating the learning process.
Results and conclusion: The performance of the proposed method is demonstrated for left ventricle and fetal head segmentation in ultrasound, left atrium segmentation in magnetic resonance, and bladder segmentation in computed tomography, using different numbers of training volumes in each study. The results showed the feasibility of the proposed method for segmenting different anatomical structures in different imaging modalities. Moreover, the results also showed that the proposed approach is less dependent on the size of the training dataset than state-of-the-art DL-based segmentation methods, outperforming them on all tasks when few samples are available.
Significance: Overall, by offering a more robust and less data-intensive approach to accurately segmenting anatomical structures, the proposed method has the potential to enhance clinical tasks that require image segmentation strategies.
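To make the iterative scheme described in the abstract more concrete, the following minimal Python sketch evolves a surface by displacements computed from local image patches sampled around its vertices. It is an illustration only: `predict_displacement` is a hypothetical placeholder standing in for the CNN-learned energy term, and none of the function names come from the cited paper.

```python
import numpy as np

def sample_patches(volume, vertices, size=9):
    """Extract local intensity patches centred on each surface vertex
    (nearest-voxel sampling; vertices are assumed to lie inside the volume)."""
    half = size // 2
    pad = np.pad(volume, half, mode="edge")
    idx = np.rint(vertices).astype(int) + half
    return np.stack([pad[x - half:x + half + 1,
                         y - half:y + half + 1,
                         z - half:z + half + 1] for x, y, z in idx])

def predict_displacement(patches, normals):
    """Placeholder for the CNN-learned energy term: a crude intensity difference
    across each patch is used as a bounded step size along the vertex normal."""
    grad = patches[:, -1, :, :].mean(axis=(1, 2)) - patches[:, 0, :, :].mean(axis=(1, 2))
    return np.tanh(grad)[:, None] * normals

def evolve_surface(volume, vertices, normals, n_iters=50, step=0.5):
    """Iteratively move the surface towards the (proxy) energy minimum."""
    for _ in range(n_iters):
        patches = sample_patches(volume, vertices)
        vertices = vertices + step * predict_displacement(patches, normals)
    return vertices
```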
2. Akbari S, Tabassian M, Pedrosa J, Queiros S, Papangelopoulou K, D'hooge J. BEAS-Net: A Shape-Prior-Based Deep Convolutional Neural Network for Robust Left Ventricular Segmentation in 2-D Echocardiography. IEEE Trans Ultrason Ferroelectr Freq Control 2024; 71:1565-1576. PMID: 38913532. DOI: 10.1109/tuffc.2024.3418030.
Abstract
Left ventricle (LV) segmentation of 2-D echocardiography images is an essential step in the analysis of cardiac morphology and function and, more generally, in the diagnosis of cardiovascular diseases (CVDs). Several deep learning (DL) algorithms have recently been proposed for the automatic segmentation of the LV, showing significant performance improvement over traditional segmentation algorithms. However, unlike the traditional methods, prior information about the segmentation problem, e.g., anatomical shape information, is not usually incorporated when training the DL algorithms. This can degrade the generalization performance of the DL models on unseen images whose characteristics differ somewhat from those of the training images, e.g., low-quality testing images. In this study, a new shape-constrained deep convolutional neural network (CNN), called B-spline explicit active surface (BEAS)-Net, is introduced for automatic LV segmentation. The BEAS-Net learns how to associate the image features encoded by its convolutional layers with anatomical shape-prior information derived from the BEAS algorithm, in order to generate physiologically meaningful segmentation contours when dealing with artifactual or low-quality images. The performance of the proposed network was evaluated using three different in vivo datasets and was compared with a deep segmentation algorithm based on the U-Net model. Both networks yielded comparable results when tested on images of acceptable quality, but BEAS-Net outperformed the benchmark DL model on artifactual and low-quality images.
3. Wang G, Zhou M, Ning X, Tiwari P, Zhu H, Yang G, Yap CH. US2Mask: Image-to-mask generation learning via a conditional GAN for cardiac ultrasound image segmentation. Comput Biol Med 2024; 172:108282. PMID: 38503085. DOI: 10.1016/j.compbiomed.2024.108282.
Abstract
Cardiac ultrasound (US) image segmentation is vital for evaluating clinical indices, but it often demands a large dataset and expert annotations, resulting in high costs for deep learning algorithms. To address this, our study presents a framework utilizing artificial intelligence generation technology to produce multi-class RGB masks for cardiac US image segmentation. The proposed approach directly performs semantic segmentation of the heart's main structures in US images from various scanning modes. Additionally, we introduce a novel learning approach based on conditional generative adversarial networks (CGAN) for cardiac US image segmentation, incorporating a conditional input and paired RGB masks. Experimental results from three cardiac US image datasets with diverse scan modes demonstrate that our approach outperforms several state-of-the-art models, showcasing improvements in five commonly used segmentation metrics, with lower noise sensitivity. Source code is available at https://github.com/energy588/US2mask.
Affiliation(s)
- Gang Wang
- School of Computer Science and Technology, Chongqing University of Posts and Telecommunications, Chongqing, China; Department of Bioengineering, Imperial College London, London, UK
- Mingliang Zhou
- School of Computer Science, Chongqing University, Chongqing, China
- Xin Ning
- Institute of Semiconductors, Chinese Academy of Sciences, Beijing, China
- Prayag Tiwari
- School of Information Technology, Halmstad University, Halmstad, Sweden
- Guang Yang
- Department of Bioengineering, Imperial College London, London, UK; Cardiovascular Research Centre, Royal Brompton Hospital, London, UK; National Heart and Lung Institute, Imperial College London, London, UK
- Choon Hwai Yap
- Department of Bioengineering, Imperial College London, London, UK
4. Liao M, Lian Y, Yao Y, Chen L, Gao F, Xu L, Huang X, Feng X, Guo S. Left Ventricle Segmentation in Echocardiography with Transformer. Diagnostics (Basel) 2023; 13:2365. PMID: 37510109. PMCID: PMC10378102. DOI: 10.3390/diagnostics13142365.
Abstract
Left ventricular ejection fraction (LVEF) plays an essential role in the assessment of cardiac function, providing quantitative data to support the medical diagnosis of heart disease. Robust evaluation of the ejection fraction relies on accurate left ventricular (LV) segmentation of echocardiograms. Because manual echocardiographic analysis is subject to human bias and high labor costs, deep-learning algorithms have been developed to assist human experts in segmentation tasks. Most previous work is based on convolutional neural network (CNN) architectures and has achieved good results. However, the region occupied by the left ventricle in echocardiography is large, so the limited receptive field of CNNs leaves much room for improvement in the effectiveness of LV segmentation. In recent years, Vision Transformer models have demonstrated their effectiveness and universality in traditional semantic segmentation tasks. Inspired by this, we propose two models that use two different pure Transformers as the basic framework for LV segmentation in echocardiography: one combines Swin Transformer and K-Net, and the other uses Segformer. We evaluate these two models on the EchoNet-Dynamic dataset for LV segmentation and compare the quantitative metrics with other models for LV segmentation. The experimental results show that the mean Dice similarity scores of the two models are 92.92% and 92.79%, respectively, which outperform most of the previous mainstream CNN models. In addition, we found that for some samples that were difficult to segment, both of our models successfully recognized the valve region and separated the left ventricle from the left atrium, whereas the CNN model segmented them together as a single part. It therefore becomes possible to obtain accurate segmentation results through simple post-processing, by selecting the part with the largest circumference or pixel area. These promising results prove the effectiveness of the two models and reveal the potential of the Transformer structure in echocardiographic segmentation.
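The post-processing mentioned at the end of the abstract (retaining the largest segmented region) can be illustrated with a short sketch using `scipy.ndimage`; the function below is a generic connected-component filter written for this listing and is not taken from the cited paper.

```python
import numpy as np
from scipy import ndimage

def keep_largest_component(mask):
    """Keep only the largest connected region of a binary mask (e.g., the LV blood pool)."""
    labeled, n = ndimage.label(mask)
    if n == 0:
        return mask
    sizes = ndimage.sum(mask, labeled, range(1, n + 1))  # pixel count per component
    return labeled == (np.argmax(sizes) + 1)
```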
Affiliation(s)
- Minqi Liao
- Department of Cardiology, Dongguan People's Hospital (The Tenth Affiliated Hospital of Southern Medical University), No 78, Wandao Road, Wanjiang District, Dongguan 523059, China
- Yifan Lian
- National Space Science Center, Chinese Academy of Sciences, Beijing 100190, China
- Yongzhao Yao
- Department of Cardiology, Dongguan People's Hospital (The Tenth Affiliated Hospital of Southern Medical University), No 78, Wandao Road, Wanjiang District, Dongguan 523059, China
- Lihua Chen
- Department of Cardiology, Dongguan People's Hospital (The Tenth Affiliated Hospital of Southern Medical University), No 78, Wandao Road, Wanjiang District, Dongguan 523059, China
- Fei Gao
- National Space Science Center, Chinese Academy of Sciences, Beijing 100190, China
- University of Chinese Academy of Sciences, Beijing 100049, China
- Long Xu
- National Astronomical Observatories, Chinese Academy of Sciences, Beijing 100101, China
- Peng Cheng National Laboratory, Shenzhen 518000, China
- Xin Huang
- National Space Science Center, Chinese Academy of Sciences, Beijing 100190, China
- Xinxing Feng
- Endocrinology Centre, Fuwai Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College, Beijing 100037, China
- Suxia Guo
- Department of Cardiology, Dongguan People's Hospital (The Tenth Affiliated Hospital of Southern Medical University), No 78, Wandao Road, Wanjiang District, Dongguan 523059, China
5. Torres HR, Oliveira B, Fonseca JC, Morais P, Vilaca JL. Dual consistency loss for contour-aware segmentation in medical images. Annu Int Conf IEEE Eng Med Biol Soc 2023; 2023:1-4. PMID: 38082637. DOI: 10.1109/embc40787.2023.10340931.
Abstract
Medical image segmentation is a paramount task for several clinical applications, namely the diagnosis of pathologies, treatment planning, and aiding image-guided surgeries. With the development of deep learning, convolutional neural networks (CNNs) have become the state of the art for medical image segmentation. However, concerns remain about precise object boundary delineation, since traditional CNNs can produce non-smooth segmentations with boundary discontinuities. In this work, a U-shaped CNN architecture is proposed to generate both a pixel-wise segmentation and a probabilistic contour map of the object to segment, in order to produce reliable segmentations at the object's boundaries. Moreover, since the segmentation and contour maps must be inherently related to each other, a dual consistency loss that relates the two outputs of the network is proposed. The network is thus encouraged to learn the segmentation and contour delineation tasks consistently during training. The proposed method was applied and validated on a public dataset of cardiac 3D ultrasound images of the left ventricle. The results showed the good performance of the method and its applicability to the cardiac dataset, demonstrating its potential for use in clinical practice for medical image segmentation.
Clinical Relevance: The proposed network with the dual consistency loss scheme can improve the performance of state-of-the-art CNNs for medical image segmentation, proving its value for computer-aided diagnosis.
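A minimal sketch of what a dual consistency term of this kind could look like is given below, assuming a network with two heads (segmentation logits and contour logits) and using a differentiable morphological gradient as the boundary operator. This is an assumed formulation for illustration, not the exact loss of the cited paper.

```python
import torch
import torch.nn.functional as F

def soft_boundary(prob, kernel=3):
    """Differentiable morphological gradient: dilation minus erosion of a soft mask (N, C, H, W)."""
    pad = kernel // 2
    dilated = F.max_pool2d(prob, kernel, stride=1, padding=pad)
    eroded = -F.max_pool2d(-prob, kernel, stride=1, padding=pad)
    return dilated - eroded

def dual_consistency_loss(seg_logits, contour_logits):
    """Penalise disagreement between the predicted contour map and the boundary
    implied by the predicted segmentation."""
    seg = torch.sigmoid(seg_logits)
    contour = torch.sigmoid(contour_logits)
    return F.mse_loss(contour, soft_boundary(seg))
```

In practice such a term would be added to the usual supervised segmentation and contour losses with a weighting factor.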
6. Messaoudi H, Belaid A, Ben Salem D, Conze PH. Cross-dimensional transfer learning in medical image segmentation with deep learning. Med Image Anal 2023; 88:102868. PMID: 37384952. DOI: 10.1016/j.media.2023.102868.
Abstract
Over the last decade, convolutional neural networks have emerged and advanced the state of the art in various image analysis and computer vision applications. The performance of 2D image classification networks is constantly improving, as they are trained on databases of millions of natural images. Conversely, in the field of medical image analysis, progress is also remarkable but has been slowed mainly by the relative lack of annotated data and by the inherent constraints related to the acquisition process. These limitations are even more pronounced given the volumetric nature of medical imaging data. In this paper, we introduce an efficient way to transfer the capabilities of a 2D classification network trained on natural images to 2D, 3D uni- and multi-modal medical image segmentation applications. In this direction, we designed novel architectures based on two key principles: weight transfer, by embedding a 2D pre-trained encoder into a higher-dimensional U-Net, and dimensional transfer, by expanding a 2D segmentation network into a higher-dimensional one. The proposed networks were tested on benchmarks comprising different modalities: MR, CT, and ultrasound images. Our 2D network ranked first on the CAMUS challenge dedicated to echocardiographic data segmentation and surpassed the state of the art. Regarding 2D/3D MR and CT abdominal images from the CHAOS challenge, our approach largely outperformed the other 2D-based methods described in the challenge paper on Dice, RAVD, ASSD, and MSSD scores and ranked third on the online evaluation platform. Our 3D network applied to the BraTS 2022 competition also achieved promising results, reaching an average Dice score of 91.69% (91.22%) for the whole tumor, 83.23% (84.77%) for the tumor core, and 81.75% (83.88%) for the enhanced tumor using the approach based on weight (dimensional) transfer. Experimental and qualitative results illustrate the effectiveness of our methods for multi-dimensional medical image segmentation.
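The weight-transfer principle (embedding a pretrained 2D encoder in a higher-dimensional network) is commonly implemented by inflating 2D kernels along a new depth axis. The PyTorch sketch below shows one generic way to do this under that assumption; it is not the authors' code.

```python
import torch
import torch.nn as nn

def inflate_conv2d_to_3d(conv2d: nn.Conv2d, depth: int = 3) -> nn.Conv3d:
    """Initialise a 3D convolution by replicating pretrained 2D kernels along a new
    depth axis; weights are divided by `depth` so activations keep a similar scale.
    Assumes `kernel_size`, `stride`, and `padding` are plain tuples."""
    conv3d = nn.Conv3d(conv2d.in_channels, conv2d.out_channels,
                       kernel_size=(depth, *conv2d.kernel_size),
                       stride=(1, *conv2d.stride),
                       padding=(depth // 2, *conv2d.padding),
                       bias=conv2d.bias is not None)
    with torch.no_grad():
        w2d = conv2d.weight  # (out, in, kH, kW)
        conv3d.weight.copy_(w2d.unsqueeze(2).repeat(1, 1, depth, 1, 1) / depth)
        if conv2d.bias is not None:
            conv3d.bias.copy_(conv2d.bias)
    return conv3d
```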
Affiliation(s)
- Hicham Messaoudi
- Laboratory of Medical Informatics (LIMED), Faculty of Technology, University of Bejaia, 06000 Bejaia, Algeria
- Ahror Belaid
- Laboratory of Medical Informatics (LIMED), Faculty of Exact Sciences, University of Bejaia, 06000 Bejaia, Algeria; Data Science & Applications Research Unit - CERIST, 06000 Bejaia, Algeria
- Douraied Ben Salem
- Laboratory of Medical Information Processing (LaTIM) UMR 1101, Inserm, 29200 Brest, France; Neuroradiology Department, University Hospital of Brest, 29200 Brest, France
- Pierre-Henri Conze
- Laboratory of Medical Information Processing (LaTIM) UMR 1101, Inserm, 29200 Brest, France; IMT Atlantique, 29200 Brest, France
7. Zhao D, Ferdian E, Maso Talou GD, Quill GM, Gilbert K, Wang VY, Babarenda Gamage TP, Pedrosa J, D’hooge J, Sutton TM, Lowe BS, Legget ME, Ruygrok PN, Doughty RN, Camara O, Young AA, Nash MP. MITEA: A dataset for machine learning segmentation of the left ventricle in 3D echocardiography using subject-specific labels from cardiac magnetic resonance imaging. Front Cardiovasc Med 2023; 9:1016703. PMID: 36704465. PMCID: PMC9871929. DOI: 10.3389/fcvm.2022.1016703.
Abstract
Segmentation of the left ventricle (LV) in echocardiography is an important task for the quantification of volume and mass in heart disease. Continuing advances in echocardiography have extended imaging capabilities into the 3D domain, subsequently overcoming the geometric assumptions associated with conventional 2D acquisitions. Nevertheless, the analysis of 3D echocardiography (3DE) poses several challenges associated with limited spatial resolution, poor contrast-to-noise ratio, complex noise characteristics, and image anisotropy. To develop automated methods for 3DE analysis, a sufficiently large, labeled dataset is typically required. However, ground truth segmentations have historically been difficult to obtain due to the high inter-observer variability associated with manual analysis. We address this lack of expert consensus by registering labels derived from higher-resolution subject-specific cardiac magnetic resonance (CMR) images, producing 536 annotated 3DE images from 143 human subjects (10 of which were excluded). This heterogeneous population consists of healthy controls and patients with cardiac disease, across a range of demographics. To demonstrate the utility of such a dataset, a state-of-the-art, self-configuring deep learning network for semantic segmentation was employed for automated 3DE analysis. Using the proposed dataset for training, the network produced measurement biases of -9 ± 16 ml, -1 ± 10 ml, -2 ± 5 %, and 5 ± 23 g, for end-diastolic volume, end-systolic volume, ejection fraction, and mass, respectively, outperforming an expert human observer in terms of accuracy as well as scan-rescan reproducibility. As part of the Cardiac Atlas Project, we present here a large, publicly available 3DE dataset with ground truth labels that leverage the higher resolution and contrast of CMR, to provide a new benchmark for automated 3DE analysis. Such an approach not only reduces the effect of observer-specific bias present in manual 3DE annotations, but also enables the development of analysis techniques which exhibit better agreement with CMR compared to conventional methods. This represents an important step for enabling more efficient and accurate diagnostic and prognostic information to be obtained from echocardiography.
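For readers unfamiliar with how the clinical indices reported above are derived from label maps, a simple voxel-counting sketch is shown below. The label convention (1 = LV cavity, 2 = LV myocardium) and the myocardial density value are assumptions made for illustration, not specifics of the MITEA dataset.

```python
import numpy as np

def lv_indices(ed_labels, es_labels, spacing_mm, myocardium_density=1.05):
    """Derive LV indices from end-diastolic (ED) and end-systolic (ES) label maps.
    Assumed labels: 1 = LV cavity, 2 = LV myocardium; density in g/ml."""
    voxel_ml = np.prod(spacing_mm) / 1000.0          # mm^3 -> ml
    edv = np.sum(ed_labels == 1) * voxel_ml          # end-diastolic volume
    esv = np.sum(es_labels == 1) * voxel_ml          # end-systolic volume
    ef = 100.0 * (edv - esv) / edv                   # ejection fraction (%)
    mass = np.sum(ed_labels == 2) * voxel_ml * myocardium_density  # LV mass (g)
    return edv, esv, ef, mass
```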
Affiliation(s)
- Debbie Zhao
- Auckland Bioengineering Institute, University of Auckland, Auckland, New Zealand
- Edward Ferdian
- Department of Anatomy and Medical Imaging, University of Auckland, Auckland, New Zealand
- Gina M. Quill
- Auckland Bioengineering Institute, University of Auckland, Auckland, New Zealand
- Kathleen Gilbert
- Auckland Bioengineering Institute, University of Auckland, Auckland, New Zealand
- Vicky Y. Wang
- Auckland Bioengineering Institute, University of Auckland, Auckland, New Zealand
- João Pedrosa
- Institute for Systems and Computer Engineering, Technology and Science (INESC TEC), Porto, Portugal
- Jan D’hooge
- Department of Cardiovascular Sciences, KU Leuven, Leuven, Belgium
- Timothy M. Sutton
- Counties Manukau Health Cardiology, Middlemore Hospital, Auckland, New Zealand
- Boris S. Lowe
- Green Lane Cardiovascular Service, Auckland City Hospital, Auckland, New Zealand
- Malcolm E. Legget
- Department of Medicine, University of Auckland, Auckland, New Zealand
- Peter N. Ruygrok
- Green Lane Cardiovascular Service, Auckland City Hospital, Auckland, New Zealand
- Department of Medicine, University of Auckland, Auckland, New Zealand
- Robert N. Doughty
- Green Lane Cardiovascular Service, Auckland City Hospital, Auckland, New Zealand
- Department of Medicine, University of Auckland, Auckland, New Zealand
- Oscar Camara
- Department of Information and Communication Technologies, Universitat Pompeu Fabra, Barcelona, Spain
- Alistair A. Young
- Department of Anatomy and Medical Imaging, University of Auckland, Auckland, New Zealand
- Department of Biomedical Engineering, King’s College London, London, United Kingdom
- Martyn P. Nash
- Auckland Bioengineering Institute, University of Auckland, Auckland, New Zealand
- Department of Engineering Science, University of Auckland, Auckland, New Zealand
8. Tian Y, Su D, Lauria S, Liu X. Recent advances on loss functions in deep learning for computer vision. Neurocomputing 2022. DOI: 10.1016/j.neucom.2022.04.127.
9. Abdallah Y. Detection of Cardiac Tissues using K-means Analysis Methods in Nuclear Medicine Images. Open Access Maced J Med Sci 2021. DOI: 10.3889/oamjms.2021.7806.
Abstract
BACKGROUND: Nuclear cardiology is used to diagnose cardiac disorders such as ischemic and inflammatory disorders. In cardiac scintigraphy, separating closely adjacent tissues in the image is a challenging issue.
AIM: The aim of the study was to detect cardiac tissues in nuclear medicine images using K-means analysis methods. The study also aimed to reduce the fleck noise that disturbs the contrast and makes analysis more difficult.
METHODS: Digital image processing was used to improve the detection rate of the myocardium using color-based algorithms. In this study, color-based K-means clustering was used: the scintigraphs were converted into a color-space representation, and each pixel in the image was then segmented using color analysis algorithms.
RESULTS: The segmented scintigraph was displayed as a separate image. The proposed technique delineates the myocardial tissues and borders precisely. Both the exactness (precision) rate and recall were calculated; the result was 97.3 ± 8.46 (p > 0.05).
CONCLUSION: The proposed technique recognized the heart tissue with a high degree of exactness.
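A minimal sketch of color-based K-means pixel clustering, in the spirit of the method described above, is shown below using scikit-learn; the number of clusters and the direct use of RGB values are illustrative assumptions, not the study's exact pipeline.

```python
import numpy as np
from sklearn.cluster import KMeans

def kmeans_color_segmentation(rgb_image, n_clusters=4, seed=0):
    """Cluster the pixels of an RGB scintigraph by colour and return a label image."""
    h, w, _ = rgb_image.shape
    pixels = rgb_image.reshape(-1, 3).astype(float)
    labels = KMeans(n_clusters=n_clusters, n_init=10, random_state=seed).fit_predict(pixels)
    return labels.reshape(h, w)
```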
10. Wu M, Awasthi N, Rad NM, Pluim JPW, Lopata RGP. Advanced Ultrasound and Photoacoustic Imaging in Cardiology. Sensors (Basel) 2021; 21:7947. PMID: 34883951. PMCID: PMC8659598. DOI: 10.3390/s21237947.
Abstract
Cardiovascular diseases (CVDs) remain the leading cause of death worldwide. Effective management and treatment of CVDs rely heavily on accurate diagnosis of the disease. As the most common imaging technique for the clinical diagnosis of CVDs, ultrasound (US) imaging has been intensively explored. Especially with the introduction of deep learning (DL) techniques, US imaging has advanced tremendously in recent years. Photoacoustic imaging (PAI) is one of the most promising new imaging methods alongside the existing clinical imaging methods. It can characterize different tissue compositions based on optical absorption contrast and thus can assess the functionality of the tissue. This paper reviews some major technological developments in both US (combined with deep learning techniques) and PA imaging as applied to the diagnosis of CVDs.
Affiliation(s)
- Min Wu
- Photoacoustics and Ultrasound Laboratory Eindhoven (PULS/e), Department of Biomedical Engineering, Eindhoven University of Technology, 5612 AZ Eindhoven, The Netherlands
- Navchetan Awasthi
- Photoacoustics and Ultrasound Laboratory Eindhoven (PULS/e), Department of Biomedical Engineering, Eindhoven University of Technology, 5612 AZ Eindhoven, The Netherlands
- Medical Image Analysis Group (IMAG/e), Department of Biomedical Engineering, Eindhoven University of Technology, 5612 AZ Eindhoven, The Netherlands
- Nastaran Mohammadian Rad
- Photoacoustics and Ultrasound Laboratory Eindhoven (PULS/e), Department of Biomedical Engineering, Eindhoven University of Technology, 5612 AZ Eindhoven, The Netherlands
- Medical Image Analysis Group (IMAG/e), Department of Biomedical Engineering, Eindhoven University of Technology, 5612 AZ Eindhoven, The Netherlands
- Josien P. W. Pluim
- Medical Image Analysis Group (IMAG/e), Department of Biomedical Engineering, Eindhoven University of Technology, 5612 AZ Eindhoven, The Netherlands
- Richard G. P. Lopata
- Photoacoustics and Ultrasound Laboratory Eindhoven (PULS/e), Department of Biomedical Engineering, Eindhoven University of Technology, 5612 AZ Eindhoven, The Netherlands
11. Mitral Valve Segmentation Using Robust Nonnegative Matrix Factorization. J Imaging 2021; 7:213. PMID: 34677299. PMCID: PMC8541511. DOI: 10.3390/jimaging7100213.
Abstract
Analyzing and understanding the movement of the mitral valve is of vital importance in cardiology, as the treatment and prevention of several serious heart diseases depend on it. Unfortunately, large amounts of noise as well as highly varying image quality make the automatic tracking and segmentation of the mitral valve in two-dimensional echocardiographic videos challenging. In this paper, we present a fully automatic and unsupervised method for segmentation of the mitral valve in two-dimensional echocardiographic videos, independently of the echocardiographic view. We propose a bias-free variant of robust non-negative matrix factorization (RNMF) along with a window-based localization approach that is able to identify the mitral valve in several challenging situations. We improve the average f1-score on our dataset of 10 echocardiographic videos by 0.18, to an f1-score of 0.56.
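To illustrate the general idea of factorizing an echo video so that fast-moving structures stand out, the sketch below uses standard NMF from scikit-learn on the frame matrix and takes the reconstruction residual as a motion map. This is a simplified stand-in, not the bias-free RNMF variant proposed in the paper.

```python
import numpy as np
from sklearn.decomposition import NMF

def motion_map_from_video(frames, rank=2):
    """frames: (T, H, W) non-negative echo video. A low-rank factorisation captures the
    quasi-static background; the residual highlights fast-moving structures such as the
    mitral valve leaflets."""
    T, H, W = frames.shape
    V = frames.reshape(T, -1)                         # each row is a vectorised frame
    model = NMF(n_components=rank, init="nndsvda", max_iter=400)
    Wm = model.fit_transform(V)                       # (T, rank)
    background = Wm @ model.components_               # low-rank reconstruction
    residual = np.abs(V - background).reshape(T, H, W)
    return residual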
12. Zhao D, Quill GM, Gilbert K, Wang VY, Houle HC, Legget ME, Ruygrok PN, Doughty RN, Pedrosa J, D'hooge J, Young AA, Nash MP. Systematic Comparison of Left Ventricular Geometry Between 3D-Echocardiography and Cardiac Magnetic Resonance Imaging. Front Cardiovasc Med 2021; 8:728205. PMID: 34616783. PMCID: PMC8488135. DOI: 10.3389/fcvm.2021.728205.
Abstract
Aims: Left ventricular (LV) volumes estimated using three-dimensional echocardiography (3D-echo) have been reported to be smaller than those measured using cardiac magnetic resonance (CMR) imaging, but the underlying causes are not well-understood. We investigated differences in regional LV anatomy derived from these modalities and related subsequent findings to image characteristics.
Methods and Results: Seventy participants (18 patients and 52 healthy participants) were imaged with 3D-echo and CMR (<1 h apart). Three-dimensional left ventricular models were constructed at end-diastole (ED) and end-systole (ES) from both modalities using previously validated software, enabling the fusion of CMR with 3D-echo by rigid registration. Regional differences were evaluated as mean surface distances for each of the 17 American Heart Association segments, and by comparing contours superimposed on images from each modality. In comparison to CMR-derived models, 3D-echo models underestimated LV end-diastolic volume (EDV) by -16 ± 22, -1 ± 25, and -18 ± 24 ml across three independent analysis methods. Average surface distance errors were largest in the basal-anterolateral segment (11-15 mm) and smallest in the mid-inferoseptal segment (6 mm). Larger errors were associated with signal dropout in anterior regions and the appearance of trabeculae at the lateral wall.
Conclusions: Fusion of CMR and 3D-echo provides insight into the causes of volume underestimation by 3D-echo. Systematic signal dropout and differences in appearances of trabeculae lead to discrepancies in the delineation of LV geometry at anterior and lateral regions. A better understanding of error sources across modalities may improve correlation of clinical indices between 3D-echo and CMR.
Affiliation(s)
- Debbie Zhao
- Auckland Bioengineering Institute, University of Auckland, Auckland, New Zealand
- Gina M. Quill
- Auckland Bioengineering Institute, University of Auckland, Auckland, New Zealand
- Kathleen Gilbert
- Auckland Bioengineering Institute, University of Auckland, Auckland, New Zealand
- Vicky Y. Wang
- Auckland Bioengineering Institute, University of Auckland, Auckland, New Zealand
- Malcolm E. Legget
- Department of Medicine, University of Auckland, Auckland, New Zealand
- Peter N. Ruygrok
- Department of Medicine, University of Auckland, Auckland, New Zealand
- Green Lane Cardiovascular Service, Auckland City Hospital, Auckland, New Zealand
- Robert N. Doughty
- Department of Medicine, University of Auckland, Auckland, New Zealand
- Green Lane Cardiovascular Service, Auckland City Hospital, Auckland, New Zealand
- João Pedrosa
- Institute for Systems and Computer Engineering, Technology and Science, Porto, Portugal
- Jan D'hooge
- Department of Cardiovascular Sciences, KU Leuven, Leuven, Belgium
- Alistair A. Young
- Department of Biomedical Engineering, King's College London, London, United Kingdom
- Department of Anatomy and Medical Imaging, University of Auckland, Auckland, New Zealand
- Martyn P. Nash
- Auckland Bioengineering Institute, University of Auckland, Auckland, New Zealand
- Department of Engineering Science, University of Auckland, Auckland, New Zealand
13. A New Semi-automated Algorithm for Volumetric Segmentation of the Left Ventricle in Temporal 3D Echocardiography Sequences. Cardiovasc Eng Technol 2021; 13:55-68. PMID: 34046844. DOI: 10.1007/s13239-021-00547-6.
Abstract
PURPOSE: Echocardiography is commonly used as a non-invasive imaging tool in clinical practice for the assessment of cardiac function. However, delineation of the left ventricle is challenging due to the inherent properties of ultrasound imaging, such as the presence of speckle noise and the low signal-to-noise ratio.
METHODS: We propose a semi-automated segmentation algorithm for the delineation of the left ventricle in temporal 3D echocardiography sequences. The method requires minimal user interaction and relies on a diffeomorphic registration approach. Advantages of the method include no dependence on prior geometrical information, training data, or registration from an atlas.
RESULTS: The method was evaluated using three-dimensional ultrasound scan sequences from 18 patients from the Mazankowski Alberta Heart Institute, Edmonton, Canada, and compared to manual delineations provided by an expert cardiologist and four other registration algorithms. The segmentation approach yielded the following results over the cardiac cycle: a mean absolute difference of 1.01 (0.21) mm, a Hausdorff distance of 4.41 (1.43) mm, and a Dice overlap score of 0.93 (0.02).
CONCLUSION: The method performed well compared to the four other registration algorithms.
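The evaluation metrics reported above can be reproduced with a few lines of Python; the sketch below computes the Dice overlap of binary masks and the symmetric Hausdorff distance of surface point sets using scipy, as a generic illustration rather than the study's exact evaluation code.

```python
import numpy as np
from scipy.spatial.distance import directed_hausdorff

def dice(a, b):
    """Dice overlap between two binary masks."""
    a, b = a.astype(bool), b.astype(bool)
    return 2.0 * np.logical_and(a, b).sum() / (a.sum() + b.sum())

def hausdorff(points_a, points_b):
    """Symmetric Hausdorff distance between two point sets (e.g., surface vertices),
    in the same units as the point coordinates."""
    return max(directed_hausdorff(points_a, points_b)[0],
               directed_hausdorff(points_b, points_a)[0])
```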
14. Hosseini MS, Moradi MH. Adaptive fuzzy-SIFT rule-based registration for 3D cardiac motion estimation. Appl Intell 2021. DOI: 10.1007/s10489-021-02430-2.
15. Zamzmi G, Hsu LY, Li W, Sachdev V, Antani S. Harnessing Machine Intelligence in Automatic Echocardiogram Analysis: Current Status, Limitations, and Future Directions. IEEE Rev Biomed Eng 2021; 14:181-203. PMID: 32305938. PMCID: PMC8077725. DOI: 10.1109/rbme.2020.2988295.
Abstract
Echocardiography (echo) is a critical tool in diagnosing various cardiovascular diseases. Despite its diagnostic and prognostic value, interpretation and analysis of echo images are still widely performed manually by echocardiographers. A plethora of algorithms has been proposed to analyze medical ultrasound data using signal processing and machine learning techniques. These algorithms provided opportunities for developing automated echo analysis and interpretation systems. The automated approach can significantly assist in decreasing the variability and burden associated with manual image measurements. In this paper, we review the state-of-the-art automatic methods for analyzing echocardiography data. Particularly, we comprehensively and systematically review existing methods of four major tasks: echo quality assessment, view classification, boundary segmentation, and disease diagnosis. Our review covers three echo imaging modes, which are B-mode, M-mode, and Doppler. We also discuss the challenges and limitations of current methods and outline the most pressing directions for future research. In summary, this review presents the current status of automatic echo analysis and discusses the challenges that need to be addressed to obtain robust systems suitable for efficient use in clinical settings or point-of-care testing.
16. Qayyum A, Lalande A, Meriaudeau F. Automatic segmentation of tumors and affected organs in the abdomen using a 3D hybrid model for computed tomography imaging. Comput Biol Med 2020; 127:104097. DOI: 10.1016/j.compbiomed.2020.104097.
17. Smistad E, Ostvik A, Salte IM, Melichova D, Nguyen TM, Haugaa K, Brunvand H, Edvardsen T, Leclerc S, Bernard O, Grenne B, Lovstakken L. Real-Time Automatic Ejection Fraction and Foreshortening Detection Using Deep Learning. IEEE Trans Ultrason Ferroelectr Freq Control 2020; 67:2595-2604. PMID: 32175861. DOI: 10.1109/tuffc.2020.2981037.
Abstract
Volume and ejection fraction (EF) measurements of the left ventricle (LV) in 2-D echocardiography are associated with a high uncertainty, not only due to interobserver variability of the manual measurement, but also due to ultrasound acquisition errors such as apical foreshortening. In this work, a real-time and fully automated EF measurement and foreshortening detection method is proposed. The method uses several deep learning components, such as view classification, cardiac cycle timing, segmentation, and landmark extraction, to measure the amount of foreshortening, LV volume, and EF. A data set of 500 patients from an outpatient clinic was used to train the deep neural networks, while a separate data set of 100 patients from another clinic was used for evaluation, where LV volume and EF were measured by an expert using clinical protocols and software. A quantitative analysis using 3-D ultrasound showed that EF is considerably affected by apical foreshortening, and that the proposed method can detect and quantify the amount of apical foreshortening. The bias and standard deviation of the automatic EF measurements were -3.6 ± 8.1%, while the mean absolute difference was measured at 7.2%; these values are all within the interobserver variability and comparable with related studies. The proposed real-time pipeline allows for a continuous acquisition and measurement workflow without user interaction, and has the potential to significantly reduce the time spent on analysis and the measurement error due to foreshortening, while providing quantitative volume measurements in the everyday echo lab.
18. Gu J, Fang Z, Gao Y, Tian F. Segmentation of coronary arteries images using global feature embedded network with active contour loss. Comput Med Imaging Graph 2020; 86:101799. PMID: 33130419. DOI: 10.1016/j.compmedimag.2020.101799.
Abstract
Coronary heart disease (CHD) is a serious disease that endangers human health and life. In recent years, the morbidity and mortality of CHD have increased significantly. Because of the particularity and complexity of medical images, it is challenging to segment coronary arteries accurately and efficiently. This paper proposes a novel global feature embedded network for better coronary artery segmentation in 3D coronary computed tomography angiography (CTA) data. The global feature combines multi-level layers from various stages of the network, containing both semantic information and detailed features, with the aim of accurately segmenting the target with a precise boundary. In addition, we integrate a group of improved noisy activation functions with parameters into our network to eliminate the impact of noise in CTA data. We also improve the learned active contour model, which obtains a refined segmentation result with a smooth boundary based on the high-quality score map produced by the network. The experimental results show that the proposed framework achieves state-of-the-art performance both qualitatively and quantitatively.
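Active-contour-style losses typically combine a contour-length term with inside/outside region terms. The PyTorch sketch below follows one published formulation of such a loss (a length term from spatial gradients plus two region terms) as an illustration; it is not necessarily the exact term used in the cited work.

```python
import torch

def active_contour_loss(pred, target, lam=1.0, eps=1e-8):
    """Active-contour-style loss for a soft segmentation map `pred` and a binary
    `target`, both of shape (N, 1, H, W): contour-length term + region terms."""
    dy = pred[:, :, 1:, :] - pred[:, :, :-1, :]        # vertical gradient
    dx = pred[:, :, :, 1:] - pred[:, :, :, :-1]        # horizontal gradient
    length = torch.sqrt(dy[:, :, :, :-1] ** 2 + dx[:, :, :-1, :] ** 2 + eps).mean()
    region_in = (pred * (target - 1.0) ** 2).mean()    # penalise foreground mismatch
    region_out = ((1.0 - pred) * (target - 0.0) ** 2).mean()  # penalise background mismatch
    return length + lam * (region_in + region_out)
```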
Affiliation(s)
- Jia Gu
- School of Electronic and Electrical Engineering, Shanghai University of Engineering Science, Shanghai, 201620, China
- Zhijun Fang
- School of Electronic and Electrical Engineering, Shanghai University of Engineering Science, Shanghai, 201620, China
- Yongbin Gao
- School of Electronic and Electrical Engineering, Shanghai University of Engineering Science, Shanghai, 201620, China
- Fangzheng Tian
- School of Electronic and Electrical Engineering, Shanghai University of Engineering Science, Shanghai, 201620, China
19. Roy R, Ghosh S, Ghosh A. Clinical ultrasound image standardization using histogram specification. Comput Biol Med 2020; 120:103746. PMID: 32421650. DOI: 10.1016/j.compbiomed.2020.103746.
Abstract
This article presents a novel ultrasound image standardization approach. The method aims to preserve the non-linear relationship in the echo-textures while ensuring the endurance in the transformed image. This is achieved by utilizing the concept of histogram specification. A reference cumulative distribution function (CDF) of a considered distribution is used to process the test images. Initially, the shape and scale parameters of the distribution are estimated for each type of echo-texture from the reference ultrasound images of a particular organ. These parameters are used to estimate the prototype parameter set. The obtained prototype parameter set, along with a distribution function, is then used to construct a reference CDF. This CDF, in turn, is used as a transfer function in the histogram specification technique for standardizing the given input image. The efficiency and stability of the proposed approach are investigated and compared with the linear scaling technique. Four measures are used to evaluate the algorithms on three data sets. The results show that the proposed approach provides better standardization of images and, unlike linear scaling, is invariant to the gain of the scanning device.
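Histogram specification with a precomputed reference CDF can be sketched in a few lines of NumPy, as below; the assumption of 8-bit intensities and the nearest-CDF mapping are illustrative choices, not the exact implementation of the cited method.

```python
import numpy as np

def histogram_specification(image, reference_cdf, levels=256):
    """Map image intensities so that their empirical CDF matches a reference CDF.
    Assumes integer intensities in [0, levels-1]; `reference_cdf` is a monotonically
    increasing array of length `levels` with values in [0, 1]."""
    img = image.astype(np.int64).ravel()
    hist = np.bincount(img, minlength=levels).astype(float)
    src_cdf = np.cumsum(hist) / hist.sum()
    # for each source level, find the reference level with the closest CDF value
    mapping = np.searchsorted(reference_cdf, src_cdf).clip(0, levels - 1)
    return mapping[img].reshape(image.shape)
```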
Affiliation(s)
- Rahul Roy
- Department of Computer Science and Engineering, National Institute of Science and Technology, Berhampur, India
- Susmita Ghosh
- Department of Computer Science and Engineering, Jadavpur University, Kolkata, India
- Ashish Ghosh
- Machine Intelligence Unit, Indian Statistical Institute, Kolkata 700108, India
20. MV-RAN: Multiview recurrent aggregation network for echocardiographic sequences segmentation and full cardiac cycle analysis. Comput Biol Med 2020; 120:103728. DOI: 10.1016/j.compbiomed.2020.103728.
21.
22. Dong S, Luo G, Tam C, Wang W, Wang K, Cao S, Chen B, Zhang H, Li S. Deep Atlas Network for Efficient 3D Left Ventricle Segmentation on Echocardiography. Med Image Anal 2020; 61:101638. DOI: 10.1016/j.media.2020.101638.
23. Zhang L, Lu L, Wang X, Zhu RM, Bagheri M, Summers RM, Yao J. Spatio-Temporal Convolutional LSTMs for Tumor Growth Prediction by Learning 4D Longitudinal Patient Data. IEEE Trans Med Imaging 2020; 39:1114-1126. PMID: 31562074. PMCID: PMC7213057. DOI: 10.1109/tmi.2019.2943841.
Abstract
Prognostic tumor growth modeling via volumetric medical imaging observations can potentially lead to better outcomes in tumor treatment management and surgical planning. Recent advances in convolutional networks (ConvNets) have demonstrated that higher accuracy can be achieved than with traditional mathematical models in predicting future tumor volumes. This indicates that deep learning-based data-driven techniques may have great potential for addressing this problem. However, current 2D image-patch-based modeling approaches cannot make full use of the spatio-temporal imaging context of the tumor's longitudinal 4D (3D + time) patient data. Moreover, they are incapable of predicting clinically relevant tumor properties other than tumor volume. In this paper, we formulate the tumor growth process through convolutional Long Short-Term Memory (ConvLSTM) units that extract the tumor's static imaging appearance and simultaneously capture its temporal dynamic changes within a single network. We extend ConvLSTM into the spatio-temporal domain (ST-ConvLSTM) by jointly learning the inter-slice 3D contexts and the longitudinal or temporal dynamics from multiple patient studies. Our approach can incorporate other non-imaging patient information in an end-to-end trainable manner. Experiments are conducted on the largest 4D longitudinal tumor dataset of 33 patients to date. Results validate that the proposed ST-ConvLSTM model produces a Dice score of 83.2%±5.1% and a RVD of 11.2%±10.8%, both statistically significantly outperforming (p < 0.05) the other compared methods (a traditional linear model, ConvLSTM, and a generative adversarial network (GAN)) in predicting future tumor volumes. Additionally, our new method enables the prediction of both cell density and CT intensity numbers. Last, we demonstrate the generalizability of ST-ConvLSTM by employing it in a 4D medical image segmentation task, achieving an averaged Dice score of 86.3%±1.2% for left-ventricle segmentation in 4D ultrasound with 3 seconds per patient case.
24. Chen C, Qin C, Qiu H, Tarroni G, Duan J, Bai W, Rueckert D. Deep Learning for Cardiac Image Segmentation: A Review. Front Cardiovasc Med 2020; 7:25. PMID: 32195270. PMCID: PMC7066212. DOI: 10.3389/fcvm.2020.00025.
Abstract
Deep learning has become the most widely used approach for cardiac image segmentation in recent years. In this paper, we provide a review of over 100 cardiac image segmentation papers using deep learning, covering common imaging modalities including magnetic resonance imaging (MRI), computed tomography (CT), and ultrasound, and the major anatomical structures of interest (ventricles, atria, and vessels). In addition, a summary of publicly available cardiac image datasets and code repositories is included to provide a base for encouraging reproducible research. Finally, we discuss the challenges and limitations of current deep learning-based approaches (scarcity of labels, model generalizability across different domains, interpretability) and suggest potential directions for future research.
Affiliation(s)
- Chen Chen
- Biomedical Image Analysis Group, Department of Computing, Imperial College London, London, United Kingdom
- Chen Qin
- Biomedical Image Analysis Group, Department of Computing, Imperial College London, London, United Kingdom
- Huaqi Qiu
- Biomedical Image Analysis Group, Department of Computing, Imperial College London, London, United Kingdom
- Giacomo Tarroni
- Biomedical Image Analysis Group, Department of Computing, Imperial College London, London, United Kingdom
- CitAI Research Centre, Department of Computer Science, City University of London, London, United Kingdom
- Jinming Duan
- School of Computer Science, University of Birmingham, Birmingham, United Kingdom
- Wenjia Bai
- Data Science Institute, Imperial College London, London, United Kingdom
- Department of Brain Sciences, Faculty of Medicine, Imperial College London, London, United Kingdom
- Daniel Rueckert
- Biomedical Image Analysis Group, Department of Computing, Imperial College London, London, United Kingdom
25. Xu L, Liu M, Shen Z, Wang H, Liu X, Wang X, Wang S, Li T, Yu S, Hou M, Guo J, Zhang J, He Y. DW-Net: A cascaded convolutional neural network for apical four-chamber view segmentation in fetal echocardiography. Comput Med Imaging Graph 2019; 80:101690. PMID: 31968286. DOI: 10.1016/j.compmedimag.2019.101690.
Abstract
Fetal echocardiography (FE) is a widely used medical examination for the early diagnosis of congenital heart disease (CHD). The apical four-chamber (A4C) view is an important view in early FE images. Accurate segmentation of crucial anatomical structures in the A4C view is a useful and important step for early diagnosis and timely treatment of CHDs. However, it is a challenging task due to several unfavorable factors: (a) artifacts and speckle noise produced by ultrasound imaging; (b) category confusion caused by the similarity of anatomical structures and variations in scanning angles; and (c) missing boundaries. In this paper, we propose an end-to-end DW-Net for accurate segmentation of seven important anatomical structures in the A4C view. The network comprises two components: 1) a Dilated Convolutional Chain (DCC) for "gridding issue" reduction, multi-scale contextual information aggregation, and accurate localization of cardiac chambers; and 2) a W-Net for obtaining more precise boundaries and yielding refined segmentation results. Extensive experiments on a dataset of 895 A4C views demonstrated that DW-Net achieves good segmentation results, including a Dice similarity coefficient (DSC) of 0.827, a pixel accuracy (PA) of 0.933, and an AUC of 0.990, and that it substantially outperforms several well-known segmentation methods. Our work was highly valued by experienced clinicians. The accurate and automatic segmentation of the A4C view using the proposed DW-Net can benefit further extraction of useful clinical indicators in early FE and improve the prenatal diagnostic accuracy and efficiency for CHDs.
Affiliation(s)
- Lu Xu
- School of Biological Science and Medical Engineering, Beihang University, Beijing, China; Hefei Innovation Research Institute, Beihang University, Hefei, China; Beijing Advanced Innovation Centre for Biomedical Engineering, Beihang University, Beijing, China; Beijing Advanced Innovation Centre for Big Data-Based Precision Medicine, Beihang University, Beijing, China; School of Biomedical Engineering, Anhui Medical University, Hefei, China
- Mingyuan Liu
- School of Biological Science and Medical Engineering, Beihang University, Beijing, China; Hefei Innovation Research Institute, Beihang University, Hefei, China; Beijing Advanced Innovation Centre for Biomedical Engineering, Beihang University, Beijing, China; Beijing Advanced Innovation Centre for Big Data-Based Precision Medicine, Beihang University, Beijing, China; School of Biomedical Engineering, Anhui Medical University, Hefei, China
- Zhenrong Shen
- School of Biological Science and Medical Engineering, Beihang University, Beijing, China; Hefei Innovation Research Institute, Beihang University, Hefei, China; Beijing Advanced Innovation Centre for Biomedical Engineering, Beihang University, Beijing, China; Beijing Advanced Innovation Centre for Big Data-Based Precision Medicine, Beihang University, Beijing, China; School of Biomedical Engineering, Anhui Medical University, Hefei, China
- Hua Wang
- School of Biological Science and Medical Engineering, Beihang University, Beijing, China; Hefei Innovation Research Institute, Beihang University, Hefei, China; Beijing Advanced Innovation Centre for Biomedical Engineering, Beihang University, Beijing, China; Beijing Advanced Innovation Centre for Big Data-Based Precision Medicine, Beihang University, Beijing, China; School of Biomedical Engineering, Anhui Medical University, Hefei, China
- Xiaowei Liu
- Department of Ultrasound, Beijing Anzhen Hospital, Capital Medical University, Beijing, China
- Xin Wang
- Department of Ultrasound, Beijing Anzhen Hospital, Capital Medical University, Beijing, China
- Siyu Wang
- Department of Ultrasound, Beijing Anzhen Hospital, Capital Medical University, Beijing, China
- Tiefeng Li
- Department of Ultrasound, Beijing Anzhen Hospital, Capital Medical University, Beijing, China
- Shaomei Yu
- Department of Ultrasound, Beijing Anzhen Hospital, Capital Medical University, Beijing, China
- Min Hou
- Department of Ultrasound, Beijing Anzhen Hospital, Capital Medical University, Beijing, China
- Jianhua Guo
- Department of Ultrasound, Beijing Anzhen Hospital, Capital Medical University, Beijing, China
- Jicong Zhang
- School of Biological Science and Medical Engineering, Beihang University, Beijing, China; Hefei Innovation Research Institute, Beihang University, Hefei, China; Beijing Advanced Innovation Centre for Biomedical Engineering, Beihang University, Beijing, China; Beijing Advanced Innovation Centre for Big Data-Based Precision Medicine, Beihang University, Beijing, China; School of Biomedical Engineering, Anhui Medical University, Hefei, China
- Yihua He
- Department of Ultrasound, Beijing Anzhen Hospital, Capital Medical University, Beijing, China
26. Al-Kadi OS. Spatio-Temporal Segmentation in 3-D Echocardiographic Sequences Using Fractional Brownian Motion. IEEE Trans Biomed Eng 2019; 67:2286-2296. PMID: 31831403. DOI: 10.1109/tbme.2019.2958701.
Abstract
An important aspect of improved cardiac functional analysis is accurate segmentation of the left ventricle (LV). A novel approach for fully-automated segmentation of the LV endocardium and epicardium contours is presented, based mainly on the natural physical characteristics of the LV shape structure. Both sides of the LV boundaries exhibit natural elliptical curvatures and have details on various scales, i.e., they exhibit fractal-like characteristics. Fractional Brownian motion (fBm), which is a non-stationary stochastic process, integrates well with the stochastic nature of ultrasound echoes. It has the advantage of representing a wide range of non-stationary signals and can quantify statistical local self-similarity throughout time-sequence ultrasound images. The locally characterized boundaries of the fBm-segmented LV were further iteratively refined using global information by means of second-order moments. The method is benchmarked using synthetic 3D+time echocardiographic sequences for normal and different ischemic cardiomyopathies, and the results are compared with state-of-the-art LV segmentation methods. Furthermore, the framework was validated against real data from canine cases with expert-defined segmentations and demonstrated improved accuracy. The fBm-based segmentation algorithm is fully automatic and has the potential to be used clinically together with 3D echocardiography for improved cardiovascular disease diagnosis.
27. Tractography and machine learning: Current state and open challenges. Magn Reson Imaging 2019; 64:37-48. PMID: 31078615. DOI: 10.1016/j.mri.2019.04.013.
28. MFP-Unet: A novel deep learning based approach for left ventricle segmentation in echocardiography. Phys Med 2019; 67:58-69. DOI: 10.1016/j.ejmp.2019.10.001.
29. Leclerc S, Smistad E, Pedrosa J, Ostvik A, Cervenansky F, Espinosa F, Espeland T, Berg EAR, Jodoin PM, Grenier T, Lartizien C, Dhooge J, Lovstakken L, Bernard O. Deep Learning for Segmentation Using an Open Large-Scale Dataset in 2D Echocardiography. IEEE Trans Med Imaging 2019; 38:2198-2210. PMID: 30802851. DOI: 10.1109/tmi.2019.2900516.
Abstract
Delineation of the cardiac structures from 2D echocardiographic images is a common clinical task to establish a diagnosis. Over the past decades, the automation of this task has been the subject of intense research. In this paper, we evaluate how far the state-of-the-art encoder-decoder deep convolutional neural network methods can go at assessing 2D echocardiographic images, i.e., segmenting cardiac structures and estimating clinical indices, on a dataset especially designed for this purpose. We, therefore, introduce the cardiac acquisitions for multi-structure ultrasound segmentation dataset, the largest publicly-available and fully-annotated dataset for the purpose of echocardiographic assessment. The dataset contains two- and four-chamber acquisitions from 500 patients with reference measurements from one cardiologist on the full dataset and from three cardiologists on a fold of 50 patients. Results show that encoder-decoder-based architectures outperform state-of-the-art non-deep learning methods and faithfully reproduce the expert analysis for the end-diastolic and end-systolic left ventricular volumes, with a mean correlation of 0.95 and an absolute mean error of 9.5 ml. Concerning the ejection fraction of the left ventricle, results are more contrasted with a mean correlation coefficient of 0.80 and an absolute mean error of 5.6%. Although these results are below the inter-observer scores, they remain slightly worse than the intra-observer's ones. Based on this observation, areas for improvement are defined, which open the door for accurate and fully-automatic analysis of 2D echocardiographic images.
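For readers unfamiliar with the architecture family evaluated here, the sketch below is a deliberately small encoder-decoder in PyTorch; the channel widths, the two-level depth and the four output classes are illustrative assumptions, not the networks benchmarked in the paper.

    import torch
    import torch.nn as nn

    def conv_block(c_in, c_out):
        return nn.Sequential(
            nn.Conv2d(c_in, c_out, 3, padding=1), nn.BatchNorm2d(c_out), nn.ReLU(inplace=True),
            nn.Conv2d(c_out, c_out, 3, padding=1), nn.BatchNorm2d(c_out), nn.ReLU(inplace=True))

    class TinyEncoderDecoder(nn.Module):
        """Downsample twice, upsample twice, predict per-pixel class scores."""
        def __init__(self, n_classes=4):            # e.g. background, LV, myocardium, LA
            super().__init__()
            self.enc1, self.enc2 = conv_block(1, 16), conv_block(16, 32)
            self.pool = nn.MaxPool2d(2)
            self.up = nn.Upsample(scale_factor=2, mode="bilinear", align_corners=False)
            self.dec1, self.dec2 = conv_block(32, 16), conv_block(16, 16)
            self.head = nn.Conv2d(16, n_classes, 1)

        def forward(self, x):
            x = self.pool(self.enc1(x))
            x = self.pool(self.enc2(x))
            x = self.dec1(self.up(x))
            x = self.dec2(self.up(x))
            return self.head(x)

    logits = TinyEncoderDecoder()(torch.randn(1, 1, 256, 256))   # -> (1, 4, 256, 256)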
Collapse
|
30
|
Litjens G, Ciompi F, Wolterink JM, de Vos BD, Leiner T, Teuwen J, Išgum I. State-of-the-Art Deep Learning in Cardiovascular Image Analysis. JACC Cardiovasc Imaging 2019; 12:1549-1565. [DOI: 10.1016/j.jcmg.2019.06.009] [Citation(s) in RCA: 141] [Impact Index Per Article: 23.5] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 11/01/2018] [Revised: 05/13/2019] [Accepted: 06/13/2019] [Indexed: 02/07/2023]
|
31
|
Müller S, Farag I, Weickert J, Braun Y, Lollert A, Dobberstein J, Hötker A, Graf N. Benchmarking Wilms' tumor in multisequence MRI data: why does current clinical practice fail? Which popular segmentation algorithms perform well? J Med Imaging (Bellingham) 2019; 6:034001. [PMID: 31338388 DOI: 10.1117/1.jmi.6.3.034001] [Citation(s) in RCA: 4] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/29/2019] [Accepted: 06/24/2019] [Indexed: 11/14/2022] Open
Abstract
Wilms' tumor is one of the most frequent malignant solid tumors in childhood. Accurate segmentation of tumor tissue is a key step during therapy and treatment planning. Since it is difficult to obtain a comprehensive set of tumor data of children, there is no benchmark so far allowing evaluation of the quality of human or computer-based segmentations. The contributions in our paper are threefold: (i) we present the first heterogeneous Wilms' tumor benchmark data set. It contains multisequence MRI data sets before and after chemotherapy, along with ground truth annotation, approximated based on the consensus of five human experts. (ii) We analyze human expert annotations and interrater variability, finding that the current clinical practice of determining tumor volume is inaccurate and that manual annotations after chemotherapy may differ substantially. (iii) We evaluate six computer-based segmentation methods, ranging from classical approaches to recent deep-learning techniques. We show that the best ones offer a quality comparable to human expert annotations.
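The overlap metric used in this evaluation, and one simple way to summarize interrater agreement from several expert masks, can be sketched as follows; the mean pairwise Dice is only an illustrative summary, not the benchmark's exact protocol.

    import numpy as np
    from itertools import combinations

    def dice(a: np.ndarray, b: np.ndarray) -> float:
        """Dice overlap between two binary masks."""
        a, b = a.astype(bool), b.astype(bool)
        denom = a.sum() + b.sum()
        return 1.0 if denom == 0 else 2.0 * np.logical_and(a, b).sum() / denom

    def interrater_dice(masks) -> float:
        """Mean pairwise Dice across expert annotations of the same case."""
        return float(np.mean([dice(a, b) for a, b in combinations(masks, 2)]))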
Collapse
Affiliation(s)
- Sabine Müller
- Saarland University, Medical Center, Department of Pediatric Oncology and Hematology, Homburg, Germany
- Saarland University, Faculty of Mathematics and Computer Science, Mathematical Image Analysis Group, Saarbrücken, Germany
| | - Iva Farag
- Saarland University, Medical Center, Department of Pediatric Oncology and Hematology, Homburg, Germany
| | - Joachim Weickert
- Saarland University, Faculty of Mathematics and Computer Science, Mathematical Image Analysis Group, Saarbrücken, Germany
| | - Yvonne Braun
- Saarland University, Medical Center, Department of Pediatric Oncology and Hematology, Homburg, Germany
| | - André Lollert
- Johannes Gutenberg University, Medical Center, Department of Diagnostic and Interventional Radiology, Mainz, Germany
| | - Jonas Dobberstein
- Saarland University, Medical Center, Department of Pediatric Oncology and Hematology, Homburg, Germany
| | - Andreas Hötker
- University Hospital Zürich, Department of Diagnostic Radiology, Zürich, Switzerland
| | - Norbert Graf
- Saarland University, Medical Center, Department of Pediatric Oncology and Hematology, Homburg, Germany
| |
Collapse
|
32
|
Krittanawong C, Johnson KW, Rosenson RS, Wang Z, Aydar M, Baber U, Min JK, Tang WHW, Halperin JL, Narayan SM. Deep learning for cardiovascular medicine: a practical primer. Eur Heart J 2019; 40:2058-2073. [PMID: 30815669 PMCID: PMC6600129 DOI: 10.1093/eurheartj/ehz056] [Citation(s) in RCA: 188] [Impact Index Per Article: 31.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 09/29/2018] [Revised: 11/02/2018] [Accepted: 01/22/2019] [Indexed: 12/23/2022] Open
Abstract
Deep learning (DL) is a branch of machine learning (ML) showing increasing promise in medicine, to assist in data classification, novel disease phenotyping and complex decision making. Deep learning is a form of ML typically implemented via multi-layered neural networks. Deep learning has been accelerated by recent advances in computer hardware and algorithms and is increasingly applied in e-commerce, finance, and voice and image recognition to learn and classify complex datasets. The current medical literature shows both strengths and limitations of DL. Strengths of DL include its ability to automate medical image interpretation, enhance clinical decision-making, identify novel phenotypes, and select better treatment pathways in complex diseases. Deep learning may be well-suited to cardiovascular medicine, in which haemodynamic and electrophysiological indices are increasingly captured on a continuous basis by wearable devices, as well as to image segmentation in cardiac imaging. However, DL also has significant weaknesses, including difficulties in interpreting its models (the 'black-box' criticism), its need for extensive adjudicated ('labelled') data in training, lack of standardization in design, lack of data-efficiency in training, limited applicability to clinical trials, and other factors. Thus, the optimal clinical application of DL requires careful formulation of solvable problems, selection of the most appropriate DL algorithms and data, and balanced interpretation of results. This review synthesizes the current state of DL for cardiovascular clinicians and investigators, and provides technical context to appreciate the promise, pitfalls, near-term challenges, and opportunities for this exciting new area.
Collapse
Affiliation(s)
- Chayakrit Krittanawong
- Department of Internal Medicine, Icahn School of Medicine at Mount Sinai, 1 Gustave L. Levy Pl, New York, NY, USA
- Department of Cardiovascular Diseases, Icahn School of Medicine at Mount Sinai, Mount Sinai Hospital, Mount Sinai Heart, New York, NY, USA
| | - Kipp W Johnson
- Department of Genetics and Genomic Sciences, Institute for Next Generation Healthcare, Icahn School of Medicine at Mount Sinai, New York, NY, USA
| | - Robert S Rosenson
- Department of Cardiovascular Diseases, Icahn School of Medicine at Mount Sinai, Mount Sinai Hospital, Mount Sinai Heart, New York, NY, USA
| | - Zhen Wang
- Robert D. and Patricia E. Kern Center for the Science of Health Care Delivery, Mayo Clinic, Rochester, MN, USA
- Division of Health Care Policy and Research, Department of Health Sciences Research, Mayo Clinic, Rochester, MN, USA
| | - Mehmet Aydar
- Department of Computer Science, Kent State University, Kent, OH, USA
| | - Usman Baber
- Department of Cardiovascular Diseases, Icahn School of Medicine at Mount Sinai, Mount Sinai Hospital, Mount Sinai Heart, New York, NY, USA
| | - James K Min
- Department of Radiology, New York-Presbyterian Hospital and Weill Cornell Medicine, New York, NY, USA
| | - W H Wilson Tang
- Department of Cardiovascular Medicine, Heart and Vascular Institute, Cleveland Clinic, OH, USA
- Department of Cellular and Molecular Medicine, Lerner Research Institute, Cleveland, OH, USA
- Center for Clinical Genomics, Cleveland Clinic, Cleveland, OH, USA
| | - Jonathan L Halperin
- Department of Cardiovascular Diseases, Icahn School of Medicine at Mount Sinai, Mount Sinai Hospital, Mount Sinai Heart, New York, NY, USA
| | - Sanjiv M Narayan
- Cardiovascular Institute and Department of Cardiovascular Medicine, Stanford University Medical Center, Stanford, CA, USA
| |
Collapse
|
33
|
Maier-Hein L, Eisenmann M, Reinke A, Onogur S, Stankovic M, Scholz P, Arbel T, Bogunovic H, Bradley AP, Carass A, Feldmann C, Frangi AF, Full PM, van Ginneken B, Hanbury A, Honauer K, Kozubek M, Landman BA, März K, Maier O, Maier-Hein K, Menze BH, Müller H, Neher PF, Niessen W, Rajpoot N, Sharp GC, Sirinukunwattana K, Speidel S, Stock C, Stoyanov D, Taha AA, van der Sommen F, Wang CW, Weber MA, Zheng G, Jannin P, Kopp-Schneider A. Why rankings of biomedical image analysis competitions should be interpreted with care. Nat Commun 2018; 9:5217. [PMID: 30523263 PMCID: PMC6284017 DOI: 10.1038/s41467-018-07619-7] [Citation(s) in RCA: 166] [Impact Index Per Article: 23.7] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/09/2018] [Accepted: 11/07/2018] [Indexed: 11/08/2022] Open
Abstract
International challenges have become the standard for validation of biomedical image analysis methods. Given their scientific impact, it is surprising that a critical analysis of common practices related to the organization of challenges has not yet been performed. In this paper, we present a comprehensive analysis of biomedical image analysis challenges conducted up to now. We demonstrate the importance of challenges and show that the lack of quality control has critical consequences. First, reproducibility and interpretation of the results are often hampered, as only a fraction of relevant information is typically provided. Second, the rank of an algorithm is generally not robust to a number of variables, such as the test data used for validation, the ranking scheme applied and the observers that make the reference annotations. To overcome these problems, we recommend best practice guidelines and define open research questions to be addressed in the future.
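One of the robustness issues raised here, the sensitivity of a ranking to the particular test cases, can be probed with a simple bootstrap over cases, as sketched below with synthetic scores; this illustrates the general idea only and is not the paper's ranking analysis.

    import numpy as np

    def bootstrap_rank_stability(scores: np.ndarray, n_boot: int = 1000, seed: int = 0):
        """scores: (n_algorithms, n_cases) per-case metric values, higher is better.
        Returns the fraction of bootstrap samples in which each algorithm ranks first."""
        rng = np.random.default_rng(seed)
        n_alg, n_cases = scores.shape
        wins = np.zeros(n_alg)
        for _ in range(n_boot):
            idx = rng.integers(0, n_cases, n_cases)            # resample the test cases
            wins[np.argmax(scores[:, idx].mean(axis=1))] += 1
        return wins / n_boot

    toy = np.random.default_rng(1).normal([[0.88], [0.87], [0.86]], 0.05, size=(3, 50))
    print(bootstrap_rank_stability(toy))    # win rates far from 0 or 1 signal an unstable ranking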
Collapse
Affiliation(s)
- Lena Maier-Hein
- Division of Computer Assisted Medical Interventions (CAMI), German Cancer Research Center (DKFZ), 69120, Heidelberg, Germany.
| | - Matthias Eisenmann
- Division of Computer Assisted Medical Interventions (CAMI), German Cancer Research Center (DKFZ), 69120, Heidelberg, Germany
| | - Annika Reinke
- Division of Computer Assisted Medical Interventions (CAMI), German Cancer Research Center (DKFZ), 69120, Heidelberg, Germany
| | - Sinan Onogur
- Division of Computer Assisted Medical Interventions (CAMI), German Cancer Research Center (DKFZ), 69120, Heidelberg, Germany
| | - Marko Stankovic
- Division of Computer Assisted Medical Interventions (CAMI), German Cancer Research Center (DKFZ), 69120, Heidelberg, Germany
| | - Patrick Scholz
- Division of Computer Assisted Medical Interventions (CAMI), German Cancer Research Center (DKFZ), 69120, Heidelberg, Germany
| | - Tal Arbel
- Centre for Intelligent Machines, McGill University, Montreal, QC, H3A0G4, Canada
| | - Hrvoje Bogunovic
- Christian Doppler Laboratory for Ophthalmic Image Analysis, Department of Ophthalmology, Medical University Vienna, 1090, Vienna, Austria
| | - Andrew P Bradley
- Science and Engineering Faculty, Queensland University of Technology, Brisbane, QLD, 4001, Australia
| | - Aaron Carass
- Department of Electrical and Computer Engineering, Department of Computer Science, Johns Hopkins University, Baltimore, MD, 21218, USA
| | - Carolin Feldmann
- Division of Computer Assisted Medical Interventions (CAMI), German Cancer Research Center (DKFZ), 69120, Heidelberg, Germany
| | - Alejandro F Frangi
- CISTIB - Center for Computational Imaging & Simulation Technologies in Biomedicine, The University of Leeds, Leeds, Yorkshire, LS2 9JT, UK
| | - Peter M Full
- Division of Computer Assisted Medical Interventions (CAMI), German Cancer Research Center (DKFZ), 69120, Heidelberg, Germany
| | - Bram van Ginneken
- Department of Radiology and Nuclear Medicine, Medical Image Analysis, Radboud University Center, 6525 GA, Nijmegen, The Netherlands
| | - Allan Hanbury
- Institute of Information Systems Engineering, TU Wien, 1040, Vienna, Austria
- Complexity Science Hub Vienna, 1080, Vienna, Austria
| | - Katrin Honauer
- Heidelberg Collaboratory for Image Processing (HCI), Heidelberg University, 69120, Heidelberg, Germany
| | - Michal Kozubek
- Centre for Biomedical Image Analysis, Masaryk University, 60200, Brno, Czech Republic
| | - Bennett A Landman
- Electrical Engineering, Vanderbilt University, Nashville, TN, 37235-1679, USA
| | - Keno März
- Division of Computer Assisted Medical Interventions (CAMI), German Cancer Research Center (DKFZ), 69120, Heidelberg, Germany
| | - Oskar Maier
- Institute of Medical Informatics, Universität zu Lübeck, 23562, Lübeck, Germany
| | - Klaus Maier-Hein
- Division of Medical Image Computing (MIC), German Cancer Research Center (DKFZ), 69120, Heidelberg, Germany
| | - Bjoern H Menze
- Institute for Advanced Studies, Department of Informatics, Technical University of Munich, 80333, Munich, Germany
| | - Henning Müller
- Information System Institute, HES-SO, Sierre, 3960, Switzerland
| | - Peter F Neher
- Division of Medical Image Computing (MIC), German Cancer Research Center (DKFZ), 69120, Heidelberg, Germany
| | - Wiro Niessen
- Departments of Radiology, Nuclear Medicine and Medical Informatics, Erasmus MC, 3015 GD, Rotterdam, The Netherlands
| | - Nasir Rajpoot
- Department of Computer Science, University of Warwick, Coventry, CV4 7AL, UK
| | - Gregory C Sharp
- Department of Radiation Oncology, Massachusetts General Hospital, Boston, MA, 02114, USA
| | | | - Stefanie Speidel
- Division of Translational Surgical Oncology (TCO), National Center for Tumor Diseases Dresden, 01307, Dresden, Germany
| | - Christian Stock
- Division of Clinical Epidemiology and Aging Research, German Cancer Research Center (DKFZ), 69120, Heidelberg, Germany
| | - Danail Stoyanov
- Centre for Medical Image Computing (CMIC) & Department of Computer Science, University College London, London, W1W 7TS, UK
| | - Abdel Aziz Taha
- Data Science Studio, Research Studios Austria FG, 1090, Vienna, Austria
| | - Fons van der Sommen
- Department of Electrical Engineering, Eindhoven University of Technology, 5600 MB, Eindhoven, The Netherlands
| | - Ching-Wei Wang
- AIExplore, NTUST Center of Computer Vision and Medical Imaging, Graduate Institute of Biomedical Engineering, National Taiwan University of Science and Technology, Taipei, 106, Taiwan
| | - Marc-André Weber
- Institute of Diagnostic and Interventional Radiology, University Medical Center Rostock, 18051, Rostock, Germany
| | - Guoyan Zheng
- Institute for Surgical Technology and Biomechanics, University of Bern, Bern, 3014, Switzerland
| | - Pierre Jannin
- Univ Rennes, Inserm, LTSI (Laboratoire Traitement du Signal et de l'Image) - UMR_S 1099, Rennes, 35043, Cedex, France
| | - Annette Kopp-Schneider
- Division of Biostatistics, German Cancer Research Center (DKFZ), 69120, Heidelberg, Germany
| |
Collapse
|
34
|
Milletari F, Frei J, Aboulatta M, Vivar G, Ahmadi SA. Cloud Deployment of High-Resolution Medical Image Analysis With TOMAAT. IEEE J Biomed Health Inform 2018; 23:969-977. [PMID: 30530377 DOI: 10.1109/jbhi.2018.2885214] [Citation(s) in RCA: 8] [Impact Index Per Article: 1.1] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/07/2022]
Abstract
BACKGROUND Deep learning has been recently applied to a multitude of computer vision and medical image analysis problems. Although recent research efforts have improved the state of the art, most of the methods cannot be easily accessed, compared or used by other researchers or clinicians. Even if developers publish their code and pre-trained models on the internet, integration in stand-alone applications and existing workflows is often not straightforward, especially for clinical research partners. In this paper, we propose an open-source framework to provide AI-enabled medical image analysis through the network. METHODS TOMAAT provides a cloud environment for general medical image analysis, composed of three basic components: (i) an announcement service, maintaining a public registry of (ii) multiple distributed server nodes offering various medical image analysis solutions, and (iii) client software offering simple interfaces for users. Deployment is realized through HTTP-based communication, along with an API and wrappers for common image manipulations during pre- and post-processing. RESULTS We demonstrate the utility and versatility of TOMAAT on several hallmark medical image analysis tasks: segmentation, diffeomorphic deformable atlas registration, landmark localization, and workflow integration. Through TOMAAT, the high hardware demands, setup and model complexity of demonstrated approaches are transparent to users, who are provided with simple client interfaces. We present example clients in 3D Slicer, in the web browser, on iOS devices and in a commercially available, certified medical image analysis suite. CONCLUSION TOMAAT enables deployment of state-of-the-art image segmentation in the cloud, fostering interaction among deep learning researchers and medical collaborators in the clinic. Currently, a public announcement service is hosted by the authors, and several ready-to-use services are registered and enlisted at http://tomaat.cloud.
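A client for such an HTTP-based service could look roughly like the sketch below; the endpoint path, the JSON field names and the base64 payload encoding are assumptions for illustration and not TOMAAT's actual API.

    import base64
    import requests

    def request_segmentation(volume_path: str, server_url: str) -> bytes:
        """POST an image volume to a (hypothetical) remote analysis node and
        return the decoded segmentation it sends back."""
        with open(volume_path, "rb") as f:
            payload = {"volume": base64.b64encode(f.read()).decode("ascii")}
        response = requests.post(f"{server_url}/predict", json=payload, timeout=300)
        response.raise_for_status()
        return base64.b64decode(response.json()["segmentation"])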
Collapse
|
35
|
Alsharqi M, Woodward WJ, Mumith JA, Markham DC, Upton R, Leeson P. Artificial intelligence and echocardiography. Echo Res Pract 2018; 5:R115-R125. [PMID: 30400053 PMCID: PMC6280250 DOI: 10.1530/erp-18-0056] [Citation(s) in RCA: 111] [Impact Index Per Article: 15.9] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/04/2018] [Accepted: 10/29/2018] [Indexed: 12/27/2022] Open
Abstract
Echocardiography plays a crucial role in the diagnosis and management of cardiovascular disease. However, interpretation remains largely reliant on the subjective expertise of the operator. As a result, inter-operator variability and experience can lead to incorrect diagnoses. Artificial intelligence (AI) technologies provide new possibilities for echocardiography to generate accurate, consistent and automated interpretation of echocardiograms, thus potentially reducing the risk of human error. In this review, we discuss a subfield of AI relevant to image interpretation, called machine learning, and its potential to enhance the diagnostic performance of echocardiography. We discuss recent applications of these methods and future directions for AI-assisted interpretation of echocardiograms. The research suggests it is feasible to apply machine learning models to provide rapid, highly accurate and consistent assessment of echocardiograms, comparable to clinicians. These algorithms are capable of accurately quantifying a wide range of features, such as the severity of valvular heart disease or the ischaemic burden in patients with coronary artery disease. However, the applications and their use are still in their infancy within the field of echocardiography. Research to refine methods and validate their use for automation, quantification and diagnosis is in progress. Widespread adoption of robust AI tools in clinical echocardiography practice should follow and has the potential to deliver significant benefits for patient outcome.
Collapse
Affiliation(s)
- M Alsharqi
- Oxford Cardiovascular Clinical Research Facility, Division of Cardiovascular Medicine, Radcliffe Department of Medicine, University of Oxford, Oxford, UK
| | - W J Woodward
- Oxford Cardiovascular Clinical Research Facility, Division of Cardiovascular Medicine, Radcliffe Department of Medicine, University of Oxford, Oxford, UK
| | - J A Mumith
- Ultromics Ltd, Magdalen Centre, Robert Robinson Ave, Oxford, United Kingdom
| | - D C Markham
- Ultromics Ltd, Magdalen Centre, Robert Robinson Ave, Oxford, United Kingdom
| | - R Upton
- Ultromics Ltd, Magdalen Centre, Robert Robinson Ave, Oxford, United Kingdom
| | - P Leeson
- Oxford Cardiovascular Clinical Research Facility, Division of Cardiovascular Medicine, Radcliffe Department of Medicine, University of Oxford, Oxford, UK
| |
Collapse
|
36
|
Zhang L, Gooya A, Pereanez M, Dong B, Piechnik S, Neubauer S, Petersen S, Frangi AF. Automatic Assessment of Full Left Ventricular Coverage in Cardiac Cine Magnetic Resonance Imaging with Fisher Discriminative 3D CNN. IEEE Trans Biomed Eng 2018; 66:1975-1986. [PMID: 30475705 DOI: 10.1109/tbme.2018.2881952] [Citation(s) in RCA: 11] [Impact Index Per Article: 1.6] [Reference Citation Analysis] [Abstract] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 02/28/2024]
Abstract
Cardiac magnetic resonance (CMR) images play a growing role in the diagnostic imaging of cardiovascular diseases. Full coverage of the left ventricle (LV), from base to apex, is a basic criterion for CMR image quality and necessary for accurate measurement of cardiac volume and functional assessment. Incomplete coverage of the LV is identified through visual inspection, which is time-consuming and usually done retrospectively in the assessment of large imaging cohorts. This paper proposes a novel automatic method for determining LV coverage from CMR images by using Fisher-discriminative three-dimensional (FD3D) convolutional neural networks (CNNs). In contrast to our previous method employing 2D CNNs, this approach utilizes spatial contextual information in CMR volumes, extracts more representative high-level features and enhances the discriminative capacity of the baseline 2D CNN learning framework, thus achieving superior detection accuracy. A two-stage framework is proposed to identify missing basal and apical slices in measurements of CMR volume. First, the FD3D CNN extracts high-level features from the CMR stacks. These image representations are then used to detect the missing basal and apical slices. Compared to the traditional 3D CNN strategy, the proposed FD3D CNN minimizes within-class scatter and maximizes between-class scatter. We performed extensive experiments to validate the proposed method on more than 5,000 independent volumetric CMR scans from the UK Biobank study, achieving low error rates for missing basal/apical slice detection (4.9%/4.6%). The proposed method can also be adopted for assessing LV coverage for other types of CMR image data.
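The Fisher-discriminative criterion, small within-class scatter and large between-class scatter of the learned features, can be written as a simple differentiable loss term; the sketch below is a loose PyTorch interpretation of that idea, not the paper's exact objective or training setup.

    import torch

    def fisher_loss(features: torch.Tensor, labels: torch.Tensor, eps: float = 1e-6) -> torch.Tensor:
        """features: (N, D) embeddings; labels: (N,) integer class ids.
        Small when classes are compact and far apart."""
        global_mean = features.mean(dim=0)
        within = features.new_zeros(())
        between = features.new_zeros(())
        for c in labels.unique():
            fc = features[labels == c]
            mu_c = fc.mean(dim=0)
            within = within + ((fc - mu_c) ** 2).sum()
            between = between + fc.shape[0] * ((mu_c - global_mean) ** 2).sum()
        return within / (between + eps)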
Collapse
|
37
|
Dong S, Luo G, Wang K, Cao S, Li Q, Zhang H. A Combined Fully Convolutional Networks and Deformable Model for Automatic Left Ventricle Segmentation Based on 3D Echocardiography. BIOMED RESEARCH INTERNATIONAL 2018; 2018:5682365. [PMID: 30276211 PMCID: PMC6151364 DOI: 10.1155/2018/5682365] [Citation(s) in RCA: 18] [Impact Index Per Article: 2.6] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 12/15/2017] [Revised: 06/17/2018] [Accepted: 07/29/2018] [Indexed: 11/17/2022]
Abstract
Segmentation of the left ventricle (LV) from three-dimensional echocardiography (3DE) plays a key role in the clinical assessment of LV function. In this work, we proposed a new automatic method for the segmentation of the LV, based on fully convolutional networks (FCN) and a deformable model. This method implemented a coarse-to-fine framework. Firstly, a new deep fusion network based on feature fusion and transfer learning, combining the residual modules, was proposed to achieve coarse segmentation of the LV on 3DE. Secondly, we proposed a method of geometrical model initialization for the deformable model based on the results of coarse segmentation. Thirdly, the deformable model was implemented to further optimize the segmentation results, with a regularization term to avoid leakage between the left atrium and the left ventricle, achieving the goal of fine segmentation of the LV. Numerical experiments have demonstrated that the proposed method outperforms the state-of-the-art methods on the challenging CETUS benchmark in terms of segmentation accuracy and has potential for practical applications.
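One common way to hand a coarse CNN mask over to a deformable model is to convert it into a signed distance map that serves as the initial surface; the sketch below illustrates that single step under that assumption and is not the authors' full pipeline.

    import numpy as np
    from scipy.ndimage import distance_transform_edt

    def signed_distance_from_mask(mask: np.ndarray) -> np.ndarray:
        """Signed distance map of a binary (2-D or 3-D) coarse segmentation:
        negative inside the mask, positive outside, in voxel units."""
        mask = mask.astype(bool)
        inside = distance_transform_edt(mask)      # distance to the boundary, inside
        outside = distance_transform_edt(~mask)    # distance to the boundary, outside
        return outside - inside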
Collapse
Affiliation(s)
- Suyu Dong
- School of Computer Science and Technology, Harbin Institute of Technology, Harbin 150001, China
| | - Gongning Luo
- School of Computer Science and Technology, Harbin Institute of Technology, Harbin 150001, China
| | - Kuanquan Wang
- School of Computer Science and Technology, Harbin Institute of Technology, Harbin 150001, China
| | - Shaodong Cao
- Department of Radiology, The Fourth Hospital of Harbin Medical University, Harbin 150001, China
| | - Qince Li
- School of Computer Science and Technology, Harbin Institute of Technology, Harbin 150001, China
| | - Henggui Zhang
- School of Computer Science and Technology, Harbin Institute of Technology, Harbin 150001, China
- School of Physics and Astronomy, University of Manchester, Manchester, UK
- Space Institute of Southern China, Shenzhen, Guangdong, China
| |
Collapse
|
38
|
Zotti C, Luo Z, Lalande A, Jodoin PM. Convolutional Neural Network With Shape Prior Applied to Cardiac MRI Segmentation. IEEE J Biomed Health Inform 2018; 23:1119-1128. [PMID: 30113903 DOI: 10.1109/jbhi.2018.2865450] [Citation(s) in RCA: 80] [Impact Index Per Article: 11.4] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/10/2022]
Abstract
In this paper, we present a novel convolutional neural network architecture to segment images from a series of short-axis cardiac magnetic resonance image (CMRI) slices. The proposed model is an extension of the U-net that embeds a cardiac shape prior and involves a loss function tailored to the cardiac anatomy. Since the shape prior is computed offline only once, the execution of our model is not limited by its calculation. Our system takes as input raw magnetic resonance images, requires no manual preprocessing or image cropping, and is trained to segment the endocardium and epicardium of the left ventricle, the endocardium of the right ventricle, as well as the center of the left ventricle. With its multiresolution grid architecture, the network learns both high- and low-level features useful to register the shape prior as well as to accurately localize the borders of the cardiac regions. Experimental results obtained on the Automated Cardiac Diagnosis Challenge - Medical Image Computing and Computer Assisted Intervention (ACDC-MICCAI) 2017 dataset show that our model segments multi-slice CMRI (left and right ventricle contours) in 0.18 s with an average Dice coefficient of [Formula: see text] and an average 3-D Hausdorff distance of [Formula: see text] mm.
Collapse
|
39
|
Oktay O, Ferrante E, Kamnitsas K, Heinrich M, Bai W, Caballero J, Cook SA, de Marvao A, Dawes T, O'Regan DP, Kainz B, Glocker B, Rueckert D. Anatomically Constrained Neural Networks (ACNNs): Application to Cardiac Image Enhancement and Segmentation. IEEE TRANSACTIONS ON MEDICAL IMAGING 2018; 37:384-395. [PMID: 28961105 DOI: 10.1109/tmi.2017.2743464] [Citation(s) in RCA: 274] [Impact Index Per Article: 39.1] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 05/20/2023]
Abstract
Incorporation of prior knowledge about organ shape and location is key to improve performance of image analysis approaches. In particular, priors can be useful in cases where images are corrupted and contain artefacts due to limitations in image acquisition. The highly constrained nature of anatomical objects can be well captured with learning-based techniques. However, in most recent and promising techniques such as CNN-based segmentation it is not obvious how to incorporate such prior knowledge. State-of-the-art methods operate as pixel-wise classifiers where the training objectives do not incorporate the structure and inter-dependencies of the output. To overcome this limitation, we propose a generic training strategy that incorporates anatomical prior knowledge into CNNs through a new regularisation model, which is trained end-to-end. The new framework encourages models to follow the global anatomical properties of the underlying anatomy (e.g. shape, label structure) via learnt non-linear representations of the shape. We show that the proposed approach can be easily adapted to different analysis tasks (e.g. image enhancement, segmentation) and improve the prediction accuracy of the state-of-the-art models. The applicability of our approach is shown on multi-modal cardiac data sets and public benchmarks. In addition, we demonstrate how the learnt deep models of 3-D shapes can be interpreted and used as biomarkers for classification of cardiac pathologies.
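The anatomical regularisation described here is commonly realised by comparing the prediction and the ground truth in the latent space of a pretrained shape encoder; the PyTorch sketch below follows that general idea, with the encoder network, the losses and the weighting all being assumptions rather than the paper's exact formulation.

    import torch
    import torch.nn.functional as F

    def shape_regularised_loss(pred_logits, target, shape_encoder, weight=0.01):
        """pred_logits, target: (N, 1, H, W); shape_encoder: a frozen network that
        maps segmentation maps to low-dimensional shape codes."""
        seg_loss = F.binary_cross_entropy_with_logits(pred_logits, target.float())
        with torch.no_grad():
            code_gt = shape_encoder(target.float())
        code_pred = shape_encoder(torch.sigmoid(pred_logits))
        return seg_loss + weight * F.mse_loss(code_pred, code_gt)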
Collapse
|
40
|
Ilunga-Mbuyamba E, Avina-Cervantes JG, Lindner D, Arlt F, Ituna-Yudonago JF, Chalopin C. Patient-specific model-based segmentation of brain tumors in 3D intraoperative ultrasound images. Int J Comput Assist Radiol Surg 2018; 13:331-342. [PMID: 29330658 DOI: 10.1007/s11548-018-1703-0] [Citation(s) in RCA: 7] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/18/2017] [Accepted: 01/04/2018] [Indexed: 11/27/2022]
Abstract
PURPOSE Intraoperative ultrasound (iUS) imaging is commonly used to support brain tumor operations. Tumor segmentation in iUS images is a difficult task, still under improvement, because of the low signal-to-noise ratio. The success of automatic methods is also limited by their high sensitivity to noise. Therefore, an alternative brain tumor segmentation method in 3D-iUS data using a tumor model obtained from magnetic resonance (MR) data for local MR-iUS registration is presented in this paper. The aim is to enhance the visualization of the brain tumor contours in iUS. METHODS A multistep approach is proposed. First, a region of interest (ROI) based on the specific patient tumor model is defined. Second, hyperechogenic structures, mainly tumor tissues, are extracted from the ROI of both modalities by using automatic thresholding techniques. Third, the registration is performed over the extracted binary sub-volumes using a similarity measure based on gradient values, and rigid and affine transformations. Finally, the tumor model is aligned with the 3D-iUS data, and its contours are represented. RESULTS Experiments were successfully conducted on a dataset of 33 patients. The method was evaluated by comparing the tumor segmentation with expert manual delineations using two binary metrics: contour mean distance and Dice index. The proposed segmentation method using local and binary registration was compared with two grayscale-based approaches. The outcomes showed that our approach reached better results in terms of computational time and accuracy than the comparative methods. CONCLUSION The proposed approach requires limited interaction and reduced computation time, making it relevant for intraoperative use. Experimental results and evaluations were performed offline. The developed tool could be useful for brain tumor resection, supporting neurosurgeons to improve tumor border visualization in the iUS volumes.
Collapse
Affiliation(s)
- Elisee Ilunga-Mbuyamba
- CA Telematics, Engineering Division, Campus Irapuato-Salamanca, University of Guanajuato, Carr. Salamanca-Valle de Santiago km 3.5 + 1.8, Comunidad de Palo Blanco, 36885, Salamanca, Mexico
- Innovation Center Computer Assisted Surgery (ICCAS), University of Leipzig, 04103, Leipzig, Germany
| | - Juan Gabriel Avina-Cervantes
- CA Telematics, Engineering Division, Campus Irapuato-Salamanca, University of Guanajuato, Carr. Salamanca-Valle de Santiago km 3.5 + 1.8, Comunidad de Palo Blanco, 36885, Salamanca, Mexico.
| | - Dirk Lindner
- Department of Neurosurgery, University Hospital Leipzig, 04103, Leipzig, Germany
| | - Felix Arlt
- Department of Neurosurgery, University Hospital Leipzig, 04103, Leipzig, Germany
| | - Jean Fulbert Ituna-Yudonago
- CA Telematics, Engineering Division, Campus Irapuato-Salamanca, University of Guanajuato, Carr. Salamanca-Valle de Santiago km 3.5 + 1.8, Comunidad de Palo Blanco, 36885, Salamanca, Mexico
| | - Claire Chalopin
- Innovation Center Computer Assisted Surgery (ICCAS), University of Leipzig, 04103, Leipzig, Germany
| |
Collapse
|
41
|
Dong S, Luo G, Wang K, Cao S, Mercado A, Shmuilovich O, Zhang H, Li S. VoxelAtlasGAN: 3D Left Ventricle Segmentation on Echocardiography with Atlas Guided Generation and Voxel-to-Voxel Discrimination. MEDICAL IMAGE COMPUTING AND COMPUTER ASSISTED INTERVENTION – MICCAI 2018 2018. [DOI: 10.1007/978-3-030-00937-3_71] [Citation(s) in RCA: 15] [Impact Index Per Article: 2.1] [Reference Citation Analysis] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 01/11/2023]
|
42
|
Queirós S, Vilaça JL, Morais P, Fonseca JC, D'hooge J, Barbosa D. Fast left ventricle tracking using localized anatomical affine optical flow. INTERNATIONAL JOURNAL FOR NUMERICAL METHODS IN BIOMEDICAL ENGINEERING 2017; 33. [PMID: 28208231 DOI: 10.1002/cnm.2871] [Citation(s) in RCA: 10] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 08/26/2016] [Accepted: 02/12/2017] [Indexed: 06/06/2023]
Abstract
In daily clinical cardiology practice, left ventricle (LV) global and regional function assessment is crucial for disease diagnosis, therapy selection, and patient follow-up. Currently, this is still a time-consuming task that consumes valuable human resources. In this work, a novel fast methodology for automatic LV tracking is proposed based on localized anatomically constrained affine optical flow. This novel method can be combined with previously proposed segmentation frameworks or with manually delineated surfaces at an initial frame to obtain fully delineated datasets and, thus, assess both global and regional myocardial function. Its feasibility and accuracy were investigated in 3 distinct public databases, namely in realistically simulated 3D ultrasound, clinical 3D echocardiography, and clinical cine cardiac magnetic resonance images. The method showed accurate tracking results in all databases, proving its applicability and accuracy for myocardial function assessment. Moreover, when combined with previous state-of-the-art segmentation frameworks, it outperformed previous tracking strategies in both 3D ultrasound and cardiac magnetic resonance data, automatically computing relevant cardiac indices with smaller biases and narrower limits of agreement compared to reference indices. Simultaneously, the proposed localized tracking method proved suitable for online processing, even for 3D motion assessment. Importantly, although here evaluated for LV tracking only, this novel methodology is applicable for tracking of other target structures with minimal adaptations.
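At its core, affine optical flow over a local window reduces to a linear least-squares problem on the image gradients; the sketch below shows only that core step and omits the anatomical localisation and weighting that the paper adds.

    import numpy as np

    def affine_flow(frame0: np.ndarray, frame1: np.ndarray) -> np.ndarray:
        """Estimate p in u = p0 + p1*x + p2*y, v = p3 + p4*x + p5*y for a window,
        from spatial (Ix, Iy) and temporal (It) gradients (Lucas-Kanade style)."""
        f0, f1 = frame0.astype(float), frame1.astype(float)
        Iy, Ix = np.gradient(f0)
        It = f1 - f0
        ys, xs = np.mgrid[0:f0.shape[0], 0:f0.shape[1]]
        A = np.stack([Ix, Ix * xs, Ix * ys, Iy, Iy * xs, Iy * ys], axis=-1).reshape(-1, 6)
        params, *_ = np.linalg.lstsq(A, -It.reshape(-1), rcond=None)
        return params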
Collapse
Affiliation(s)
- Sandro Queirós
- ICVS/3B's-PT Government Associate Laboratory, Braga/Guimarães, Portugal
- Lab on Cardiovascular Imaging and Dynamics, Dept. of Cardiovascular Sciences, KU Leuven, Leuven, Belgium
- Algoritmi Center, School of Engineering, University of Minho, Guimarães, Portugal
| | - João L Vilaça
- ICVS/3B's-PT Government Associate Laboratory, Braga/Guimarães, Portugal
- DIGARC-Polytechnic Institute of Cávado and Ave (IPCA), Barcelos, Portugal
| | - Pedro Morais
- ICVS/3B's-PT Government Associate Laboratory, Braga/Guimarães, Portugal
- Lab on Cardiovascular Imaging and Dynamics, Dept. of Cardiovascular Sciences, KU Leuven, Leuven, Belgium
- INEGI, Faculty of Engineering, University of Porto, Porto, Portugal
| | - Jaime C Fonseca
- Algoritmi Center, School of Engineering, University of Minho, Guimarães, Portugal
| | - Jan D'hooge
- Lab on Cardiovascular Imaging and Dynamics, Dept. of Cardiovascular Sciences, KU Leuven, Leuven, Belgium
| | - Daniel Barbosa
- ICVS/3B's-PT Government Associate Laboratory, Braga/Guimarães, Portugal
| |
Collapse
|
43
|
Pedrosa J, Queiros S, Bernard O, Engvall J, Edvardsen T, Nagel E, D'hooge J. Fast and Fully Automatic Left Ventricular Segmentation and Tracking in Echocardiography Using Shape-Based B-Spline Explicit Active Surfaces. IEEE TRANSACTIONS ON MEDICAL IMAGING 2017; 36:2287-2296. [PMID: 28783626 DOI: 10.1109/tmi.2017.2734959] [Citation(s) in RCA: 25] [Impact Index Per Article: 3.1] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/07/2023]
Abstract
Cardiac volume/function assessment remains a critical step in daily cardiology, and 3-D ultrasound plays an increasingly important role. Fully automatic left ventricular segmentation is, however, a challenging task due to the artifacts and low contrast-to-noise ratio of ultrasound imaging. In this paper, a fast and fully automatic framework for the full-cycle endocardial left ventricle segmentation is proposed. This approach couples the advantages of the B-spline explicit active surfaces framework, a purely image information approach, to those of statistical shape models to give prior information about the expected shape for an accurate segmentation. The segmentation is propagated throughout the heart cycle using a localized anatomical affine optical flow. It is shown that this approach not only outperforms other state-of-the-art methods in terms of distance metrics, with mean average distances of 1.81±0.59 and 1.98±0.66 mm at end-diastole and end-systole, respectively, but is also computationally efficient (on average 11 s per 4-D image) and fully automatic.
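In the B-spline explicit active surface formulation, the boundary is written explicitly as a radius given by a separable B-spline expansion of the angular coordinates; the toy sketch below only evaluates such a surface from a given coefficient grid (grid spacing and sizes are arbitrary) and is not the segmentation framework itself.

    import numpy as np

    def cubic_bspline(t: np.ndarray) -> np.ndarray:
        """Centered cubic B-spline basis evaluated at t."""
        t = np.abs(t)
        out = np.zeros_like(t, dtype=float)
        m1, m2 = t < 1, (t >= 1) & (t < 2)
        out[m1] = 2.0 / 3.0 - t[m1] ** 2 + 0.5 * t[m1] ** 3
        out[m2] = ((2.0 - t[m2]) ** 3) / 6.0
        return out

    def radius(theta: float, phi: float, coeffs: np.ndarray, h: float) -> float:
        """r(theta, phi) = sum_{k,l} c[k, l] * B3(theta/h - k) * B3(phi/h - l)."""
        k = np.arange(coeffs.shape[0])
        l = np.arange(coeffs.shape[1])
        return float(cubic_bspline(theta / h - k) @ coeffs @ cubic_bspline(phi / h - l))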
Collapse
|
44
|
Papachristidis A, Galli E, Geleijnse ML, Heyde B, Alessandrini M, Barbosa D, Papitsas M, Pagnano G, Theodoropoulos KC, Zidros S, Donal E, Monaghan MJ, Bernard O, D'hooge J, Bosch JG. Standardized Delineation of Endocardial Boundaries in Three-Dimensional Left Ventricular Echocardiograms. J Am Soc Echocardiogr 2017; 30:1059-1069. [DOI: 10.1016/j.echo.2017.06.027] [Citation(s) in RCA: 7] [Impact Index Per Article: 0.9] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 06/15/2016] [Indexed: 01/30/2023]
|
45
|
Bernier M, Jodoin PM, Humbert O, Lalande A. Graph cut-based method for segmenting the left ventricle from MRI or echocardiographic images. Comput Med Imaging Graph 2017; 58:1-12. [DOI: 10.1016/j.compmedimag.2017.03.004] [Citation(s) in RCA: 13] [Impact Index Per Article: 1.6] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/04/2016] [Revised: 10/06/2016] [Accepted: 03/28/2017] [Indexed: 02/06/2023]
|
46
|
Spitzer E, Ren B, Zijlstra F, Mieghem NMV, Geleijnse ML. The Role of Automated 3D Echocardiography for Left Ventricular Ejection Fraction Assessment. Card Fail Rev 2017; 3:97-101. [PMID: 29387460 DOI: 10.15420/cfr.2017:14.1] [Citation(s) in RCA: 13] [Impact Index Per Article: 1.6] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Indexed: 12/13/2022] Open
Abstract
Ejection fraction is one of the most powerful determinants of prognosis and is a crucial parameter for the determination of cardiovascular therapies in conditions such as heart failure, valvular disease and ischaemic heart disease. Among echocardiographic methods, 3D echocardiography is regarded as the preferred one for its assessment, given its increased accuracy and reproducibility. Full-volume multi-beat acquisitions are prone to stitching artefacts due to arrhythmias and require prolonged breath holds. Single-beat acquisitions exhibit a lower temporal resolution, but address the limitations of multi-beat acquisitions. If not fully automated, 3D echocardiography remains time-consuming and resource-intensive, with suboptimal observer variability, preventing its implementation in routine practice. Further developments in hardware and software, including fully automated knowledge-based algorithms for left ventricular quantification, may bring 3D echocardiography to a definite turning point.
Collapse
Affiliation(s)
- Ernest Spitzer
- Cardiology, Thoraxcenter, Erasmus University Medical Center, Rotterdam, the Netherlands
- Cardialysis, Clinical Trial Management & Core Laboratories, Rotterdam, the Netherlands
| | - Ben Ren
- Cardiology, Thoraxcenter, Erasmus University Medical Center, Rotterdam, the Netherlands
- Cardialysis, Clinical Trial Management & Core Laboratories, Rotterdam, the Netherlands
| | - Felix Zijlstra
- Cardiology, Thoraxcenter, Erasmus University Medical Center, Rotterdam, the Netherlands
| | - Nicolas M Van Mieghem
- Cardiology, Thoraxcenter, Erasmus University Medical Center, Rotterdam, the Netherlands
| | - Marcel L Geleijnse
- Cardiology, Thoraxcenter, Erasmus University Medical Center, Rotterdam, the Netherlands
| |
Collapse
|
47
|
Queiros S, Papachristidis A, Morais P, Theodoropoulos KC, Fonseca JC, Monaghan MJ, Vilaca JL, Dhooge J. Fully Automatic 3-D-TEE Segmentation for the Planning of Transcatheter Aortic Valve Implantation. IEEE Trans Biomed Eng 2016; 64:1711-1720. [PMID: 28113205 DOI: 10.1109/tbme.2016.2617401] [Citation(s) in RCA: 14] [Impact Index Per Article: 1.6] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/10/2022]
Abstract
A novel fully automatic framework for aortic valve (AV) trunk segmentation in three-dimensional (3-D) transesophageal echocardiography (TEE) datasets is proposed. The methodology combines a previously presented semiautomatic segmentation strategy using shape-based B-spline Explicit Active Surfaces with two novel algorithms to automate the quantification of relevant AV measures. The first combines a fast rotation-invariant 3-D generalized Hough transform with a vessel-like dark tube detector to initialize the segmentation. After segmenting the AV wall, the second algorithm focuses on aligning this surface with the reference ones in order to estimate the short-axis (SAx) planes (at the left ventricular outflow tract, annulus, sinuses of Valsalva, and sinotubular junction) in which to perform the measurements. The framework has been tested in 20 3-D-TEE datasets with both stenotic and nonstenotic AVs. The initialization algorithm presented a median error of around 3 mm for the AV axis endpoints, with an overall feasibility of 90%. In turn, the SAx detection algorithm proved to be highly reproducible, with results indistinguishable from the variability found between the experts' defined planes. Automatically extracted measures at the four levels showed good agreement with the experts' ones, with limits of agreement similar to the interobserver variability. Moreover, a validation set of 20 additional stenotic AV datasets corroborated the method's applicability and accuracy. The proposed approach mitigates the variability associated with manual quantification while significantly reducing the required analysis time (12 s versus 5 to 10 min), which shows its appeal for automatic dimensioning of the AV morphology in 3-D-TEE for the planning of transcatheter AV implantation.
Collapse
|
48
|
Queiros S, Papachristidis A, Barbosa D, Theodoropoulos KC, Fonseca JC, Monaghan MJ, Vilaca JL, D'hooge J. Aortic Valve Tract Segmentation From 3D-TEE Using Shape-Based B-Spline Explicit Active Surfaces. IEEE TRANSACTIONS ON MEDICAL IMAGING 2016; 35:2015-2025. [PMID: 27008664 DOI: 10.1109/tmi.2016.2544199] [Citation(s) in RCA: 9] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/05/2023]
Abstract
A novel semi-automatic algorithm for aortic valve (AV) wall segmentation is presented for 3D transesophageal echocardiography (TEE) datasets. The proposed methodology uses a 3D cylindrical formulation of the B-spline Explicit Active Surfaces (BEAS) framework in a dual-stage energy evolution process, comprising a threshold-based and a localized region-based stage. Hereto, intensity and shape-based features are combined to accurately delineate the AV wall from the ascending aorta (AA) to the left ventricular outflow tract (LVOT). Shape-prior information is included using a profile-based statistical shape model (SSM), and embedded in BEAS through two novel regularization terms: one confining the segmented AV profiles to shapes seen in the SSM (hard regularization) and another penalizing according to the profile's degree of likelihood (soft regularization). The proposed energy functional takes thus advantage of the intensity data in regions with strong image content, while complementing it with shape knowledge in regions with nearly absent image data. The proposed algorithm has been validated in 20 3D-TEE datasets with both stenotic and non-stenotic valves. It was shown to be accurate, robust and computationally efficient, taking less than 1 second to segment the AV wall from the AA to the LVOT with an average accuracy of 0.78 mm. Semi-automatically extracted measurements at four relevant anatomical levels (LVOT, aortic annulus, sinuses of Valsalva and sinotubular junction) showed an excellent agreement with experts' ones, with a higher reproducibility than manually-extracted measures.
Collapse
|
49
|
Alessandrini M, Heyde B, Queiros S, Cygan S, Zontak M, Somphone O, Bernard O, Sermesant M, Delingette H, Barbosa D, De Craene M, ODonnell M, Dhooge J. Detailed Evaluation of Five 3D Speckle Tracking Algorithms Using Synthetic Echocardiographic Recordings. IEEE TRANSACTIONS ON MEDICAL IMAGING 2016; 35:1915-1926. [PMID: 26960220 DOI: 10.1109/tmi.2016.2537848] [Citation(s) in RCA: 11] [Impact Index Per Article: 1.2] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/05/2023]
Abstract
A plethora of techniques for cardiac deformation imaging with 3D ultrasound, typically referred to as 3D speckle tracking techniques, are available from academia and industry. Although the benefits of single methods over alternative ones have been reported in separate publications, the intrinsic differences in the data and definitions used make it hard to compare the relative performance of different solutions. To address this issue, we have recently proposed a framework to simulate realistic 3D echocardiographic recordings and used it to generate a common set of ground-truth data for 3D speckle tracking algorithms, which was made available online. The aim of this study was therefore to use the newly developed database to contrast non-commercial speckle tracking solutions from research groups with leading expertise in the field. The five techniques involved cover the most representative families of existing approaches, namely block-matching, radio-frequency tracking, optical flow and elastic image registration. The techniques were contrasted in terms of tracking and strain accuracy. The feasibility of the obtained strain measurements to diagnose pathology was also tested for ischemia and dyssynchrony.
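Block matching, one of the families compared in this study, can be illustrated with an exhaustive normalized cross-correlation search; the deliberately naive 2-D sketch below is for illustration only and does not correspond to any of the five evaluated pipelines.

    import numpy as np

    def block_match(template: np.ndarray, search: np.ndarray):
        """Return the (row, col) offset inside `search` where `template` correlates best."""
        th, tw = template.shape
        t = (template - template.mean()) / (template.std() + 1e-8)
        best, best_off = -np.inf, (0, 0)
        for r in range(search.shape[0] - th + 1):
            for c in range(search.shape[1] - tw + 1):
                w = search[r:r + th, c:c + tw]
                score = np.mean(t * (w - w.mean()) / (w.std() + 1e-8))
                if score > best:
                    best, best_off = score, (r, c)
        return best_off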
Collapse
|