1. Wang Y, Sun Z, Liu Z, Lu J, Zhang N. A Motion-Aware DNN Model with Edge Focus Loss and Quality Control for Short-Axis Left Ventricle Segmentation of Cine MR Sequences. J Imaging Inform Med 2024. [PMID: 38366295 DOI: 10.1007/s10278-023-00942-6]
Abstract
Accurate segmentation of the left ventricular myocardium is the key step in the automatic assessment of cardiac function. However, current methods mainly focus on the end-diastolic and end-systolic frames of cine MR sequences and pay little attention to myocardial motion over the cardiac cycle. Additionally, because fine segmentation tools are lacking, a simplified approach that excludes the papillary muscles and trabeculae from the myocardium is applied in clinical practice. To solve these problems, we propose a motion-aware DNN model with an edge focus loss and quality control. Specifically, a bidirectional ConvLSTM layer and a new motion attention layer are proposed to encode motion-aware feature maps, and an edge focus loss function is proposed to train the model to generate fine segmentation results. Additionally, a quality control method is proposed to filter out abnormal segmentations before subsequent analyses. Compared with state-of-the-art segmentation models on a public dataset and an in-house dataset, the proposed method achieves high segmentation accuracy. On the 17-segment model, it obtains the highest Pearson correlation coefficient (PCC) on 14 of the 17 segments, with a mean PCC of 85%. The experimental results highlight the segmentation accuracy of the proposed method as well as its suitability as a substitute for manually annotated boundaries in the automatic assessment of cardiac function.
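The edge focus loss is only named in this abstract, so the sketch below is a hedged illustration rather than the authors' formulation: a boundary-weighted cross-entropy in PyTorch, where the weight map is derived from a dilated-minus-eroded band around the ground-truth contour. The band width and boost factor are assumptions, not values from the paper.

```python
import torch
import torch.nn.functional as F

def edge_weight_map(mask: torch.Tensor, width: int = 3, boost: float = 4.0) -> torch.Tensor:
    """Weight map that up-weights pixels near the ground-truth boundary.

    mask: (N, 1, H, W) float binary myocardium mask in {0, 1}.
    Dilation and erosion are approximated with max-pooling.
    width and boost are hypothetical hyperparameters.
    """
    k = 2 * width + 1
    dilated = F.max_pool2d(mask, k, stride=1, padding=width)
    eroded = 1.0 - F.max_pool2d(1.0 - mask, k, stride=1, padding=width)
    edge = dilated - eroded            # 1 inside a band around the contour, 0 elsewhere
    return 1.0 + boost * edge          # plain pixels weigh 1, edge pixels 1 + boost

def edge_focus_loss(logits: torch.Tensor, mask: torch.Tensor) -> torch.Tensor:
    """Pixel-wise BCE weighted toward the myocardial contour."""
    w = edge_weight_map(mask)
    bce = F.binary_cross_entropy_with_logits(logits, mask, reduction="none")
    return (w * bce).sum() / w.sum()
```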
Affiliation(s)
- Yu Wang
  - School of Biomedical Engineering, Capital Medical University, 10 Xitoutiao, Youanmenwai, Fengtai District, Beijing 100069, China
  - Beijing Key Laboratory of Fundamental Research on Biomechanics in Clinical Application, Capital Medical University, 10 Xitoutiao, Youanmenwai, Fengtai District, Beijing 100069, China
- Zheng Sun
  - School of Biomedical Engineering, Capital Medical University, 10 Xitoutiao, Youanmenwai, Fengtai District, Beijing 100069, China
  - Beijing Key Laboratory of Fundamental Research on Biomechanics in Clinical Application, Capital Medical University, 10 Xitoutiao, Youanmenwai, Fengtai District, Beijing 100069, China
  - Department of Radiology and Nuclear Medicine, Xuanwu Hospital, Capital Medical University, 45 Changchun Street, Xicheng District, Beijing 100053, China
- Zhi Liu
  - Department of Cardiology, Xuanwu Hospital, Capital Medical University, 45 Changchun Street, Xicheng District, Beijing 100053, China
- Jie Lu
  - Department of Radiology and Nuclear Medicine, Xuanwu Hospital, Capital Medical University, 45 Changchun Street, Xicheng District, Beijing 100053, China
- Nan Zhang
  - School of Biomedical Engineering, Capital Medical University, 10 Xitoutiao, Youanmenwai, Fengtai District, Beijing 100069, China
  - Beijing Key Laboratory of Fundamental Research on Biomechanics in Clinical Application, Capital Medical University, 10 Xitoutiao, Youanmenwai, Fengtai District, Beijing 100069, China
2. Li D, Peng Y, Sun J, Guo Y. A task-unified network with transformer and spatial-temporal convolution for left ventricular quantification. Sci Rep 2023; 13:13529. [PMID: 37598235 PMCID: PMC10439898 DOI: 10.1038/s41598-023-40841-y]
Abstract
Quantification of cardiac function is vital for diagnosing and treating cardiovascular diseases. Left ventricular function measurement is the most commonly used means of evaluating cardiac function in clinical practice, and improving the accuracy of left ventricular quantification has long been a subject of medical research. Although considerable effort has been devoted to measuring the left ventricle (LV) automatically with deep learning methods, accurate quantification remains challenging because of the changing anatomical structure of the heart over the systolic-diastolic cycle. Moreover, most methods use direct regression, which lacks a visual basis for analysis. In this work, a deep learning segmentation-and-regression task-unified network with a transformer and spatial-temporal convolution is proposed to segment and quantify the LV simultaneously. The segmentation module leverages a U-Net-like 3D Transformer model to predict the contours of three anatomical structures, while the regression module learns spatial-temporal representations from the original images and the reconstructed feature maps of the segmentation path to estimate the desired quantification metrics. Furthermore, a joint task loss function is employed to train the two modules. The framework is evaluated on the MICCAI 2017 Left Ventricle Full Quantification Challenge dataset. The experimental results demonstrate its effectiveness: it achieves competitive cardiac quantification metrics while producing visualized segmentation results that are conducive to later analysis.
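The joint task loss is described only at a high level. A minimal sketch of such a weighted combination, assuming a soft Dice term for the segmentation branch and a mean-squared-error term for the quantification metrics; the weights lam_seg and lam_reg are hypothetical hyperparameters, not values from the paper:

```python
import torch
import torch.nn.functional as F

def dice_loss(probs: torch.Tensor, target: torch.Tensor, eps: float = 1e-6) -> torch.Tensor:
    """Soft Dice loss; probs and target are (N, C, H, W) probability/one-hot maps."""
    inter = (probs * target).sum(dim=(2, 3))
    denom = probs.sum(dim=(2, 3)) + target.sum(dim=(2, 3))
    return 1.0 - ((2 * inter + eps) / (denom + eps)).mean()

def joint_task_loss(seg_probs, seg_target, reg_pred, reg_target,
                    lam_seg: float = 1.0, lam_reg: float = 1.0) -> torch.Tensor:
    """Weighted sum of a segmentation term and a quantification-regression term."""
    seg = dice_loss(seg_probs, seg_target)
    reg = F.mse_loss(reg_pred, reg_target)
    return lam_seg * seg + lam_reg * reg
```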
Affiliation(s)
- Dapeng Li
  - Shandong University of Science and Technology, Qingdao, China
- Yanjun Peng
  - Shandong University of Science and Technology, Qingdao, China
  - Shandong Province Key Laboratory of Wisdom Mining Information Technology, Qingdao, China
- Jindong Sun
  - Shandong University of Science and Technology, Qingdao, China
- Yanfei Guo
  - Shandong University of Science and Technology, Qingdao, China
3. Wei H, Ma J, Zhou Y, Xue W, Ni D. Co-learning of appearance and shape for precise ejection fraction estimation from echocardiographic sequences. Med Image Anal 2023; 84:102686. [PMID: 36455332 DOI: 10.1016/j.media.2022.102686]
Abstract
Accurate estimation of the ejection fraction (EF) from echocardiography is of great importance for evaluating cardiac function. It is usually obtained by Simpson's bi-plane method, based on segmentation of the left ventricle (LV) in two keyframes. However, obtaining an accurate EF estimate from echocardiography is challenging due to (1) the noisy appearance of ultrasound images, (2) the temporal dynamic movement of the myocardium, (3) sparse annotation of the full sequence, and (4) potential quality degradation during scanning. In this paper, we propose a multi-task semi-supervised framework, denoted MCLAS, for precise EF estimation from echocardiographic sequences of two cardiac views. Specifically, we first propose a co-learning mechanism that explores the mutual benefits of cardiac segmentation and myocardium tracking iteratively at the appearance and shape levels, thereby alleviating the noisy appearance and enforcing the temporal consistency of the segmentation results. This temporal consistency, as shown in our work, is critical for precise EF estimation. We then propose two auxiliary tasks for the encoder: (1) view classification, to help extract the discriminative features of each view and automate the whole EF estimation pipeline in clinical practice, and (2) EF regression, to help regularize the spatiotemporal embedding of the echocardiographic sequence. Both auxiliary tasks improve segmentation-based EF prediction, especially for sequences of poor quality. Our method is capable of automating the whole EF estimation pipeline, from view identification and cardiac structure segmentation to EF calculation. Its effectiveness is validated with respect to segmentation, tracking, consistency analysis, and clinical parameter estimation. Compared with existing methods, our method shows clear superiority for LV volumes at the ED and ES phases and for EF estimation, with Pearson correlations of 0.975, 0.983, and 0.946, respectively. This is a significant improvement for echocardiography-based EF estimation and strengthens the potential of automated EF estimation in clinical practice. Moreover, our method obtains accurate and temporally consistent segmentations for the in-between frames, which makes it suitable for evaluating dynamic cardiac function.
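Simpson's bi-plane method mentioned above is standard: the LV is modeled as a stack of elliptical disks whose diameters are measured from the segmented contours in the apical four- and two-chamber views, and EF follows from the end-diastolic and end-systolic volumes. A minimal sketch (the disk count and units follow convention, not this paper):

```python
import math

def simpson_biplane_volume(diam_4ch, diam_2ch, length_cm):
    """LV volume (mL) by the bi-plane method of disks.

    diam_4ch, diam_2ch: matched disk diameters (cm) measured in the
    apical four- and two-chamber views (conventionally 20 disks);
    length_cm: LV long-axis length (cm).
    """
    n = len(diam_4ch)
    assert n == len(diam_2ch)
    h = length_cm / n                                   # disk thickness
    return sum(math.pi * a * b / 4.0 * h for a, b in zip(diam_4ch, diam_2ch))

def ejection_fraction(edv_ml: float, esv_ml: float) -> float:
    """EF (%) from end-diastolic and end-systolic volumes."""
    return 100.0 * (edv_ml - esv_ml) / edv_ml
```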
Affiliation(s)
- Hongrong Wei
  - School of Biomedical Engineering, Health Science Center, Shenzhen University, China; National-Regional Key Technology Engineering Laboratory for Medical Ultrasound, Guangdong Key Laboratory for Biomedical Measurements and Ultrasound Imaging, China; Medical Ultrasound Image Computing (MUSIC) Lab, Shenzhen University, China
- Junqiang Ma
  - School of Biomedical Engineering, Health Science Center, Shenzhen University, China; National-Regional Key Technology Engineering Laboratory for Medical Ultrasound, Guangdong Key Laboratory for Biomedical Measurements and Ultrasound Imaging, China; Medical Ultrasound Image Computing (MUSIC) Lab, Shenzhen University, China
- Yongjin Zhou
  - School of Biomedical Engineering, Health Science Center, Shenzhen University, China; National-Regional Key Technology Engineering Laboratory for Medical Ultrasound, Guangdong Key Laboratory for Biomedical Measurements and Ultrasound Imaging, China
- Wufeng Xue
  - School of Biomedical Engineering, Health Science Center, Shenzhen University, China; National-Regional Key Technology Engineering Laboratory for Medical Ultrasound, Guangdong Key Laboratory for Biomedical Measurements and Ultrasound Imaging, China; Medical Ultrasound Image Computing (MUSIC) Lab, Shenzhen University, China
- Dong Ni
  - School of Biomedical Engineering, Health Science Center, Shenzhen University, China; National-Regional Key Technology Engineering Laboratory for Medical Ultrasound, Guangdong Key Laboratory for Biomedical Measurements and Ultrasound Imaging, China; Medical Ultrasound Image Computing (MUSIC) Lab, Shenzhen University, China
4. Qiao S, Pang S, Luo G, Sun Y, Yin W, Pan S, Lv Z. DPC-MSGATNet: dual-path chain multi-scale gated axial-transformer network for four-chamber view segmentation in fetal echocardiography. Complex Intell Syst 2023. [DOI: 10.1007/s40747-023-00968-x]
Abstract
Echocardiography is essential for evaluating fetal cardiac anatomical structures and function when clinicians conduct early treatment and screening for congenital heart defects (CHD), a common and intricate fetal malformation. Nevertheless, the prenatal detection rate of fetal CHD remains low owing to the peculiarities of fetal cardiac structures and the variability of fetal CHD. Precisely segmenting the four cardiac chambers can assist clinicians in analyzing cardiac morphology and further facilitate CHD diagnosis. Hence, we design a dual-path chain multi-scale gated axial-transformer network (DPC-MSGATNet) that simultaneously models global dependencies and local visual cues in fetal ultrasound (US) four-chamber (FC) views and accurately segments the four chambers. Our DPC-MSGATNet includes a global and a local branch that operate simultaneously on an entire FC view and on image patches to learn multi-scale representations. We design a plug-and-play module, the interactive dual-path chain gated axial-transformer (IDPCGAT), to enhance the interactions between the global and local branches. In IDPCGAT, the multi-scale representations from the two branches complement each other, capturing the salient features of the same region and suppressing feature responses so that only the activations associated with specific targets are retained. Extensive experiments demonstrate that DPC-MSGATNet exceeds seven state-of-the-art convolution- and transformer-based methods by a large margin in terms of F1 and IoU scores on our fetal FC view dataset, achieving an F1 score of 96.87% and an IoU score of 93.99%. The code and datasets are available at https://github.com/QiaoSiBo/DPC-MSGATNet.
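IDPCGAT itself is not specified in this abstract; the sketch below illustrates only the general dual-path idea of gating between a global-branch feature map and local-patch features stitched back to full resolution. The module and all of its names and parameters are hypothetical, not the authors' design.

```python
import torch
import torch.nn as nn

class GatedDualPathFusion(nn.Module):
    """Fuse global-branch and local-branch feature maps with a learned gate."""

    def __init__(self, channels: int):
        super().__init__()
        # 1x1 convolution over the concatenated features predicts a per-pixel gate
        self.gate = nn.Sequential(
            nn.Conv2d(2 * channels, channels, kernel_size=1),
            nn.Sigmoid(),
        )

    def forward(self, f_global: torch.Tensor, f_local: torch.Tensor) -> torch.Tensor:
        # f_local is assumed to be the local patches stitched back to full size
        g = self.gate(torch.cat([f_global, f_local], dim=1))
        return g * f_global + (1.0 - g) * f_local
```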
5. Ding W, Li L, Zhuang X, Huang L. Cross-Modality Multi-Atlas Segmentation Using Deep Neural Networks. IEEE J Biomed Health Inform 2022; 26:3104-3115. [PMID: 35130178 DOI: 10.1109/jbhi.2022.3149114]
Abstract
Multi-atlas segmentation (MAS) is a promising framework for medical image segmentation. Generally, MAS methods register multiple atlases, i.e., medical images with corresponding labels, to a target image; the transformed atlas labels can then be combined to generate the target segmentation via label fusion schemes. Many conventional MAS methods employ atlases from the same modality as the target image. However, in many clinical applications, the number of atlases of the same modality may be limited, or such atlases may be missing entirely. Moreover, conventional MAS methods suffer from the computational burden of the registration and label fusion procedures. In this work, we design a novel cross-modality MAS framework, which uses atlases available in one modality to segment a target image from another modality. To boost the computational efficiency of the framework, both image registration and label fusion are performed by well-designed deep neural networks. For atlas-to-target image registration, we propose a bi-directional registration network (BiRegNet), which can efficiently align images across modalities. For label fusion, we design a similarity estimation network (SimNet), which estimates the fusion weight of each atlas by measuring its similarity to the target image. SimNet learns multi-scale information for similarity estimation to improve label fusion performance. The proposed framework was evaluated on left ventricle and liver segmentation tasks using the MM-WHS and CHAOS datasets, respectively. The results show that the framework is effective for cross-modality MAS in both registration and label fusion. The code will be released publicly on https://github.com/NanYoMy/cmmas once the manuscript is accepted.
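The abstract describes SimNet's role concretely: each atlas label, once registered to the target, is fused with a weight reflecting its similarity to the target image. A minimal sketch of similarity-weighted label fusion under that reading (the tensor shapes are assumptions):

```python
import torch

def fuse_labels(warped_labels: torch.Tensor, similarities: torch.Tensor) -> torch.Tensor:
    """Similarity-weighted label fusion over K registered atlases.

    warped_labels: (K, C, H, W) one-hot atlas labels after registration to the target.
    similarities:  (K,) atlas-to-target similarity scores (e.g. from a network like SimNet).
    Returns an (H, W) label map.
    """
    w = torch.softmax(similarities, dim=0)              # normalize scores to fusion weights
    fused = (w.view(-1, 1, 1, 1) * warped_labels).sum(dim=0)  # (C, H, W) soft consensus
    return fused.argmax(dim=0)                          # hard labels per pixel
```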
6. Decuyper M, Maebe J, Van Holen R, Vandenberghe S. Artificial intelligence with deep learning in nuclear medicine and radiology. EJNMMI Phys 2021; 8:81. [PMID: 34897550 PMCID: PMC8665861 DOI: 10.1186/s40658-021-00426-y]
Abstract
The use of deep learning in medical imaging has increased rapidly over the past few years, finding applications throughout the entire radiology pipeline, from improved scanner performance to automatic disease detection and diagnosis. These advances have produced a wide variety of deep learning approaches, each solving unique challenges for particular imaging modalities. This paper reviews these developments from a technical point of view, categorizing the different methodologies and summarizing their implementation. We provide an introduction to the design of neural networks and their training procedure, after which we take an extended look at their uses in medical imaging. We cover the different sections of the radiology pipeline, highlighting influential works and discussing the merits and limitations of deep learning approaches compared with traditional methods. As such, this review is intended to provide a broad yet concise overview for the interested reader, facilitating the adoption of, and interdisciplinary research on, deep learning in the field of medical imaging.
Affiliation(s)
- Milan Decuyper
  - Department of Electronics and Information Systems, Ghent University, Ghent, Belgium
- Jens Maebe
  - Department of Electronics and Information Systems, Ghent University, Ghent, Belgium
- Roel Van Holen
  - Department of Electronics and Information Systems, Ghent University, Ghent, Belgium
- Stefaan Vandenberghe
  - Department of Electronics and Information Systems, Ghent University, Ghent, Belgium