1
Taskén AA, Yu J, Berg EAR, Grenne B, Holte E, Dalen H, Stølen S, Lindseth F, Aakhus S, Kiss G. Automatic Detection and Tracking of Anatomical Landmarks in Transesophageal Echocardiography for Quantification of Left Ventricular Function. Ultrasound in Medicine & Biology 2024; 50:797-804. [PMID: 38485534 DOI: 10.1016/j.ultrasmedbio.2024.01.017]
Abstract
OBJECTIVE Evaluation of left ventricular (LV) function in critical care patients is useful for guidance of therapy and early detection of LV dysfunction, but the tools currently available are too time-consuming. To resolve this issue, we previously proposed a method for the continuous and automatic quantification of global LV function in critical care patients based on the detection and tracking of anatomical landmarks on transesophageal heart ultrasound. In the present study, our aim was to improve the performance of mitral annulus detection in transesophageal echocardiography (TEE). METHODS We investigated several state-of-the-art networks for both the detection and tracking of the mitral annulus in TEE. We integrated the networks into a pipeline for automatic assessment of LV function through estimation of the mitral annular plane systolic excursion (MAPSE), called autoMAPSE. TEE recordings from a total of 245 patients were collected from St. Olav's University Hospital and used to train and test the respective networks. We evaluated the agreement between autoMAPSE estimates and manual references annotated by expert echocardiographers in 30 Echolab patients and 50 critical care patients. Furthermore, we proposed a prototype of autoMAPSE for clinical integration and tested it in critical care patients in the intensive care unit. RESULTS Compared with manual references, we achieved a mean difference of 0.8 (95% limits of agreement: -2.9 to 4.7) mm in Echolab patients, with a feasibility of 85.7%. In critical care patients, we reached a mean difference of 0.6 (95% limits of agreement: -2.3 to 3.5) mm and a feasibility of 88.1%. The clinical prototype of autoMAPSE achieved real-time performance. CONCLUSION Automatic quantification of LV function had high feasibility in clinical settings. The agreement with manual references was comparable to inter-observer variability of clinical experts.
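The agreement figures quoted above (a bias with 95% limits of agreement) follow the standard Bland-Altman construction: bias ± 1.96 × SD of the paired differences. A minimal sketch, using hypothetical MAPSE pairs rather than the study's data:

```python
import numpy as np

def limits_of_agreement(auto, manual):
    """Bland-Altman bias and 95% limits of agreement between two raters."""
    d = np.asarray(auto, dtype=float) - np.asarray(manual, dtype=float)
    bias = d.mean()
    half_width = 1.96 * d.std(ddof=1)  # 95% limits assume ~normal differences
    return bias, bias - half_width, bias + half_width

# Hypothetical paired MAPSE estimates (mm): automatic vs. manual reference
auto = [12.1, 13.4, 10.8, 14.0, 11.5]
manual = [11.6, 13.0, 11.2, 13.1, 11.0]
bias, lo, hi = limits_of_agreement(auto, manual)
```

With enough paired measurements, reporting `bias (lo to hi)` gives the same "mean difference (95% limits of agreement)" format used in the abstract.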
Affiliations
- Anders Austlid Taskén
- Department of Computer Science, Faculty of Information Technology and Electrical Engineering, Norwegian University of Science and Technology, Trondheim, Norway
- Jinyang Yu
- Department of Circulation and Medical Imaging, Faculty of Medicine and Health Sciences, Norwegian University of Science and Technology, Trondheim, Norway; Clinic of Anesthesia and Intensive Care, St. Olav's University Hospital, Trondheim, Norway
- Erik Andreas Rye Berg
- Department of Circulation and Medical Imaging, Faculty of Medicine and Health Sciences, Norwegian University of Science and Technology, Trondheim, Norway; Clinic of Cardiology, St. Olav's University Hospital, Trondheim, Norway
- Bjørnar Grenne
- Department of Circulation and Medical Imaging, Faculty of Medicine and Health Sciences, Norwegian University of Science and Technology, Trondheim, Norway; Clinic of Cardiology, St. Olav's University Hospital, Trondheim, Norway
- Espen Holte
- Department of Circulation and Medical Imaging, Faculty of Medicine and Health Sciences, Norwegian University of Science and Technology, Trondheim, Norway; Clinic of Cardiology, St. Olav's University Hospital, Trondheim, Norway
- Håvard Dalen
- Department of Circulation and Medical Imaging, Faculty of Medicine and Health Sciences, Norwegian University of Science and Technology, Trondheim, Norway; Clinic of Cardiology, St. Olav's University Hospital, Trondheim, Norway; Levanger Hospital, Nord-Trøndelag Hospital Trust, Levanger, Norway
- Stian Stølen
- Department of Circulation and Medical Imaging, Faculty of Medicine and Health Sciences, Norwegian University of Science and Technology, Trondheim, Norway; Clinic of Cardiology, St. Olav's University Hospital, Trondheim, Norway
- Frank Lindseth
- Department of Computer Science, Faculty of Information Technology and Electrical Engineering, Norwegian University of Science and Technology, Trondheim, Norway
- Svend Aakhus
- Department of Circulation and Medical Imaging, Faculty of Medicine and Health Sciences, Norwegian University of Science and Technology, Trondheim, Norway; Clinic of Cardiology, St. Olav's University Hospital, Trondheim, Norway
- Gabriel Kiss
- Department of Computer Science, Faculty of Information Technology and Electrical Engineering, Norwegian University of Science and Technology, Trondheim, Norway
2
Lu J, Millioz F, Varray F, Poree J, Provost J, Bernard O, Garcia D, Friboulet D. Ultrafast Cardiac Imaging Using Deep Learning for Speckle-Tracking Echocardiography. IEEE Transactions on Ultrasonics, Ferroelectrics, and Frequency Control 2023; 70:1761-1772. [PMID: 37862280 DOI: 10.1109/tuffc.2023.3326377]
Abstract
High-quality ultrafast ultrasound imaging is based on coherent compounding of multiple plane-wave (PW) or diverging-wave (DW) transmissions. However, compounding reduces the frame rate and, if motion compensation (MoCo) is not applied, suffers destructive interference from high-velocity tissue motion. While many recent studies have shown the value of deep learning for reconstructing high-quality static images from PW or DW acquisitions, its ability to achieve such quality while preserving the capability to track cardiac motion has yet to be assessed. In this article, we addressed this issue by deploying a complex-weighted convolutional neural network (CNN) for image reconstruction together with a state-of-the-art speckle-tracking method. We first evaluated the approach in an adapted simulation framework that provides specific reference data, i.e., high-quality, motion-artifact-free cardiac images. The results showed that, using only three DWs as input, the CNN-based approach yielded image quality and motion accuracy equivalent to those obtained by compounding 31 DWs free of motion artifacts. The performance was then further evaluated on nonsimulated, experimental in vitro data from a spinning-disk phantom. This experiment demonstrated that our approach yields high-quality image reconstruction and motion estimation across a wide range of velocities and outperforms a state-of-the-art MoCo-based approach at high velocities. Our method was finally assessed on in vivo datasets and showed consistent improvement in image quality and motion estimation compared with standard compounding. This demonstrates the feasibility and effectiveness of deep learning reconstruction for ultrafast speckle-tracking echocardiography.
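The frame-rate cost of compounding that motivates this work is simple arithmetic: each compounded frame consumes one pulse-echo event per transmit, so frame rate falls linearly with the number of transmits, and going from 31 to 3 diverging waves buys roughly a tenfold speed-up. A sketch under an assumed, purely illustrative pulse-repetition frequency (not a parameter from the paper):

```python
def compounded_frame_rate(prf_hz, n_transmits):
    """Each compounded frame needs n_transmits pulse-echo events,
    so the achievable frame rate is PRF / n_transmits."""
    return prf_hz / n_transmits

prf = 4500  # assumed pulse-repetition frequency in Hz (illustrative only)
rate_31 = compounded_frame_rate(prf, 31)  # full 31-DW compounding
rate_3 = compounded_frame_rate(prf, 3)    # CNN approach: only 3 transmits
```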
3
Tehrani AKZ, Ashikuzzaman M, Rivaz H. Lateral Strain Imaging Using Self-Supervised and Physically Inspired Constraints in Unsupervised Regularized Elastography. IEEE Transactions on Medical Imaging 2023; 42:1462-1471. [PMID: 37015465 DOI: 10.1109/tmi.2022.3230635]
Abstract
Convolutional neural networks (CNNs) have shown promising results for displacement estimation in ultrasound elastography (USE). Many modifications have been proposed to improve CNN displacement estimation in the axial direction; however, lateral strain, which is essential for several downstream tasks such as the inverse problem of elasticity imaging, remains a challenge. Lateral strain estimation is difficult because both the motion and the sampling frequency are substantially lower in this direction than in the axial one, and the lateral direction lacks a carrier signal. In computer-vision applications, axial and lateral motions are independent; in contrast, the tissue motion pattern in USE is governed by the laws of physics, which link the axial and lateral displacements. In this paper, inspired by Hooke's law, we first propose a Physically Inspired ConsTraint for Unsupervised Regularized Elastography (PICTURE), in which we impose a constraint on the effective Poisson's ratio (EPR) to improve lateral strain estimation. We then propose self-supervised PICTURE (sPICTURE) to further enhance strain image estimation. Extensive experiments on simulation, experimental phantom, and in vivo data demonstrate that the proposed methods estimate accurate axial and lateral strain maps.
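The physical link between axial and lateral displacement that such an EPR constraint exploits can be sketched with the linear-elastic relation ε_lat = -ν·ε_ax, where ν is the (effective) Poisson's ratio. The value 0.45 below is an illustrative assumption for nearly incompressible soft tissue, not a parameter taken from the paper:

```python
import numpy as np

def lateral_from_axial(axial_strain, epr=0.45):
    """Under linear elasticity, lateral strain relates to axial strain
    through the (effective) Poisson's ratio: eps_lat = -nu * eps_ax.
    Soft tissue is nearly incompressible, so nu is close to 0.5."""
    return -epr * np.asarray(axial_strain, dtype=float)

# Hypothetical axial strain samples (compression is negative)
axial = np.array([-0.01, -0.02, -0.015])
lateral = lateral_from_axial(axial)
```

Constraining a network's predicted lateral strain toward this physically plausible range is the spirit of the EPR constraint described in the abstract.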
4
Attique D, Wang H, Wang P. Fog-Assisted Deep-Learning-Empowered Intrusion Detection System for RPL-Based Resource-Constrained Smart Industries. Sensors (Basel, Switzerland) 2022; 22:9416. [PMID: 36502115 PMCID: PMC9735641 DOI: 10.3390/s22239416]
Abstract
The Internet of Things (IoT) is a prominent network communication technology that has brought smart industries to the world. The broad accessibility of IoT devices, however, makes them susceptible to a diverse range of potential security threats. The literature offers a plethora of solutions for securing communications in IoT-based smart industries, but resource-constrained sectors still demand significant attention. We propose a fog-assisted, deep learning (DL)-empowered intrusion detection system (IDS) for resource-constrained smart industries. The proposed CUDA deep neural network gated recurrent unit (Cu-DNNGRU) framework was trained on the N-BaIoT dataset and evaluated on standard performance metrics, including accuracy, precision, recall, and F1-score. The Cu-DNNGRU was also empirically compared against state-of-the-art classifiers, including Cu-LSTMDNN, Cu-BLSTM, and Cu-GRU, as well as against notable solutions from the literature. The simulation results support the validity of the proposed framework: Cu-DNNGRU achieved 99.39% accuracy, 99.09% precision, 98.89% recall, and an F1-score of 99.21%, values substantially higher than those of the benchmarked schemes and of competitive security solutions from the literature.
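The metrics reported above relate to confusion-matrix counts in the usual way, with F1 defined as the harmonic mean of precision and recall. A minimal sketch with hypothetical counts (not the study's data):

```python
def prf1(tp, fp, fn):
    """Precision, recall, and F1 from confusion-matrix counts.

    precision = TP / (TP + FP)   -- how many flagged events were attacks
    recall    = TP / (TP + FN)   -- how many attacks were flagged
    F1        = harmonic mean of precision and recall
    """
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    f1 = 2 * precision * recall / (precision + recall)
    return precision, recall, f1

# Hypothetical counts for an intrusion-detection test set
p, r, f1 = prf1(tp=980, fp=9, fn=11)
```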
Affiliations
- Danish Attique
- College of Computer Science and Technology, Chongqing University of Posts and Telecommunications, Chongqing 400065, China
- Hao Wang
- Department of Automation, Chongqing University of Posts and Telecommunications, Chongqing 400065, China
- Ping Wang
- Department of Automation, Chongqing University of Posts and Telecommunications, Chongqing 400065, China
5
Zhang Z, Zhu Y, Liu M, Zhang Z, Zhao Y, Yang X, Xie M, Zhang L. Artificial Intelligence-Enhanced Echocardiography for Systolic Function Assessment. Journal of Clinical Medicine 2022; 11:2893. [PMID: 35629019 PMCID: PMC9143561 DOI: 10.3390/jcm11102893]
Abstract
The accurate assessment of left ventricular systolic function is crucial in the diagnosis and treatment of cardiovascular diseases. Left ventricular ejection fraction (LVEF) and global longitudinal strain (GLS) are the most critical indexes of cardiac systolic function. Echocardiography has become the mainstay of cardiac imaging for measuring LVEF and GLS because it is non-invasive, radiation-free, and allows for bedside operation and real-time processing. However, the human assessment of cardiac function depends on the sonographer's experience, and inter-observer variability persists even among well-trained operators. In addition, GLS requires post-processing, which is time-consuming and varies across devices. Researchers have turned to artificial intelligence (AI) to address these challenges. The powerful learning capabilities of AI enable feature extraction, which helps to achieve accurate identification of cardiac structures and reliable estimation of ventricular volume and myocardial motion. Hence, systolic function indexes can be output automatically from echocardiographic images. This review explains the latest progress of AI in assessing left ventricular systolic function and in the differential diagnosis of heart diseases by echocardiography, and discusses the challenges and promises of this new field.
Affiliations
- Zisang Zhang
- Department of Ultrasound Medicine, Union Hospital, Tongji Medical College, Huazhong University of Science and Technology, Wuhan 430022, China
- Clinical Research Center for Medical Imaging in Hubei Province, Wuhan 430022, China
- Hubei Province Key Laboratory of Molecular Imaging, Wuhan 430022, China
- Ye Zhu
- Department of Ultrasound Medicine, Union Hospital, Tongji Medical College, Huazhong University of Science and Technology, Wuhan 430022, China
- Clinical Research Center for Medical Imaging in Hubei Province, Wuhan 430022, China
- Hubei Province Key Laboratory of Molecular Imaging, Wuhan 430022, China
- Manwei Liu
- Department of Ultrasound Medicine, Union Hospital, Tongji Medical College, Huazhong University of Science and Technology, Wuhan 430022, China
- Clinical Research Center for Medical Imaging in Hubei Province, Wuhan 430022, China
- Hubei Province Key Laboratory of Molecular Imaging, Wuhan 430022, China
- Ziming Zhang
- Department of Ultrasound Medicine, Union Hospital, Tongji Medical College, Huazhong University of Science and Technology, Wuhan 430022, China
- Clinical Research Center for Medical Imaging in Hubei Province, Wuhan 430022, China
- Hubei Province Key Laboratory of Molecular Imaging, Wuhan 430022, China
- Yang Zhao
- Department of Ultrasound Medicine, Union Hospital, Tongji Medical College, Huazhong University of Science and Technology, Wuhan 430022, China
- Clinical Research Center for Medical Imaging in Hubei Province, Wuhan 430022, China
- Hubei Province Key Laboratory of Molecular Imaging, Wuhan 430022, China
- Xin Yang
- Media and Communication Lab (MC Lab), Electronics and Information Engineering Department, Huazhong University of Science and Technology, Wuhan 430022, China
- Mingxing Xie
- Department of Ultrasound Medicine, Union Hospital, Tongji Medical College, Huazhong University of Science and Technology, Wuhan 430022, China
- Clinical Research Center for Medical Imaging in Hubei Province, Wuhan 430022, China
- Hubei Province Key Laboratory of Molecular Imaging, Wuhan 430022, China
- Correspondence: (M.X.); (L.Z.)
- Li Zhang
- Department of Ultrasound Medicine, Union Hospital, Tongji Medical College, Huazhong University of Science and Technology, Wuhan 430022, China
- Clinical Research Center for Medical Imaging in Hubei Province, Wuhan 430022, China
- Hubei Province Key Laboratory of Molecular Imaging, Wuhan 430022, China
- Correspondence: (M.X.); (L.Z.)