1. Zha SZ, Rogstadkjernet M, Klæboe LG, Skulstad H, Singstad BJ, Gilbert A, Edvardsen T, Samset E, Brekke PH. Deep learning for automated left ventricular outflow tract diameter measurements in 2D echocardiography. Cardiovasc Ultrasound 2023;21:19. [PMID: 37833731] [PMCID: PMC10571406] [DOI: 10.1186/s12947-023-00317-5]
Abstract
BACKGROUND Measurement of the left ventricular outflow tract diameter (LVOTd) in echocardiography is a common source of error when used to calculate the stroke volume. The aim of this study was to assess whether a deep learning (DL) model, trained on a clinical echocardiographic dataset, can perform automatic LVOTd measurements on par with expert cardiologists. METHODS Data consisted of 649 consecutive transthoracic echocardiographic examinations of patients with coronary artery disease admitted to a university hospital. 1304 LVOTd measurements in the parasternal long axis (PLAX) and zoomed parasternal long axis (ZPLAX) views were collected, with each patient having 1-6 measurements per examination. Data quality control was performed by an expert cardiologist, and spatial geometry data were preserved for each LVOTd measurement to convert DL predictions into metric units. A convolutional neural network based on the U-Net was used as the DL model. RESULTS The mean absolute LVOTd error was 1.04 (95% confidence interval [CI] 0.90-1.19) mm for DL predictions on the test set. The mean relative LVOTd errors across all data subgroups ranged from 3.8% to 5.1% on the test set. Generally, the DL model performed better on the ZPLAX view than on the PLAX view. For patients with repeated LVOTd measurements, DL model precision had a mean coefficient of variation of 2.2% (95% CI 1.6-2.7%), which was comparable to that of the clinicians on the test set. CONCLUSION DL for automatic LVOTd measurement in PLAX and ZPLAX views is feasible when trained on a limited clinical dataset. While the DL-predicted LVOTd measurements were within the expected range of clinical inter-observer variability, the robustness of the DL model requires validation on independent datasets. Future experiments using temporal information and anatomical constraints could improve valvular identification and reduce outliers, challenges that must be addressed before clinical use.
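The two reported quantities are easy to make concrete: the pixel-to-millimetre conversion that the preserved spatial geometry enables, and the coefficient of variation used to report precision over repeated measurements. A minimal NumPy sketch follows; the endpoint-based diameter, the `mm_per_px` calibration factor, and the use of the sample standard deviation are illustrative assumptions, not details taken from the paper.

```python
import numpy as np

def lvotd_mm(p1_px, p2_px, mm_per_px):
    """Convert a predicted LVOT diameter from pixels to millimetres.

    p1_px, p2_px: (x, y) endpoints of the predicted diameter (assumed form);
    mm_per_px: spatial calibration stored with the acquisition.
    """
    return np.linalg.norm(np.asarray(p1_px, float) - np.asarray(p2_px, float)) * mm_per_px

def coefficient_of_variation_pct(measurements_mm):
    """CV = std / mean in percent, over repeated measurements of one patient."""
    m = np.asarray(measurements_mm, dtype=float)
    return 100.0 * m.std(ddof=1) / m.mean()

# Example: three repeated LVOTd measurements (mm) from one examination.
print(coefficient_of_variation_pct([21.0, 21.4, 20.8]))  # ~1.5 %
```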
Affiliation(s)
- Helge Skulstad: University of Oslo, Oslo, Norway; Oslo University Hospital, Rikshospitalet, Oslo, Norway
- Thor Edvardsen: University of Oslo, Oslo, Norway; Oslo University Hospital, Rikshospitalet, Oslo, Norway
- Eigil Samset: University of Oslo, Oslo, Norway; GE HealthCare, Oslo, Norway
2. Barkat L, Freiman M, Azhari H. Image Translation of Breast Ultrasound to Pseudo Anatomical Display by CycleGAN. Bioengineering (Basel) 2023;10:388. [PMID: 36978779] [PMCID: PMC10045378] [DOI: 10.3390/bioengineering10030388]
Abstract
Ultrasound imaging is cost effective, radiation-free, portable, and implemented routinely in clinical procedures. Nonetheless, its image quality is characterized by a granular appearance, poor SNR, and speckle noise. In breast tumors specifically, the margins are commonly blurred and indistinct. Thus, there is a need to improve ultrasound image quality. We hypothesize that this can be achieved by translation into a more realistic display that mimics a pseudo-anatomical cut through the tissue, using a cycle generative adversarial network (CycleGAN). To train the CycleGAN for this translation, two datasets were used: "Breast Ultrasound Images" (BUSI) and a set of optical images of poultry breast tissue. The generated pseudo-anatomical images provide improved visual discrimination of the lesions through clearer border definition and pronounced contrast. To evaluate the preservation of the anatomical features, the lesions in both datasets were segmented and compared. This comparison yielded median Dice scores of 0.91 and 0.70; median center errors of 0.58% and 3.27%; and median area errors of 0.40% and 4.34% for benign and malignant lesions, respectively. In conclusion, the generated pseudo-anatomical images provide a more intuitive display, enhance tissue anatomy, preserve tumor geometry, and can potentially improve diagnoses and clinical outcomes.
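For reference, the three evaluation quantities named above can be sketched directly from binary lesion masks. The percent normalization of the center error is not specified in the abstract, so the version below returns a raw pixel distance; treat all three as illustrative rather than the authors' exact definitions.

```python
import numpy as np

def dice_score(a, b):
    """Dice similarity coefficient between two binary lesion masks."""
    a, b = a.astype(bool), b.astype(bool)
    return 2.0 * np.logical_and(a, b).sum() / (a.sum() + b.sum())

def center_error_px(a, b):
    """Distance between mask centroids, in pixels."""
    ca = np.array(np.nonzero(a)).mean(axis=1)
    cb = np.array(np.nonzero(b)).mean(axis=1)
    return np.linalg.norm(ca - cb)

def area_error_pct(a, b):
    """Relative lesion-area difference, in percent of the reference area."""
    return 100.0 * abs(int(a.sum()) - int(b.sum())) / a.sum()
```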
Affiliation(s)
- Lilach Barkat: Biomedical Engineering Faculty, Technion-Israel Institute of Technology, Haifa 3200001, Israel
- Moti Freiman: Biomedical Engineering Faculty, Technion-Israel Institute of Technology, Haifa 3200001, Israel
- Haim Azhari: Biomedical Engineering Faculty, Technion-Israel Institute of Technology, Haifa 3200001, Israel
3. Chen J, Chen S, Wee L, Dekker A, Bermejo I. Deep learning based unpaired image-to-image translation applications for medical physics: a systematic review. Phys Med Biol 2023;68. [PMID: 36753766] [DOI: 10.1088/1361-6560/acba74]
Abstract
Purpose. There is a growing number of publications on the application of unpaired image-to-image (I2I) translation in medical imaging. However, a systematic review covering the current state of this topic for medical physicists is lacking. The aim of this article is to provide a comprehensive review of current challenges and opportunities for medical physicists and engineers to apply I2I translation in practice. Methods and materials. The PubMed electronic database was searched using terms referring to unpaired (unsupervised) I2I translation and medical imaging. This review has been reported in compliance with the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) statement. From each full-text article, we extracted information regarding technical and clinical applications of the methods, Transparent Reporting of a multivariable prediction model for Individual Prognosis Or Diagnosis (TRIPOD) study type, algorithm performance, and accessibility of source code and pre-trained models. Results. Among 461 unique records, 55 full-text articles were included in the review. The major technical applications described in the selected literature are segmentation (26 studies), unpaired domain adaptation (18 studies), and denoising (8 studies). In terms of clinical applications, unpaired I2I translation has been used for automatic contouring of regions of interest in MRI, CT, x-ray, and ultrasound images; fast MRI or low-dose CT imaging; and CT- or MRI-only radiotherapy planning, among others. Only 5 studies validated their models using an independent test set, and none were externally validated by independent researchers. Finally, 12 articles published their source code, and only one study published its pre-trained models. Conclusion. I2I translation of medical images offers a range of valuable applications for medical physicists. However, the scarcity of external validation studies of I2I models and the shortage of publicly available pre-trained models limit the immediate applicability of the proposed methods in practice.
Affiliation(s)
- Junhua Chen: Department of Radiation Oncology (MAASTRO), GROW School for Oncology and Developmental Biology, Maastricht University Medical Centre+, Maastricht, 6229 ET, The Netherlands
- Shenlun Chen: Department of Radiation Oncology (MAASTRO), GROW School for Oncology and Developmental Biology, Maastricht University Medical Centre+, Maastricht, 6229 ET, The Netherlands
- Leonard Wee: Department of Radiation Oncology (MAASTRO), GROW School for Oncology and Developmental Biology, Maastricht University Medical Centre+, Maastricht, 6229 ET, The Netherlands
- Andre Dekker: Department of Radiation Oncology (MAASTRO), GROW School for Oncology and Developmental Biology, Maastricht University Medical Centre+, Maastricht, 6229 ET, The Netherlands
- Inigo Bermejo: Department of Radiation Oncology (MAASTRO), GROW School for Oncology and Developmental Biology, Maastricht University Medical Centre+, Maastricht, 6229 ET, The Netherlands
4. Ferraz S, Coimbra M, Pedrosa J. Assisted probe guidance in cardiac ultrasound: A review. Front Cardiovasc Med 2023;10:1056055. [PMID: 36865885] [PMCID: PMC9971589] [DOI: 10.3389/fcvm.2023.1056055]
Abstract
Echocardiography is the most frequently used imaging modality in cardiology. However, its acquisition is affected by inter-observer variability and is largely dependent on the operator's experience. In this context, artificial intelligence techniques could reduce these variabilities and provide a user-independent system. In recent years, machine learning (ML) algorithms have been used to automate echocardiographic acquisition. This review focuses on state-of-the-art studies that use ML to automate tasks regarding the acquisition of echocardiograms, including quality assessment (QA), recognition of cardiac views, and assisted probe guidance during the scanning process. The results indicate that the performance of automated acquisition was generally good, but most studies lack variability in their datasets. From our comprehensive review, we believe automated acquisition has the potential not only to improve the accuracy of diagnosis, but also to help novice operators build expertise and facilitate point-of-care healthcare in medically underserved areas.
Affiliation(s)
- Sofia Ferraz: Institute for Systems and Computer Engineering, Technology and Science INESC TEC, Porto, Portugal; Faculty of Engineering of the University of Porto (FEUP), Porto, Portugal
- Miguel Coimbra: Institute for Systems and Computer Engineering, Technology and Science INESC TEC, Porto, Portugal; Faculty of Sciences of the University of Porto (FCUP), Porto, Portugal
- João Pedrosa: Institute for Systems and Computer Engineering, Technology and Science INESC TEC, Porto, Portugal; Faculty of Engineering of the University of Porto (FEUP), Porto, Portugal
5. Shoaib MA, Chuah JH, Ali R, Hasikin K, Khalil A, Hum YC, Tee YK, Dhanalakshmi S, Lai KW. An Overview of Deep Learning Methods for Left Ventricle Segmentation. Comput Intell Neurosci 2023;2023:4208231. [PMID: 36756163] [PMCID: PMC9902166] [DOI: 10.1155/2023/4208231]
Abstract
Cardiac diseases are among the leading causes of death worldwide, and the number of heart patients has increased considerably during the pandemic. It is therefore crucial to assess and analyze medical and cardiac images. Deep learning architectures, specifically convolutional neural networks, have become the primary choice for assessing cardiac medical images. The left ventricle is a vital part of the cardiovascular system, and its boundary and size play a significant role in the evaluation of cardiac function. Owing to automatic segmentation and promising results, left ventricle segmentation using deep learning has attracted a lot of attention. This article presents a critical review of deep learning methods used for left ventricle segmentation from frequently used imaging modalities, including magnetic resonance imaging, ultrasound, and computed tomography. This study also covers the network architectures, software, and hardware used for training, along with publicly available cardiac image datasets and details of self-prepared datasets. A summary of the evaluation metrics and results reported by different researchers is also presented. Finally, all this information is summarized to help readers understand the motivation and methodology of the various deep learning models, and to explore potential solutions to future challenges in LV segmentation.
Affiliation(s)
- Muhammad Ali Shoaib: Department of Electrical Engineering, Faculty of Engineering, Universiti Malaya, Kuala Lumpur, Malaysia; Faculty of Information and Communication Technology, BUITEMS, Quetta, Pakistan
- Joon Huang Chuah: Department of Electrical Engineering, Faculty of Engineering, Universiti Malaya, Kuala Lumpur, Malaysia
- Raza Ali: Department of Electrical Engineering, Faculty of Engineering, Universiti Malaya, Kuala Lumpur, Malaysia; Faculty of Information and Communication Technology, BUITEMS, Quetta, Pakistan
- Khairunnisa Hasikin: Department of Biomedical Engineering, Faculty of Engineering, Universiti Malaya, Kuala Lumpur, Malaysia
- Azira Khalil: Faculty of Science & Technology, Universiti Sains Islam Malaysia, Nilai 71800, Malaysia
- Yan Chai Hum: Department of Mechatronics and Biomedical Engineering, Lee Kong Chian Faculty of Engineering and Science, Universiti Tunku Abdul Rahman, Malaysia
- Yee Kai Tee: Department of Mechatronics and Biomedical Engineering, Lee Kong Chian Faculty of Engineering and Science, Universiti Tunku Abdul Rahman, Malaysia
- Samiappan Dhanalakshmi: Department of Electronics and Communication Engineering, SRM Institute of Science and Technology, Kattankulathur, India
- Khin Wee Lai: Department of Electrical Engineering, Faculty of Engineering, Universiti Malaya, Kuala Lumpur, Malaysia
6. Belfilali H, Bousefsaf F, Messadi M. Left ventricle analysis in echocardiographic images using transfer learning. Phys Eng Sci Med 2022;45:1123-1138. [PMID: 36131173] [DOI: 10.1007/s13246-022-01179-3]
Abstract
The segmentation of cardiac boundaries, specifically left ventricle (LV) segmentation in 2D echocardiographic images, is a critical step in cardiac function assessment. These images are generally of poor quality and present low contrast, making routine clinical delineation difficult, time-consuming, and often inaccurate. Thus, it is necessary to design an intelligent automatic endocardium segmentation system. The present work examines and assesses the performance of several deep learning-based architectures, such as U-Net1, U-Net2, LinkNet, Attention U-Net, and TransUNet, using the public CAMUS (Cardiac Acquisitions for Multi-structure Ultrasound Segmentation) dataset. The adopted approach emphasizes the advantage of using transfer learning and pre-trained backbones in the encoder part of a segmentation network for echocardiographic image analysis. The experimental findings indicate that the proposed framework with the [Formula: see text]-[Formula: see text] is quite promising, outperforming other recent approaches with a Dice similarity coefficient of 93.30% and a Hausdorff distance of 4.01 mm. In addition, there was good agreement between the clinical indices calculated from the automatic segmentation and those calculated from the ground-truth segmentation: the mean absolute errors for left ventricular end-diastolic volume, end-systolic volume, and ejection fraction were 7.9 ml, 5.4 ml, and 6.6%, respectively. These results are encouraging and point to additional avenues for further improvement.
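The core idea, a U-Net-style network whose encoder is a pre-trained classification backbone, is a one-liner with the segmentation_models_pytorch library. The resnet34 backbone below is only an illustrative choice; the placeholders in the abstract obscure which backbone the authors found best.

```python
import torch
import segmentation_models_pytorch as smp

# U-Net with an ImageNet-pre-trained encoder (transfer learning).
model = smp.Unet(
    encoder_name="resnet34",      # illustrative backbone, not the paper's pick
    encoder_weights="imagenet",   # pre-trained weights for the encoder
    in_channels=1,                # grayscale echocardiographic frames
    classes=1,                    # binary LV endocardium mask
)

x = torch.randn(2, 1, 256, 256)   # batch of B-mode frames (H, W divisible by 32)
logits = model(x)                 # (2, 1, 256, 256) segmentation logits
```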
Affiliation(s)
- Hafida Belfilali: Laboratory of Biomedical Engineering, Faculty of Technology, University of Tlemcen, 13000, Tlemcen, Algeria
- Frédéric Bousefsaf: Laboratoire de Conception, Optimisation et Modélisation des Systèmes, LCOMS EA 7306, Université de Lorraine, 57000, Metz, France
- Mahammed Messadi: Laboratory of Biomedical Engineering, Faculty of Technology, University of Tlemcen, 13000, Tlemcen, Algeria
7. Moinuddin M, Khan S, Alsaggaf AU, Abdulaal MJ, Al-Saggaf UM, Ye JC. Medical ultrasound image speckle reduction and resolution enhancement using texture compensated multi-resolution convolution neural network. Front Physiol 2022;13:961571. [DOI: 10.3389/fphys.2022.961571]
Abstract
Ultrasound (US) imaging is a mature technology with widespread applications, especially in the healthcare sector. Despite its popularity, it has an inherent disadvantage: ultrasound images are prone to speckle and other kinds of noise. Image quality in low-cost ultrasound systems is further degraded by such noise and by the low resolution of these systems. Herein, we propose an image enhancement method in which the overall quality of US images is improved by simultaneous resolution enhancement and noise suppression. To avoid over-smoothing and to preserve structural and texture information, we devise a texture compensation mechanism that retains useful anatomical features. Moreover, we utilize knowledge of US image formation physics to generate augmentation datasets, which improves the training of the proposed method. Our experimental results showcase the performance of the proposed network as well as the effectiveness of using US physics knowledge to generate augmentation datasets.
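The abstract does not spell out its physics-based augmentation, but a common first-order model treats ultrasound speckle as multiplicative noise on the envelope image. The sketch below uses that simplified model purely as an illustration of physics-informed augmentation; the paper's actual pipeline is presumably richer.

```python
import numpy as np

def speckle_augment(img, sigma=0.1, rng=None):
    """Multiplicative speckle model: y = x * (1 + n), n ~ N(0, sigma^2).

    img: envelope image scaled to [0, 1]. A simplified stand-in for the
    paper's physics-based augmentation, not its actual implementation.
    """
    rng = np.random.default_rng() if rng is None else rng
    noise = rng.normal(0.0, sigma, size=img.shape)
    return np.clip(img * (1.0 + noise), 0.0, 1.0)
```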
8. You A, Kim JK, Ryu IH, Yoo TK. Application of generative adversarial networks (GAN) for ophthalmology image domains: a survey. Eye Vis (Lond) 2022;9:6. [PMID: 35109930] [PMCID: PMC8808986] [DOI: 10.1186/s40662-022-00277-3]
Abstract
BACKGROUND Recent advances in deep learning techniques have led to improved diagnostic abilities in ophthalmology. A generative adversarial network (GAN), which consists of two competing deep neural networks, a generator and a discriminator, has demonstrated remarkable performance in image synthesis and image-to-image translation. The adoption of GANs for medical imaging is increasing for image generation and translation, but they remain unfamiliar to researchers in the field of ophthalmology. In this work, we present a literature review on the application of GANs to ophthalmology image domains, discussing important contributions and identifying potential future research directions. METHODS We surveyed studies using GANs published before June 2021 and introduce various applications of GANs in ophthalmology image domains. The search identified 48 peer-reviewed papers for the final review. The type of GAN used in the analysis, the task, the imaging domain, and the outcome were collected to verify the usefulness of the GAN. RESULTS In ophthalmology image domains, GANs can perform segmentation, data augmentation, denoising, domain transfer, super-resolution, post-intervention prediction, and feature extraction. GAN techniques have extended the datasets and modalities available in ophthalmology. GANs have several limitations, such as mode collapse, spatial deformities, unintended changes, and the generation of high-frequency noise and checkerboard artifacts. CONCLUSIONS The use of GANs has benefited various tasks in ophthalmology image domains. Based on our observations, the adoption of GANs in ophthalmology is still at a very early stage of clinical validation compared with deep learning classification techniques, because several problems need to be overcome for practical use. However, proper selection of the GAN technique and statistical modeling of ocular imaging will greatly improve the performance of each image analysis. Finally, this survey should enable researchers to select the appropriate GAN technique to maximize the potential of ophthalmology datasets for deep learning research.
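The generator/discriminator competition described in the background reduces to two coupled losses. A minimal PyTorch sketch with placeholder network bodies, included only to make the adversarial objective concrete:

```python
import torch
import torch.nn as nn

# Placeholder generator and discriminator; real GANs use far deeper networks.
G = nn.Sequential(nn.Linear(64, 256), nn.ReLU(), nn.Linear(256, 784), nn.Tanh())
D = nn.Sequential(nn.Linear(784, 256), nn.LeakyReLU(0.2), nn.Linear(256, 1))
bce = nn.BCEWithLogitsLoss()

z = torch.randn(8, 64)            # latent codes
real = torch.rand(8, 784)         # stand-in for a batch of real images
fake = G(z)

# D is rewarded for separating real from fake; G for fooling D.
d_loss = bce(D(real), torch.ones(8, 1)) + bce(D(fake.detach()), torch.zeros(8, 1))
g_loss = bce(D(fake), torch.ones(8, 1))
```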
Affiliation(s)
- Aram You: School of Architecture, Kumoh National Institute of Technology, Gumi, Gyeongbuk, South Korea
- Jin Kuk Kim: B&VIIT Eye Center, Seoul, South Korea; VISUWORKS, Seoul, South Korea
- Ik Hee Ryu: B&VIIT Eye Center, Seoul, South Korea; VISUWORKS, Seoul, South Korea
- Tae Keun Yoo: B&VIIT Eye Center, Seoul, South Korea; Department of Ophthalmology, Aerospace Medical Center, Republic of Korea Air Force, 635 Danjae-ro, Namil-myeon, Cheongwon-gun, Cheongju, Chungcheongbuk-do, 363-849, South Korea
9. Gilbert A, Marciniak M, Rodero C, Lamata P, Samset E, McLeod K. Generating Synthetic Labeled Data From Existing Anatomical Models: An Example With Echocardiography Segmentation. IEEE Trans Med Imaging 2021;40:2783-2794. [PMID: 33444134] [PMCID: PMC8493532] [DOI: 10.1109/tmi.2021.3051806]
Abstract
Deep learning can bring time savings and increased reproducibility to medical image analysis. However, acquiring training data is challenging due to the time-intensive nature of labeling and high inter-observer variability in annotations. Rather than labeling images, in this work we propose an alternative pipeline in which images are generated from existing high-quality annotations using generative adversarial networks (GANs). Annotations are derived automatically from previously built anatomical models and are transformed into realistic synthetic ultrasound images with paired labels using a CycleGAN. We demonstrate the pipeline by generating synthetic 2D echocardiography images to compare with existing deep learning ultrasound segmentation datasets. A convolutional neural network is trained to segment the left ventricle and left atrium using only synthetic images. Networks trained with synthetic images were extensively tested on four different unseen datasets of real images, with median Dice scores of 91, 90, 88, and 87 for left ventricle segmentation. These results match or exceed inter-observer results measured on real ultrasound datasets and are comparable to a network trained on a separate set of real images, demonstrating that the generated images can effectively be used in place of real data for training. The proposed pipeline opens the door to automatic generation of training data for many tasks in medical imaging, as the same process can be applied to other segmentation or landmark detection tasks in any modality. The source code and anatomical models are available to other researchers at https://adgilbert.github.io/data-generation/.
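The CycleGAN step in this pipeline hinges on cycle consistency: mapping an annotation-derived pseudo image into the ultrasound domain and back should reproduce the input, and vice versa. A toy sketch of those two loss terms with stand-in convolutional mappers (the real generators are much deeper):

```python
import torch
import torch.nn as nn

G = nn.Conv2d(1, 1, 3, padding=1)   # pseudo image -> ultrasound (stand-in)
F = nn.Conv2d(1, 1, 3, padding=1)   # ultrasound -> pseudo image (stand-in)
l1 = nn.L1Loss()

pseudo = torch.rand(4, 1, 128, 128)  # rendered from anatomical-model labels
us = torch.rand(4, 1, 128, 128)      # unpaired real ultrasound frames

# Cycle-consistency: translate, translate back, compare with the original.
cycle_loss = l1(F(G(pseudo)), pseudo) + l1(G(F(us)), us)
```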
Affiliation(s)
- Andrew Gilbert: GE Vingmed Ultrasound, GE Healthcare, 3183 Horten, Norway; Department of Informatics, University of Oslo, 0315 Oslo, Norway
- Maciej Marciniak: Biomedical Engineering Department, King's College London, London WC2R 2LS, U.K.
- Cristobal Rodero: Biomedical Engineering Department, King's College London, London WC2R 2LS, U.K.
- Pablo Lamata: Biomedical Engineering Department, King's College London, London WC2R 2LS, U.K.
- Eigil Samset: GE Vingmed Ultrasound, GE Healthcare, 3183 Horten, Norway; Department of Informatics, University of Oslo, 0315 Oslo, Norway
10. Kwan AC, Salto G, Cheng S, Ouyang D. Artificial Intelligence in Computer Vision: Cardiac MRI and Multimodality Imaging Segmentation. Curr Cardiovasc Risk Rep 2021;15:18. [PMID: 35693045] [PMCID: PMC9187294] [DOI: 10.1007/s12170-021-00678-4]
Abstract
Purpose of Review. Anatomical segmentation has played a major role within clinical cardiology. Novel techniques based on artificial intelligence and computer vision have revolutionized this process through both automation and novel applications. This review discusses the history and clinical context of cardiac segmentation to provide a framework for a survey of recent manuscripts in artificial intelligence and cardiac segmentation. We aim to clarify for the reader the clinical question of "Why do we segment?" in order to address the question of "Where is current research, and where should it be?" Recent Findings. There has been increasing research in cardiac segmentation in recent years. Segmentation models are most frequently based on a U-Net structure, with multiple innovations added for pre-processing or connection to analysis pipelines. Cardiac MRI is the most frequently segmented modality, due in part to the presence of publicly available, moderately sized computer vision competition datasets. Further progress in data availability, model explanation, and clinical integration is being pursued. Summary. The task of cardiac anatomical segmentation has made massive strides forward within the past five years due to convolutional neural networks. These advances provide a basis for streamlining image analysis and a foundation for further analysis by both computer and human systems. While technical advances are clear, clinical benefit remains nascent. Novel approaches may improve measurement precision by decreasing inter-reader variability and appear to have the potential for larger-reaching effects within future integrated analysis pipelines.
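Since the review singles out the U-Net as the dominant structure, a one-level toy version makes the encoder/decoder-with-skip-connection pattern explicit; published models stack several such levels.

```python
import torch
import torch.nn as nn

def block(cin, cout):
    return nn.Sequential(nn.Conv2d(cin, cout, 3, padding=1), nn.ReLU(),
                         nn.Conv2d(cout, cout, 3, padding=1), nn.ReLU())

class TinyUNet(nn.Module):
    """One-level U-Net: encode, pool, bottleneck, upsample, skip, decode."""
    def __init__(self):
        super().__init__()
        self.enc = block(1, 16)
        self.pool = nn.MaxPool2d(2)
        self.mid = block(16, 32)
        self.up = nn.ConvTranspose2d(32, 16, 2, stride=2)
        self.dec = block(32, 16)       # 16 skip channels + 16 upsampled
        self.head = nn.Conv2d(16, 1, 1)

    def forward(self, x):
        e = self.enc(x)
        m = self.mid(self.pool(e))
        return self.head(self.dec(torch.cat([self.up(m), e], dim=1)))

print(TinyUNet()(torch.randn(1, 1, 64, 64)).shape)  # torch.Size([1, 1, 64, 64])
```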
Affiliation(s)
- Alan C Kwan: Department of Cardiology, Smidt Heart Institute, Cedars-Sinai Medical Center, Los Angeles, CA
- Gerran Salto: Department of Cardiology, Smidt Heart Institute, Cedars-Sinai Medical Center, Los Angeles, CA; Division of Cardiovascular Medicine, Brigham and Women's Hospital, Boston, MA; Framingham Heart Study, Framingham, MA
- Susan Cheng: Department of Cardiology, Smidt Heart Institute, Cedars-Sinai Medical Center, Los Angeles, CA; Division of Cardiovascular Medicine, Brigham and Women's Hospital, Boston, MA; Framingham Heart Study, Framingham, MA
- David Ouyang: Department of Cardiology, Smidt Heart Institute, Cedars-Sinai Medical Center, Los Angeles, CA
11. Stewart JE, Goudie A, Mukherjee A, Dwivedi G. Artificial intelligence-enhanced echocardiography in the emergency department. Emerg Med Australas 2021;33:1117-1120. [PMID: 34431225] [DOI: 10.1111/1742-6723.13847]
Abstract
A focused cardiac ultrasound performed by an emergency physician is becoming part of the standard assessment of patients in a variety of clinical situations. The development of inexpensive, portable handheld devices promises to make point-of-care ultrasound even more accessible over the coming decades. Many of these handheld devices are beginning to integrate artificial intelligence (AI) for image analysis. The integration of AI into focused cardiac ultrasound will have a number of implications for emergency physicians. This perspective presents an overview of the current state of AI research in echocardiography relevant to the emergency physician, as well as the future possibilities, challenges and risks of this technology.
Affiliation(s)
- Jonathon E Stewart: Medical School, The University of Western Australia, Perth, Western Australia, Australia; Department of Advanced Clinical and Translational Cardiovascular Imaging, Harry Perkins Institute of Medical Research, Perth, Western Australia, Australia
- Adrian Goudie: Emergency Department, Fiona Stanley Hospital, Perth, Western Australia, Australia
- Ashes Mukherjee: Medical School, The University of Western Australia, Perth, Western Australia, Australia; Emergency Department, Armadale Health Service, Perth, Western Australia, Australia
- Girish Dwivedi: Medical School, The University of Western Australia, Perth, Western Australia, Australia; Department of Advanced Clinical and Translational Cardiovascular Imaging, Harry Perkins Institute of Medical Research, Perth, Western Australia, Australia; Department of Cardiology, Fiona Stanley Hospital, Perth, Western Australia, Australia
12. Khan S, Huh J, Ye JC. Variational Formulation of Unsupervised Deep Learning for Ultrasound Image Artifact Removal. IEEE Trans Ultrason Ferroelectr Freq Control 2021;68:2086-2100. [PMID: 33523809] [DOI: 10.1109/tuffc.2021.3056197]
Abstract
Recently, deep learning approaches have been successfully used for ultrasound (US) image artifact removal. However, paired high-quality images for supervised training are difficult to obtain in many practical situations. Inspired by the recent theory of unsupervised learning using optimal transport driven CycleGAN (OT-CycleGAN), here, we investigate the applicability of unsupervised deep learning for US artifact removal problems without matched reference data. Two types of OT-CycleGAN approaches are employed: one with the partial knowledge of the image degradation physics and the other with the lack of such knowledge. Various US artifact removal problems are then addressed using the two types of OT-CycleGAN. Experimental results for various unsupervised US artifact removal tasks confirmed that our unsupervised learning method delivers results comparable to supervised learning in many practical applications.
13. Zhang L, Portenier T, Goksel O. Learning ultrasound rendering from cross-sectional model slices for simulated training. Int J Comput Assist Radiol Surg 2021;16:721-730. [PMID: 33834348] [PMCID: PMC8134288] [DOI: 10.1007/s11548-021-02349-6]
Abstract
PURPOSE Given the high level of expertise required for navigation and interpretation of ultrasound images, computational simulations can facilitate the training of such skills in virtual reality. With ray-tracing-based simulations, realistic ultrasound images can be generated; however, due to computational constraints on interactivity, image quality typically needs to be compromised. METHODS We propose to bypass any rendering and simulation process at interactive time by conducting such simulations during a non-time-critical offline stage and then learning the image translation from cross-sectional model slices to the simulated frames. We use a generative adversarial framework with a dedicated generator architecture and input feeding scheme, which together substantially improve image quality without increasing the number of network parameters. Integral attenuation maps derived from cross-sectional model slices, texture-friendly strided convolutions, and the provision of stochastic noise and input maps to intermediate layers to preserve locality are all shown to greatly facilitate this translation task. RESULTS On several quality metrics, the proposed method with only tissue maps as input provides comparable or superior results to a state-of-the-art approach that uses additional images of low-quality ultrasound renderings. An extensive ablation study demonstrates the need for and benefit of the individual contributions of this work, based on qualitative examples and quantitative ultrasound similarity metrics. To that end, an error metric based on local histogram statistics is proposed and demonstrated for visualizing local dissimilarities between ultrasound images. CONCLUSION A deep-learning-based direct transformation from interactive tissue slices to the likeness of high-quality renderings obviates any complex real-time rendering process. This could enable extremely realistic ultrasound simulations on consumer hardware by moving the time-intensive processes to a one-time, offline preprocessing stage that can be performed on dedicated high-end hardware.
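The proposed error metric is built from local histogram statistics; the exact statistics and window size are not given in the abstract, so the sketch below compares only the first two local moments (mean and standard deviation) in a fixed window, as one plausible reading.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def local_stats_error(a, b, win=9):
    """Per-pixel dissimilarity map from local mean and standard deviation."""
    def mean_std(x):
        m = uniform_filter(x, size=win)
        v = uniform_filter(x * x, size=win) - m * m
        return m, np.sqrt(np.clip(v, 0.0, None))
    ma, sa = mean_std(a.astype(float))
    mb, sb = mean_std(b.astype(float))
    return np.abs(ma - mb) + np.abs(sa - sb)
```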
Affiliation(s)
- Lin Zhang: Computer-assisted Applications in Medicine, ETH Zurich, Zürich, Switzerland
- Tiziano Portenier: Computer-assisted Applications in Medicine, ETH Zurich, Zürich, Switzerland
- Orcun Goksel: Computer-assisted Applications in Medicine, ETH Zurich, Zürich, Switzerland; Department of Information Technology, Uppsala University, Uppsala, Sweden
14. Golukhova EZ, Slivneva IV, Rybka MM, Mamalyga ML, Marapov DI, Klyuchnikov IV, Antonova DE, Dibin DA. [Right ventricular systolic dysfunction as a predictor of adverse outcome in patients with COVID-19]. Kardiologiia 2020;60:1303. [PMID: 33487146] [DOI: 10.18087/cardio.2020.11.n1303]
Abstract
Aim. To analyze the survival of patients with COVID-19 based on echocardiographic (EchoCG) criteria for evaluation of right ventricular (RV) systolic function. Material and methods. Patient data were retrospectively evaluated at the Center for Medical Care of Patients with Coronavirus Infection. Among 142 primarily evaluated patients with documented COVID-19, 110 patients (63 men, 47 women; mean age 62.3 ± 15.3 years) met the inclusion/exclusion criteria. More than 30 EchoCG parameters were analyzed, and baseline data (comorbidities, oxygen saturation, laboratory data, complications, outcomes, etc.) were evaluated. ROC analysis was used to evaluate the diagnostic significance of different EchoCG parameters for predicting a specific outcome and its probability. The dependence of overall survival on different EchoCG parameters was analyzed with the Cox proportional hazards model. To assess the predictive value of EchoCG parameters for stratifying patients by risk of an adverse outcome, a predictive model was developed using the CHAID method. Results. The in-hospital death rate of patients included in the study was 15.5%, and the death rate for the period of in-hospital observation was 12%. Based on single-factor analysis of EchoCG parameters, a multifactor model was developed using Cox regression. The model included two predictors of an unfavorable outcome, estimated pulmonary artery systolic pressure (EPASP) and maximal indexed right atrial volume (RA(i)), and one protective factor, right ventricular global longitudinal strain (LS RV). Baseline risks of fatal outcome were determined with account for the follow-up time. A 1 mm Hg increase in EPASP was associated with an 8.6% increase in the risk of fatal outcome, and a 1 ml increase in RA(i) volume with a 5.8% increase. LS RV showed an inverse relationship: a 1% increase in LS RV was associated with a 13.4% decrease in the risk of an unfavorable outcome. According to the ROC analysis, the most significant determinants of outcome were the tricuspid annular plane systolic excursion (TAPSE) (AUC 0.84 ± 0.06; cut-off 18 mm) and EPASP (AUC 0.86 ± 0.05; cut-off 42 mm Hg). Evaluating the effects of the EchoCG predictors characterizing the right heart produced a classification tree with six terminal nodes, two assigned to the reduced-risk category and four to the increased-risk category for fatal outcome. The sensitivity of the classification tree model was 94.1%, its specificity 89.2%, and its overall diagnostic accuracy 90.0 ± 2.9%. Conclusion. The presented statistical models reflect the need for a comprehensive analysis of EchoCG parameters, combining standard EchoCG evaluation with current noninvasive imaging technologies. According to the study results, the EchoCG marker LS RV allows identification of signs of right ventricular dysfunction (particularly in combination with pulmonary hemodynamic indexes), may enhance early risk stratification in patients with COVID-19, and may help in making clinical decisions for patients with various acute cardiorespiratory diseases.
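In a Cox model, a one-unit covariate increase multiplies the hazard by exp(beta), so the per-unit percentages above pin down the coefficients. The sketch back-calculates them and combines them for a hypothetical patient; this illustrates the arithmetic only and is not a re-analysis of the study.

```python
import math

# Coefficients implied by the reported per-unit risk changes.
beta_epasp = math.log(1.086)      # +8.6 % risk per 1 mm Hg EPASP
beta_rai = math.log(1.058)        # +5.8 % risk per 1 ml RA(i) volume
beta_lsrv = math.log(1 - 0.134)   # -13.4 % risk per 1 % LS RV (protective)

# Hazard ratio for a hypothetical +5 mm Hg EPASP and +2 % LS RV vs. baseline.
hr = math.exp(5 * beta_epasp + 2 * beta_lsrv)
print(round(hr, 2))  # ~1.13
```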
Affiliation(s)
- E Z Golukhova: A.N. Bakulev National Medical Scientific Center for Cardiovascular Surgery, Moscow, Russia
- I V Slivneva: A.N. Bakulev National Medical Scientific Center for Cardiovascular Surgery, Moscow, Russia
- M M Rybka: A.N. Bakulev National Medical Scientific Center for Cardiovascular Surgery, Moscow, Russia
- M L Mamalyga: A.N. Bakulev National Medical Scientific Center for Cardiovascular Surgery, Moscow, Russia
- D I Marapov: Kazan State Medical Academy, affiliate of the Russian Medical Academy of Continuing Professional Education, Kazan, Russia
- I V Klyuchnikov: A.N. Bakulev National Medical Scientific Center for Cardiovascular Surgery, Moscow, Russia
- D E Antonova: A.N. Bakulev National Medical Scientific Center for Cardiovascular Surgery, Moscow, Russia
- D A Dibin: A.N. Bakulev National Medical Scientific Center for Cardiovascular Surgery, Moscow, Russia
15. Minardi J, Marsh C, Sengupta P. Risk-Stratifying COVID-19 Patients the Right Way. JACC Cardiovasc Imaging 2020;13:2300-2303. [PMID: 32739372] [PMCID: PMC7250774] [DOI: 10.1016/j.jcmg.2020.05.012]
Affiliation(s)
- Joseph Minardi: Department of Emergency Medicine and Medical Education, West Virginia University School of Medicine, Morgantown, West Virginia
- Clay Marsh: Department of Medicine, West Virginia University School of Medicine, Morgantown, West Virginia; Section of Pulmonary and Critical Care Medicine, West Virginia University School of Medicine, Morgantown, West Virginia
- Partho Sengupta: Department of Medicine, West Virginia University School of Medicine, Morgantown, West Virginia; Division of Cardiology, West Virginia University School of Medicine, Morgantown, West Virginia
16. Yi J, Kang HK, Kwon JH, Kim KS, Park MH, Seong YK, Kim DW, Ahn B, Ha K, Lee J, Hah Z, Bang WC. Technology trends and applications of deep learning in ultrasonography: image quality enhancement, diagnostic support, and improving workflow efficiency. Ultrasonography 2020;40:7-22. [PMID: 33152846] [PMCID: PMC7758107] [DOI: 10.14366/usg.20102]
Abstract
In this review of the most recent applications of deep learning to ultrasound imaging, the architectures of deep learning networks are briefly explained for the medical imaging tasks of classification, detection, segmentation, and generation. Ultrasonography applications for image processing and diagnosis are then reviewed and summarized, along with representative imaging studies of the breast, thyroid, heart, kidney, liver, and fetal head. Efforts toward workflow enhancement are also reviewed, with an emphasis on view recognition, scanning guidance, image quality assessment, and quantification and measurement. Finally, future prospects are presented regarding image quality enhancement, diagnostic support, and improvements in workflow efficiency, along with remarks on hurdles, benefits, and necessary collaborations.
Affiliation(s)
- Jonghyon Yi: Ultrasound R&D Group, Health & Medical Equipment Business, Samsung Electronics Co., Ltd., Seongnam, Korea
- Ho Kyung Kang: Ultrasound R&D Group, Health & Medical Equipment Business, Samsung Electronics Co., Ltd., Seongnam, Korea
- Jae-Hyun Kwon: DR Imaging R&D Lab, Health & Medical Equipment Business, Samsung Electronics Co., Ltd., Seongnam, Korea
- Kang-Sik Kim: Ultrasound R&D Group, Health & Medical Equipment Business, Samsung Electronics Co., Ltd., Seongnam, Korea
- Moon Ho Park: Ultrasound R&D Group, Health & Medical Equipment Business, Samsung Electronics Co., Ltd., Seongnam, Korea
- Yeong Kyeong Seong: Ultrasound R&D Group, Health & Medical Equipment Business, Samsung Electronics Co., Ltd., Seongnam, Korea
- Dong Woo Kim: Product Strategy Group, Samsung Medison Co., Ltd., Seongnam, Korea
- Byungeun Ahn: Product Strategy Group, Samsung Medison Co., Ltd., Seongnam, Korea
- Kilsu Ha: Product Strategy Group, Samsung Medison Co., Ltd., Seongnam, Korea
- Jinyong Lee: System R&D Group, Samsung Medison Co., Ltd., Seongnam, Korea
- Zaegyoo Hah: System R&D Group, Samsung Medison Co., Ltd., Seongnam, Korea
- Won-Chul Bang: Health & Medical Equipment Business, Samsung Electronics Co., Ltd., Seoul, Korea; Product Strategy Team, Samsung Medison Co., Ltd., Seoul, Korea