1. Zhang Z, Lei Z, Zhou M, Hasegawa H, Gao S. Complex-Valued Convolutional Gated Recurrent Neural Network for Ultrasound Beamforming. IEEE Transactions on Neural Networks and Learning Systems 2025; 36:5668-5679. [PMID: 38598398] [DOI: 10.1109/tnnls.2024.3384314]
Abstract
Ultrasound is a potent tool for the clinical diagnosis of various diseases because it is real-time, convenient, and noninvasive. Yet existing beamforming and related methods struggle to improve imaging quality and speed simultaneously, as clinical applications require. Ultrasound signal data are characterized by rich spatial and temporal features, and because the analytic signals are complex-valued, processing them directly with real-valued networks leads to phase distortion and inaccurate output. In this study, we propose, for the first time, a complex-valued convolutional gated recurrent (CCGR) neural network to handle ultrasound analytic signals with these properties. The proposed complex-valued network operations improve beamforming accuracy on complex-valued ultrasound signals over traditional real-valued methods, and the deep integration of convolutional and recurrent neural networks helps extract rich, informative signal features. Our experimental results show imaging quality superior to existing state-of-the-art methods. More significantly, a processing speed of only 0.07 s per image promises considerable clinical potential. The code is available at https://github.com/zhangzm0128/CCGR.
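Editor's note: the core operation here can be sketched compactly. Below is a minimal, illustrative example of a complex-valued convolution acting on an ultrasound analytic signal, built from four real convolutions; the RF trace and kernel are random stand-ins, not the authors' CCGR implementation (see their repository for that).

```python
# Illustrative sketch only: complex convolution on an analytic signal.
import numpy as np
from scipy.signal import hilbert

def complex_conv1d(x, w):
    """(a + jb) * (u + jv) = (a*u - b*v) + j(a*v + b*u)."""
    real = np.convolve(x.real, w.real, "same") - np.convolve(x.imag, w.imag, "same")
    imag = np.convolve(x.real, w.imag, "same") + np.convolve(x.imag, w.real, "same")
    return real + 1j * imag

rf = np.random.randn(256)                              # stand-in for one RF channel
analytic = hilbert(rf)                                 # complex-valued analytic signal
kernel = np.random.randn(7) + 1j * np.random.randn(7)  # hypothetical complex filter
filtered = complex_conv1d(analytic, kernel)            # magnitude and phase filtered jointly
```

A real-valued network would filter `analytic.real` alone and drop the coupling between the two parts, which is the phase distortion the abstract refers to.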
2. Gundersen EL, Smistad E, Struksnes Jahren T, Masoy SE. Hardware-Independent Deep Signal Processing: A Feasibility Study in Echocardiography. IEEE Transactions on Ultrasonics, Ferroelectrics, and Frequency Control 2024; 71:1491-1500. [PMID: 38781056] [DOI: 10.1109/tuffc.2024.3404622]
Abstract
Deep learning (DL) models have emerged as alternatives to conventional ultrasound (US) signal processing, offering the potential to mimic signal processing chains, reduce inference time, and enable portability of processing chains across hardware. This article proposes a DL model that replicates the fine-tuned B-mode signal processing chain of a high-end US system and explores the potential of using it with a different probe and a lower-end system. A deep neural network (DNN) was trained in a supervised manner to map raw beamformed in-phase and quadrature data to processed images. The dataset consisted of 30 000 cardiac image frames acquired with the GE HealthCare Vivid E95 system and the 4Vc-D matrix array probe. The signal processing chain includes depth-dependent bandpass filtering, elevation compounding, frequency compounding, and image compression and filtering. The results indicate that a lightweight DL model can accurately replicate the signal processing chain of a commercial scanner for a given application: evaluation on a 15-patient test dataset of about 3000 image frames gave a structural similarity index measure (SSIM) of 98.56 ± 0.49. Applying the DL model to data from another probe showed equivalent or improved image quality, indicating that a single DL model may serve a set of probes on a given system targeting the same application, which could be a cost-effective tuning and implementation strategy for vendors. Furthermore, the DL model enhanced image quality on a Verasonics dataset, suggesting the potential to port features from high-end US systems to lower-end counterparts.
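Editor's note: the SSIM figure quoted above can be reproduced with scikit-image; the sketch below uses synthetic stand-in frames, and the 0-100 scaling is an assumption inferred from the reported value, not the paper's evaluation code.

```python
# Generic SSIM evaluation sketch with stand-in frames.
import numpy as np
from skimage.metrics import structural_similarity

reference = np.random.rand(512, 512)                       # scanner-processed frame (stand-in)
predicted = reference + 0.01 * np.random.randn(512, 512)   # DNN output (stand-in)

ssim = structural_similarity(predicted, reference,
                             data_range=reference.max() - reference.min())
print(f"SSIM: {100 * ssim:.2f}")
```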
3. Bosco E, Spairani E, Toffali E, Meacci V, Ramalli A, Matrone G. A Deep Learning Approach for Beamforming and Contrast Enhancement of Ultrasound Images in Monostatic Synthetic Aperture Imaging: A Proof-of-Concept. IEEE Open Journal of Engineering in Medicine and Biology 2024; 5:376-382. [PMID: 38899024] [PMCID: PMC11186640] [DOI: 10.1109/ojemb.2024.3401098]
Abstract
Goal: In this study, we demonstrate that a deep neural network (DNN) can be trained to reconstruct high-contrast images resembling those produced by the multistatic synthetic aperture (SA) method with a 128-element array, leveraging pre-beamforming radiofrequency (RF) signals acquired through the monostatic SA approach. Methods: A U-net was trained on 27200 pairs of RF signals, simulated for a monostatic SA architecture, and their corresponding delay-and-sum beamformed target images in a multistatic 128-element SA configuration. Contrast was assessed on 500 simulated test images of anechoic/hyperechoic targets. The DNN's performance was also tested on experimental images of a phantom and in different in vivo scenarios. Results: Compared with the simple monostatic SA approach used to acquire the pre-beamforming signals, the DNN generated better-quality images with higher contrast and reduced noise and artifacts. Conclusions: These results suggest the potential for a single-channel setup that simultaneously provides good-quality images and reduces hardware complexity.
Affiliation(s)
- Edoardo Bosco
- Department of Electrical, Computer and Biomedical Engineering, University of Pavia, 27100 Pavia, Italy
- Edoardo Spairani
- Department of Electrical, Computer and Biomedical Engineering, University of Pavia, 27100 Pavia, Italy
- Eleonora Toffali
- Department of Electrical, Computer and Biomedical Engineering, University of Pavia, 27100 Pavia, Italy
- Valentino Meacci
- Department of Information Engineering, University of Florence, 50134 Florence, Italy
- Alessandro Ramalli
- Department of Information Engineering, University of Florence, 50134 Florence, Italy
- Giulia Matrone
- Department of Electrical, Computer and Biomedical Engineering, University of Pavia, 27100 Pavia, Italy
4. Lu J, Millioz F, Varray F, Poree J, Provost J, Bernard O, Garcia D, Friboulet D. Ultrafast Cardiac Imaging Using Deep Learning for Speckle-Tracking Echocardiography. IEEE Transactions on Ultrasonics, Ferroelectrics, and Frequency Control 2023; 70:1761-1772. [PMID: 37862280] [DOI: 10.1109/tuffc.2023.3326377]
Abstract
High-quality ultrafast ultrasound imaging is based on coherent compounding of multiple plane-wave (PW) or diverging-wave (DW) transmissions. However, compounding reduces the frame rate and, if motion compensation (MoCo) is not applied, suffers destructive interference from high-velocity tissue motion. While many studies have recently shown the value of deep learning for reconstructing high-quality static images from PWs or DWs, its ability to achieve such performance while preserving the capability to track cardiac motion has yet to be assessed. In this article, we address this issue by deploying a complex-weighted convolutional neural network (CNN) for image reconstruction together with a state-of-the-art speckle-tracking method. The approach was first evaluated with an adapted simulation framework that provides specific reference data, i.e., high-quality, motion-artifact-free cardiac images. The results showed that, using only three DWs as input, the CNN-based approach yielded image quality and motion accuracy equivalent to those obtained by compounding 31 DWs free of motion artifacts. Performance was then further evaluated on nonsimulated, experimental in vitro data using a spinning-disk phantom. This experiment demonstrated that our approach yields high-quality image reconstruction and motion estimation over a wide range of velocities and outperforms a state-of-the-art MoCo-based approach at high velocities. Our method was finally assessed on in vivo datasets and showed consistent improvements in image quality and motion estimation compared with standard compounding. This demonstrates the feasibility and effectiveness of deep learning reconstruction for ultrafast speckle-tracking echocardiography.
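Editor's note: the speckle-tracking stage that such reconstructed frames feed into can be illustrated with classic block matching. The sketch below estimates one patch's displacement by normalized cross-correlation; the kernel and search sizes are illustrative assumptions, not the tracker used in the paper.

```python
# Generic block-matching speckle tracker for a single patch.
import numpy as np

def track_block(frame0, frame1, y, x, half=8, search=4):
    """Displacement (dy, dx) of the patch centered at (y, x), found by
    maximizing normalized cross-correlation over a small search window.
    (y, x) must lie at least half + search pixels from the borders."""
    ref = frame0[y - half:y + half + 1, x - half:x + half + 1]
    ref = (ref - ref.mean()) / (ref.std() + 1e-12)
    best, best_dyx = -np.inf, (0, 0)
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            cand = frame1[y + dy - half:y + dy + half + 1,
                          x + dx - half:x + dx + half + 1]
            cand = (cand - cand.mean()) / (cand.std() + 1e-12)
            ncc = (ref * cand).mean()          # correlation of z-scored patches
            if ncc > best:
                best, best_dyx = ncc, (dy, dx)
    return best_dyx                            # (axial, lateral) shift in pixels
```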
5. Wasih M, Ahmad S, Almekkawy M. A robust cascaded deep neural network for image reconstruction of single plane wave ultrasound RF data. Ultrasonics 2023; 132:106981. [PMID: 36913830] [DOI: 10.1016/j.ultras.2023.106981]
Abstract
Reconstruction of ultrasound images from single-plane-wave radio frequency (RF) data is a challenging task. The traditional delay-and-sum (DAS) method produces images with low resolution and contrast when applied to RF data from only a single plane wave. Coherent compounding (CC) enhances image quality by coherently summing individual DAS images, but it relies on a large number of plane waves, so it produces high-quality images at frame rates that may be too low for time-critical applications. A method is therefore needed that creates high-quality images at higher frame rates while remaining robust to the transmission angle of the input plane wave. To reduce the dependence on the input angle, we propose to unify RF data acquired at different angles by learning a linear transformation from angled data to common 0° data. We further propose a cascade of two independent neural networks to reconstruct an image, similar in quality to CC, from a single plane wave. The first network, denoted "PixelNet," is a fully convolutional neural network (CNN) that takes the transformed, time-delayed RF data as input and learns optimal pixel weights that are element-wise multiplied with the single-angle DAS image. The second network is a conditional generative adversarial network (cGAN) used to further enhance image quality. The networks were trained on the publicly available PICMUS and CPWC datasets and evaluated on the completely separate CUBDL dataset, acquired under different settings than the training data. The results demonstrate the networks' ability to generalize well to unseen data, at frame rates higher than those of the CC method. This paves the way for applications that require high-quality images reconstructed at high frame rates.
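Editor's note: for context, the single-plane-wave DAS baseline that PixelNet's pixel weights multiply can be sketched in a few lines; the geometry, sampling rate, and nearest-sample interpolation below are simplifying assumptions.

```python
# Minimal single-plane-wave DAS baseline (0-degree transmit).
import numpy as np

def das_plane_wave(rf, elem_x, fs, c, pix_z, pix_x):
    """rf: (n_elements, n_samples) channel data from one 0-degree plane wave.
    Returns the beamformed image, shape (len(pix_z), len(pix_x))."""
    n_el, n_t = rf.shape
    img = np.zeros((len(pix_z), len(pix_x)))
    for iz, z in enumerate(pix_z):
        t_tx = z / c                                    # plane-wave transmit delay
        for ix, x in enumerate(pix_x):
            t_rx = np.sqrt(z**2 + (x - elem_x)**2) / c  # per-element receive delay
            idx = np.round((t_tx + t_rx) * fs).astype(int)
            valid = idx < n_t
            img[iz, ix] = rf[np.flatnonzero(valid), idx[valid]].sum()
    return img

# Hypothetical call: 128 elements at 0.3 mm pitch, 30 MHz sampling.
# img = das_plane_wave(rf, np.arange(128) * 0.3e-3, 30e6, 1540.0,
#                      np.linspace(5e-3, 40e-3, 256), np.linspace(-19e-3, 19e-3, 128))
```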
Affiliation(s)
- Mohammad Wasih
- The Pennsylvania State University, University Park, PA, 16802, USA.
- Sahil Ahmad
- The Pennsylvania State University, University Park, PA, 16802, USA.
6. Molinier N, Painchaud-April G, Le Duff A, Toews M, Bélanger P. Ultrasonic imaging using conditional generative adversarial networks. Ultrasonics 2023; 133:107015. [PMID: 37269681] [DOI: 10.1016/j.ultras.2023.107015]
Abstract
The combination of full matrix capture (FMC) and the total focusing method (TFM) is often considered the gold standard in ultrasonic nondestructive testing; however, it may be impractical because of the time required to acquire and process the FMC, particularly for high-cadence inspections. This study proposes replacing conventional FMC acquisition and TFM processing with a single zero-degree plane-wave (PW) insonification and a conditional generative adversarial network (cGAN) trained to produce TFM-like images. Three models with different cGAN architectures and loss formulations were tested in different scenarios, and their performance was compared with conventional TFM computed from FMC. The proposed cGANs recreated TFM-like images at the same resolution while improving contrast in more than 94% of the reconstructions compared with conventional TFM. Indeed, thanks to the use of a bias in the cGANs' training, contrast was systematically increased through a reduction of the background noise level and the elimination of some artifacts. Finally, the proposed method reduced computation time and file size by factors of 120 and 75, respectively.
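Editor's note: the TFM reference the cGANs emulate is conceptually simple: for every pixel, sum the FMC A-scans at the transmit-to-pixel plus pixel-to-receiver travel time. A minimal sketch, with array geometry and sampling values assumed for illustration; envelope detection and log compression are omitted.

```python
# Minimal TFM sketch over an FMC dataset.
import numpy as np

def tfm(fmc, elem_x, fs, c, pix_z, pix_x):
    """fmc: (n_tx, n_rx, n_samples) full matrix capture."""
    n_tx, n_rx, n_t = fmc.shape
    img = np.zeros((len(pix_z), len(pix_x)))
    for iz, z in enumerate(pix_z):
        for ix, x in enumerate(pix_x):
            tof = np.sqrt(z**2 + (x - elem_x)**2) / c  # one-way time to each element
            for i in range(n_tx):
                idx = np.round((tof[i] + tof) * fs).astype(int)  # tx i -> pixel -> every rx
                valid = idx < n_t
                img[iz, ix] += fmc[i, np.flatnonzero(valid), idx[valid]].sum()
    return img
```

The triple loop over pixels and transmit-receive pairs is exactly the cost that the single-PW cGAN shortcut removes.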
Affiliation(s)
- Nathan Molinier
- PULÉTS, École de Technologie Supérieure (ÉTS), Montréal, H3C 1K3, QC, Canada.
- Alain Le Duff
- Evident Industrial (formerly Olympus IMS), Québec, G1P 0B3, QC, Canada.
- Matthew Toews
- Department of Systems Engineering, École de Technologie Supérieure, Université du Québec, Montréal, H3C 1K3, QC, Canada.
- Pierre Bélanger
- PULÉTS, École de Technologie Supérieure (ÉTS), Montréal, H3C 1K3, QC, Canada; Department of Mechanical Engineering, École de Technologie Supérieure, Université du Québec, Montréal, H3C 1K3, QC, Canada.
7. Ossenkoppele BW, Luijten B, Bera D, de Jong N, Verweij MD, van Sloun RJG. Improving Lateral Resolution in 3-D Imaging With Micro-beamforming Through Adaptive Beamforming by Deep Learning. Ultrasound in Medicine & Biology 2023; 49:237-255. [PMID: 36253231] [DOI: 10.1016/j.ultrasmedbio.2022.08.017]
Abstract
There is an increasing desire for miniature ultrasound probes with small apertures that provide volumetric images at high frame rates for in-body applications. Meeting these requirements while also achieving good lateral resolution is a challenge. As micro-beamforming is often employed to reduce data rate and cable count to acceptable levels, receive-processing methods that aim to improve spatial resolution must compensate for the reduction in focusing that it introduces. Existing beamformers do not realize sufficient improvement and/or have a computational cost that prohibits their use. Here we propose the use of adaptive beamforming by deep learning (ABLE) in combination with training targets generated by a large-aperture array, which inherently has better lateral resolution. In addition, we modify ABLE to extend its receptive field across multiple voxels. We illustrate that this method improves lateral resolution both quantitatively and qualitatively, such that image quality surpasses that achieved by existing delay-and-sum, coherence-factor, filtered-delay-multiply-and-sum, and Eigen-based minimum-variance beamformers. We found that only in silico data are required to train the network, making the method easily implementable in practice.
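Editor's note: among the baselines, the minimum-variance (Capon) beamformer illustrates why adaptive methods are costly: each pixel requires estimating and inverting a channel covariance matrix. A minimal per-pixel sketch, with the diagonal-loading value and the unit steering vector as assumptions:

```python
# Per-pixel minimum-variance (Capon) weighting sketch.
import numpy as np

def mv_beamform_pixel(snapshots, loading=1e-2):
    """snapshots: (n_channels, n_snapshots) delayed complex channel data for
    one pixel. Returns the minimum-variance pixel value."""
    n_ch, n_s = snapshots.shape
    R = snapshots @ snapshots.conj().T / n_s               # sample covariance
    R += loading * np.trace(R).real / n_ch * np.eye(n_ch)  # diagonal loading
    a = np.ones(n_ch)                                      # steering vector (post-delay)
    Ri_a = np.linalg.solve(R, a)
    w = Ri_a / (a.conj() @ Ri_a)                           # w = R^-1 a / (a^H R^-1 a)
    return w.conj() @ snapshots.mean(axis=1)

# Toy usage: 64 channels, 16 temporal snapshots.
pixel = mv_beamform_pixel(np.random.randn(64, 16) + 1j * np.random.randn(64, 16))
```

Solving one such n-channel linear system per voxel is the workload a trained network like ABLE replaces with a fixed forward pass.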
Affiliation(s)
- Ben Luijten
- Department of Electrical Engineering, Eindhoven University of Technology, Eindhoven, The Netherlands
- Nico de Jong
- Department of Imaging Physics, Delft University of Technology, Delft, The Netherlands; Department of Cardiology, Erasmus MC Rotterdam, Rotterdam, The Netherlands
- Martin D Verweij
- Department of Imaging Physics, Delft University of Technology, Delft, The Netherlands; Department of Cardiology, Erasmus MC Rotterdam, Rotterdam, The Netherlands
- Ruud J G van Sloun
- Department of Electrical Engineering, Eindhoven University of Technology, Eindhoven, The Netherlands; Philips Research, Eindhoven, The Netherlands
8. Goudarzi S, Rivaz H. Deep reconstruction of high-quality ultrasound images from raw plane-wave data: A simulation and in vivo study. Ultrasonics 2022; 125:106778. [PMID: 35728310] [DOI: 10.1016/j.ultras.2022.106778]
Abstract
This paper presents a novel beamforming approach based on deep learning that gets closer to the ideal point spread function (PSF) in plane-wave imaging (PWI). The approach is designed to reconstruct a high-quality version of the tissue reflectivity function (TRF) from echo traces acquired by the transducer elements using only a single plane-wave transmission. First, a model of the TRF is introduced by setting the imaging PSF as an isotropic (i.e., circularly symmetric) 2D Gaussian kernel convolved with a cosine function. Then, a mapping function between the pre-beamformed radio-frequency (RF) channel data and the proposed output is constructed using deep learning. The network architecture contains multi-resolution decomposition and reconstruction using the wavelet transform for effective recovery of the high-frequency content of the desired output. We exploit step-by-step training from a coarse (mean square error) to a fine (ℓ0.2) loss function. The proposed method is trained on 1174 simulated ultrasound images with ground-truth echogenicity maps extracted from real photographic images. The performance of the trained network is evaluated on publicly available simulation and in vivo test data without any further fine-tuning. Simulation test results show improvements of 37.5% and 65.8% in axial and lateral resolution, respectively, compared with delay-and-sum (DAS) results; contrast is also improved by 33.7%. Furthermore, the reconstructed in vivo images confirm that the trained mapping function does not need fine-tuning in the new domain. The proposed approach therefore maintains high resolution, contrast, and frame rate simultaneously.
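Editor's note: the target PSF model can be visualized directly. The sketch below builds an isotropic Gaussian combined with an axial cosine carrier, realized here as envelope modulation (one common reading of the description above); the width, center frequency, and grid are illustrative assumptions.

```python
# Sketch of a band-limited, circularly symmetric PSF model.
import numpy as np

sigma = 0.2e-3                      # isotropic Gaussian width [m] (assumed)
f0, c = 5e6, 1540.0                 # center frequency [Hz], sound speed [m/s]

z = np.linspace(-1e-3, 1e-3, 101)   # axial axis [m]
x = np.linspace(-1e-3, 1e-3, 101)   # lateral axis [m]
Z, X = np.meshgrid(z, x, indexing="ij")

envelope = np.exp(-(Z**2 + X**2) / (2 * sigma**2))  # circularly symmetric Gaussian
psf = envelope * np.cos(4 * np.pi * f0 * Z / c)     # pulse-echo axial oscillation
```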
Affiliation(s)
- Sobhan Goudarzi
- Department of Electrical and Computer Engineering, Concordia University, Montreal, QC, Canada.
- Hassan Rivaz
- Department of Electrical and Computer Engineering, Concordia University, Montreal, QC, Canada
9. Lu JY, Lee PY, Huang CC. Improving Image Quality for Single-Angle Plane Wave Ultrasound Imaging With Convolutional Neural Network Beamformer. IEEE Transactions on Ultrasonics, Ferroelectrics, and Frequency Control 2022; 69:1326-1336. [PMID: 35175918] [DOI: 10.1109/tuffc.2022.3152689]
Abstract
Ultrafast ultrasound imaging based on plane-wave (PW) compounding has been proposed for various clinical and preclinical applications, including shear-wave imaging and super-resolution blood flow imaging. Because the image quality afforded by PW imaging depends strongly on the number of PW angles used for compounding, there is a tradeoff between image quality and frame rate. In the present study, a convolutional neural network (CNN) beamformer combining the GoogLeNet and U-Net architectures was developed to replace the conventional delay-and-sum (DAS) algorithm and obtain high-quality images at a high frame rate. RF channel data are the inputs to the CNN beamformers; the outputs are in-phase and quadrature data. Simulations and phantom experiments revealed that the images predicted by the CNN beamformers had higher resolution and contrast than those produced by conventional single-angle PW imaging with the DAS approach. In in vivo studies, the contrast-to-noise ratios (CNRs) of carotid artery images predicted by the CNN beamformers trained with three or five PWs as ground truths were approximately 12 dB in the transverse view, considerably higher than the CNR obtained with the DAS beamformer (3.9 dB). Most tissue speckle information was retained in the in vivo images produced by the CNN beamformers. In conclusion, although only a single PW at 0° was fired, the output image quality was close to that of an image generated with three or five PW angles; in other words, the quality/frame-rate tradeoff of coherent compounding could be mitigated through the proposed CNN beamforming.
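Editor's note: the CNR metric quoted above is standard and easy to reproduce. A generic sketch follows; the region masks and the 20·log10 dB convention are assumptions, not taken from the paper.

```python
# Generic contrast-to-noise ratio (CNR) computation on an envelope image.
import numpy as np

def cnr_db(envelope, target_mask, background_mask):
    """CNR = |mu_t - mu_b| / sqrt(var_t + var_b), expressed in dB."""
    t = envelope[target_mask]
    b = envelope[background_mask]
    cnr = np.abs(t.mean() - b.mean()) / np.sqrt(t.var() + b.var())
    return 20 * np.log10(cnr)
```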
10. Lu J, Millioz F, Garcia D, Salles S, Ye D, Friboulet D. Complex Convolutional Neural Networks for Ultrafast Ultrasound Imaging Reconstruction From In-Phase/Quadrature Signal. IEEE Transactions on Ultrasonics, Ferroelectrics, and Frequency Control 2022; 69:592-603. [PMID: 34767508] [DOI: 10.1109/tuffc.2021.3127916]
Abstract
Ultrafast ultrasound imaging remains an active area of interest in the ultrasound community because of its ultrahigh frame rates. Recently, a wide variety of deep-learning studies have sought to improve ultrafast ultrasound imaging, most of them operating on radio frequency (RF) signals. However, in-phase/quadrature (I/Q) digital beamformers are now widely used as low-cost strategies. In this work, we used complex convolutional neural networks to reconstruct ultrasound images from I/Q signals. We recently described a convolutional neural network architecture called ID-Net, which exploits an inception layer designed for the reconstruction of RF diverging-wave ultrasound images. In the present study, we derive its complex equivalent, the complex-valued inception for diverging-wave network (CID-Net), which operates on I/Q data. We provide experimental evidence that CID-Net yields the same image quality as RF-trained convolutional neural networks: using only three I/Q images, CID-Net produces high-quality images that compete with those obtained by coherently compounding 31 RF images. Moreover, we show that CID-Net outperforms the straightforward architecture that processes the real and imaginary parts of the I/Q signal separately, indicating the importance of processing I/Q signals with a network that exploits their complex nature.
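Editor's note: the last sentence is the key architectural point, and it can be made concrete. A complex convolution couples I and Q through cross terms, whereas the two-branch baseline filters them independently; the PyTorch sketch below uses assumed shapes and random weights, not the CID-Net layers.

```python
# Complex 2D convolution vs. the separate-branch baseline.
import torch
import torch.nn.functional as F

def complex_conv2d(i, q, w_re, w_im):
    # (a + jb)*(u + jv): real = a*u - b*v, imag = a*v + b*u
    return (F.conv2d(i, w_re) - F.conv2d(q, w_im),
            F.conv2d(i, w_im) + F.conv2d(q, w_re))

def two_branch_conv2d(i, q, w_re, w_im):
    # Baseline: no cross terms, so the joint phase structure is ignored.
    return F.conv2d(i, w_re), F.conv2d(q, w_im)

i = torch.randn(1, 1, 64, 64)   # in-phase component of one beamformed frame
q = torch.randn(1, 1, 64, 64)   # quadrature component
w_re = torch.randn(8, 1, 3, 3)  # hypothetical complex kernel, real part
w_im = torch.randn(8, 1, 3, 3)  # hypothetical complex kernel, imaginary part
re, im = complex_conv2d(i, q, w_re, w_im)
```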
11. Chen Y, Liu J, Luo X, Luo J. ApodNet: Learning for High Frame Rate Synthetic Transmit Aperture Ultrasound Imaging. IEEE Transactions on Medical Imaging 2021; 40:3190-3204. [PMID: 34048340] [DOI: 10.1109/tmi.2021.3084821]
Abstract
Two-way dynamic focusing in synthetic transmit aperture (STA) beamforming benefits high-quality ultrasound imaging with higher lateral spatial resolution and contrast resolution. However, STA requires the complete dataset for beamforming, which entails a relatively low frame rate and transmit power. This paper proposes a deep-learning architecture to achieve high-frame-rate STA imaging with two-way dynamic focusing. The network consists of an encoder and a joint decoder. The encoder trains a set of binary weights as the apodizations of high-frame-rate plane-wave transmissions; in this respect, we term our network ApodNet. The decoder recovers the complete dataset from the acquired channel data to achieve dynamic transmit focusing. We evaluate the proposed method with simulations at different noise levels and with in vivo experiments on the human biceps brachii and common carotid artery. The experimental results demonstrate that ApodNet provides a promising strategy for high-frame-rate STA imaging, obtaining lateral resolution and contrast resolution comparable to conventional STA imaging at a four-times-higher frame rate in the in vivo experiments. In particular, ApodNet improves the contrast resolution of hypoechoic targets with much shorter computation time than other high-frame-rate methods in both simulations and in vivo experiments.
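Editor's note: training binary transmit apodizations end-to-end typically requires a trick such as the straight-through estimator, sketched below. This is a generic construction under assumed shapes and a {0, 1} codebook, not necessarily the authors' exact scheme.

```python
# Straight-through estimator for learnable binary apodization weights.
import torch

class BinaryApodization(torch.nn.Module):
    def __init__(self, n_transmits, n_elements):
        super().__init__()
        self.logits = torch.nn.Parameter(torch.randn(n_transmits, n_elements))

    def forward(self, element_signals):
        hard = (self.logits > 0).float()   # binary apodization used in the forward pass
        soft = torch.sigmoid(self.logits)
        w = hard + soft - soft.detach()    # gradients flow through the soft surrogate
        return w @ element_signals         # (n_transmits, n_samples) encoded transmissions

apod = BinaryApodization(n_transmits=4, n_elements=128)
encoded = apod(torch.randn(128, 1024))     # hypothetical per-element channel data
```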
12. Zhou Z, Guo Y, Wang Y. Ultrasound deep beamforming using a multiconstrained hybrid generative adversarial network. Medical Image Analysis 2021; 71:102086. [PMID: 33979760] [DOI: 10.1016/j.media.2021.102086]
Abstract
Ultrasound beamforming is a principal determinant of high-quality ultrasound imaging. The conventional delay-and-sum (DAS) beamformer generates images at high computational speed but with low spatial resolution; hence, many adaptive beamforming methods have been introduced to improve image quality. However, these adaptive methods suffer from high computational complexity, which limits their practical application, so an advanced beamformer that overcomes the spatiotemporal-resolution bottleneck is highly desirable. In this paper, we propose a novel deep-learning-based algorithm, the multiconstrained hybrid generative adversarial network (MC-HGAN) beamformer, that rapidly achieves high-quality ultrasound imaging. The MC-HGAN beamformer directly establishes a one-shot mapping between radio frequency signals and reconstructed ultrasound images through a hybrid generative adversarial network (GAN) model. Through two dedicated branches, the hybrid GAN model extracts both radio-frequency-based and image-based features and integrates them through a fusion module. We also introduce a multiconstrained training strategy that provides comprehensive guidance for the network by invoking intermediate outputs to co-constrain the training process. Moreover, the beamformer is designed to adapt to various ultrasound emission modes, which improves its generalizability for clinical applications. We conducted experiments on a variety of datasets scanned in line-scan and plane-wave emission modes and evaluated the results with both similarity-based and ultrasound-specific metrics. The comparisons demonstrate that the MC-HGAN beamformer generates ultrasound images of higher quality than other deep-learning-based methods and shows very high robustness across different clinical datasets. The technology also shows great potential for real-time imaging.
Affiliation(s)
- Zixia Zhou
- Fudan University, Department of Electronic Engineering, Shanghai 200433, China
- Yi Guo
- Fudan University, Department of Electronic Engineering, Shanghai 200433, China.
- Yuanyuan Wang
- Fudan University, Department of Electronic Engineering, Shanghai 200433, China; Key Laboratory of Medical Imaging Computing and Computer Assisted Intervention of Shanghai, Shanghai 200032, China.