1. Venkatayogi N, Sharma A, Ambinder EB, Myers KS, Oluyemi ET, Mullen LA, Bell MAL. Comparative Assessment of Real-Time and Offline Short-Lag Spatial Coherence Imaging of Ultrasound Breast Masses. Ultrasound in Medicine & Biology 2025;51:941-950. PMID: 40074593; PMCID: PMC12010921; DOI: 10.1016/j.ultrasmedbio.2025.01.017
Abstract
OBJECTIVE To perform the first known investigation of differences between real-time and offline B-mode and short-lag spatial coherence (SLSC) images when evaluating fluid or solid content in 60 hypoechoic breast masses. METHODS Real-time and retrospective (i.e., offline) reader studies were conducted with three board-certified breast radiologists, followed by objective, reader-independent discrimination using the generalized contrast-to-noise ratio (gCNR). RESULTS The contents of 12 fluid, solid, and mixed (i.e., containing fluid and solid components) masses were uncertain when reading real-time B-mode images. With real-time and offline SLSC images, the contents of 15 and 5 aggregated solid and mixed masses (and no fluid masses), respectively, were uncertain. Therefore, with real-time SLSC imaging, uncertainty about solid masses increased relative to offline SLSC imaging, while uncertainty about fluid masses decreased relative to real-time B-mode imaging. When assessing real-time SLSC reader results, 100% (11/11) of solid masses with uncertain content were correctly classified by applying a gCNR < 0.73 threshold to real-time SLSC images. The areas under the receiver operating characteristic curves characterizing gCNR as an objective metric to discriminate complicated cysts from solid masses were 0.963 and 0.998 with real-time and offline SLSC images, respectively, both of which are considered excellent for diagnostic testing. CONCLUSION Results are promising in support of real-time SLSC imaging, and of applying gCNR to real-time SLSC images, to enhance sensitivity and specificity, reduce reader variability, and mitigate uncertainty about fluid or solid content, particularly when distinguishing complicated cysts (which are benign) from hypoechoic solid masses (which could be cancerous).
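The gCNR used above can be sketched as one minus the overlap of the pixel-value histograms of two regions (e.g., inside a mass versus surrounding tissue). A minimal illustration; the function name and binning choices are assumptions, not the authors' exact implementation:

```python
import numpy as np

def gcnr(region_in, region_out, bins=256):
    """Generalized contrast-to-noise ratio: 1 minus the overlap of the
    pixel-value histograms of two regions (0 = indistinguishable,
    1 = fully separable)."""
    lo = min(region_in.min(), region_out.min())
    hi = max(region_in.max(), region_out.max())
    edges = np.linspace(lo, hi, bins + 1)        # shared bin edges
    p_in, _ = np.histogram(region_in, bins=edges)
    p_out, _ = np.histogram(region_out, bins=edges)
    p_in = p_in / p_in.sum()                     # probability mass functions
    p_out = p_out / p_out.sum()
    return 1.0 - np.minimum(p_in, p_out).sum()
```

A low gCNR indicates heavily overlapping amplitude distributions, as expected for solid masses, which is the basis of the < 0.73 threshold reported above.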
Affiliation(s)
- Nethra Venkatayogi
- Department of Computer Science, Johns Hopkins University, Baltimore, MD, USA
- Arunima Sharma
- Department of Electrical & Computer Engineering, Johns Hopkins University, Baltimore, MD, USA
- Emily B Ambinder
- Department of Radiology & Radiological Science, Johns Hopkins Medicine, Baltimore, MD, USA
- Kelly S Myers
- Department of Radiology & Radiological Science, Johns Hopkins Medicine, Baltimore, MD, USA
- Eniola T Oluyemi
- Department of Radiology & Radiological Science, Johns Hopkins Medicine, Baltimore, MD, USA
- Lisa A Mullen
- Department of Radiology & Radiological Science, Johns Hopkins Medicine, Baltimore, MD, USA
- Muyinatu A Lediju Bell
- Department of Computer Science, Johns Hopkins University, Baltimore, MD, USA
- Department of Electrical & Computer Engineering, Johns Hopkins University, Baltimore, MD, USA
- Department of Biomedical Engineering, Johns Hopkins University, Baltimore, MD, USA

2. Zhang J, Bell MAL. Overfit detection method for deep neural networks trained to beamform ultrasound images. Ultrasonics 2025;148:107562. PMID: 39746284; PMCID: PMC11839378; DOI: 10.1016/j.ultras.2024.107562
Abstract
Deep neural networks (DNNs) have remarkable potential to reconstruct ultrasound images. However, this promise can suffer from overfitting to training data, which is typically detected via loss function monitoring during an otherwise time-consuming training process or via access to new sources of test data. We present a method to detect overfitting, with associated evaluation approaches, that requires only knowledge of a network architecture and its trained weights. Three types of artificial DNN inputs (i.e., zeros, ones, and Gaussian noise), unseen during DNN training, were input to three DNNs designed for ultrasound image formation, trained on multi-site data, and submitted to the Challenge on Ultrasound Beamforming with Deep Learning (CUBDL). Overfitting was detected using these artificial DNN inputs. Qualitative and quantitative comparisons of DNN-created images to ground-truth images immediately revealed signs of overfitting (e.g., the zeros input produced mean output values ≥0.08, the ones input produced mean output values ≤0.07, with corresponding image-to-image normalized correlations ≤0.8). The proposed approach is promising for detecting overfitting without requiring lengthy network retraining or the curation of additional test data. Potential applications include sanity checks during federated learning, as well as optimization, security, public policy, regulation creation, and benchmarking.
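The probing strategy can be sketched independently of any specific network: feed inputs never seen during training and inspect output statistics. The function name and callable interface below are illustrative assumptions; `predict` stands in for any trained model:

```python
import numpy as np

def probe_with_artificial_inputs(predict, input_shape, seed=0):
    """Feed artificial inputs (zeros, ones, Gaussian noise) to a trained
    model and return the mean of each output, following the probing
    strategy described above. `predict` is any callable mapping an input
    array to an output array."""
    rng = np.random.default_rng(seed)
    probes = {
        "zeros": np.zeros(input_shape),
        "ones": np.ones(input_shape),
        "noise": rng.standard_normal(input_shape),
    }
    return {name: float(np.mean(predict(x))) for name, x in probes.items()}
```

Per the cutoffs reported above, a mean output ≥ 0.08 for the zeros probe or ≤ 0.07 for the ones probe would be flagged as a sign of overfitting.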
Affiliation(s)
- Jiaxin Zhang
- Department of Electrical and Computer Engineering, Johns Hopkins University, Baltimore, MD, USA
- Muyinatu A Lediju Bell
- Department of Electrical and Computer Engineering, Johns Hopkins University, Baltimore, MD, USA
- Department of Biomedical Engineering, Johns Hopkins University, Baltimore, MD, USA
- Department of Computer Science, Johns Hopkins University, Baltimore, MD, USA

3. Zhang Z, Lei Z, Zhou M, Hasegawa H, Gao S. Complex-Valued Convolutional Gated Recurrent Neural Network for Ultrasound Beamforming. IEEE Transactions on Neural Networks and Learning Systems 2025;36:5668-5679. PMID: 38598398; DOI: 10.1109/tnnls.2024.3384314
Abstract
Ultrasound detection is a potent tool for the clinical diagnosis of various diseases due to its real-time, convenient, and noninvasive qualities. Yet, existing ultrasound beamforming and related methods face a significant challenge in improving both imaging quality and speed for the required clinical applications. The most notable characteristics of ultrasound signal data are its spatial and temporal features. Because most signals are complex-valued, directly processing them with real-valued networks leads to phase distortion and inaccurate output. In this study, for the first time, we propose a complex-valued convolutional gated recurrent (CCGR) neural network to handle ultrasound analytic signals with the aforementioned properties. The complex-valued network operations proposed in this study improve the beamforming accuracy of complex-valued ultrasound signals over traditional real-valued methods. Further, the proposed deep integration of convolutional and recurrent neural networks contributes substantially to extracting rich and informative ultrasound signal features. Our experimental results reveal outstanding imaging quality relative to existing state-of-the-art methods. More significantly, its ultrafast processing speed of only 0.07 s per image promises considerable clinical application potential. The code is available at https://github.com/zhangzm0128/CCGR.
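The core complex-valued operation can be illustrated with a 1-D convolution decomposed into four real convolutions, which preserves the phase information that a purely real-valued network would distort. This is a generic sketch of complex-valued convolution, not the CCGR architecture itself:

```python
import numpy as np

def complex_conv1d(x, w):
    """Complex-valued convolution via four real convolutions, using
    (a + ib) * (c + id) = (ac - bd) + i(ad + bc)."""
    conv = lambda u, v: np.convolve(u, v, mode="valid")
    real = conv(x.real, w.real) - conv(x.imag, w.imag)
    imag = conv(x.real, w.imag) + conv(x.imag, w.real)
    return real + 1j * imag
```

In a complex-valued network layer, the same decomposition is applied with learnable real and imaginary kernel weights, so the layer remains trainable with standard real-valued autodiff.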

4. Cui XW, Goudie A, Blaivas M, Chai YJ, Chammas MC, Dong Y, Stewart J, Jiang TA, Liang P, Sehgal CM, Wu XL, Hsieh PCC, Adrian S, Dietrich CF. WFUMB Commentary Paper on Artificial Intelligence in Medical Ultrasound Imaging. Ultrasound in Medicine & Biology 2025;51:428-438. PMID: 39672681; DOI: 10.1016/j.ultrasmedbio.2024.10.016
Abstract
Artificial intelligence (AI) is defined as the theory and development of computer systems able to perform tasks normally associated with human intelligence. At present, AI is widely used in a variety of ultrasound tasks, including point-of-care ultrasound, echocardiography, and various diseases of different organs. However, the characteristics of ultrasound, compared to other imaging modalities such as computed tomography (CT) and magnetic resonance imaging (MRI), pose significant additional challenges to AI. Application of AI can not only reduce variability during ultrasound image acquisition, but can also standardize interpretation and identify patterns that escape the human eye and brain. These advances have enabled greater innovation in ultrasound AI applications across a variety of clinical settings and disease states. Therefore, the World Federation for Ultrasound in Medicine and Biology (WFUMB) addresses the topic with a brief and practical overview of current and potential future AI applications in medical ultrasound, as well as a discussion of current limitations and future challenges to AI implementation.
Affiliation(s)
- Xin Wu Cui
- Department of Medical Ultrasound, Tongji Hospital, Tongji Medical College and State Key Laboratory for Diagnosis and Treatment of Severe Zoonotic Infectious Diseases, Huazhong University of Science and Technology, Wuhan, Hubei, China
- Adrian Goudie
- Department of Emergency, Fiona Stanley Hospital, Perth, Australia
- Michael Blaivas
- Department of Medicine, University of South Carolina School of Medicine, Columbia, SC, USA
- Young Jun Chai
- Department of Surgery, Seoul National University College of Medicine, Seoul Metropolitan Government Seoul National University Boramae Medical Center, Seoul, Republic of Korea
- Maria Cristina Chammas
- Hospital das Clínicas da Faculdade de Medicina da Universidade de São Paulo, São Paulo, Brazil
- Yi Dong
- Department of Ultrasound, Xinhua Hospital, Shanghai Jiao Tong University School of Medicine, Shanghai, China
- Jonathon Stewart
- School of Medicine, The University of Western Australia, Perth, Western Australia, Australia
- Tian-An Jiang
- Department of Ultrasound Medicine, The First Affiliated Hospital of Zhejiang University School of Medicine, Hangzhou, Zhejiang, China
- Ping Liang
- Department of Interventional Ultrasound, Chinese PLA General Hospital, Beijing, China
- Chandra M Sehgal
- Ultrasound Research Lab, Department of Radiology, University of Pennsylvania, Philadelphia, Pennsylvania, United States of America
- Xing-Long Wu
- School of Computer Science & Engineering, Wuhan Institute of Technology, Wuhan, Hubei, China
- Saftoiu Adrian
- Research Center of Gastroenterology and Hepatology, University of Medicine and Pharmacy of Craiova, Craiova, Romania
- Christoph F Dietrich
- Department General Internal Medicine (DAIM), Hospitals Hirslanden Bern Beau Site, Salem and Permanence, Bern, Switzerland

5. Lian Y, Zeng Y, Zhou S, Zhu H, Li F, Cai X. Deep Beamforming for Real-Time 3-D Passive Acoustic Mapping With Row-Column-Addressed Arrays. IEEE Transactions on Ultrasonics, Ferroelectrics, and Frequency Control 2025;72:226-237. PMID: 40030804; DOI: 10.1109/tuffc.2024.3524436
Abstract
Passive acoustic mapping (PAM) is a promising tool to monitor acoustic cavitation activities for focused ultrasound (FUS) therapies. While 2-D matrix arrays allow 3-D PAM, the high channel count requirement and the complexity of the receiving electronics limit their practical value in real-time imaging applications. In this regard, row-column-addressed (RCA) arrays have shown great potential in addressing the difficulties in real-time 3-D ultrasound imaging. However, currently there is no applicable method for 3-D PAM with RCA arrays. In this work, we propose a deep beamformer for real-time 3-D PAM with RCA arrays. The deep beamformer leverages a deep neural network (DNN) to map radio frequency (RF) microbubble (MB) cavitation signals acquired with the RCA array to 3-D PAM images, achieving image quality similar to reconstructions performed using a fully populated 2-D matrix array with the angular spectrum (AS) method. In simulation, the images reconstructed by the deep beamformer showed less than 13.2% and 1.8% differences in energy spread volume (ESV) and image signal-to-noise ratio (ISNR), respectively, compared with those reconstructed using the matrix array. Meanwhile, the image reconstruction time was reduced by 11 and 30 times on the CPU and GPU, respectively, achieving a reconstruction speed of 42.4 volumes per second on a GPU for a 128 × 128 × 128 volume. Experimental data further validated the capability of the deep beamformer to accurately localize MB cavitation activities in 3-D space. These results clearly demonstrate the feasibility of real-time 3-D monitoring of MB cavitation activities with RCA arrays and neural network-based beamformers.

6. Xiao D, Yu ACH. Beamforming-integrated neural networks for ultrasound imaging. Ultrasonics 2025;145:107474. PMID: 39378772; DOI: 10.1016/j.ultras.2024.107474
Abstract
Sparse matrix beamforming (SMB) is a computationally efficient reformulation of delay-and-sum (DAS) beamforming as a single sparse matrix multiplication. This reformulation can dovetail with machine learning platforms like TensorFlow and PyTorch that already support sparse matrix operations. In this work, using SMB principles, we present the development of beamforming-integrated neural networks (BINNs) that can rationally infer ultrasound images directly from pre-beamforming channel-domain radiofrequency (RF) datasets. To demonstrate feasibility, a toy BINN was first designed with two 2D-convolution layers placed before and after an SMB layer, respectively. This toy BINN correctly updated kernel weights in all convolution layers and demonstrated efficiency in both training (PyTorch: 133 ms, TensorFlow: 22 ms) and inference (PyTorch: 4 ms, TensorFlow: 5 ms). As an application demonstration, another BINN with two RF-domain convolution layers, an SMB layer, and three image-domain convolution layers was designed to infer high-quality B-mode images in vivo from single-shot plane-wave channel RF data. When trained using 31-angle compounded plane-wave images (3000 frames from 22 human volunteers), this BINN showed mean-square logarithmic error improvements of 21.3% and 431% in inferred B-mode image quality compared, respectively, to an image-to-image convolutional neural network (CNN) and an RF-to-image CNN with the same number of layers and learnable parameters (3,777). Overall, by including an SMB layer to incorporate prior knowledge of DAS beamforming, BINN shows potential as a new type of informed machine learning framework for ultrasound imaging.
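The SMB idea (DAS expressed as one sparse matrix multiply) can be sketched for a single zero-degree plane-wave transmit with nearest-sample delays and no apodization. All geometry, sound-speed, and sampling values here are illustrative assumptions, not parameters from this paper:

```python
import numpy as np
from scipy.sparse import csr_matrix

def das_sparse_matrix(elem_x, pixels, fs, c, n_samples):
    """Build a sparse matrix A such that image = A @ rf.ravel(), where rf
    has shape (n_elements, n_samples). Nearest-sample delays, zero-degree
    plane-wave transmit, no apodization: a toy SMB sketch."""
    rows, cols, vals = [], [], []
    for pi, (px, pz) in enumerate(pixels):
        for ei, ex in enumerate(elem_x):
            # transmit path: the plane wave reaches depth pz at pz / c;
            # receive path: the echo travels from (px, pz) back to element ei
            t = (pz + np.hypot(px - ex, pz)) / c
            s = int(round(t * fs))
            if 0 <= s < n_samples:
                rows.append(pi)
                cols.append(ei * n_samples + s)
                vals.append(1.0)
    return csr_matrix((vals, (rows, cols)),
                      shape=(len(pixels), len(elem_x) * n_samples))
```

Because the matrix is fixed for a given geometry, frameworks such as PyTorch or TensorFlow can treat it as a sparse layer sandwiched between convolution layers, which is the premise of the BINN design described above.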
Affiliation(s)
- Di Xiao
- Schlegel-UW Research Institute for Aging, University of Waterloo, Waterloo, Canada
- Alfred C H Yu
- Schlegel-UW Research Institute for Aging, University of Waterloo, Waterloo, Canada

7. Si M, Wu M, Wang Q. RADD-CycleGAN: unsupervised reconstruction of high-quality ultrasound image based on CycleGAN with residual attention and dual-domain discrimination. Physics in Medicine & Biology 2024;69:245018. PMID: 39622175; DOI: 10.1088/1361-6560/ad997f
Abstract
Plane wave (PW) imaging is fast but limited by poor imaging quality. Coherent PW compounding (CPWC) improves image quality but decreases the frame rate. In this study, we propose a modified CycleGAN model that combines a residual attention module with a space-frequency dual-domain discriminator, termed RADD-CycleGAN, to rapidly reconstruct high-quality ultrasound images. To enhance the ability to reconstruct image details, we design a hybrid dynamic and static channel selection process that precedes the frequency-domain discriminator. The low-quality images are generated by 3-angle CPWC, while the high-quality images serving as real images (ground truth) are generated by 75-angle CPWC. The training set includes unpaired images, whereas the images in the test set are paired to verify the validity and superiority of the proposed model. Finally, we design ablation and comparison experiments to evaluate model performance. Compared with the basic CycleGAN, our proposed method achieves better performance, with a 7.8% increase in the peak signal-to-noise ratio and a 22.2% increase in the structural similarity index measure. The experimental results show that our method achieves the best unsupervised reconstruction from low-quality images in comparison with several state-of-the-art methods.
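The reported 7.8% gain refers to the standard peak signal-to-noise ratio; a minimal reference implementation, where the `data_range` default is an assumption for images normalized to [0, 1]:

```python
import numpy as np

def psnr(reference, test, data_range=1.0):
    """Peak signal-to-noise ratio in dB between a reference image and a
    reconstructed image."""
    mse = np.mean((np.asarray(reference, float) - np.asarray(test, float)) ** 2)
    if mse == 0:
        return float("inf")   # identical images
    return 10.0 * np.log10(data_range ** 2 / mse)
```

PSNR is a pixel-wise fidelity measure; the structural similarity index measure (SSIM) also cited above additionally compares local luminance, contrast, and structure statistics.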
Affiliation(s)
- Mateng Si
- School of Biomedical Engineering, Southern Medical University, Guangzhou, Guangdong 510515, People's Republic of China
- Musheng Wu
- School of Biomedical Engineering, Southern Medical University, Guangzhou, Guangdong 510515, People's Republic of China
- Qing Wang
- School of Biomedical Engineering, Southern Medical University, Guangzhou, Guangdong 510515, People's Republic of China
- Guangdong Provincial Key Laboratory of Medical Image Processing, Southern Medical University, Guangzhou, Guangdong 510515, People's Republic of China
- Guangdong Province Engineering Laboratory for Medical Imaging and Diagnostic Technology, Southern Medical University, Guangzhou, Guangdong 510515, People's Republic of China

8. Gundersen EL, Smistad E, Struksnes Jahren T, Masoy SE. Hardware-Independent Deep Signal Processing: A Feasibility Study in Echocardiography. IEEE Transactions on Ultrasonics, Ferroelectrics, and Frequency Control 2024;71:1491-1500. PMID: 38781056; DOI: 10.1109/tuffc.2024.3404622
Abstract
Deep learning (DL) models have emerged as alternative methods to conventional ultrasound (US) signal processing, offering the potential to mimic signal processing chains, reduce inference time, and enable the portability of processing chains across hardware. This article proposes a DL model that replicates the fine-tuned B-mode signal processing chain of a high-end US system and explores the potential of using it with a different probe and a lower-end system. A deep neural network (DNN) was trained in a supervised manner to map raw beamformed in-phase and quadrature component data into processed images. The dataset consisted of 30,000 cardiac image frames acquired using the GE HealthCare Vivid E95 system with the 4Vc-D matrix array probe. The signal processing chain includes depth-dependent bandpass filtering, elevation compounding, frequency compounding, and image compression and filtering. The results indicate that a lightweight DL model can accurately replicate the signal processing chain of a commercial scanner for a given application. Evaluation on a 15-patient test dataset of about 3000 image frames gave a structural similarity index measure (SSIM) of 98.56 ± 0.49. Applying the DL model to data from another probe showed equivalent or improved image quality. This indicates that a single DL model may be used for a set of probes on a given system that targets the same application, which could be a cost-effective tuning and implementation strategy for vendors. Furthermore, the DL model enhanced image quality on a Verasonics dataset, suggesting the potential to port features from high-end US systems to lower-end counterparts.
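One representative step of such a B-mode chain, logarithmic compression of the detected envelope to a fixed display dynamic range, can be sketched as follows. The 60 dB range and [0, 1] output scaling are common conventions, not values taken from this paper:

```python
import numpy as np

def log_compress(envelope, dynamic_range_db=60.0):
    """Map envelope-detected amplitudes to [0, 1] display values over a
    fixed dynamic range, with 1 at the maximum amplitude."""
    env = np.asarray(envelope, float)
    env = env / env.max()                          # 0 dB at the brightest pixel
    db = 20.0 * np.log10(np.maximum(env, 1e-12))   # avoid log(0)
    return np.clip((db + dynamic_range_db) / dynamic_range_db, 0.0, 1.0)
```

A supervised DL model of the kind described above would learn this mapping, together with the filtering and compounding stages, directly from input/output image pairs rather than from an explicit formula.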

9. Yang Y, Duan H, Zheng Y. Improved Transcranial Plane-Wave Imaging With Learned Speed-of-Sound Maps. IEEE Transactions on Medical Imaging 2024;43:2191-2201. PMID: 38271172; DOI: 10.1109/tmi.2024.3358307
Abstract
Although transcranial ultrasound plane-wave imaging (PWI) has promising clinical application prospects, studies have shown that a variable speed-of-sound (SoS) seriously degrades ultrasound image quality. The mismatch between the conventional constant-velocity assumption and the actual SoS distribution leads to general blurring of ultrasound images. The optimization problem of reconstructing transcranial ultrasound images is often solved using iterative methods like full-waveform inversion, which are computationally expensive and rely on prior magnetic resonance imaging (MRI) or computed tomography (CT) information. In contrast, the multi-stencils fast marching (MSFM) method can produce accurate travel-time maps for a skull with heterogeneous acoustic speed. In this study, we first propose a convolutional neural network (CNN) to predict SoS maps of the skull from PWI channel data, and then use these maps to correct travel times to reduce transcranial aberration. To validate the performance of the proposed method, numerical, phantom, and intact human skull studies were conducted using a linear array transducer (L11-5v, 128 elements, pitch = 0.3 mm). Numerical simulations demonstrate that for point targets, the lateral resolution of MSFM-restored images increased by 65%, and the center position shift decreased by 89%. For cyst targets, the eccentricity of the fitted ellipse decreased by 75%, and the center position shift decreased by 58%. In the phantom study, the lateral resolution of MSFM-restored images increased by 49%, and the position shift was reduced by 1.72 mm. This pipeline, termed AutoSoS, thus shows the potential to correct distortions in real-time transcranial ultrasound imaging, as demonstrated by experiments on the intact human skull.

10. Bosco E, Spairani E, Toffali E, Meacci V, Ramalli A, Matrone G. A Deep Learning Approach for Beamforming and Contrast Enhancement of Ultrasound Images in Monostatic Synthetic Aperture Imaging: A Proof-of-Concept. IEEE Open Journal of Engineering in Medicine and Biology 2024;5:376-382. PMID: 38899024; PMCID: PMC11186640; DOI: 10.1109/ojemb.2024.3401098
Abstract
Goal: In this study, we demonstrate that a deep neural network (DNN) can be trained to reconstruct high-contrast images, resembling those produced by the multistatic synthetic aperture (SA) method with a 128-element array, leveraging pre-beamforming radiofrequency (RF) signals acquired through the monostatic SA approach. Methods: A U-Net was trained using 27,200 pairs of RF signals, simulated for a monostatic SA architecture, and their corresponding delay-and-sum beamformed target images in a multistatic 128-element SA configuration. Contrast was assessed on 500 simulated test images of anechoic/hyperechoic targets. The DNN's performance in reconstructing experimental images of a phantom and different in vivo scenarios was also tested. Results: Compared to the simple monostatic SA approach used to acquire the pre-beamforming signals, the DNN generated better-quality images with higher contrast and reduced noise/artifacts. Conclusions: The obtained results suggest the potential for the development of a single-channel setup that simultaneously provides good-quality images and reduces hardware complexity.
Affiliation(s)
- Edoardo Bosco
- Department of Electrical, Computer and Biomedical Engineering, University of Pavia, 27100 Pavia, Italy
- Edoardo Spairani
- Department of Electrical, Computer and Biomedical Engineering, University of Pavia, 27100 Pavia, Italy
- Eleonora Toffali
- Department of Electrical, Computer and Biomedical Engineering, University of Pavia, 27100 Pavia, Italy
- Valentino Meacci
- Department of Information Engineering, University of Florence, 50134 Florence, Italy
- Alessandro Ramalli
- Department of Information Engineering, University of Florence, 50134 Florence, Italy
- Giulia Matrone
- Department of Electrical, Computer and Biomedical Engineering, University of Pavia, 27100 Pavia, Italy

11. China D, Feng Z, Hooshangnejad H, Sforza D, Vagdargi P, Bell MAL, Uneri A, Sisniega A, Ding K. FLEX: FLexible Transducer With External Tracking for Ultrasound Imaging With Patient-Specific Geometry Estimation. IEEE Transactions on Biomedical Engineering 2024;71:1298-1307. PMID: 38048239; PMCID: PMC10998498; DOI: 10.1109/tbme.2023.3333216
Abstract
Flexible array transducers can adapt to patient-specific geometries during real-time ultrasound (US) image-guided therapy monitoring, making the system radiation-free and less user-dependent. Precise estimation of the flexible transducer's geometry is crucial for the delay-and-sum (DAS) beamforming algorithm to reconstruct B-mode US images. The primary innovation of this research is a system named FLexible transducer with EXternal tracking (FLEX) that estimates the position of each element of the flexible transducer and reconstructs precise US images. FLEX utilizes customized optical markers and a tracker to monitor the probe's geometry, employing a polygon fitting algorithm to estimate the position and azimuth angle of each transducer element. The traditional DAS algorithm then uses delays estimated from the tracked element positions to reconstruct US images from radio-frequency (RF) channel data. The proposed method was evaluated on phantoms and cadaveric specimens, demonstrating its clinical feasibility. Deviations in tracked probe geometry compared to ground truth were minimal: 0.50 ± 0.29 mm for the CIRS phantom, 0.54 ± 0.35 mm for the deformable phantom, and 0.36 ± 0.24 mm on the cadaveric specimen. Reconstructing the US image using the tracked probe geometry significantly outperformed the untracked geometry, as indicated by a Dice score of 95.1 ± 3.3% versus 62.3 ± 9.2% for the CIRS phantom. The proposed method achieved high accuracy (<0.5 mm error) in tracking element positions for various random curvatures applicable to clinical deployment. The evaluation results show that the proposed radiation-free method can effectively reconstruct US images and assist in monitoring image-guided therapy with minimal user dependency.

12. Zhao L, Fong TC, Bell MAL. Detection of COVID-19 features in lung ultrasound images using deep neural networks. Communications Medicine 2024;4:41. PMID: 38467808; PMCID: PMC10928066; DOI: 10.1038/s43856-024-00463-5
Abstract
BACKGROUND Deep neural networks (DNNs) to detect COVID-19 features in lung ultrasound B-mode images have primarily relied on either in vivo or simulated images as training data. However, in vivo images suffer from limited access to required manual labeling of thousands of training image examples, and simulated images can suffer from poor generalizability to in vivo images due to domain differences. We address these limitations and identify the best training strategy. METHODS We investigated in vivo COVID-19 feature detection with DNNs trained on our carefully simulated datasets (40,000 images), publicly available in vivo datasets (174 images), in vivo datasets curated by our team (958 images), and a combination of simulated and internal or external in vivo datasets. Seven DNN training strategies were tested on in vivo B-mode images from COVID-19 patients. RESULTS Here, we show that Dice similarity coefficients (DSCs) between ground truth and DNN predictions are maximized when simulated data are mixed with external in vivo data and tested on internal in vivo data (i.e., 0.482 ± 0.211), compared with using only simulated B-mode image training data (i.e., 0.464 ± 0.230) or only external in vivo B-mode training data (i.e., 0.407 ± 0.177). Additional maximization is achieved when a separate subset of the internal in vivo B-mode images are included in the training dataset, with the greatest maximization of DSC (and minimization of required training time, or epochs) obtained after mixing simulated data with internal and external in vivo data during training, then testing on the held-out subset of the internal in vivo dataset (i.e., 0.735 ± 0.187). CONCLUSIONS DNNs trained with simulated and in vivo data are promising alternatives to training with only real or only simulated data when segmenting in vivo COVID-19 lung ultrasound features.
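The Dice similarity coefficient used above compares a predicted segmentation mask against its ground truth; a minimal version for binary masks (the empty-mask convention here is an assumption):

```python
import numpy as np

def dice(mask_a, mask_b):
    """Dice similarity coefficient between two binary masks:
    2|A ∩ B| / (|A| + |B|), with 1.0 returned for two empty masks."""
    a = np.asarray(mask_a, bool)
    b = np.asarray(mask_b, bool)
    denom = a.sum() + b.sum()
    if denom == 0:
        return 1.0
    return 2.0 * np.logical_and(a, b).sum() / denom
```

Values range from 0 (no overlap) to 1 (perfect agreement), so the training-strategy comparison above amounts to ranking mean DSC across held-out in vivo frames.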
Affiliation(s)
- Lingyi Zhao
- Department of Electrical and Computer Engineering, Johns Hopkins University, Baltimore, MD, USA
- Tiffany Clair Fong
- Department of Emergency Medicine, Johns Hopkins Medicine, Baltimore, MD, USA
- Muyinatu A Lediju Bell
- Department of Electrical and Computer Engineering, Johns Hopkins University, Baltimore, MD, USA
- Department of Biomedical Engineering, Johns Hopkins University, Baltimore, MD, USA
- Department of Computer Science, Johns Hopkins University, Baltimore, MD, USA

13. Pathour T, Ma L, Strand DW, Gahan J, Johnson BA, Sirsi SR, Fei B. Feature Extraction of Ultrasound Radiofrequency Data for the Classification of the Peripheral Zone of Human Prostate. Proceedings of SPIE 2024;12932:129321F. PMID: 38707197; PMCID: PMC11069342; DOI: 10.1117/12.3008643
Abstract
Prostate cancer ranks among the most prevalent types of cancer in males, prompting a demand for early detection and noninvasive diagnostic techniques. This paper explores the potential of ultrasound radiofrequency (RF) data to study different anatomic zones of the prostate, leveraging RF data's capacity to capture nuanced acoustic information from clinical transducers. The research focuses on the peripheral zone due to its high susceptibility to cancer. The feasibility of utilizing RF data for classification is evaluated using ex vivo whole-prostate specimens from human patients. Ultrasound data, acquired using a phased array transducer, are processed and correlated with B-mode images. A range filter is applied to highlight the peripheral zone's distinct features, observed in both the RF data and 3D plots. Radiomic features were extracted from the RF data to enhance tissue characterization and segmentation. The study demonstrates the ability of RF data to differentiate tissue structures and emphasizes its potential for prostate tissue classification, addressing current limitations of ultrasound imaging for prostate management. These findings advocate for the integration of RF data into ultrasound diagnostics, potentially transforming prostate cancer diagnosis and management in the future.
Affiliation(s)
- Teja Pathour
- Center for Imaging and Surgical Innovation, The University of Texas at Dallas, TX
- Department of Bioengineering, The University of Texas at Dallas, TX
- Ling Ma
- Center for Imaging and Surgical Innovation, The University of Texas at Dallas, TX
- Department of Bioengineering, The University of Texas at Dallas, TX
- Douglas W. Strand
- Department of Urology, The University of Texas Southwestern Medical Center, Dallas, TX
- Jeffrey Gahan
- Department of Urology, The University of Texas Southwestern Medical Center, Dallas, TX
- Brett A. Johnson
- Department of Urology, The University of Texas Southwestern Medical Center, Dallas, TX
- Shashank R. Sirsi
- Center for Imaging and Surgical Innovation, The University of Texas at Dallas, TX
- Department of Bioengineering, The University of Texas at Dallas, TX
- Baowei Fei
- Center for Imaging and Surgical Innovation, The University of Texas at Dallas, TX
- Department of Bioengineering, The University of Texas at Dallas, TX
- Department of Radiology, The University of Texas Southwestern Medical Center, Dallas, TX
14
Sharma A, Oluyemi E, Myers K, Ambinder E, Bell MAL. Spatial Coherence Approaches to Distinguish Suspicious Mass Contents in Fundamental and Harmonic Breast Ultrasound Images. IEEE TRANSACTIONS ON ULTRASONICS, FERROELECTRICS, AND FREQUENCY CONTROL 2024; 71:70-84. [PMID: 37956000 PMCID: PMC10851341 DOI: 10.1109/tuffc.2023.3332207] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 11/15/2023]
Abstract
When compared to fundamental B-mode imaging, coherence-based beamforming and harmonic imaging are independently known to reduce acoustic clutter, distinguish solid from fluid content in indeterminate breast masses, and thereby reduce unnecessary biopsies during breast cancer diagnosis. However, a systematic investigation of independent and combined coherence beamforming and harmonic imaging approaches is necessary for clinical deployment of the optimal approach. Therefore, we compare the performance of fundamental and harmonic images created with short-lag spatial coherence (SLSC), M-weighted SLSC (M-SLSC), SLSC combined with robust principal component analysis with no M-weighting (r-SLSC), and r-SLSC with M-weighting (R-SLSC), relative to traditional fundamental and harmonic B-mode images, when distinguishing solid from fluid breast masses. Raw channel data acquired from 40 breast masses (28 solid, 7 fluid, 5 mixed) were beamformed and analyzed. The contrast of fluid masses was better with fundamental rather than harmonic coherence imaging, due to the lower spatial coherence within the fluid masses in the fundamental coherence images. Relative to SLSC imaging, M-SLSC, r-SLSC, and R-SLSC imaging provided similar contrast across multiple masses (with the exception of clinically challenging complicated cysts) and minimized the range of generalized contrast-to-noise ratios (gCNRs) of fluid masses, yet required additional computational resources. Among the eight coherence imaging modes compared, fundamental SLSC imaging best identified fluid versus solid breast mass contents, outperforming fundamental and harmonic B-mode imaging. With fundamental SLSC images, the specificity and sensitivity to identify fluid masses using the reader-independent metrics of contrast difference, mean lag-one coherence (LOC), and gCNR were 0.86 and 1, 1 and 0.89, and 1 and 1, respectively. Results demonstrate that fundamental SLSC imaging and gCNR (or LOC if no coherence image or background region of interest is introduced) have the greatest potential to impact clinical decisions and improve the diagnostic certainty of breast mass contents. These observations are additionally anticipated to extend to masses in other organs.
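As a concrete illustration of the reader-independent metric recurring in these studies, a minimal sketch of the generalized contrast-to-noise ratio follows: gCNR is one minus the overlap of the normalized amplitude histograms of two regions, so 1 indicates perfectly separable regions and 0 indicates identical distributions. The region amplitudes and bin count below are illustrative assumptions, not data from the paper.

```python
import numpy as np

def gcnr(region_a, region_b, bins=256):
    """gCNR = 1 - overlap of the two regions' normalized amplitude histograms."""
    lo = min(region_a.min(), region_b.min())
    hi = max(region_a.max(), region_b.max())
    h_a, _ = np.histogram(region_a, bins=bins, range=(lo, hi))
    h_b, _ = np.histogram(region_b, bins=bins, range=(lo, hi))
    p_a = h_a / h_a.sum()
    p_b = h_b / h_b.sum()
    return 1.0 - np.minimum(p_a, p_b).sum()

# Illustrative pixel amplitudes: a dark fluid-like region vs. bright speckle.
rng = np.random.default_rng(0)
fluid = rng.normal(0.1, 0.05, 10_000)
solid = rng.normal(0.8, 0.10, 10_000)
print(gcnr(fluid, solid))  # close to 1: the two distributions barely overlap
```

Because it depends only on histogram overlap, gCNR is insensitive to monotonic display transforms such as log compression, which is what makes it attractive as an objective discriminator.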
15
Lyu Y, Jiang X, Xu Y, Hou J, Zhao X, Zhu X. ARU-GAN: U-shaped GAN based on Attention and Residual connection for super-resolution reconstruction. Comput Biol Med 2023; 164:107316. [PMID: 37595521 DOI: 10.1016/j.compbiomed.2023.107316] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/12/2022] [Revised: 06/22/2023] [Accepted: 08/07/2023] [Indexed: 08/20/2023]
Abstract
Plane-wave ultrasound imaging offers high-speed acquisition but limited image quality. To improve spatial resolution, plane-wave compounding methods are used, which often compromise the temporal resolution. Herein, we propose ARU-GAN, a super-resolution reconstruction model based on residual connectivity and attention mechanisms, to address this issue. ARU-GAN comprises a Full-scale Skip-connection U-shaped Generator (FSUG) with an attention mechanism and a Residual Attention Patch Discriminator (RAPD). The former captures global and local features of the image by using full-scale skip-connections and attention mechanisms. The latter focuses on changes in the image at different scales to enhance its discriminative ability at the patch level. ARU-GAN was trained using a combined loss function on the Plane-Wave Imaging Challenge in Medical Ultrasound (PICMUS) 2016 dataset, which includes three types of targets: point targets, cyst targets, and in-vivo targets. Compared to Coherent Plane-Wave Compounding (CPWC), ARU-GAN reduced the Full Width at Half Maximum (FWHM) by 5.78%-20.30% on point targets, and improved Contrast (CR) by 7.59-11.29 percentage points and Contrast-to-Noise Ratio (CNR) by 30.58%-45.22% on cyst targets. On in-vivo targets, ARU-GAN improved the Peak Signal-to-Noise Ratio (PSNR) by 11.94%, the Complex-Wavelet Structural Similarity Index Measure (CW-SSIM) by 17.11%, and the Normalized Cross-Correlation (NCC) by at least 2.17% compared to existing deep learning methods. In conclusion, ARU-GAN is a promising model for super-resolution reconstruction of plane-wave medical ultrasound images. It provides a novel solution for improving image quality, which is essential for clinical practice.
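The evaluation metrics named above are standard image-quality measures. A minimal sketch of common definitions follows; exact definitions vary across papers, so treat these formulas (and the Gaussian test profile) as illustrative assumptions rather than the precise ones used by ARU-GAN:

```python
import numpy as np

def fwhm(profile, dx=1.0):
    """Full width at half maximum of a 1-D point-target profile."""
    half = profile.max() / 2.0
    above = np.where(profile >= half)[0]
    return (above[-1] - above[0]) * dx

def contrast_db(cyst, background):
    """Contrast of a dark cyst region against surrounding speckle, in dB."""
    return 20.0 * np.log10(np.mean(background) / np.mean(cyst))

def cnr(cyst, background):
    """Contrast-to-noise ratio of the two regions."""
    return abs(np.mean(cyst) - np.mean(background)) / np.sqrt(
        np.var(cyst) + np.var(background))

# A Gaussian point-spread profile has FWHM = 2*sqrt(2*ln 2)*sigma ~ 2.355*sigma.
x = np.linspace(-5.0, 5.0, 1001)
print(fwhm(np.exp(-x**2 / 2.0), dx=x[1] - x[0]))  # ~2.34 on this grid (theory: 2.355)
```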
Affiliation(s)
- Yuchao Lyu
- College of Information Science and Technology, Qingdao University of Science and Technology, Qingdao, Shandong, 266061, China.
- Xi Jiang
- College of Information Science and Technology, Qingdao University of Science and Technology, Qingdao, Shandong, 266061, China.
- Yinghao Xu
- College of Information Science and Technology, Qingdao University of Science and Technology, Qingdao, Shandong, 266061, China.
- Junyi Hou
- College of Information Science and Technology, Qingdao University of Science and Technology, Qingdao, Shandong, 266061, China.
- Xiaoyan Zhao
- College of Information Science and Technology, Qingdao University of Science and Technology, Qingdao, Shandong, 266061, China.
- Xijun Zhu
- College of Information Science and Technology, Qingdao University of Science and Technology, Qingdao, Shandong, 266061, China.
16
Goudarzi S, Whyte J, Boily M, Towers A, Kilgour RD, Rivaz H. Segmentation of Arm Ultrasound Images in Breast Cancer-Related Lymphedema: A Database and Deep Learning Algorithm. IEEE Trans Biomed Eng 2023; 70:2552-2563. [PMID: 37028332 DOI: 10.1109/tbme.2023.3253646] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 03/09/2023]
Abstract
OBJECTIVE Breast cancer treatment often causes the removal of or damage to lymph nodes of the patient's lymphatic drainage system. This side effect is the origin of Breast Cancer-Related Lymphedema (BCRL), a noticeable increase in excess arm volume. Ultrasound imaging is a preferred modality for the diagnosis and progression monitoring of BCRL because of its low cost, safety, and portability. As the affected and unaffected arms look similar in B-mode ultrasound images, the thicknesses of the skin, subcutaneous fat, and muscle have been shown to be important biomarkers for this task. The segmentation masks are also helpful in monitoring longitudinal changes in the morphology and mechanical properties of tissue layers. METHODS For the first time, a publicly available ultrasound dataset is provided, containing the Radio-Frequency (RF) data of 39 subjects and manual segmentation masks by two experts. Inter- and intra-observer reproducibility studies performed on the segmentation maps show high Dice Score Coefficients (DSC) of 0.94±0.08 and 0.92±0.06, respectively. A Gated Shape Convolutional Neural Network (GSCNN) is modified for precise automatic segmentation of tissue layers, and its generalization performance is improved by the CutMix augmentation strategy. RESULTS We obtained an average DSC of 0.87±0.11 on the test set, which confirms the high performance of the method. CONCLUSION Automatic segmentation can pave the way for convenient and accessible staging of BCRL, and our dataset can facilitate the development and validation of such methods. SIGNIFICANCE Timely diagnosis and treatment of BCRL are crucial for preventing irreversible damage.
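The Dice Score Coefficient used throughout the segmentation entries above is a simple overlap measure between two binary masks; a minimal, self-contained sketch follows (the toy masks are illustrative, not drawn from the dataset):

```python
import numpy as np

def dice(mask_a, mask_b):
    """Dice similarity coefficient between two binary masks:
    2 * |A ∩ B| / (|A| + |B|)."""
    a = np.asarray(mask_a, dtype=bool)
    b = np.asarray(mask_b, dtype=bool)
    inter = np.logical_and(a, b).sum()
    total = a.sum() + b.sum()
    return 2.0 * inter / total if total else 1.0

# Two 6x6 squares offset by one pixel: 25 overlapping pixels out of 36 each.
a = np.zeros((10, 10), dtype=bool); a[2:8, 2:8] = True
b = np.zeros((10, 10), dtype=bool); b[3:9, 3:9] = True
print(dice(a, b))  # 2*25/(36+36) ≈ 0.694
```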
17
Mamalakis M, Garg P, Nelson T, Lee J, Swift AJ, Wild JM, Clayton RH. Artificial Intelligence framework with traditional computer vision and deep learning approaches for optimal automatic segmentation of left ventricle with scar. Artif Intell Med 2023; 143:102610. [PMID: 37673578 DOI: 10.1016/j.artmed.2023.102610] [Citation(s) in RCA: 2] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/22/2022] [Revised: 05/17/2023] [Accepted: 06/06/2023] [Indexed: 09/08/2023]
Abstract
Automatic segmentation of the cardiac left ventricle with scars remains a challenging and clinically significant task, as it is essential for patient diagnosis and treatment pathways. This study aimed to develop a novel framework and cost function to achieve optimal automatic segmentation of the left ventricle with scars using LGE-MRI images. To ensure the generalization of the framework, an unbiased validation protocol was established using out-of-distribution (OOD) internal and external validation cohorts, and intra-observation and inter-observer variability ground truths. The framework employs a combination of traditional computer vision techniques and deep learning, to achieve optimal segmentation results. The traditional approach uses multi-atlas techniques, active contours, and k-means methods, while the deep learning approach utilizes various deep learning techniques and networks. The study found that the traditional computer vision technique delivered more accurate results than deep learning, except in cases where there was breath misalignment error. The optimal solution of the framework achieved robust and generalized results with Dice scores of 82.8 ± 6.4% and 72.1 ± 4.6% in the internal and external OOD cohorts, respectively. The developed framework offers a high-performance solution for automatic segmentation of the left ventricle with scars using LGE-MRI. Unlike existing state-of-the-art approaches, it achieves unbiased results across different hospitals and vendors without the need for training or tuning in hospital cohorts. This framework offers a valuable tool for experts to accomplish the task of fully automatic segmentation of the left ventricle with scars based on a single-modality cardiac scan.
Affiliation(s)
- Michail Mamalakis
- Insigneo Institute for in-silico Medicine, University of Sheffield, Sheffield, S1 4DP, UK; Department of Computer Science, University of Sheffield, Regent Court, Sheffield, S1 4DP, UK.
- Pankaj Garg
- Department of Cardiology, Sheffield Teaching Hospitals, Sheffield S5 7AU, UK
- Tom Nelson
- Department of Cardiology, Sheffield Teaching Hospitals, Sheffield S5 7AU, UK
- Justin Lee
- Department of Cardiology, Sheffield Teaching Hospitals, Sheffield S5 7AU, UK
- Andrew J Swift
- Department of Computer Science, University of Sheffield, Regent Court, Sheffield, S1 4DP, UK; Department of Infection, Immunity & Cardiovascular Disease, University of Sheffield, Sheffield, UK
- James M Wild
- Insigneo Institute for in-silico Medicine, University of Sheffield, Sheffield, S1 4DP, UK; Polaris, Imaging Sciences, Department of Infection, Immunity and Cardiovascular Disease, University of Sheffield, Sheffield, UK
- Richard H Clayton
- Insigneo Institute for in-silico Medicine, University of Sheffield, Sheffield, S1 4DP, UK; Department of Computer Science, University of Sheffield, Regent Court, Sheffield, S1 4DP, UK.
18
Peng T, Gu Y, Zhang J, Dong Y, Di G, Wang W, Zhao J, Cai J. A Robust and Explainable Structure-Based Algorithm for Detecting the Organ Boundary From Ultrasound Multi-Datasets. J Digit Imaging 2023; 36:1515-1532. [PMID: 37231289 PMCID: PMC10406792 DOI: 10.1007/s10278-023-00839-4] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/22/2023] [Revised: 04/19/2023] [Accepted: 04/20/2023] [Indexed: 05/27/2023] Open
Abstract
Detecting the organ boundary in an ultrasound image is challenging because of the poor contrast of ultrasound images and the existence of imaging artifacts. In this study, we developed a coarse-to-refinement architecture for multi-organ ultrasound segmentation. First, we integrated the principal curve-based projection stage into an improved neutrosophic mean shift-based algorithm to acquire the data sequence, for which we utilized a limited amount of prior seed point information as the approximate initialization. Second, a distribution-based evolution technique was designed to aid in the identification of a suitable learning network. Then, utilizing the data sequence as the input of the learning network, we obtained the optimal learning network after training. Finally, a scaled exponential linear unit-based interpretable mathematical model of the organ boundary was expressed via the parameters of a fraction-based learning network. The experimental outcomes indicated that our algorithm 1) achieved more satisfactory segmentation outcomes than state-of-the-art algorithms, with a Dice score coefficient of 96.68 ± 2.2%, a Jaccard index of 95.65 ± 2.16%, and an accuracy of 96.54 ± 1.82%, and 2) discovered missing or blurry areas.
Affiliation(s)
- Tao Peng
- School of Future Science and Engineering, Soochow University, Suzhou, China
- Department of Health Technology and Informatics, Hong Kong Polytechnic University, Hong Kong, China
- Department of Radiation Oncology, UT Southwestern Medical Center, Dallas, TX, USA
- Yidong Gu
- School of Future Science and Engineering, Soochow University, Suzhou, China
- Department of Medical Ultrasound, the Affiliated Suzhou Hospital of Nanjing Medical University, Suzhou Municipal Hospital, Suzhou, Jiangsu, China
- Ji Zhang
- Department of Radiology, The Affiliated Taizhou People’s Hospital of Nanjing Medical University, Taizhou, Jiangsu Province, China
- Yan Dong
- Department of Ultrasonography, The First Affiliated Hospital of Soochow University, Suzhou, Jiangsu Province, China
- Gongye Di
- Department of Ultrasonic, The Affiliated Taizhou People’s Hospital of Nanjing Medical University, Taizhou, Jiangsu Province, China
- Wenjie Wang
- Department of Radio-Oncology, The Affiliated Suzhou Hospital of Nanjing Medical University, Suzhou Municipal Hospital, Suzhou, Jiangsu, China
- Jing Zhao
- Department of Ultrasound, Tsinghua University Affiliated Beijing Tsinghua Changgung Hospital, Beijing, China
- Jing Cai
- Department of Health Technology and Informatics, Hong Kong Polytechnic University, Hong Kong, China
19
Wasih M, Ahmad S, Almekkawy M. A robust cascaded deep neural network for image reconstruction of single plane wave ultrasound RF data. ULTRASONICS 2023; 132:106981. [PMID: 36913830 DOI: 10.1016/j.ultras.2023.106981] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Subscribe] [Scholar Register] [Received: 10/06/2022] [Revised: 03/02/2023] [Accepted: 03/03/2023] [Indexed: 05/29/2023]
Abstract
Reconstruction of ultrasound images from single plane wave Radio Frequency (RF) data is a challenging task. The traditional Delay and Sum (DAS) method produces an image with low resolution and contrast if employed with RF data from only a single plane wave. A Coherent Compounding (CC) method that reconstructs the image by coherently summing the individual DAS images was proposed to enhance the image quality. However, CC relies on a large number of plane waves to accurately sum the individual DAS images; hence it produces high-quality images but at a low frame rate that may not be suitable for time-demanding applications. Therefore, there is a need for a method that can create a high-quality image at higher frame rates. Furthermore, the method needs to be robust against the transmission angle of the plane wave. To reduce the method's dependence on the input angle, we propose to unify the RF data at different angles by learning a linear transformation from each angled acquisition to a common 0° acquisition. We further propose a cascade of two independent neural networks to reconstruct an image, similar in quality to CC, from a single plane wave. The first network, denoted "PixelNet", is a fully Convolutional Neural Network (CNN) that takes the transformed, time-delayed RF data as input. PixelNet learns optimal pixel weights that are multiplied element-wise with the single-angle DAS image. The second network is a conditional Generative Adversarial Network (cGAN) used to further enhance the image quality. Our networks were trained on the publicly available PICMUS and CPWC datasets and evaluated on a completely separate dataset, CUBDL, obtained with different acquisition settings than the training data. The results on this testing dataset demonstrate the networks' ability to generalize well on unseen data, with frame rates better than the CC method. This paves the way for applications that require high-quality images reconstructed at higher frame rates.
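The DAS baseline described above can be sketched in a few lines. This is a generic textbook implementation for a single 0° plane-wave transmit with nearest-sample interpolation; the array geometry, sampling values, and synthetic point scatterer are illustrative assumptions, not the paper's setup:

```python
import numpy as np

def das_plane_wave(rf, elem_x, grid_x, grid_z, c, fs):
    """Delay-and-sum for one 0-degree plane-wave transmit.
    rf: (n_elements, n_samples) channel data; elements lie on z = 0.
    The transmit delay to a pixel is z/c; the receive delay is the
    element-to-pixel distance divided by c."""
    n_el, n_s = rf.shape
    img = np.zeros((len(grid_z), len(grid_x)))
    for iz, z in enumerate(grid_z):
        for ix, x in enumerate(grid_x):
            tof = (z + np.hypot(elem_x - x, z)) / c
            idx = np.clip((tof * fs).astype(int), 0, n_s - 1)
            img[iz, ix] = rf[np.arange(n_el), idx].sum()
    return img

# Synthetic echo from a single point scatterer at (0 m, 20 mm depth).
elem_x = np.linspace(-0.01, 0.01, 32)
c, fs = 1540.0, 50e6
samp = (((0.02 + np.hypot(elem_x, 0.02)) / c) * fs).astype(int)
rf = np.zeros((32, 4000))
rf[np.arange(32), samp] = 1.0
img = das_plane_wave(rf, elem_x, np.linspace(-0.005, 0.005, 11),
                     np.linspace(0.015, 0.025, 11), c, fs)
# The beamformed image peaks at the scatterer's grid position.
```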
Affiliation(s)
- Mohammad Wasih
- The Pennsylvania State University, University Park, PA, 16802, USA.
- Sahil Ahmad
- The Pennsylvania State University, University Park, PA, 16802, USA.
20
Seoni S, Matrone G, Meiburger KM. Texture analysis of ultrasound images obtained with different beamforming techniques and dynamic ranges - A robustness study. ULTRASONICS 2023; 131:106940. [PMID: 36791530 DOI: 10.1016/j.ultras.2023.106940] [Citation(s) in RCA: 2] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 03/27/2022] [Revised: 01/26/2023] [Accepted: 01/29/2023] [Indexed: 06/18/2023]
Abstract
Texture analysis of medical images gives quantitative information about tissue characterization for possible pathology discrimination. Ultrasound B-mode images are generated through a process called beamforming; to obtain the final 8-bit image, a dynamic range value must then be set. It is currently unknown how different beamforming techniques or dynamic range values may alter the final image texture. We provide here a robustness analysis of first- and higher-order texture features using six beamforming methods and seven dynamic range values, on experimental phantom and in vivo musculoskeletal images acquired using two different ultrasound research scanners. To investigate the repeatability of the texture parameters, we applied multivariate analysis of variance (MANOVA) and estimated the intraclass correlation coefficient (ICC) on the texture features calculated on the B-mode images created with different beamforming methods and dynamic range values. We demonstrated the high repeatability of texture features when varying the dynamic range, and showed through the MANOVA analysis that texture features can differentiate between beamforming methods, hinting at the potential future clinical application of specific beamformers.
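The dynamic-range step this study varies is ordinary log compression of the beamformed envelope into an 8-bit display image; a minimal sketch follows (the 60 dB default and the test amplitudes are illustrative assumptions):

```python
import numpy as np

def to_bmode(envelope, dynamic_range_db=60.0):
    """Log-compress an envelope image into an 8-bit B-mode image.
    Amplitudes more than `dynamic_range_db` below the maximum map to 0;
    the maximum maps to 255."""
    env = envelope / envelope.max()
    db = 20.0 * np.log10(np.maximum(env, 1e-12))  # floor avoids log(0)
    db = np.clip(db, -dynamic_range_db, 0.0)
    return ((db + dynamic_range_db) / dynamic_range_db * 255.0).astype(np.uint8)

# 0 dB, -30 dB, and -60 dB amplitudes under a 60 dB dynamic range:
img = to_bmode(np.array([[1.0, 10.0 ** -1.5, 1e-3]]))
print(img)  # [[255 127   0]]
```

Changing `dynamic_range_db` remaps every gray level nonlinearly, which is exactly why the study checks whether texture features remain repeatable across dynamic range settings.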
Affiliation(s)
- Silvia Seoni
- Polito(BIO)Med Lab, Biolab, Dept. of Electronics and Telecommunications, Politecnico di Torino, Torino, Italy.
- Giulia Matrone
- Dept. of Electrical, Computer and Biomedical Engineering, University of Pavia, Pavia, Italy
- Kristen M Meiburger
- Polito(BIO)Med Lab, Biolab, Dept. of Electronics and Telecommunications, Politecnico di Torino, Torino, Italy
21
Molinier N, Painchaud-April G, Le Duff A, Toews M, Bélanger P. Ultrasonic imaging using conditional generative adversarial networks. ULTRASONICS 2023; 133:107015. [PMID: 37269681 DOI: 10.1016/j.ultras.2023.107015] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Subscribe] [Scholar Register] [Received: 01/10/2023] [Revised: 03/17/2023] [Accepted: 04/11/2023] [Indexed: 06/05/2023]
Abstract
The Full Matrix Capture (FMC) and Total Focusing Method (TFM) combination is often considered the gold standard in ultrasonic nondestructive testing; however, it may be impractical due to the time required to gather and process the FMC, particularly for high-cadence inspections. This study proposes replacing conventional FMC acquisition and TFM processing with a single zero-degree plane wave (PW) insonification and a conditional Generative Adversarial Network (cGAN) trained to produce TFM-like images. Three models with different cGAN architectures and loss formulations were tested in different scenarios. Their performance was compared with conventional TFM computed from FMC. The proposed cGANs were able to recreate TFM-like images at the same resolution while improving the contrast in more than 94% of the reconstructions relative to conventional TFM. Indeed, thanks to the use of a bias in the cGANs' training, the contrast was systematically increased through a reduction of the background noise level and the elimination of some artifacts. Finally, the proposed method reduced the computation time and file size by factors of 120 and 75, respectively.
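For context, the TFM baseline focuses every pixel on both transmit and receive by indexing the full matrix of A-scans at the round-trip time of flight. A minimal sketch follows; nearest-sample interpolation, a contact array on z = 0, and the synthetic steel-like scatterer are illustrative assumptions:

```python
import numpy as np

def tfm(fmc, elem_x, grid_x, grid_z, c, fs):
    """Total Focusing Method: for every pixel, sum the full-matrix-capture
    A-scans fmc[tx, rx, :] at the transmit-to-pixel-to-receive time of
    flight. fmc: (n_tx, n_rx, n_samples); elements lie on z = 0."""
    n_tx, n_rx, n_s = fmc.shape
    tx_idx = np.arange(n_tx)[:, None]
    rx_idx = np.arange(n_rx)[None, :]
    img = np.zeros((len(grid_z), len(grid_x)))
    for iz, z in enumerate(grid_z):
        for ix, x in enumerate(grid_x):
            d = np.hypot(elem_x - x, z)           # element-to-pixel distances
            tof = (d[:, None] + d[None, :]) / c   # (n_tx, n_rx) round trips
            idx = np.clip((tof * fs).astype(int), 0, n_s - 1)
            img[iz, ix] = fmc[tx_idx, rx_idx, idx].sum()
    return img

# Synthetic FMC of a single point scatterer at (0 m, 20 mm depth).
elem_x = np.linspace(-0.01, 0.01, 8)
c, fs = 5900.0, 50e6                  # longitudinal wave speed in steel, m/s
d = np.hypot(elem_x, 0.02)
samp = (((d[:, None] + d[None, :]) / c) * fs).astype(int)
fmc = np.zeros((8, 8, 1000))
fmc[np.arange(8)[:, None], np.arange(8)[None, :], samp] = 1.0
img = tfm(fmc, elem_x, np.linspace(-0.005, 0.005, 11),
          np.linspace(0.015, 0.025, 11), c, fs)
```

The nested loops over all transmit-receive pairs are what makes TFM expensive, which is the cost the cGAN approach above trades away by using a single plane-wave shot.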
Affiliation(s)
- Nathan Molinier
- PULÉTS, École de Technologie Supérieure (ÉTS), Montréal, H3C 1K3, QC, Canada.
- Alain Le Duff
- Evident Industrial (formerly Olympus IMS), Québec, G1P 0B3, QC, Canada.
- Matthew Toews
- Department of Systems Engineering, École de Technologie Supérieure, Université du Québec, Montréal, H3C 1K3, QC, Canada.
- Pierre Bélanger
- PULÉTS, École de Technologie Supérieure (ÉTS), Montréal, H3C 1K3, QC, Canada; Department of Mechanical Engineering, École de Technologie Supérieure, Université du Québec, Montréal, H3C 1K3, QC, Canada.
22
Jin G, Zhu H, Jiang D, Li J, Su L, Li J, Gao F, Cai X. A Signal-Domain Object Segmentation Method for Ultrasound and Photoacoustic Computed Tomography. IEEE TRANSACTIONS ON ULTRASONICS, FERROELECTRICS, AND FREQUENCY CONTROL 2023; 70:253-265. [PMID: 37015663 DOI: 10.1109/tuffc.2022.3232174] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/19/2023]
Abstract
Image segmentation is important in improving the diagnostic capability of ultrasound computed tomography (USCT) and photoacoustic computed tomography (PACT), as it can be included in the image reconstruction process to improve image quality and quantification abilities. Segmenting the imaged object out of the background using image-domain methods is easily complicated by low contrast, noise, and artifacts in the reconstructed image. Here, we introduce a new signal-domain object segmentation method for USCT and PACT that does not require prior image reconstruction and is automatic, robust, computationally efficient, accurate, and straightforward. We first establish the relationship between the time of flight (TOF) of the received first-arrival waves and the object's boundary, which is described by ellipse equations. Then, we show that the ellipses are tangent to the boundary. By looking for tangent points on the common tangent of neighboring ellipses, the boundary can be approximated with high fidelity. Imaging experiments on cross sections of human fingers and mice showed that our method provided segmentations equivalent to or better than the optimal ones obtained by active contours. In summary, our method greatly reduces the overall complexity of object segmentation and shows great potential in eliminating user dependency without sacrificing segmentation accuracy. The method can further be seamlessly incorporated into algorithms for other processing purposes in USCT and PACT, such as high-quality image reconstruction.
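The geometric core of the method is the first relation in the abstract: a first-arrival time of flight constrains the boundary to an ellipse whose foci are the transmitter and receiver. A minimal sketch of that relation follows (the positions, sound speed, and tolerance are illustrative assumptions; the common-tangent search of the full method is not shown):

```python
import numpy as np

def on_tof_ellipse(p, tx, rx, tof, c, tol=1e-9):
    """True if point p lies on the ellipse implied by a first-arrival
    wave: |p - tx| + |p - rx| = c * tof, with foci at transmitter tx
    and receiver rx."""
    p, tx, rx = (np.asarray(v, dtype=float) for v in (p, tx, rx))
    path = np.linalg.norm(p - tx) + np.linalg.norm(p - rx)
    return abs(path - c * tof) < tol

# A boundary point at (0, 1) between foci at (-1, 0) and (1, 0):
c = 1500.0                       # sound speed in water, m/s
tof = 2.0 * np.sqrt(2.0) / c     # total path length divided by c
print(on_tof_ellipse((0.0, 1.0), (-1.0, 0.0), (1.0, 0.0), tof, c))  # True
```

Every transmit-receive pair contributes one such ellipse, and the boundary is recovered as the envelope these ellipses are tangent to.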
23
Fouad M, Ghany MAAE, Schmitz G. A Single-Shot Harmonic Imaging Approach Utilizing Deep Learning for Medical Ultrasound. IEEE TRANSACTIONS ON ULTRASONICS, FERROELECTRICS, AND FREQUENCY CONTROL 2023; 70:237-252. [PMID: 37018250 DOI: 10.1109/tuffc.2023.3234230] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/19/2023]
Abstract
Tissue harmonic imaging (THI) is an invaluable tool in clinical ultrasound due to its enhanced contrast resolution and reduced reverberation clutter in comparison with fundamental mode imaging. However, harmonic content separation based on high-pass filtering suffers from potential contrast degradation or lower axial resolution due to spectral leakage, whereas nonlinear multipulse harmonic imaging schemes, such as amplitude modulation and pulse inversion, suffer from a reduced frame rate and comparatively higher motion artifacts due to the necessity of at least two pulse-echo acquisitions. To address these problems, we propose a deep-learning-based single-shot harmonic imaging technique capable of generating image quality comparable to pulse amplitude modulation methods, yet at a higher frame rate and with fewer motion artifacts. Specifically, an asymmetric convolutional encoder-decoder structure is designed to estimate the combination of the echoes resulting from the half-amplitude transmissions, using the echo produced from the full-amplitude transmission as input. The echoes were acquired with the checkerboard amplitude modulation technique for training. The model was evaluated across various targets and samples to illustrate generalizability as well as the possibility and impact of transfer learning. Furthermore, for possible interpretability of the network, we investigate whether the latent space of the encoder holds information on the nonlinearity parameter of the medium. We demonstrate the ability of the proposed approach to generate harmonic images with a single firing that are comparable to those from a multipulse acquisition.
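The multipulse baseline the network emulates is easy to state: in amplitude modulation, the linear part of the full-amplitude echo is cancelled by subtracting the two half-amplitude echoes, leaving (even-)harmonic content. A minimal sketch with a toy quadratic propagation model follows; the 0.2 nonlinearity coefficient and pulse shape are illustrative assumptions:

```python
import numpy as np

def am_harmonic(echo_full, echo_half_a, echo_half_b):
    """Amplitude-modulation harmonic estimate: the linear echo component
    cancels in echo_full - (echo_half_a + echo_half_b)."""
    return echo_full - (echo_half_a + echo_half_b)

# Toy nonlinear medium: echo(p) = p + 0.2 * p**2 (linear + quadratic terms).
t = np.linspace(0.0, 1.0, 200)
p = np.sin(2.0 * np.pi * 5.0 * t)        # transmit pulse
echo = lambda tx: tx + 0.2 * tx**2
h = am_harmonic(echo(p), echo(0.5 * p), echo(0.5 * p))
# Residual is 0.2 * (p**2 - 2*(0.5*p)**2) = 0.1 * p**2: pure second harmonic.
```

The single-shot network above is trained to predict the sum of the two half-amplitude echoes from the full-amplitude echo alone, so this subtraction can be formed from one firing.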
24
Luijten B, Chennakeshava N, Eldar YC, Mischi M, van Sloun RJG. Ultrasound Signal Processing: From Models to Deep Learning. ULTRASOUND IN MEDICINE & BIOLOGY 2023; 49:677-698. [PMID: 36635192 DOI: 10.1016/j.ultrasmedbio.2022.11.003] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 03/10/2022] [Revised: 11/02/2022] [Accepted: 11/05/2022] [Indexed: 06/17/2023]
Abstract
Medical ultrasound imaging relies heavily on high-quality signal processing to provide reliable and interpretable image reconstructions. Conventionally, reconstruction algorithms have been derived from physical principles. These algorithms rely on assumptions and approximations of the underlying measurement model, limiting image quality in settings where these assumptions break down. Conversely, more sophisticated solutions based on statistical modeling, careful parameter tuning, or increased model complexity can be sensitive to different environments. Recently, deep learning-based methods, which are optimized in a data-driven fashion, have gained popularity. These model-agnostic techniques often rely on generic model structures and require vast training data to converge to a robust solution. A relatively new paradigm combines the power of the two: leveraging data-driven deep learning while exploiting domain knowledge. These model-based solutions yield high robustness and require fewer parameters and less training data than conventional neural networks. In this work we provide an overview of these techniques from the recent literature and discuss a wide variety of ultrasound applications. We aim to inspire the reader to perform further research in this area and to address the opportunities within the field of ultrasound signal processing. We conclude with a future perspective on model-based deep learning techniques for medical ultrasound.
Affiliation(s)
- Ben Luijten
- Department of Electrical Engineering, Eindhoven University of Technology, Eindhoven, The Netherlands.
- Nishith Chennakeshava
- Department of Electrical Engineering, Eindhoven University of Technology, Eindhoven, The Netherlands
- Yonina C Eldar
- Faculty of Math and Computer Science, Weizmann Institute of Science, Rehovot, Israel
- Massimo Mischi
- Department of Electrical Engineering, Eindhoven University of Technology, Eindhoven, The Netherlands
- Ruud J G van Sloun
- Department of Electrical Engineering, Eindhoven University of Technology, Eindhoven, The Netherlands; Philips Research, Eindhoven, The Netherlands
25
Peng Y, Yu D, Guo Y. MShNet: Multi-scale feature combined with h-network for medical image segmentation. Biomed Signal Process Control 2023. [DOI: 10.1016/j.bspc.2022.104167] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/14/2022]
26
Ossenkoppele BW, Luijten B, Bera D, de Jong N, Verweij MD, van Sloun RJG. Improving Lateral Resolution in 3-D Imaging With Micro-beamforming Through Adaptive Beamforming by Deep Learning. ULTRASOUND IN MEDICINE & BIOLOGY 2023; 49:237-255. [PMID: 36253231 DOI: 10.1016/j.ultrasmedbio.2022.08.017] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 02/16/2022] [Revised: 07/26/2022] [Accepted: 08/28/2022] [Indexed: 06/16/2023]
Abstract
There is an increased desire for miniature ultrasound probes with small apertures to provide volumetric images at high frame rates for in-body applications. Satisfying these requirements while maintaining good lateral resolution is challenging. As micro-beamforming is often employed to reduce data rate and cable count to acceptable levels, receive processing methods that aim to improve spatial resolution must compensate for the reduction in focusing this introduces. Existing beamformers do not realize sufficient improvement and/or have a computational cost that prohibits their use. Here we propose the use of adaptive beamforming by deep learning (ABLE) in combination with training targets generated by a large-aperture array, which inherently has better lateral resolution. In addition, we modify ABLE to extend its receptive field across multiple voxels. We illustrate that this method improves lateral resolution both quantitatively and qualitatively, such that image quality is improved compared with that achieved by existing delay-and-sum, coherence-factor, filtered-delay-multiply-and-sum, and eigen-based minimum variance beamformers. We found that only in silico data are required to train the network, making the method easily implementable in practice.
Affiliation(s)
- Ben Luijten
- Department of Electrical Engineering, Eindhoven University of Technology, Eindhoven, The Netherlands
- Nico de Jong
- Department of Imaging Physics, Delft University of Technology, Delft, The Netherlands; Department of Cardiology, Erasmus MC Rotterdam, Rotterdam, The Netherlands
- Martin D Verweij
- Department of Imaging Physics, Delft University of Technology, Delft, The Netherlands; Department of Cardiology, Erasmus MC Rotterdam, Rotterdam, The Netherlands
- Ruud J G van Sloun
- Department of Electrical Engineering, Eindhoven University of Technology, Eindhoven, The Netherlands; Philips Research, Eindhoven, The Netherlands
|
27
|
Wiacek A, Oluyemi E, Myers K, Ambinder E, Bell MAL. Coherence Metrics for Reader-Independent Differentiation of Cystic From Solid Breast Masses in Ultrasound Images. ULTRASOUND IN MEDICINE & BIOLOGY 2023; 49:256-268. [PMID: 36333154 PMCID: PMC9712258 DOI: 10.1016/j.ultrasmedbio.2022.08.018] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 05/02/2022] [Revised: 08/22/2022] [Accepted: 08/28/2022] [Indexed: 06/16/2023]
Abstract
Traditional breast ultrasound imaging is a low-cost, real-time and portable method to assist with breast cancer screening and diagnosis, with particular benefits for patients with dense breast tissue. We previously demonstrated that incorporating coherence-based beamforming additionally improves the distinction of fluid-filled from solid breast masses, based on qualitative image interpretation by board-certified radiologists. However, variable sensitivity (range: 0.71-1.00 when detecting fluid-filled masses) was achieved by the individual radiologist readers. Therefore, we propose two objective coherence metrics, lag-one coherence (LOC) and coherence length (CL), to quantitatively determine the content of breast masses without requiring reader assessment. Data acquired from 31 breast masses were analyzed. Ideal separation (i.e., 1.00 sensitivity and specificity) was achieved between fluid-filled and solid breast masses based on the mean or median LOC value within each mass. When separated based on mean and median CL values, the sensitivity/specificity decreased to 1.00/0.95 and 0.92/0.89, respectively. The greatest sensitivity and specificity were achieved in dense, rather than non-dense, breast tissue. These results support the introduction of an objective, reader-independent method for automated diagnoses of cystic breast masses.
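Lag-one coherence, the better-performing of the two metrics above, is the average normalized correlation between signals received on adjacent array elements. The following minimal sketch is not the authors' code; the demo signals and array size are illustrative assumptions:

```python
import numpy as np

def lag_one_coherence(channel_data):
    """Mean normalized correlation between adjacent receive channels.

    channel_data: array (n_elements, n_samples) of focused (delayed)
    RF data within a small kernel around the pixel of interest."""
    x = channel_data - channel_data.mean(axis=1, keepdims=True)
    num = (x[:-1] * x[1:]).sum(axis=1)
    den = np.sqrt((x[:-1] ** 2).sum(axis=1) * (x[1:] ** 2).sum(axis=1))
    return float(np.mean(num / den))

# A fully coherent wavefront (identical signal on every channel) gives
# LOC near 1; incoherent noise, as inside a fluid-filled mass, gives LOC near 0.
t = np.linspace(0, 20 * np.pi, 1000)
coherent = np.tile(np.sin(t), (16, 1))
noise = np.random.default_rng(1).standard_normal((16, 1000))
print(lag_one_coherence(coherent), lag_one_coherence(noise))
```

The separation between these two regimes is what makes the metric discriminative without any reader in the loop.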
Affiliation(s)
- Alycen Wiacek
- Department of Electrical and Computer Engineering, Johns Hopkins University, Baltimore, Maryland, USA
- Eniola Oluyemi
- Department of Radiology and Radiological Science, Johns Hopkins Medicine, Baltimore, Maryland, USA
- Kelly Myers
- Department of Radiology and Radiological Science, Johns Hopkins Medicine, Baltimore, Maryland, USA
- Emily Ambinder
- Department of Radiology and Radiological Science, Johns Hopkins Medicine, Baltimore, Maryland, USA
- Muyinatu A Lediju Bell
- Department of Electrical and Computer Engineering, Johns Hopkins University, Baltimore, Maryland, USA; Department of Biomedical Engineering, Johns Hopkins University, Baltimore, Maryland, USA; Department of Computer Science, Johns Hopkins University, Baltimore, Maryland, USA
|
28
|
Wang W, He Q, Zhang Z, Feng Z. Adaptive beamforming based on minimum variance (ABF-MV) using deep neural network for ultrafast ultrasound imaging. ULTRASONICS 2022; 126:106823. [PMID: 35973332 DOI: 10.1016/j.ultras.2022.106823] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 04/20/2022] [Revised: 06/15/2022] [Accepted: 08/09/2022] [Indexed: 06/15/2023]
Abstract
Ultrafast ultrasound imaging can achieve high frame rates by emitting plane waves (PWs). However, image quality is drastically degraded in comparison with traditional scanline focused imaging. Adaptive beamforming techniques can improve image quality at the cost of real-time performance. In this work, adaptive beamforming based on minimum variance (ABF-MV) with a deep neural network (DNN) is proposed to improve image quality and to speed up the beamforming process of ultrafast ultrasound imaging. In particular, a DNN with a combined architecture of a fully-connected network (FCN) and a convolutional autoencoder (CAE) is trained with channel radio-frequency (RF) data as input and minimum variance (MV) beamformed data as ground truth. Conventional delay-and-sum (DAS) and MV beamformers are used for comparison to evaluate the performance of the proposed method with simulations, phantom experiments, and in vivo experiments. The results show that the proposed method achieves superior resolution and contrast compared with DAS. Moreover, in both theoretical analysis and implementation, the proposed method attains comparable image quality, lower computational complexity, and a faster frame rate compared with MV. In conclusion, the proposed method has the potential to be deployed in ultrafast ultrasound imaging systems in terms of imaging performance and processing time.
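For context, the conventional DAS baseline that both the MV and DNN beamformers are compared against reduces, per pixel, to summing one geometrically delayed sample from each receive element. A minimal single-pixel sketch for a 0-degree plane-wave transmit; the array geometry, sampling rate, and synthetic echo below are illustrative assumptions, not values from the paper:

```python
import numpy as np

def das_pixel(rf, elem_x, fs, c, px, pz):
    """Delay-and-sum value of one pixel from 0-degree plane-wave channel data.

    rf: (n_elements, n_samples) received RF; elem_x: element x-positions (m);
    fs: sampling rate (Hz); c: sound speed (m/s); (px, pz): pixel position (m)."""
    # Transmit path of a 0-degree plane wave down to depth pz, plus the
    # receive path from the pixel back to each element.
    tau = (pz + np.hypot(px - elem_x, pz)) / c
    idx = np.rint(tau * fs).astype(int)
    valid = (idx >= 0) & (idx < rf.shape[1])
    return rf[np.flatnonzero(valid), idx[valid]].sum()

# Synthetic check: a unit echo placed at each channel's exact round-trip
# delay for a scatterer at (0, 20 mm) sums coherently across all 8 elements.
fs, c = 40e6, 1540.0
elem_x = (np.arange(8) - 3.5) * 3e-4
rf = np.zeros((8, 2048))
delays = np.rint((0.02 + np.hypot(0.0 - elem_x, 0.02)) / c * fs).astype(int)
rf[np.arange(8), delays] = 1.0
print(das_pixel(rf, elem_x, fs, c, 0.0, 0.02))
```

Adaptive methods such as MV replace the implicit uniform weighting of this sum with data-dependent apodization weights, which is where their extra computational cost arises.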
Affiliation(s)
- Wenping Wang
- National Key Laboratory of Fundamental Science on Synthetic Vision, College of Computer Science, Sichuan University, Chengdu 610065, China
- Qiong He
- Tsinghua-Peking Joint Center for Life Sciences Department, Tsinghua University, Beijing 100084, China
- Ziyou Zhang
- National Key Laboratory of Fundamental Science on Synthetic Vision, College of Computer Science, Sichuan University, Chengdu 610065, China
- Ziliang Feng
- National Key Laboratory of Fundamental Science on Synthetic Vision, College of Computer Science, Sichuan University, Chengdu 610065, China
|
29
|
Noda T, Azuma T, Ohtake Y, Sakuma I, Tomii N. Ultrasound Imaging With a Flexible Probe Based on Element Array Geometry Estimation Using Deep Neural Network. IEEE TRANSACTIONS ON ULTRASONICS, FERROELECTRICS, AND FREQUENCY CONTROL 2022; 69:3232-3242. [PMID: 36170409 DOI: 10.1109/tuffc.2022.3210701] [Citation(s) in RCA: 7] [Impact Index Per Article: 2.3] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/16/2023]
Abstract
Conventionally, ultrasound (US) diagnosis is performed using hand-held rigid probes. Such devices are difficult to use for long-term monitoring because they must be continuously pressed against the body to remove the air between the probe and the body. Flexible probes, which can deform and effectively adhere to the body, are a promising technology for long-term monitoring applications. However, owing to the flexible element array geometry, the reconstructed image becomes blurred and distorted. In this study, we propose a flexible-probe US imaging method based on element array geometry estimation from radio-frequency (RF) data using a deep neural network (DNN). The input and output of the DNN are the RF data and the parameters that determine the element array geometry, respectively. The DNN was first trained from scratch with simulation data and then fine-tuned with in vivo data. The DNN performance was evaluated according to the element-position mean absolute error (MAE) and the reconstructed image quality, the latter assessed with peak signal-to-noise ratio (PSNR) and mean structural similarity (MSSIM). In the test conducted with simulation data, the average element-position MAE was 0.86 mm, and the average reconstructed-image PSNR and MSSIM were 20.6 dB and 0.791, respectively. In the test conducted with in vivo data, the average element-position MAE was 1.11 mm, and the average reconstructed-image PSNR and MSSIM were 19.4 dB and 0.798, respectively. The average estimation time was 0.045 s. These results demonstrate the feasibility of the proposed method for long-term real-time monitoring using flexible probes.
|
30
|
Shareef B, Vakanski A, Freer PE, Xian M. ESTAN: Enhanced Small Tumor-Aware Network for Breast Ultrasound Image Segmentation. Healthcare (Basel) 2022; 10:2262. [PMID: 36421586 PMCID: PMC9690845 DOI: 10.3390/healthcare10112262] [Citation(s) in RCA: 10] [Impact Index Per Article: 3.3] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/01/2022] [Revised: 11/01/2022] [Accepted: 11/03/2022] [Indexed: 11/16/2022] Open
Abstract
Breast tumor segmentation is a critical task in computer-aided diagnosis (CAD) systems for breast cancer detection because accurate tumor size, shape, and location are important for further tumor quantification and classification. However, segmenting small tumors in ultrasound images is challenging due to the speckle noise, varying tumor shapes and sizes among patients, and the existence of tumor-like image regions. Recently, deep learning-based approaches have achieved great success in biomedical image analysis, but current state-of-the-art approaches achieve poor performance for segmenting small breast tumors. In this paper, we propose a novel deep neural network architecture, namely the Enhanced Small Tumor-Aware Network (ESTAN), to accurately and robustly segment breast tumors. The Enhanced Small Tumor-Aware Network introduces two encoders to extract and fuse image context information at different scales, and utilizes row-column-wise kernels to adapt to the breast anatomy. We compare ESTAN and nine state-of-the-art approaches using seven quantitative metrics on three public breast ultrasound datasets, i.e., BUSIS, Dataset B, and BUSI. The results demonstrate that the proposed approach achieves the best overall performance and outperforms all other approaches on small tumor segmentation. Specifically, the Dice similarity coefficient (DSC) of ESTAN on the three datasets is 0.92, 0.82, and 0.78, respectively; and the DSC of ESTAN on the three datasets of small tumors is 0.89, 0.80, and 0.81, respectively.
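The Dice similarity coefficient (DSC) reported above is twice the area of overlap between the predicted and ground-truth masks divided by the sum of their areas. A minimal sketch; the toy masks are illustrative:

```python
import numpy as np

def dice(pred, gt):
    """Dice similarity coefficient (DSC) between two binary masks."""
    pred, gt = np.asarray(pred, bool), np.asarray(gt, bool)
    total = pred.sum() + gt.sum()
    if total == 0:
        return 1.0  # both masks empty: treat as perfect agreement
    return 2.0 * np.logical_and(pred, gt).sum() / total

pred = np.zeros((8, 8), int); pred[2:4, 2:4] = 1   # 4 predicted pixels
gt = np.zeros((8, 8), int);   gt[3:5, 2:4] = 1     # 4 ground-truth pixels
print(dice(pred, gt))  # 2 overlapping pixels -> 2*2/(4+4) = 0.5
```

Because the denominator is the sum of the two mask areas, DSC penalizes missed pixels heavily on small tumors, which is exactly the regime the paper targets.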
Affiliation(s)
- Bryar Shareef
- Department of Computer Science, University of Idaho, Idaho Falls, ID 83402, USA
- Aleksandar Vakanski
- Department of Industrial Technology, University of Idaho, Idaho Falls, ID 83402, USA
- Phoebe E. Freer
- Department of Radiology and Imaging Sciences, University of Utah School of Medicine, Salt Lake City, UT 84132, USA
- Min Xian
- Department of Computer Science, University of Idaho, Idaho Falls, ID 83402, USA
|
31
|
Boice EN, Hernandez Torres SI, Knowlton ZJ, Berard D, Gonzalez JM, Avital G, Snider EJ. Training Ultrasound Image Classification Deep-Learning Algorithms for Pneumothorax Detection Using a Synthetic Tissue Phantom Apparatus. J Imaging 2022; 8:jimaging8090249. [PMID: 36135414 PMCID: PMC9502699 DOI: 10.3390/jimaging8090249] [Citation(s) in RCA: 8] [Impact Index Per Article: 2.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/16/2022] [Revised: 08/20/2022] [Accepted: 09/07/2022] [Indexed: 11/17/2022] Open
Abstract
Ultrasound (US) imaging is a critical tool in emergency and military medicine because of its portability and immediacy. However, proper image interpretation requires skill, limiting its utility in remote applications for conditions such as pneumothorax (PTX), which requires rapid intervention. Artificial intelligence has the potential to automate ultrasound image analysis for various pathophysiological conditions. Training such models requires large datasets and a means of real-time troubleshooting during ultrasound integration and deployment, and typically also requires large animal models or clinical testing. Here, we detail the development of a dynamic synthetic tissue phantom model for PTX and its use in training image classification algorithms. The model comprises a synthetic gelatin phantom cast in a custom 3D-printed rib mold and a lung-mimicking phantom. When compared to PTX images acquired in swine, images from the phantom were similar in both PTX-negative and PTX-positive mimicking scenarios. We then used a deep-learning image classification algorithm, which we previously developed for shrapnel detection, to accurately predict the presence of PTX in swine images by training only on phantom image sets, highlighting the utility of a tissue phantom for AI applications.
Affiliation(s)
- Emily N. Boice
- U.S. Army Institute of Surgical Research, JBSA Fort Sam Houston, San Antonio, TX 78234, USA
- Zechariah J. Knowlton
- U.S. Army Institute of Surgical Research, JBSA Fort Sam Houston, San Antonio, TX 78234, USA
- David Berard
- U.S. Army Institute of Surgical Research, JBSA Fort Sam Houston, San Antonio, TX 78234, USA
- Jose M. Gonzalez
- U.S. Army Institute of Surgical Research, JBSA Fort Sam Houston, San Antonio, TX 78234, USA
- Guy Avital
- U.S. Army Institute of Surgical Research, JBSA Fort Sam Houston, San Antonio, TX 78234, USA
- Trauma & Combat Medicine Branch, Surgeon General’s Headquarters, Israel Defense Forces, Ramat-Gan 52620, Israel
- Division of Anesthesia, Intensive Care & Pain Management, Tel-Aviv Sourasky Medical Center, Sackler Faculty of Medicine, Tel-Aviv University, Tel-Aviv 64239, Israel
- Eric J. Snider
- U.S. Army Institute of Surgical Research, JBSA Fort Sam Houston, San Antonio, TX 78234, USA
- Correspondence: ; Tel.: +210-539-8721
|
32
|
Goudarzi S, Rivaz H. Deep reconstruction of high-quality ultrasound images from raw plane-wave data: A simulation and in vivo study. ULTRASONICS 2022; 125:106778. [PMID: 35728310 DOI: 10.1016/j.ultras.2022.106778] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 04/05/2021] [Revised: 05/27/2022] [Accepted: 05/28/2022] [Indexed: 06/15/2023]
Abstract
This paper presents a novel beamforming approach based on deep learning to get closer to the ideal Point Spread Function (PSF) in Plane-Wave Imaging (PWI). The proposed approach is designed to reconstruct a high-quality version of the Tissue Reflectivity Function (TRF) from echo traces acquired by transducer elements using only a single plane-wave transmission. In this approach, first, a model for the TRF is introduced by setting the imaging PSF as an isotropic (i.e., circularly symmetric) 2D Gaussian kernel convolved with a cosine function. Then, a mapping function between the pre-beamformed Radio-Frequency (RF) channel data and the proposed output is constructed using deep learning. The network architecture contains multi-resolution decomposition and reconstruction using the wavelet transform for effective recovery of the high-frequency content of the desired output. We exploit step-by-step training from a coarse (mean square error) to a fine (ℓ0.2) loss function. The proposed method is trained on 1174 simulated ultrasound images with ground-truth echogenicity maps extracted from real photographic images. The performance of the trained network is evaluated on the publicly available simulation and in vivo test data without any further fine-tuning. Simulation test results show an improvement of 37.5% and 65.8% in terms of axial and lateral resolution as compared to Delay-And-Sum (DAS) results, respectively. The contrast is also improved by 33.7% in comparison to DAS. Furthermore, the reconstructed in vivo images confirm that the trained mapping function does not need any fine-tuning in the new domain. Therefore, the proposed approach maintains high resolution, contrast, and framerate simultaneously.
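The TRF model described above can be sketched as an isotropic Gaussian envelope carrying an axial oscillation at the pulse-echo spatial frequency 2f0/c. Note that the paper forms the PSF by convolving the Gaussian kernel with a cosine, whereas this toy version simply multiplies the two, and all parameter values are illustrative rather than taken from the paper:

```python
import numpy as np

def model_psf(f0=5e6, c=1540.0, sigma=1.5e-4, half_width=6e-4, n=49):
    """Toy RF point-spread function: an isotropic 2-D Gaussian envelope
    carrying an axial cosine at the pulse-echo spatial frequency 2*f0/c.
    (A sketch of the Gaussian/cosine PSF model; parameter values are
    illustrative assumptions.)"""
    ax = np.linspace(-half_width, half_width, n)
    z, x = np.meshgrid(ax, ax, indexing="ij")           # axial, lateral (m)
    envelope = np.exp(-(x**2 + z**2) / (2 * sigma**2))  # circularly symmetric
    carrier = np.cos(2 * np.pi * (2 * f0 / c) * z)      # axial RF oscillation
    return envelope * carrier

psf = model_psf()
print(psf.shape, psf.max())
```

Convolving a simulated echogenicity map with such a kernel yields the idealized RF target image that the network is trained to reproduce.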
Affiliation(s)
- Sobhan Goudarzi
- Department of Electrical and Computer Engineering, Concordia University, Montreal, QC, Canada
- Hassan Rivaz
- Department of Electrical and Computer Engineering, Concordia University, Montreal, QC, Canada
|
33
|
Li H, Bhatt M, Qu Z, Zhang S, Hartel MC, Khademhosseini A, Cloutier G. Deep learning in ultrasound elastography imaging: A review. Med Phys 2022; 49:5993-6018. [PMID: 35842833 DOI: 10.1002/mp.15856] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/07/2021] [Revised: 02/04/2022] [Accepted: 07/06/2022] [Indexed: 11/11/2022] Open
Abstract
It is known that changes in the mechanical properties of tissues are associated with the onset and progression of certain diseases. Ultrasound elastography is a technique to characterize tissue stiffness using ultrasound imaging, either by measuring tissue strain using quasi-static elastography or natural organ pulsation elastography, or by tracking a propagating shear wave induced by a source or a natural vibration using dynamic elastography. In recent years, deep learning has begun to emerge in ultrasound elastography research. In this review, several common deep learning frameworks from the computer vision community, such as the multilayer perceptron, convolutional neural network, and recurrent neural network, are described. Then, recent advances in ultrasound elastography using such deep learning techniques are revisited in terms of algorithm development and clinical diagnosis. Finally, the current challenges and future directions of deep learning in ultrasound elastography are discussed.
Affiliation(s)
- Hongliang Li
- Laboratory of Biorheology and Medical Ultrasonics, University of Montreal Hospital Research Center, Montréal, Québec, Canada; Institute of Biomedical Engineering, University of Montreal, Montréal, Québec, Canada
- Manish Bhatt
- Laboratory of Biorheology and Medical Ultrasonics, University of Montreal Hospital Research Center, Montréal, Québec, Canada
- Zhen Qu
- Laboratory of Biorheology and Medical Ultrasonics, University of Montreal Hospital Research Center, Montréal, Québec, Canada
- Shiming Zhang
- California Nanosystems Institute, University of California, Los Angeles, California, USA
- Martin C Hartel
- California Nanosystems Institute, University of California, Los Angeles, California, USA
- Ali Khademhosseini
- California Nanosystems Institute, University of California, Los Angeles, California, USA
- Guy Cloutier
- Laboratory of Biorheology and Medical Ultrasonics, University of Montreal Hospital Research Center, Montréal, Québec, Canada; Institute of Biomedical Engineering, University of Montreal, Montréal, Québec, Canada; Department of Radiology, Radio-Oncology and Nuclear Medicine, University of Montreal, Montréal, Québec, Canada
|
34
|
Di Ianni T, Airan RD. Deep-fUS: A Deep Learning Platform for Functional Ultrasound Imaging of the Brain Using Sparse Data. IEEE TRANSACTIONS ON MEDICAL IMAGING 2022; 41:1813-1825. [PMID: 35108201 PMCID: PMC9247015 DOI: 10.1109/tmi.2022.3148728] [Citation(s) in RCA: 6] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Indexed: 06/14/2023]
Abstract
Functional ultrasound (fUS) is a rapidly emerging modality that enables whole-brain imaging of neural activity in awake and mobile rodents. To achieve sufficient blood flow sensitivity in the brain microvasculature, fUS relies on long ultrasound data acquisitions at high frame rates, posing high demands on the sampling and processing hardware. Here we develop an image reconstruction method based on deep learning that significantly reduces the amount of data necessary while retaining imaging performance. We trained convolutional neural networks to learn the power Doppler reconstruction function from sparse sequences of ultrasound data with compression factors of up to 95%. High-quality images from in vivo acquisitions in rats were used for training and performance evaluation. We demonstrate that time series of power Doppler images can be reconstructed with sufficient accuracy to detect the small changes in cerebral blood volume (~10%) characteristic of task-evoked cortical activation, even though the network was not formally trained to reconstruct such image series. The proposed platform may facilitate the development of this neuroimaging modality in any setting where dedicated hardware is not available or in clinical scanners.
|
35
|
Zhang F, Luo L, Zhang Y, Gao X, Li J. A Convolutional Neural Network for Ultrasound Plane Wave Image Segmentation With a Small Amount of Phase Array Channel Data. IEEE TRANSACTIONS ON ULTRASONICS, FERROELECTRICS, AND FREQUENCY CONTROL 2022; 69:2270-2281. [PMID: 35552134 DOI: 10.1109/tuffc.2022.3174637] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/15/2023]
Abstract
Single-angle plane-wave transmission has great potential for high-frame-rate ultrasound imaging but suffers from a number of difficulties, such as low image quality and poor segmentation results. To overcome these difficulties, this article proposes an end-to-end convolutional neural network (CNN) that segments images directly from single-angle channel data. The network removes the traditional beamforming process and takes raw radio-frequency (RF) data as input to directly produce a segmented image. A dedicated depth-signal extraction module extracts and concatenates the signal features at each depth into a feature map, which is then passed through a residual encoder and decoder to obtain the output. A simulated dataset of 2000 hypoechoic cyst images and an actual industrial defect dataset of 900 images were used for training separately, and good results were achieved in both simulated medical cyst segmentation and actual industrial defect segmentation. Experiments were also conducted on both datasets with phased-array sparse-element data as input, and segmentation results were obtained for both. On the whole, this work achieved better-quality segmented images with shorter processing time from single-angle plane-wave channel data using CNNs; compared with other methods, the network showed substantial improvements in intersection over union (IoU), F1 score, and processing time. It also indicated that the feasibility of applying deep learning to image segmentation can be improved by using phased-array sparse-element data as input.
|
36
|
Perdios D, Vonlanthen M, Martinez F, Arditi M, Thiran JP. CNN-Based Image Reconstruction Method for Ultrafast Ultrasound Imaging. IEEE TRANSACTIONS ON ULTRASONICS, FERROELECTRICS, AND FREQUENCY CONTROL 2022; 69:1154-1168. [PMID: 34847025 DOI: 10.1109/tuffc.2021.3131383] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.7] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/13/2023]
Abstract
Ultrafast ultrasound (US) revolutionized biomedical imaging with its capability of acquiring full-view frames at over 1 kHz, unlocking breakthrough modalities such as shear-wave elastography and functional US neuroimaging. Yet, it suffers from strong diffraction artifacts, mainly caused by grating lobes, sidelobes, or edge waves. Multiple acquisitions are typically required to obtain a sufficient image quality, at the cost of a reduced frame rate. To answer the increasing demand for high-quality imaging from single unfocused acquisitions, we propose a two-step convolutional neural network (CNN)-based image reconstruction method, compatible with real-time imaging. A low-quality estimate is obtained by means of a backprojection-based operation, akin to conventional delay-and-sum beamforming, from which a high-quality image is restored using a residual CNN with multiscale and multichannel filtering properties, trained specifically to remove the diffraction artifacts inherent to ultrafast US imaging. To account for both the high dynamic range and the oscillating properties of radio-frequency US images, we introduce the mean signed logarithmic absolute error (MSLAE) as a training loss function. Experiments were conducted with a linear transducer array, in single plane-wave (PW) imaging. Training was performed on a simulated dataset, crafted to contain a wide diversity of structures and echogenicities. Extensive numerical evaluations demonstrate that the proposed approach can reconstruct images from single PWs with a quality similar to that of gold-standard synthetic aperture imaging, on a dynamic range in excess of 60 dB. In vitro and in vivo experiments show that training carried out on simulated data performs well in experimental settings.
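The MSLAE loss introduced above is motivated by the fact that RF images are both signed (oscillating) and high in dynamic range, so a plain absolute error over-weights the brightest reflectors. One plausible formulation, assuming unit-normalized data and a 60 dB range (the paper's exact normalization is not reproduced here), is:

```python
import numpy as np

def signed_log(x, dynamic_range_db=60.0, eps=1e-12):
    """Signed log-compression: keep the sign of the oscillating RF signal and
    map magnitudes in [-dynamic_range_db, 0] dB (relative to a unit maximum)
    onto [0, 1]. Assumed form; the paper's exact normalization may differ."""
    mag_db = 20.0 * np.log10(np.maximum(np.abs(x), eps))
    return np.sign(x) * np.clip(1.0 + mag_db / dynamic_range_db, 0.0, 1.0)

def mslae(pred, target):
    """Mean signed-logarithmic absolute error between two RF images."""
    return float(np.mean(np.abs(signed_log(pred) - signed_log(target))))

x = np.array([1.0, -0.5, 0.001])
print(mslae(x, x))                               # identical images -> 0.0
print(mslae(np.array([1.0]), np.array([-1.0])))  # full-scale sign flip -> 2.0
```

Applying the compression before the absolute error means that a weak echo reconstructed with the wrong sign is penalized as strongly as an error on a bright reflector.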
|
37
|
Lu JY, Lee PY, Huang CC. Improving Image Quality for Single-Angle Plane Wave Ultrasound Imaging With Convolutional Neural Network Beamformer. IEEE TRANSACTIONS ON ULTRASONICS, FERROELECTRICS, AND FREQUENCY CONTROL 2022; 69:1326-1336. [PMID: 35175918 DOI: 10.1109/tuffc.2022.3152689] [Citation(s) in RCA: 6] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/14/2023]
Abstract
Ultrafast ultrasound imaging based on plane wave (PW) compounding has been proposed for use in various clinical and preclinical applications, including shear wave imaging and super-resolution blood flow imaging. Because the image quality afforded by PW imaging is highly dependent on the number of PW angles used for compounding, a tradeoff between image quality and frame rate occurs. In the present study, a convolutional neural network (CNN) beamformer based on a combination of the GoogLeNet and U-Net architectures was developed to replace the conventional delay-and-sum (DAS) algorithm to obtain high-quality images at a high frame rate. RF channel data are used as the inputs for the CNN beamformers. The outputs are in-phase and quadrature data. Simulations and phantom experiments revealed that the images predicted by the CNN beamformers had higher resolution and contrast than those predicted by conventional single-angle PW imaging with the DAS approach. In in vivo studies, the contrast-to-noise ratios (CNRs) of carotid artery images predicted by the CNN beamformers using three or five PWs as ground truths were approximately 12 dB in the transverse view, considerably higher than the CNR obtained using the DAS beamformer (3.9 dB). Most tissue speckle information was retained in the in vivo images produced by the CNN beamformers. In conclusion, only a single PW at 0° was fired, but the quality of the output image was comparable to that of an image generated using three or five PW angles. In other words, the quality-frame rate tradeoff of coherence compounding could be mitigated through the use of the proposed CNN for beamforming.
|
38
|
Tierney J, Luchies A, Khan C, Baker J, Brown D, Byram B, Berger M. Training Deep Network Ultrasound Beamformers With Unlabeled In Vivo Data. IEEE TRANSACTIONS ON MEDICAL IMAGING 2022; 41:158-171. [PMID: 34428139 PMCID: PMC8972815 DOI: 10.1109/tmi.2021.3107198] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/13/2023]
Abstract
Conventional delay-and-sum (DAS) beamforming is highly efficient but also suffers from various sources of image degradation. Several adaptive beamformers have been proposed to address this problem, including more recently proposed deep learning methods. With deep learning, adaptive beamforming is typically framed as a regression problem, where clean ground-truth physical information is used for training. Because it is difficult to know ground truth information in vivo, training data are usually simulated. However, deep networks trained on simulations can produce suboptimal in vivo image quality because of a domain shift between simulated and in vivo data. In this work, we propose a novel domain adaptation (DA) scheme to correct for domain shift by incorporating unlabeled in vivo data during training. Unlike classification tasks for which both input domains map to the same target domain, a challenge in our regression-based beamforming scenario is that domain shift exists in both the input and target data. To solve this problem, we leverage cycle-consistent generative adversarial networks to map between simulated and in vivo data in both the input and ground truth target domains. Additionally, to account for separate as well as shared features between simulations and in vivo data, we use augmented feature mapping to train domain-specific beamformers. Using various types of training data, we explore the limitations and underlying functionality of the proposed DA approach. Additionally, we compare our proposed approach to several other adaptive beamformers. Using the DA DNN beamformer, consistent in vivo image quality improvements are achieved compared to established techniques.
|
39
|
Zhao L, Lediju Bell MA. A Review of Deep Learning Applications in Lung Ultrasound Imaging of COVID-19 Patients. BME FRONTIERS 2022; 2022:9780173. [PMID: 36714302 PMCID: PMC9880989 DOI: 10.34133/2022/9780173] [Citation(s) in RCA: 10] [Impact Index Per Article: 3.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 02/02/2023] Open
Abstract
The massive and continuous spread of COVID-19 has motivated researchers around the world to intensely explore, understand, and develop new techniques for diagnosis and treatment. Although lung ultrasound imaging is a less established approach when compared to other medical imaging modalities such as X-ray and CT, multiple studies have demonstrated its promise to diagnose COVID-19 patients. At the same time, many deep learning models have been built to improve the diagnostic efficiency of medical imaging. The integration of these initially parallel efforts has led multiple researchers to report deep learning applications in medical imaging of COVID-19 patients, most of which demonstrate the outstanding potential of deep learning to aid in the diagnosis of COVID-19. This invited review is focused on deep learning applications in lung ultrasound imaging of COVID-19 and provides a comprehensive overview of ultrasound systems utilized for data acquisition, associated datasets, deep learning models, and comparative performance.
Affiliation(s)
- Lingyi Zhao
- Department of Electrical and Computer Engineering, Johns Hopkins University, Baltimore, USA
- Muyinatu A. Lediju Bell
- Department of Electrical and Computer Engineering, Johns Hopkins University, Baltimore, USA; Department of Computer Science, Johns Hopkins University, Baltimore, USA; Department of Biomedical Engineering, Johns Hopkins University, Baltimore, USA
|
40
|
Tang P, Yang X, Nan Y, Xiang S, Liang Q. Feature Pyramid Nonlocal Network With Transform Modal Ensemble Learning for Breast Tumor Segmentation in Ultrasound Images. IEEE TRANSACTIONS ON ULTRASONICS, FERROELECTRICS, AND FREQUENCY CONTROL 2021; 68:3549-3559. [PMID: 34280097 DOI: 10.1109/tuffc.2021.3098308] [Citation(s) in RCA: 4] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/13/2023]
Abstract
Automated breast ultrasound image segmentation is essential in a computer-aided diagnosis (CAD) system for breast tumors. In this article, we present a feature pyramid nonlocal network (FPNN) with transform modal ensemble learning (TMEL) for accurate breast tumor segmentation in ultrasound images. Specifically, the FPNN fuses multilevel features under special consideration of long-range dependencies by combining the nonlocal module and feature pyramid network. Additionally, the TMEL is introduced to guide two iFPNNs to extract different tumor details. Two publicly available datasets, i.e., the Dataset-Cairo University and Dataset-Merge, were used for evaluation. The proposed FPNN-TMEL achieves a Dice score of 84.70% ± 0.53%, Jaccard Index (Jac) of 78.10% ± 0.48% and Hausdorff distance (HD) of 2.815 ± 0.016 mm on the Dataset-Cairo University, and Dice of 87.00% ± 0.41%, Jac of 79.16% ± 0.56%, and HD of 2.781±0.035 mm on the Dataset-Merge. Qualitative and quantitative experiments show that our method outperforms other state-of-the-art methods for breast tumor segmentation in ultrasound images. Our code is available at https://github.com/pixixiaonaogou/FPNN-TMEL.
41
Hyun D, Wiacek A, Goudarzi S, Rothlubbers S, Asif A, Eickel K, Eldar YC, Huang J, Mischi M, Rivaz H, Sinden D, van Sloun RJG, Strohm H, Bell MAL. Deep Learning for Ultrasound Image Formation: CUBDL Evaluation Framework and Open Datasets. IEEE TRANSACTIONS ON ULTRASONICS, FERROELECTRICS, AND FREQUENCY CONTROL 2021; 68:3466-3483. [PMID: 34224351 PMCID: PMC8818124 DOI: 10.1109/tuffc.2021.3094849] [Citation(s) in RCA: 25] [Impact Index Per Article: 6.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/01/2023]
Abstract
Deep learning for ultrasound image formation is rapidly garnering research support and attention, quickly rising as the latest frontier in the field, with much promise to balance both image quality and display speed. Despite this promise, one challenge with identifying optimal solutions is the absence of unified evaluation methods and datasets that are not specific to a single research group. This article introduces the largest known international database of ultrasound channel data and describes the associated evaluation methods that were initially developed for the challenge on ultrasound beamforming with deep learning (CUBDL), which was offered as a component of the 2020 IEEE International Ultrasonics Symposium. We summarize the challenge results and present qualitative and quantitative assessments using both the initially closed CUBDL evaluation test dataset (which was crowd-sourced from multiple groups around the world) and additional in vivo breast ultrasound data contributed after the challenge was completed. As an example quantitative assessment, single plane wave images from the CUBDL Task 1 dataset produced a mean generalized contrast-to-noise ratio (gCNR) of 0.67 and a mean lateral resolution of 0.42 mm when formed with delay-and-sum beamforming, compared with a mean gCNR as high as 0.81 and a mean lateral resolution as low as 0.32 mm when formed with networks submitted by the challenge winners. We also describe contributed CUBDL data that may be used for training of future networks. The compiled database includes a total of 576 image acquisition sequences. We additionally introduce a neural-network-based global sound speed estimator implementation that was necessary to fairly evaluate the results obtained with this international database. The CUBDL evaluation methods, evaluation code, network weights from the challenge winners, and all datasets described herein are publicly available (visit https://cubdl.jhu.edu for details).
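The gCNR reported above (and in the study in the head of this page) is defined as one minus the overlap of the intensity histograms of a target and a background region, so it is bounded in [0, 1] and independent of dynamic-range transformations. A minimal sketch, assuming NumPy and a shared histogram range (the `bins` default is illustrative):

```python
import numpy as np

def gcnr(region_in, region_out, bins=256):
    """Generalized contrast-to-noise ratio: 1 minus the overlap of the
    two regions' normalized intensity histograms."""
    lo = min(region_in.min(), region_out.min())
    hi = max(region_in.max(), region_out.max())
    h_in, _ = np.histogram(region_in, bins=bins, range=(lo, hi))
    h_out, _ = np.histogram(region_out, bins=bins, range=(lo, hi))
    p_in = h_in / h_in.sum()
    p_out = h_out / h_out.sum()
    return 1.0 - np.minimum(p_in, p_out).sum()
```

Identical distributions give gCNR near 0 (no contrast), while fully separable distributions give gCNR of 1; the gCNR < 0.73 threshold in the head study sits between these extremes.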
42
Huang X, Lediju Bell MA, Ding K. Deep Learning for Ultrasound Beamforming in Flexible Array Transducer. IEEE TRANSACTIONS ON MEDICAL IMAGING 2021; 40:3178-3189. [PMID: 34101588 PMCID: PMC8609563 DOI: 10.1109/tmi.2021.3087450] [Citation(s) in RCA: 24] [Impact Index Per Article: 6.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/12/2023]
Abstract
Ultrasound imaging has been developed for image-guided radiotherapy for tumor tracking, and the flexible array transducer is a promising tool for this task. It can reduce both the user dependence and the anatomical deformation caused by a traditional rigid ultrasound transducer. However, due to its flexible geometry, the conventional delay-and-sum (DAS) beamformer may apply incorrect time delays to the radio-frequency (RF) data and produce B-mode images with considerable defocusing and distortion. To address this problem, we propose a novel end-to-end deep learning approach that may replace the conventional DAS beamformer when the transducer geometry is unknown. Different deep neural networks (DNNs) were designed to learn the proper time delays for each channel, and they were expected to reconstruct undistorted, high-quality B-mode images directly from RF channel data. We compared the DNN results to the standard DAS beamformed results using simulation and flexible array transducer scan data. With the proposed DNN approach, the averaged full-width-at-half-maximum (FWHM) of point scatterers is 1.80 mm and 1.31 mm lower in simulation and scan results, respectively; the contrast-to-noise ratio (CNR) of the anechoic cyst in simulation and phantom scans is improved by 0.79 dB and 1.69 dB, respectively; and the aspect ratios of all the cysts are closer to 1. The evaluation results show that the proposed approach can effectively reduce the distortion and improve the lateral resolution and contrast of the reconstructed B-mode images.
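The geometric time delays that DAS applies, and that an unknown flexible-array geometry corrupts, can be sketched for a single image point as follows. This is an illustrative implementation assuming a 0-degree plane-wave transmit and known element positions (all names are hypothetical), not the authors' network:

```python
import numpy as np

def das_point(rf, t0, fs, c, elem_x, elem_z, px, pz):
    """Delay-and-sum value at one image point (px, pz) from plane-wave RF data.

    rf       : (n_elements, n_samples) receive channel data
    elem_x/z : element coordinates in meters (a flexed array has nonzero elem_z)
    Assumes a 0-degree plane-wave transmit, so the transmit delay is pz / c.
    """
    n_elem, n_samp = rf.shape
    t_tx = pz / c                              # plane wave reaches depth pz
    r_rx = np.hypot(px - elem_x, pz - elem_z)  # point-to-element return distance
    idx = np.round((t_tx + r_rx / c - t0) * fs).astype(int)
    valid = (idx >= 0) & (idx < n_samp)
    return rf[np.arange(n_elem)[valid], idx[valid]].sum()
```

If `elem_x`/`elem_z` are wrong (e.g., a flexed array assumed flat), `idx` points at the wrong samples and the coherent sum defocuses, which is precisely the failure mode the learned beamformer targets.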
Affiliation(s)
- Xinyue Huang
- Department of Biomedical Engineering, Johns Hopkins University, Baltimore, MD 21218 USA
- Muyinatu A. Lediju Bell
- Department of Electrical and Computer Engineering, Johns Hopkins University, Baltimore, MD 21218 USA, and also with the Department of Biomedical Engineering and the Department of Computer Science, Johns Hopkins University, Baltimore, MD 21218 USA
- Kai Ding
- Department of Radiation Oncology and Molecular Radiation Sciences, Johns Hopkins University School of Medicine, Baltimore, MD 21287 USA
43
Sandino CM, Cole EK, Alkan C, Chaudhari AS, Loening AM, Hyun D, Dahl J, Imran AAZ, Wang AS, Vasanawala SS. Upstream Machine Learning in Radiology. Radiol Clin North Am 2021; 59:967-985. [PMID: 34689881 PMCID: PMC8549864 DOI: 10.1016/j.rcl.2021.07.009] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/21/2022]
Abstract
Machine learning (ML) and artificial intelligence (AI) have the potential to dramatically improve radiology practice at multiple stages of the imaging pipeline. Most of the attention has been garnered by applications focused on improving the end of the pipeline: image interpretation. However, this article reviews how AI/ML can be applied to improve upstream components of the imaging pipeline, including exam modality selection, hardware design, exam protocol selection, data acquisition, image reconstruction, and image processing. A breadth of applications and their potential for impact are shown across multiple imaging modalities, including ultrasound, computed tomography, and MRI.
Affiliation(s)
- Christopher M Sandino
- Department of Electrical Engineering, Stanford University, 350 Serra Mall, Stanford, CA 94305, USA
- Elizabeth K Cole
- Department of Electrical Engineering, Stanford University, 350 Serra Mall, Stanford, CA 94305, USA
- Cagan Alkan
- Department of Electrical Engineering, Stanford University, 350 Serra Mall, Stanford, CA 94305, USA
- Akshay S Chaudhari
- Department of Biomedical Data Science, 1201 Welch Road, Stanford, CA 94305, USA; Department of Radiology, Stanford University, 1201 Welch Road, Stanford, CA 94305, USA
- Andreas M Loening
- Department of Radiology, Stanford University, 1201 Welch Road, Stanford, CA 94305, USA
- Dongwoon Hyun
- Department of Radiology, Stanford University, 1201 Welch Road, Stanford, CA 94305, USA
- Jeremy Dahl
- Department of Radiology, Stanford University, 1201 Welch Road, Stanford, CA 94305, USA
- Adam S Wang
- Department of Radiology, Stanford University, 1201 Welch Road, Stanford, CA 94305, USA
- Shreyas S Vasanawala
- Department of Radiology, Stanford University, 1201 Welch Road, Stanford, CA 94305, USA
44
Fouad M, El Ghany MAA, Huebner M, Schmitz G. A Deep Learning Signal-Based Approach to Fast Harmonic Imaging. 2021 IEEE INTERNATIONAL ULTRASONICS SYMPOSIUM (IUS) 2021. [DOI: 10.1109/ius52206.2021.9593348] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 09/01/2023]
45
Tierney J, Luchies A, Berger M, Byram B. Evaluating Input Domain and Model Selection for Deep Network Ultrasound Beamforming. IEEE TRANSACTIONS ON ULTRASONICS, FERROELECTRICS, AND FREQUENCY CONTROL 2021; 68:2370-2385. [PMID: 33684036 PMCID: PMC8285087 DOI: 10.1109/tuffc.2021.3064303] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/02/2023]
Abstract
Improving ultrasound B-mode image quality remains an important area of research. Recently, there has been increased interest in using deep neural networks (DNNs) to perform beamforming to improve image quality more efficiently. Several approaches have been proposed that use different representations of channel data for network processing, including a frequency-domain approach that we previously developed. We previously assumed that the frequency domain would be more robust to varying pulse shapes. However, frequency- and time-domain implementations have not been directly compared. In addition, because our approach operates on aperture domain data as an intermediate beamforming step, a discrepancy often exists between network performance and image quality on fully reconstructed images, making model selection challenging. Here, we perform a systematic comparison of frequency- and time-domain implementations. In addition, we propose a contrast-to-noise ratio (CNR)-based regularization to address previous challenges with model selection. Training channel data were generated from simulated anechoic cysts. Test channel data were generated from simulated anechoic cysts with and without varied pulse shapes, in addition to physical phantom and in vivo data. We demonstrate that simplified time-domain implementations are more robust than we previously assumed, especially when using phase preserving data representations. Specifically, 0.39- and 0.36-dB median improvements in in vivo CNR compared to delay-and-sum (DAS) beamforming were achieved with frequency- and time-domain implementations, respectively. We also demonstrate that CNR regularization improves the correlation between training validation loss and simulated CNR by 0.83 and between simulated and in vivo CNR by 0.35 compared to DNNs trained without CNR regularization.
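The CNR used for regularization and model selection here is commonly defined in ultrasound from the means and variances of a target region and the background, expressed in dB. A minimal sketch under that assumed definition (the function name is illustrative):

```python
import numpy as np

def cnr_db(region_in, region_out):
    """Contrast-to-noise ratio (dB) between a target region and background:
    |mean difference| over the pooled standard deviation."""
    num = abs(region_in.mean() - region_out.mean())
    den = np.sqrt(region_in.var() + region_out.var())
    return 20.0 * np.log10(num / den)
```

Because this quantity is differentiable in the region statistics, a term like minus lambda times CNR can in principle be folded into a training loss, which is the spirit of the CNR-based regularization described above.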
46
Chan DY, Morris DC, Polascik TJ, Palmeri ML, Nightingale KR. Deep Convolutional Neural Networks for Displacement Estimation in ARFI Imaging. IEEE TRANSACTIONS ON ULTRASONICS, FERROELECTRICS, AND FREQUENCY CONTROL 2021; 68:2472-2481. [PMID: 33760733 PMCID: PMC8363049 DOI: 10.1109/tuffc.2021.3068377] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 05/09/2023]
Abstract
Ultrasound elasticity imaging in soft tissue with acoustic radiation force requires the estimation of displacements, typically on the order of several microns, from serially acquired raw data A-lines. In this work, we implement a fully convolutional neural network (CNN) for ultrasound displacement estimation. We present a novel method for generating ultrasound training data, in which synthetic 3-D displacement volumes with a combination of randomly seeded ellipsoids are created and used to displace scatterers, from which simulated ultrasonic imaging is performed using Field II. Network performance was tested on these virtual displacement volumes, as well as an experimental ARFI phantom data set and a human in vivo prostate ARFI data set. In the simulated data, the proposed neural network performed comparably to Loupas's algorithm, a conventional phase-based displacement estimation algorithm; the rms error was [Formula: see text] for the CNN and 0.73 [Formula: see text] for Loupas. Similarly, in the phantom data, the contrast-to-noise ratio (CNR) of a stiff inclusion was 2.27 for the CNN-estimated image and 2.21 for the Loupas-estimated image. Applying the trained network to in vivo data enabled the visualization of prostate cancer and prostate anatomy. The proposed training method provided 26 000 training cases, which allowed robust network training. The CNN had a computation time that was comparable to Loupas's algorithm; further refinements to the network architecture may provide an improvement in the computation time. We conclude that deep neural network-based displacement estimation from ultrasonic data is feasible, providing comparable performance with respect to both accuracy and speed compared to current standard time-delay estimation approaches.
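Loupas's algorithm is a 2-D autocorrelation estimator that also tracks the local center frequency; as an illustration of the underlying principle only, a simplified one-lag phase-shift estimator (Kasai-style, a sketch rather than Loupas's full algorithm) recovers micron-scale axial displacement from complex IQ data:

```python
import numpy as np

def phase_shift_displacement(iq_ref, iq_track, f0, c):
    """Axial displacement (m) between two complex IQ A-lines from the
    phase of the one-lag cross-correlation. A displacement d produces a
    round-trip phase shift of 4*pi*f0*d/c, valid while |phase| < pi."""
    corr = np.vdot(iq_ref, iq_track)   # sum of conj(ref) * track
    phase = np.angle(corr)
    return phase * c / (4.0 * np.pi * f0)
```

The pi-radian limit on the phase corresponds to a quarter-wavelength displacement (tens of microns at typical frequencies), comfortably above the few-micron ARFI displacements discussed above.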
47
Tang X, Peng J, Zhong B, Li J, Yan Z. Introducing frequency representation into convolution neural networks for medical image segmentation via twin-Kernel Fourier convolution. COMPUTER METHODS AND PROGRAMS IN BIOMEDICINE 2021; 205:106110. [PMID: 33910149 DOI: 10.1016/j.cmpb.2021.106110] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 03/29/2020] [Accepted: 04/07/2021] [Indexed: 05/28/2023]
Abstract
BACKGROUND AND OBJECTIVE For medical image segmentation, deep learning-based methods have achieved state-of-the-art performance. However, the powerful spectral representation in the field of image processing is rarely considered in these models. METHODS In this work, we propose to introduce frequency representation into convolution neural networks (CNNs) and design a novel model, tKFC-Net, to combine powerful feature representation in both frequency and spatial domains. Through the Fast Fourier Transform (FFT) operation, frequency representation is employed on pooling, upsampling, and convolution without any adjustments to the network architecture. Furthermore, we replace the original convolution with twin-Kernel Fourier Convolution (t-KFC), a newly designed convolution layer, to specify the convolution kernels for particular functions and extract features from different frequency components. RESULTS We experimentally show that our method has an edge over other models in the task of medical image segmentation. Evaluated on four datasets (skin lesion segmentation, ISIC 2018; retinal blood vessel segmentation, DRIVE; lung segmentation, COVID-19-CT-Seg; and brain tumor segmentation, BraTS 2019), the proposed model achieves outstanding results: the metric F1-Score is 0.878 for ISIC 2018, 0.8185 for DRIVE, 0.9830 for COVID-19-CT-Seg, and 0.8457 for BraTS 2019. CONCLUSION The introduction of spectral representation retains spectral features, which results in more accurate segmentation. The proposed method is orthogonal to other topology-improvement methods and can be conveniently combined with them.
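The FFT-based convolution described above rests on the convolution theorem: convolution in the spatial domain equals pointwise multiplication in the frequency domain. A minimal sketch of that identity (circular 2-D convolution only, not the paper's t-KFC layer):

```python
import numpy as np

def fft_conv2d(image, kernel):
    """Circular 2-D convolution via the convolution theorem: zero-pad the
    kernel to the image size, multiply spectra pointwise, invert."""
    H = np.fft.fft2(kernel, s=image.shape)  # s=... zero-pads the kernel
    return np.real(np.fft.ifft2(np.fft.fft2(image) * H))
```

A delta kernel at the origin reproduces the image, and a shifted delta circularly shifts it, which makes the identity easy to verify; the appeal for a network layer is that large-kernel convolution becomes O(N log N) instead of O(N k²).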
Affiliation(s)
- Xianlun Tang
- Chongqing University of Posts and Telecommunications, Chongqing 400065, China
- Jiangping Peng
- Chongqing University of Posts and Telecommunications, Chongqing 400065, China
- Bing Zhong
- Chongqing University of Posts and Telecommunications, Chongqing 400065, China
- Jie Li
- College of Mobile Telecommunications, Chongqing University of Posts and Telecom, Chongqing 401520, China
- Zhenfu Yan
- Chongqing University of Posts and Telecommunications, Chongqing 400065, China
48
Zhang J, He Q, Xiao Y, Zheng H, Wang C, Luo J. Ultrasound image reconstruction from plane wave radio-frequency data by self-supervised deep neural network. Med Image Anal 2021; 70:102018. [PMID: 33711740 DOI: 10.1016/j.media.2021.102018] [Citation(s) in RCA: 39] [Impact Index Per Article: 9.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/21/2020] [Revised: 01/20/2021] [Accepted: 02/19/2021] [Indexed: 12/19/2022]
Abstract
Image reconstruction from radio-frequency (RF) data is crucial for ultrafast plane wave ultrasound (PWUS) imaging. Compared with the traditional delay-and-sum (DAS) method, which is based on relatively imprecise assumptions, the sparse regularization (SR) method directly solves the inverse problem of image reconstruction and has shown significant improvement in image quality while keeping the frame rate high. However, the computational complexity of SR is too high for practical implementation, which is inherently associated with its iterative process. In this work, a deep neural network (DNN), trained with a loss function that incorporates sparse regularization terms, is proposed to reconstruct PWUS images from RF data with significantly reduced computational time. Notably, a self-supervised learning scheme, in which the RF data are utilized as both the inputs and the labels during training, is employed to overcome the lack of "ideal" ultrasound images to serve as labels for the DNN. In addition, it has also been verified that the trained network can be used on RF data obtained with steered plane waves (PWs), so the image quality can be further improved with coherent compounding. Using simulation data, the proposed method has a significantly shorter reconstruction time (∼10 ms) than the conventional SR method (∼1-5 min), with comparable spatial resolution and a 1.5-dB higher contrast-to-noise ratio (CNR). Moreover, the proposed method with a single PW can achieve higher CNR than DAS with 75 PWs in the reconstruction of in vivo images of human carotid arteries.
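The iterative SR problem that the DNN learns to approximate in a single forward pass has the generic form: minimize 0.5 * ||A x - y||^2 + lam * ||x||_1, classically solved by iterative shrinkage-thresholding (ISTA). A minimal ISTA sketch (illustrative of the iterative baseline, not the paper's network or its exact formulation):

```python
import numpy as np

def soft_threshold(x, t):
    """Proximal operator of the l1 norm."""
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def ista(A, y, lam, step, n_iter=200):
    """ISTA for min_x 0.5*||A x - y||^2 + lam*||x||_1: a gradient step on
    the data-fidelity term followed by soft thresholding."""
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        grad = A.T @ (A @ x - y)
        x = soft_threshold(x - step * grad, step * lam)
    return x
```

Each iteration costs two applications of the forward model, which is why iterative SR takes minutes on realistically sized PWUS grids while a trained network amortizes that cost into one inference.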
Affiliation(s)
- Jingke Zhang
- Department of Biomedical Engineering, School of Medicine, Tsinghua University, Beijing 100084, China
- Qiong He
- Department of Biomedical Engineering, School of Medicine, Tsinghua University, Beijing 100084, China; Tsinghua-Peking Joint Center for Life Sciences Department, Tsinghua University, Beijing 100084, China
- Yang Xiao
- Paul C. Lauterbur Research Center for Biomedical Imaging, Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen 518055, China
- Hairong Zheng
- Paul C. Lauterbur Research Center for Biomedical Imaging, Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen 518055, China
- Congzhi Wang
- Paul C. Lauterbur Research Center for Biomedical Imaging, Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen 518055, China; National Innovation Center for Advanced Medical Devices, Shenzhen 518055, China
- Jianwen Luo
- Department of Biomedical Engineering, School of Medicine, Tsinghua University, Beijing 100084, China