1. Zhang Z, Lei Z, Zhou M, Hasegawa H, Gao S. Complex-Valued Convolutional Gated Recurrent Neural Network for Ultrasound Beamforming. IEEE Transactions on Neural Networks and Learning Systems 2025; 36:5668-5679. [PMID: 38598398] [DOI: 10.1109/tnnls.2024.3384314]
Abstract
Ultrasound is a potent tool for the clinical diagnosis of many diseases because it is real-time, convenient, and noninvasive. Yet existing beamforming and related methods struggle to improve imaging quality and speed simultaneously, as clinical applications require. Ultrasound signal data are characterized above all by their joint spatial and temporal structure, and because most signals are complex-valued, processing them directly with real-valued networks introduces phase distortion and inaccurate output. In this study, we propose, for the first time, a complex-valued convolutional gated recurrent (CCGR) neural network to handle ultrasound analytic signals with these properties. The complex-valued network operations proposed in this study improve beamforming accuracy for complex-valued ultrasound signals over traditional real-valued methods, and the deep integration of convolutional and recurrent neural networks contributes greatly to extracting rich and informative signal features. Our experimental results show imaging quality superior to existing state-of-the-art methods. More significantly, an ultrafast processing time of only 0.07 s per image promises considerable potential for clinical application. The code is available at https://github.com/zhangzm0128/CCGR.
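As a rough illustration of the complex-valued operations the abstract refers to (not the authors' implementation), the sketch below builds a complex-valued 1-D convolution from two real-valued convolutions using (a + ib)(w_r + i w_i) = (a w_r - b w_i) + i(a w_i + b w_r); the PyTorch framing, layer sizes, and toy input are assumptions.

```python
import torch
import torch.nn as nn

class ComplexConv1d(nn.Module):
    """Minimal complex-valued 1-D convolution built from two real convolutions."""
    def __init__(self, in_ch, out_ch, kernel_size, padding=0):
        super().__init__()
        self.conv_r = nn.Conv1d(in_ch, out_ch, kernel_size, padding=padding)
        self.conv_i = nn.Conv1d(in_ch, out_ch, kernel_size, padding=padding)

    def forward(self, x_real, x_imag):
        # real part: a*w_r - b*w_i ; imaginary part: a*w_i + b*w_r
        y_real = self.conv_r(x_real) - self.conv_i(x_imag)
        y_imag = self.conv_i(x_real) + self.conv_r(x_imag)
        return y_real, y_imag

# toy analytic signal: batch of 1, 2 channels, 128 samples
x_r, x_i = torch.randn(1, 2, 128), torch.randn(1, 2, 128)
y_r, y_i = ComplexConv1d(2, 8, kernel_size=5, padding=2)(x_r, x_i)
print(y_r.shape, y_i.shape)  # torch.Size([1, 8, 128]) for both parts
```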
2. Wu R, Li C, Zou J, Liu X, Zheng H, Wang S. Generalizable Reconstruction for Accelerating MR Imaging via Federated Learning With Neural Architecture Search. IEEE Transactions on Medical Imaging 2025; 44:106-117. [PMID: 39037877] [DOI: 10.1109/tmi.2024.3432388]
Abstract
Heterogeneous data captured by different scanning devices and imaging protocols can degrade the generalization performance of deep learning magnetic resonance (MR) reconstruction models. Centralized training mitigates this problem but raises privacy concerns. Federated learning is a distributed training paradigm that can use multi-institutional data for collaborative training without sharing the data themselves. However, existing federated learning methods for MR image reconstruction rely on models designed manually by experts, which are complex and computationally expensive and degrade when facing heterogeneous data distributions. In addition, these methods give inadequate consideration to fairness, that is, ensuring that training does not bias the model toward any specific dataset's distribution. To this end, this paper proposes a generalizable federated neural architecture search framework for accelerating MR imaging (GAutoMRI). Specifically, automatic neural architecture search is investigated for effective and efficient representation learning of MR images from different centers. Furthermore, we design a fairness adjustment approach that enables the model to learn fairly from the inconsistent distributions of different devices and centers, and thus to generalize well to unseen centers. Extensive experiments show that GAutoMRI achieves better performance and generalization than seven state-of-the-art federated learning methods. Moreover, the GAutoMRI model is significantly more lightweight, making it an efficient choice for MR image reconstruction tasks. The code will be made available at https://github.com/ternencewu123/GAutoMRI.
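For context, a minimal sketch of the federated averaging step that underlies most federated learning schemes is shown below; it is a generic, dataset-size-weighted FedAvg aggregation, not the GAutoMRI framework itself, which additionally performs neural architecture search and a fairness adjustment.

```python
import copy
import torch
import torch.nn as nn

def fedavg(client_state_dicts, client_sizes):
    """Generic FedAvg: dataset-size-weighted average of client model weights.
    Assumes all state dicts share the same keys and floating-point tensors."""
    total = float(sum(client_sizes))
    avg = copy.deepcopy(client_state_dicts[0])
    for key in avg:
        avg[key] = sum((n / total) * sd[key].float()
                       for sd, n in zip(client_state_dicts, client_sizes))
    return avg

# toy usage: two "clients" holding identical tiny models
clients = [nn.Linear(4, 2), nn.Linear(4, 2)]
global_weights = fedavg([c.state_dict() for c in clients], client_sizes=[100, 300])
clients[0].load_state_dict(global_weights)  # broadcast the aggregated model
```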
3. Chen X, Xia W, Yang Z, Chen H, Liu Y, Zhou J, Wang Z, Chen Y, Wen B, Zhang Y. SOUL-Net: A Sparse and Low-Rank Unrolling Network for Spectral CT Image Reconstruction. IEEE Transactions on Neural Networks and Learning Systems 2024; 35:18620-18634. [PMID: 37792650] [DOI: 10.1109/tnnls.2023.3319408]
Abstract
Spectral computed tomography (CT) is an emerging technology that generates a multienergy attenuation map of the interior of an object, extending the traditional image volume into a 4-D form. Compared with traditional CT based on energy-integrating detectors, spectral CT makes full use of spectral information, yielding higher resolution and accurate material quantification. Numerous model-based iterative reconstruction methods have been proposed for spectral CT, but they typically suffer from laborious parameter selection and expensive computational costs. In addition, because the images in different energy bins are similar, spectral CT implies a strong low-rank prior, which has been widely adopted in current iterative reconstruction models. Singular value thresholding (SVT) is an effective algorithm for solving low-rank constrained models, but it requires manual selection of thresholds, which may lead to suboptimal results. To address these problems, this article proposes a sparse and low-rank unrolling network (SOUL-Net) for spectral CT image reconstruction that learns the parameters and thresholds in a data-driven manner. Furthermore, a Taylor expansion-based neural network backpropagation method is introduced to improve numerical stability. Qualitative and quantitative results demonstrate that the proposed method outperforms several representative state-of-the-art algorithms in detail preservation and artifact reduction.
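Since the abstract centers on singular value thresholding, the sketch below shows the standard SVT proximal step with a hand-picked threshold; SOUL-Net instead learns such thresholds from data, and the toy rank-2 "pixels x energy bins" matrix is only an assumed illustration.

```python
import numpy as np

def singular_value_threshold(X, tau):
    """Soft-threshold the singular values of X by tau (the proximal operator
    of the nuclear norm), the core step of SVT-based low-rank recovery."""
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    s_shrunk = np.maximum(s - tau, 0.0)
    return U @ np.diag(s_shrunk) @ Vt

# toy example: a rank-2 matrix plus noise, standing in for a
# (pixels x energy-bins) matrix from spectral CT
rng = np.random.default_rng(0)
X = rng.standard_normal((100, 2)) @ rng.standard_normal((2, 8))
X_noisy = X + 0.1 * rng.standard_normal(X.shape)
X_lr = singular_value_threshold(X_noisy, tau=2.0)

# singular values below tau are shrunk to zero, suppressing the noise subspace
print(np.linalg.svd(X_noisy, compute_uv=False)[:4])
print(np.linalg.svd(X_lr, compute_uv=False)[:4])
```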
4. Zhu Y, Fu X, Zhang Z, Liu A, Xiong Z, Zha ZJ. Hue Guidance Network for Single Image Reflection Removal. IEEE Transactions on Neural Networks and Learning Systems 2024; 35:13701-13712. [PMID: 37220051] [DOI: 10.1109/tnnls.2023.3270938]
Abstract
Reflections from glass are ubiquitous in daily life but usually undesirable in photographs. To remove this unwanted interference, existing methods rely on either correlative auxiliary information or handcrafted priors to constrain this ill-posed problem; however, their limited ability to describe the properties of reflections leaves them unable to handle strong and complex reflection scenes. In this article, we propose a two-branch hue guidance network (HGNet) for single image reflection removal (SIRR) that integrates image information with the corresponding hue information. The complementarity between these two sources had not previously been exploited; the key observation is that hue information describes reflections well and can therefore serve as a superior constraint for the SIRR task. Accordingly, the first branch extracts salient reflection features by directly estimating the hue map, and the second branch leverages these features to locate salient reflection regions and produce a high-quality restored image. Furthermore, we design a new cyclic hue loss that provides a more accurate optimization direction for network training. Experiments substantiate the superiority of our network, especially its generalization to diverse reflection scenes, both qualitatively and quantitatively against state-of-the-art methods. Source code is available at https://github.com/zhuyr97/HGRR.
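The cyclic hue loss is not specified in the abstract; the sketch below shows one generic way to make a loss cyclic over hue values normalized to [0, 1), so that hues near 0 and near 1 are treated as close. The exact formulation in the paper may differ.

```python
import torch

def cyclic_hue_loss(h_pred, h_true):
    """Generic cyclic loss for hue (an angle on a circle, normalized to [0, 1)).
    Penalizes the shorter arc between predicted and true hue, so 0.01 and 0.99
    count as close rather than far apart."""
    diff = torch.abs(h_pred - h_true)
    wrapped = torch.minimum(diff, 1.0 - diff)  # shorter way around the circle
    return wrapped.mean()

h_pred = torch.tensor([0.02, 0.50])
h_true = torch.tensor([0.98, 0.45])
print(cyclic_hue_loss(h_pred, h_true))  # ~0.045, not ~0.50
```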
5. Ma Q, Lai Z, Wang Z, Qiu Y, Zhang H, Qu X. MRI reconstruction with enhanced self-similarity using graph convolutional network. BMC Med Imaging 2024; 24:113. [PMID: 38760778] [PMCID: PMC11100064] [DOI: 10.1186/s12880-024-01297-2]
Abstract
BACKGROUND: Recent convolutional neural networks (CNNs) achieve low-error reconstruction in fast magnetic resonance imaging (MRI). Most of them convolve the image with local kernels and exploit local information well, but non-local information embedded among image patches far apart from each other may be lost because of the limited receptive field of the convolution kernel. We aim to represent this non-local information with a graph and improve the reconstructed images using a Graph Convolutional Enhanced Self-Similarity (GCESS) network.
METHODS: First, the image is represented as a graph to capture its non-local self-similarity. Second, GCESS combines spatial convolution and graph convolution so that local and non-local information are used together. The network strengthens the non-local similarity between similar image patches during reconstruction, making the recovery of structure more reliable.
RESULTS: Experiments on in vivo knee and brain data show that the proposed method suppresses artifacts and preserves details better than state-of-the-art methods, both visually and quantitatively. Under 1D Cartesian sampling with 4× acceleration (AF = 4), the PSNR on knee data reached 34.19 dB, 1.05 dB higher than the compared methods, and the SSIM reached 0.8994, 2% higher than the compared methods. Similar results were obtained under the other sampling templates in our experiments.
CONCLUSIONS: The proposed hybrid graph convolution and spatial convolution network, through its training process, amplifies non-local self-similarities, which significantly benefits the structural integrity of the reconstructed images. Experiments demonstrate that the method outperforms state-of-the-art reconstruction methods in suppressing artifacts and preserving image details.
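To illustrate the non-local self-similarity idea (not the GCESS architecture itself), the sketch below builds a k-nearest-neighbor graph over image patches ranked by patch similarity; such a graph is the input a graph convolution would operate on. The patch size, k, and the brute-force distance computation are assumptions for a small toy image.

```python
import numpy as np

def knn_patch_graph(image, patch=5, k=8):
    """Link each image patch to its k most similar patches anywhere in the
    image: the non-local self-similarity a graph convolution can exploit."""
    H, W = image.shape
    r = patch // 2
    coords, feats = [], []
    for i in range(r, H - r):
        for j in range(r, W - r):
            coords.append((i, j))
            feats.append(image[i - r:i + r + 1, j - r:j + r + 1].ravel())
    feats = np.stack(feats)                                 # (N, patch*patch)
    sq = (feats ** 2).sum(axis=1)
    d2 = sq[:, None] + sq[None, :] - 2.0 * feats @ feats.T  # pairwise squared distances
    np.fill_diagonal(d2, np.inf)                            # exclude self-matches
    neighbors = np.argsort(d2, axis=1)[:, :k]               # k most similar patches
    return coords, neighbors

coords, nbrs = knn_patch_graph(np.random.rand(32, 32), patch=5, k=8)
print(len(coords), nbrs.shape)  # 784 (784, 8)
```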
Affiliation(s)
- Qiaoyu Ma: School of Ocean Information Engineering, Jimei University, Xiamen, China
- Zongying Lai: School of Ocean Information Engineering, Jimei University, Xiamen, China
- Zi Wang: Department of Electronic Science, Biomedical Intelligent Cloud R&D Center, Fujian Provincial Key Laboratory of Plasma and Magnetic Resonance, National Institute for Data Science in Health and Medicine, Xiamen University, Xiamen, China
- Yiran Qiu: School of Ocean Information Engineering, Jimei University, Xiamen, China
- Haotian Zhang: School of Ocean Information Engineering, Jimei University, Xiamen, China
- Xiaobo Qu: Department of Electronic Science, Biomedical Intelligent Cloud R&D Center, Fujian Provincial Key Laboratory of Plasma and Magnetic Resonance, National Institute for Data Science in Health and Medicine, Xiamen University, Xiamen, China
6. Noordman CR, Yakar D, Bosma J, Simonis FFJ, Huisman H. Complexities of deep learning-based undersampled MR image reconstruction. Eur Radiol Exp 2023; 7:58. [PMID: 37789241] [PMCID: PMC10547669] [DOI: 10.1186/s41747-023-00372-7]
Abstract
Artificial intelligence has opened a new path of innovation in magnetic resonance (MR) image reconstruction of undersampled k-space acquisitions. This review offers readers an analysis of the current deep learning-based MR image reconstruction methods. The literature in this field shows exponential growth, both in volume and complexity, as the capabilities of machine learning in solving inverse problems such as image reconstruction are explored. We review the latest developments, aiming to assist researchers and radiologists who are developing new methods or seeking to provide valuable feedback. We shed light on key concepts by exploring the technical intricacies of MR image reconstruction, highlighting the importance of raw datasets and the difficulty of evaluating diagnostic value using standard metrics.
Relevance statement: Increasingly complex algorithms output reconstructed images that are difficult to assess for robustness and diagnostic quality, necessitating high-quality datasets and collaboration with radiologists.
Key points:
• Deep learning-based image reconstruction algorithms are increasing both in complexity and performance.
• The evaluation of reconstructed images may mistake perceived image quality for diagnostic value.
• Collaboration with radiologists is crucial for advancing deep learning technology.
Affiliation(s)
- Constant Richard Noordman: Diagnostic Image Analysis Group, Department of Medical Imaging, Radboud University Medical Center, Nijmegen, 6525 GA, The Netherlands
- Derya Yakar: Medical Imaging Center, Departments of Radiology, Nuclear Medicine and Molecular Imaging, University Medical Center Groningen, Groningen, 9700 RB, The Netherlands
- Joeran Bosma: Diagnostic Image Analysis Group, Department of Medical Imaging, Radboud University Medical Center, Nijmegen, 6525 GA, The Netherlands
- Henkjan Huisman: Diagnostic Image Analysis Group, Department of Medical Imaging, Radboud University Medical Center, Nijmegen, 6525 GA, The Netherlands; Department of Circulation and Medical Imaging, Norwegian University of Science and Technology, Trondheim, 7030, Norway
7. Wang S, Wu R, Li C, Zou J, Zhang Z, Liu Q, Xi Y, Zheng H. PARCEL: Physics-Based Unsupervised Contrastive Representation Learning for Multi-Coil MR Imaging. IEEE/ACM Transactions on Computational Biology and Bioinformatics 2023; 20:2659-2670. [PMID: 36219669] [DOI: 10.1109/tcbb.2022.3213669]
Abstract
With the successful application of deep learning to magnetic resonance (MR) imaging, parallel imaging techniques based on neural networks have attracted wide attention. However, in the absence of high-quality, fully sampled training datasets, the performance of these methods is limited, and their interpretability remains weak. To tackle this issue, this paper proposes a Physics-bAsed unsupeRvised Contrastive rEpresentation Learning (PARCEL) method to speed up parallel MR imaging. Specifically, PARCEL uses a parallel framework to contrastively learn two branches of model-based unrolling networks from augmented undersampled multi-coil k-space data. A co-training loss with three essential components is designed to guide the two networks in capturing the inherent features and representations of MR images, and the final MR image is reconstructed with the trained contrastive networks. PARCEL was evaluated on two in vivo datasets and compared with five state-of-the-art methods. The results show that PARCEL learns essential representations for accurate MR reconstruction without relying on fully sampled datasets. The code will be made available at https://github.com/ternencewu123/PARCEL.
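As a rough sketch of the contrastive co-training idea (not PARCEL's full three-component loss), the snippet below computes a simple cosine-similarity consistency term between the representations produced by two branches fed with differently augmented undersampled data; the embedding dimension and normalization are assumptions.

```python
import torch
import torch.nn.functional as F

def branch_consistency_loss(z1, z2):
    """Minimal consistency term between the representations of two
    reconstruction branches trained on augmented versions of the same
    undersampled k-space data (one ingredient of a co-training loss)."""
    z1 = F.normalize(z1, dim=1)
    z2 = F.normalize(z2, dim=1)
    return (1.0 - (z1 * z2).sum(dim=1)).mean()  # 0 when the branches agree

z1, z2 = torch.randn(4, 128), torch.randn(4, 128)
print(branch_consistency_loss(z1, z2))
```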
8. Tong C, Pang Y, Wang Y. HIWDNet: A hybrid image-wavelet domain network for fast magnetic resonance image reconstruction. Comput Biol Med 2022; 151:105947. [PMID: 36334363] [DOI: 10.1016/j.compbiomed.2022.105947]
Abstract
The application of magnetic resonance imaging (MRI) is limited by the long acquisition time of k-space signals. Recently, many deep learning-based MR image reconstruction methods have been proposed to reduce acquisition time and improve image quality by reconstructing images from under-sampled k-space data. However, these methods have two shortcomings. First, the reconstruction networks are designed mainly in the image or frequency domain, ignoring the time-frequency characteristics available in the wavelet domain. Second, existing cross-domain methods use the same reconstruction network in the different transform domains, so the network cannot learn information targeted to each domain. To solve these problems, we propose a Hybrid Image-Wavelet Domain Reconstruction Network (HIWDNet) for fast MRI reconstruction. Specifically, we employ a Cross-scale Dense Feature Fusion Module (CDFFM) in the image domain to reconstruct the basic structure of MR images and introduce a Region Adaptive Artifact Removal Module (RAARM) to remove aliasing artifacts over large areas. A Wavelet Sub-band Reconstruction Module (WSRM) is then proposed to refine the wavelet sub-bands and improve the accuracy of HIWDNet. The proposed method is evaluated under different sampling patterns on the fastMRI, CC359, and IXI datasets. Extensive experimental results show that HIWDNet achieves better SSIM and PSNR than other methods.
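To make the wavelet-domain branch concrete, the sketch below applies a one-level 2-D discrete wavelet transform that splits an image into LL/LH/HL/HH sub-bands and then inverts it, using PyWavelets; the actual WSRM refinement network is not reproduced here, and the Haar wavelet is an assumption.

```python
import numpy as np
import pywt

# One-level 2-D DWT splits an image into an approximation sub-band (LL)
# and three detail sub-bands (LH, HL, HH); a wavelet-domain branch can
# refine these sub-bands separately before the inverse transform.
image = np.random.rand(256, 256)
LL, (LH, HL, HH) = pywt.dwt2(image, 'haar')

# ... a sub-band refinement network would operate on LL, LH, HL, HH here ...

restored = pywt.idwt2((LL, (LH, HL, HH)), 'haar')
print(LL.shape, np.allclose(restored, image))  # (128, 128) True
```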
Affiliation(s)
- Chuan Tong: TJK-BIIT Lab, School of Electrical and Information Engineering, Tianjin University, Tianjin, 300072, China
- Yanwei Pang: TJK-BIIT Lab, School of Electrical and Information Engineering, Tianjin University, Tianjin, 300072, China
- Yueze Wang: TJK-BIIT Lab, School of Electrical and Information Engineering, Tianjin University, Tianjin, 300072, China
9. Kakigi T, Sakamoto R, Tagawa H, Kuriyama S, Goto Y, Nambu M, Sagawa H, Numamoto H, Miyake KK, Saga T, Matsuda S, Nakamoto Y. Diagnostic advantage of thin slice 2D MRI and multiplanar reconstruction of the knee joint using deep learning based denoising approach. Sci Rep 2022; 12:10362. [PMID: 35725760] [PMCID: PMC9209466] [DOI: 10.1038/s41598-022-14190-1]
Abstract
The purpose of this study was to evaluate whether thin-slice, high-resolution 2D fat-suppressed proton density-weighted imaging of the knee joint with multiplanar reconstruction (MPR), combined with a denoising approach based on deep learning-based reconstruction (dDLR), is more useful than a 3D fat-suppressed proton density-weighted multiplanar voxel image. Twelve patients (13 knees) who underwent knee MRI at 3 T were enrolled. The denoising effect was evaluated quantitatively by comparing the coefficient of variation (CV) before and after dDLR. For the qualitative assessment, two radiologists rated image quality, artifacts, anatomical structures, and abnormal findings for the 2D and 3D images on a 5-point Likert scale. All results were analyzed statistically, and Gwet's agreement coefficients were calculated. For abnormal findings, we also calculated the percentage of cases with agreement at high confidence. The CV after dDLR was significantly lower than before dDLR (p < 0.05). For image quality, artifacts, and anatomical structures, no significant differences were found except for flow artifacts (p < 0.05). Agreement on abnormal findings was significantly higher for 2D than for 3D (p < 0.05), and the percentage of high-confidence readings was also higher for 2D (p < 0.05). By applying dDLR to 2D imaging, image quality nearly equivalent to 3D could be obtained; furthermore, abnormal findings could be depicted with greater confidence and consistency, indicating that 2D with dDLR is a promising imaging method for evaluating knee joint disease.
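The coefficient of variation used for the quantitative assessment is, in its standard form, the ratio of standard deviation to mean over a region of interest; the sketch below computes it that way (the paper's exact ROI protocol is not reproduced here).

```python
import numpy as np

def coefficient_of_variation(roi):
    """Coefficient of variation (SD / mean) over a region of interest;
    a lower CV after denoising indicates less noise relative to signal."""
    roi = np.asarray(roi, dtype=float)
    return roi.std(ddof=1) / roi.mean()

noisy = 100 + 10 * np.random.randn(50, 50)     # simulated ROI before denoising
denoised = 100 + 4 * np.random.randn(50, 50)   # simulated ROI after denoising
print(coefficient_of_variation(noisy), coefficient_of_variation(denoised))
```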
Affiliation(s)
- Takahide Kakigi: Department of Diagnostic Imaging and Nuclear Medicine, Kyoto University Graduate School of Medicine, 54 Shogoin Kawahara-cho, Sakyo-ku, Kyoto, 606-8507, Japan
- Ryo Sakamoto: Department of Diagnostic Imaging and Nuclear Medicine, Kyoto University Graduate School of Medicine, 54 Shogoin Kawahara-cho, Sakyo-ku, Kyoto, 606-8507, Japan; Preemptive Medicine and Lifestyle-Related Disease Research Center, Kyoto University Hospital, 53 Shogoin Kawahara-cho, Sakyo-ku, Kyoto, 606-8507, Japan
- Hiroshi Tagawa: Department of Diagnostic Imaging and Nuclear Medicine, Kyoto University Graduate School of Medicine, 54 Shogoin Kawahara-cho, Sakyo-ku, Kyoto, 606-8507, Japan
- Shinichi Kuriyama: Department of Orthopaedic Surgery, Kyoto University Graduate School of Medicine, 54 Shogoin Kawahara-cho, Sakyo-ku, Kyoto, 606-8507, Japan
- Yoshihito Goto: Department of Health Informatics, Kyoto University Graduate School of Medicine/School of Public Health, Yoshida Konoe-cho, Sakyo-ku, Kyoto, 606-8501, Japan
- Masahito Nambu: MRI Systems Division, Canon Medical Systems Corporation, 1385 Shimoishigami, Otawara, Tochigi, 324-8550, Japan
- Hajime Sagawa: Division of Clinical Radiology Service, Kyoto University Hospital, 54 Shogoin Kawahara-cho, Sakyo-ku, Kyoto, 606-8507, Japan
- Hitomi Numamoto: Department of Advanced Medical Imaging Research, Kyoto University Graduate School of Medicine, 54 Shogoin Kawahara-cho, Sakyo-ku, Kyoto, 606-8507, Japan
- Kanae Kawai Miyake: Department of Diagnostic Imaging and Nuclear Medicine, Kyoto University Graduate School of Medicine, 54 Shogoin Kawahara-cho, Sakyo-ku, Kyoto, 606-8507, Japan; Department of Advanced Medical Imaging Research, Kyoto University Graduate School of Medicine, 54 Shogoin Kawahara-cho, Sakyo-ku, Kyoto, 606-8507, Japan
- Tsuneo Saga: Department of Diagnostic Imaging and Nuclear Medicine, Kyoto University Graduate School of Medicine, 54 Shogoin Kawahara-cho, Sakyo-ku, Kyoto, 606-8507, Japan; Department of Advanced Medical Imaging Research, Kyoto University Graduate School of Medicine, 54 Shogoin Kawahara-cho, Sakyo-ku, Kyoto, 606-8507, Japan
- Shuichi Matsuda: Department of Orthopaedic Surgery, Kyoto University Graduate School of Medicine, 54 Shogoin Kawahara-cho, Sakyo-ku, Kyoto, 606-8507, Japan
- Yuji Nakamoto: Department of Diagnostic Imaging and Nuclear Medicine, Kyoto University Graduate School of Medicine, 54 Shogoin Kawahara-cho, Sakyo-ku, Kyoto, 606-8507, Japan