51. Liao S, Mo Z, Zeng M, Wu J, Gu Y, Li G, Quan G, Lv Y, Liu L, Yang C, Wang X, Huang X, Zhang Y, Cao W, Dong Y, Wei Y, Zhou Q, Xiao Y, Zhan Y, Zhou XS, Shi F, Shen D. Fast and low-dose medical imaging generation empowered by hybrid deep-learning and iterative reconstruction. Cell Rep Med 2023;4:101119. PMID: 37467726; PMCID: PMC10394257; DOI: 10.1016/j.xcrm.2023.101119
Abstract
Fast and low-dose reconstructions of medical images are highly desired in clinical routines. We propose a hybrid deep-learning and iterative reconstruction (hybrid DL-IR) framework and apply it to fast magnetic resonance imaging (MRI), fast positron emission tomography (PET), and low-dose computed tomography (CT) image generation tasks. First, in a retrospective MRI study (6,066 cases), we demonstrate its capability of handling 3- to 10-fold under-sampled MR data, enabling organ-level coverage with only 10- to 100-s scan time; second, a low-dose CT study (142 cases) shows that our framework can successfully alleviate the noise and streak artifacts in scans performed with only 10% radiation dose (0.61 mGy); and last, a fast whole-body PET study (131 cases) allows us to faithfully reconstruct tumor-induced lesions, including small ones (<4 mm), from 2- to 4-fold-accelerated PET acquisition (30-60 s per bed position). This study offers a promising avenue for accurate and high-quality image reconstruction with broad clinical value.
Affiliation(s)
- Shu Liao: Department of Research and Development, Shanghai United Imaging Intelligence Co., Ltd., Shanghai 200232, China
- Zhanhao Mo: Department of Radiology, China-Japan Union Hospital of Jilin University, Changchun 130033, China
- Mengsu Zeng: Department of Radiology, Shanghai Institute of Medical Imaging, Zhongshan Hospital, Fudan University, Shanghai 200032, China
- Jiaojiao Wu: Department of Research and Development, Shanghai United Imaging Intelligence Co., Ltd., Shanghai 200232, China
- Yuning Gu: School of Biomedical Engineering, ShanghaiTech University, Shanghai 201210, China
- Guobin Li: Shanghai United Imaging Healthcare Co., Ltd., Shanghai 201800, China
- Guotao Quan: Shanghai United Imaging Healthcare Co., Ltd., Shanghai 201800, China
- Yang Lv: Shanghai United Imaging Healthcare Co., Ltd., Shanghai 201800, China
- Lin Liu: Department of Radiology, China-Japan Union Hospital of Jilin University, Changchun 130033, China
- Chun Yang: Department of Radiology, Shanghai Institute of Medical Imaging, Zhongshan Hospital, Fudan University, Shanghai 200032, China
- Xinglie Wang: Department of Research and Development, Shanghai United Imaging Intelligence Co., Ltd., Shanghai 200232, China
- Xiaoqian Huang: Department of Research and Development, Shanghai United Imaging Intelligence Co., Ltd., Shanghai 200232, China
- Yang Zhang: Department of Research and Development, Shanghai United Imaging Intelligence Co., Ltd., Shanghai 200232, China
- Wenjing Cao: Shanghai United Imaging Healthcare Co., Ltd., Shanghai 201800, China
- Yun Dong: Shanghai United Imaging Healthcare Co., Ltd., Shanghai 201800, China
- Ying Wei: Department of Research and Development, Shanghai United Imaging Intelligence Co., Ltd., Shanghai 200232, China
- Qing Zhou: Department of Research and Development, Shanghai United Imaging Intelligence Co., Ltd., Shanghai 200232, China
- Yongqin Xiao: Department of Research and Development, Shanghai United Imaging Intelligence Co., Ltd., Shanghai 200232, China
- Yiqiang Zhan: Department of Research and Development, Shanghai United Imaging Intelligence Co., Ltd., Shanghai 200232, China
- Xiang Sean Zhou: Department of Research and Development, Shanghai United Imaging Intelligence Co., Ltd., Shanghai 200232, China
- Feng Shi: Department of Research and Development, Shanghai United Imaging Intelligence Co., Ltd., Shanghai 200232, China
- Dinggang Shen: Department of Research and Development, Shanghai United Imaging Intelligence Co., Ltd., Shanghai 200232, China; School of Biomedical Engineering, ShanghaiTech University, Shanghai 201210, China; Shanghai Clinical Research and Trial Center, Shanghai 200122, China
52. Zhou Y, Wang H, Liu C, Liao B, Li Y, Zhu Y, Hu Z, Liao J, Liang D. Recent advances in highly accelerated 3D MRI. Phys Med Biol 2023;68:14TR01. PMID: 36863026; DOI: 10.1088/1361-6560/acc0cd
Abstract
Three-dimensional MRI has gained increasing popularity in various clinical applications due to its improved through-plane spatial resolution, which enhances the detection of subtle abnormalities and provides valuable clinical information. However, long data acquisition times and high computational cost pose significant challenges for 3D MRI. In this comprehensive review article, we summarize the latest advancements in accelerated 3D MR techniques. Covering over 200 research studies conducted over the past 20 years, we explore the development of MR signal excitation and encoding, advancements in reconstruction algorithms, and potential clinical applications. We hope that this survey serves as a valuable resource on the current state of the field and as a guide for future research in accelerated 3D MRI.
Affiliation(s)
- Yihang Zhou: Research Centre for Medical AI, Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen, Guangdong, People's Republic of China; Research Department, Hong Kong Sanatorium and Hospital, Hong Kong SAR, People's Republic of China
- Haifeng Wang: Paul C. Lauterbur Research Centre for Biomedical Imaging, Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen, Guangdong, People's Republic of China; Shenzhen College of Advanced Technology, University of Chinese Academy of Sciences, Shenzhen, Guangdong, People's Republic of China
- Congcong Liu: Paul C. Lauterbur Research Centre for Biomedical Imaging, Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen, Guangdong, People's Republic of China; Shenzhen College of Advanced Technology, University of Chinese Academy of Sciences, Shenzhen, Guangdong, People's Republic of China
- Binyu Liao: Paul C. Lauterbur Research Centre for Biomedical Imaging, Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen, Guangdong, People's Republic of China
- Ye Li: Paul C. Lauterbur Research Centre for Biomedical Imaging, Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen, Guangdong, People's Republic of China; Shenzhen College of Advanced Technology, University of Chinese Academy of Sciences, Shenzhen, Guangdong, People's Republic of China
- Yanjie Zhu: Paul C. Lauterbur Research Centre for Biomedical Imaging, Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen, Guangdong, People's Republic of China; Shenzhen College of Advanced Technology, University of Chinese Academy of Sciences, Shenzhen, Guangdong, People's Republic of China
- Zhangqi Hu: Department of Neurology, Shenzhen Children's Hospital, Shenzhen, Guangdong, People's Republic of China
- Jianxiang Liao: Department of Neurology, Shenzhen Children's Hospital, Shenzhen, Guangdong, People's Republic of China
- Dong Liang: Research Centre for Medical AI, Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen, Guangdong, People's Republic of China; Paul C. Lauterbur Research Centre for Biomedical Imaging, Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen, Guangdong, People's Republic of China; Shenzhen College of Advanced Technology, University of Chinese Academy of Sciences, Shenzhen, Guangdong, People's Republic of China
53. Fang Z, Lai KW, van Zijl P, Li X, Sulam J. DeepSTI: Towards tensor reconstruction using fewer orientations in susceptibility tensor imaging. Med Image Anal 2023;87:102829. PMID: 37146440; PMCID: PMC10288385; DOI: 10.1016/j.media.2023.102829
Abstract
Susceptibility tensor imaging (STI) is an emerging magnetic resonance imaging technique that characterizes anisotropic tissue magnetic susceptibility with a second-order tensor model. STI has the potential to provide information for both the reconstruction of white matter fiber pathways and the detection of myelin changes in the brain at millimeter resolution or less, which would be of great value for understanding brain structure and function in the healthy and diseased brain. However, the application of STI in vivo has been hindered by its cumbersome and time-consuming acquisition, which requires measuring susceptibility-induced MR phase changes at multiple head orientations. Usually, sampling at more than six orientations is required to obtain sufficient information for the ill-posed STI dipole inversion. This complexity is compounded by the limited head rotation angles imposed by the physical constraints of the head coil. As a result, STI has not yet been widely applied in human studies in vivo. In this work, we tackle these issues by proposing an image reconstruction algorithm for STI that leverages data-driven priors. Our method, called DeepSTI, learns the data prior implicitly via a deep neural network that approximates the proximal operator of a regularizer function for STI. The dipole inversion problem is then solved iteratively using the learned proximal network. Experimental results using both simulated and in vivo human data demonstrate great improvement over state-of-the-art algorithms in terms of the reconstructed tensor image, principal eigenvector maps, and tractography results, while allowing for tensor reconstruction with MR phase measured at far fewer than six orientations. Notably, our method achieves promising reconstructions from only one orientation in vivo in humans, and we demonstrate a potential application of this technique for estimating lesion susceptibility anisotropy in patients with multiple sclerosis.
Affiliation(s)
- Zhenghan Fang: Department of Biomedical Engineering, Johns Hopkins University, Baltimore, MD 21218, USA; Johns Hopkins Kavli Neuroscience Discovery Institute, Baltimore, MD 21218, USA
- Kuo-Wei Lai: Department of Biomedical Engineering, Johns Hopkins University, Baltimore, MD 21218, USA
- Peter van Zijl: F.M. Kirby Research Center for Functional Brain Imaging, Kennedy Krieger Institute, Baltimore, MD 21205, USA; Department of Radiology and Radiological Sciences, Johns Hopkins University, Baltimore, MD 21205, USA
- Xu Li: F.M. Kirby Research Center for Functional Brain Imaging, Kennedy Krieger Institute, Baltimore, MD 21205, USA; Department of Radiology and Radiological Sciences, Johns Hopkins University, Baltimore, MD 21205, USA
- Jeremias Sulam: Department of Biomedical Engineering, Johns Hopkins University, Baltimore, MD 21218, USA; Johns Hopkins Kavli Neuroscience Discovery Institute, Baltimore, MD 21218, USA
54. Zhu J, Chen X, Liu Y, Yang B, Wei R, Qin S, Yang Z, Hu Z, Dai J, Men K. Improving accelerated 3D imaging in MRI-guided radiotherapy for prostate cancer using a deep learning method. Radiat Oncol 2023;18:108. PMID: 37393282; PMCID: PMC10314402; DOI: 10.1186/s13014-023-02306-4
Abstract
PURPOSE This study aimed to improve image quality for high-speed MR imaging in online adaptive radiotherapy for prostate cancer using a deep learning method, and to evaluate its benefits for image registration. METHODS Sixty pairs of 1.5 T MR images acquired with an MR-linac were collected. The data comprised low-speed, high-quality (LSHQ) and high-speed, low-quality (HSLQ) MR images. We proposed a CycleGAN, based on a data augmentation technique, to learn the mapping between the HSLQ and LSHQ images and then generate synthetic LSHQ (synLSHQ) images from the HSLQ images. Five-fold cross-validation was employed to test the CycleGAN model. The normalized mean absolute error (nMAE), peak signal-to-noise ratio (PSNR), structural similarity index (SSIM), and edge keeping index (EKI) were calculated to assess image quality. The Jacobian determinant value (JDV), Dice similarity coefficient (DSC), and mean distance to agreement (MDA) were used to analyze deformable registration. RESULTS Compared with the LSHQ, the proposed synLSHQ achieved comparable image quality while reducing imaging time by ~66%. Compared with the HSLQ, the synLSHQ had better image quality, with improvements of 57%, 3.4%, 26.9%, and 3.6% for nMAE, SSIM, PSNR, and EKI, respectively. Furthermore, the synLSHQ enhanced registration accuracy, with a superior mean JDV (6%) and preferable DSC and MDA values compared with the HSLQ. CONCLUSION The proposed method can generate high-quality images from high-speed scanning sequences. As a result, it shows potential to shorten scan time while ensuring the accuracy of radiotherapy.
Affiliation(s)
- Ji Zhu: National Cancer Center, National Clinical Research Center for Cancer, Cancer Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College, Beijing 100021, China
- Xinyuan Chen: National Cancer Center, National Clinical Research Center for Cancer, Cancer Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College, Beijing 100021, China
- Yuxiang Liu: National Cancer Center, National Clinical Research Center for Cancer, Cancer Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College, Beijing 100021, China; School of Physics and Technology, Wuhan University, Wuhan 430072, China
- Bining Yang: National Cancer Center, National Clinical Research Center for Cancer, Cancer Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College, Beijing 100021, China
- Ran Wei: National Cancer Center, National Clinical Research Center for Cancer, Cancer Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College, Beijing 100021, China
- Shirui Qin: National Cancer Center, National Clinical Research Center for Cancer, Cancer Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College, Beijing 100021, China
- Zhuanbo Yang: National Cancer Center, National Clinical Research Center for Cancer, Cancer Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College, Beijing 100021, China
- Zhihui Hu: National Cancer Center, National Clinical Research Center for Cancer, Cancer Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College, Beijing 100021, China
- Jianrong Dai: National Cancer Center, National Clinical Research Center for Cancer, Cancer Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College, Beijing 100021, China
- Kuo Men: National Cancer Center, National Clinical Research Center for Cancer, Cancer Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College, Beijing 100021, China
55. Bi W, Xv J, Song M, Hao X, Gao D, Qi F. Linear fine-tuning: a linear transformation based transfer strategy for deep MRI reconstruction. Front Neurosci 2023;17:1202143. PMID: 37409107; PMCID: PMC10318193; DOI: 10.3389/fnins.2023.1202143
Abstract
Introduction Fine-tuning (FT) is a widely adopted transfer learning method for deep learning-based magnetic resonance imaging (MRI) reconstruction. In this approach, the reconstruction model is initialized with pre-trained weights derived from a source domain with ample data and subsequently updated with limited data from the target domain. However, this direct full-weight update strategy risks "catastrophic forgetting" and overfitting, hindering its effectiveness. The goal of this study is to develop a zero-weight update transfer strategy that preserves pre-trained generic knowledge and reduces overfitting. Methods Based on the commonality between the source and target domains, we assume a linear transformation relationship between the optimal model weights of the source domain and those of the target domain. Accordingly, we propose a novel transfer strategy, linear fine-tuning (LFT), which introduces scaling and shifting (SS) factors into the pre-trained model. In contrast to FT, LFT updates only the SS factors in the transfer phase, while the pre-trained weights remain fixed. Results To evaluate the proposed LFT, we designed three transfer scenarios and conducted a comparative analysis of FT, LFT, and other methods at various sampling rates and data volumes. In the transfer scenario between different contrasts, LFT outperforms typical transfer strategies at various sampling rates and considerably reduces artifacts in reconstructed images. In transfer scenarios between different slice directions or anatomical structures, LFT surpasses FT, particularly as the number of target-domain training images decreases, with a maximum improvement of up to 2.06 dB (5.89%) in peak signal-to-noise ratio. Discussion The LFT strategy shows great potential to address "catastrophic forgetting" and overfitting in transfer scenarios for MRI reconstruction while reducing reliance on the amount of target-domain data. Linear fine-tuning is expected to shorten the development cycle of reconstruction models for adapting to complicated clinical scenarios, thereby enhancing the clinical applicability of deep MRI reconstruction.
Affiliation(s)
- Wanqing Bi: The Centers for Biomedical Engineering, University of Science and Technology of China, Hefei, Anhui, China
- Jianan Xv: The Centers for Biomedical Engineering, University of Science and Technology of China, Hefei, Anhui, China
- Mengdie Song: The Centers for Biomedical Engineering, University of Science and Technology of China, Hefei, Anhui, China
- Xiaohan Hao: The Centers for Biomedical Engineering, University of Science and Technology of China, Hefei, Anhui, China; Fuqing Medical Co., Ltd., Hefei, Anhui, China
- Dayong Gao: Department of Mechanical Engineering, University of Washington, Seattle, WA, United States
- Fulang Qi: The Centers for Biomedical Engineering, University of Science and Technology of China, Hefei, Anhui, China
56. Güngör A, Dar SU, Öztürk Ş, Korkmaz Y, Bedel HA, Elmas G, Ozbey M, Çukur T. Adaptive diffusion priors for accelerated MRI reconstruction. Med Image Anal 2023;88:102872. PMID: 37384951; DOI: 10.1016/j.media.2023.102872
Abstract
Deep MRI reconstruction is commonly performed with conditional models that de-alias undersampled acquisitions to recover images consistent with fully-sampled data. Since conditional models are trained with knowledge of the imaging operator, they can show poor generalization across variable operators. Unconditional models instead learn generative image priors decoupled from the operator to improve reliability against domain shifts related to the imaging operator. Recent diffusion models are particularly promising given their high sample fidelity. Nevertheless, inference with a static image prior can perform suboptimally. Here we propose the first adaptive diffusion prior for MRI reconstruction, AdaDiff, to improve performance and reliability against domain shifts. AdaDiff leverages an efficient diffusion prior trained via adversarial mapping over large reverse diffusion steps. A two-phase reconstruction is executed following training: a rapid-diffusion phase that produces an initial reconstruction with the trained prior, and an adaptation phase that further refines the result by updating the prior to minimize data-consistency loss. Demonstrations on multi-contrast brain MRI clearly indicate that AdaDiff outperforms competing conditional and unconditional methods under domain shifts, and achieves superior or on-par within-domain performance.
Affiliation(s)
- Alper Güngör: Department of Electrical and Electronics Engineering, Bilkent University, Ankara 06800, Turkey; National Magnetic Resonance Research Center (UMRAM), Bilkent University, Ankara 06800, Turkey; ASELSAN Research Center, Ankara 06200, Turkey
- Salman Uh Dar: Department of Electrical and Electronics Engineering, Bilkent University, Ankara 06800, Turkey; National Magnetic Resonance Research Center (UMRAM), Bilkent University, Ankara 06800, Turkey; Department of Internal Medicine III, Heidelberg University Hospital, Heidelberg 69120, Germany
- Şaban Öztürk: Department of Electrical and Electronics Engineering, Bilkent University, Ankara 06800, Turkey; National Magnetic Resonance Research Center (UMRAM), Bilkent University, Ankara 06800, Turkey; Department of Electrical and Electronics Engineering, Amasya University, Amasya 05100, Turkey
- Yilmaz Korkmaz: Department of Electrical and Electronics Engineering, Bilkent University, Ankara 06800, Turkey; National Magnetic Resonance Research Center (UMRAM), Bilkent University, Ankara 06800, Turkey
- Hasan A Bedel: Department of Electrical and Electronics Engineering, Bilkent University, Ankara 06800, Turkey; National Magnetic Resonance Research Center (UMRAM), Bilkent University, Ankara 06800, Turkey
- Gokberk Elmas: Department of Electrical and Electronics Engineering, Bilkent University, Ankara 06800, Turkey; National Magnetic Resonance Research Center (UMRAM), Bilkent University, Ankara 06800, Turkey
- Muzaffer Ozbey: Department of Electrical and Electronics Engineering, Bilkent University, Ankara 06800, Turkey; National Magnetic Resonance Research Center (UMRAM), Bilkent University, Ankara 06800, Turkey
- Tolga Çukur: Department of Electrical and Electronics Engineering, Bilkent University, Ankara 06800, Turkey; National Magnetic Resonance Research Center (UMRAM), Bilkent University, Ankara 06800, Turkey; Neuroscience Program, Bilkent University, Ankara 06800, Turkey
57. Gao Z, Guo Y, Zhang J, Zeng T, Yang G. Hierarchical Perception Adversarial Learning Framework for Compressed Sensing MRI. IEEE Trans Med Imaging 2023;42:1859-1874. PMID: 37022266; DOI: 10.1109/tmi.2023.3240862
Abstract
The long acquisition time has limited the accessibility of magnetic resonance imaging (MRI) because it leads to patient discomfort and motion artifacts. Although several MRI techniques have been proposed to reduce acquisition time, compressed sensing MRI (CS-MRI) enables fast acquisition without compromising SNR and resolution. However, existing CS-MRI methods suffer from aliasing artifacts, which produce noise-like textures and obscure fine details, leading to unsatisfactory reconstruction performance. To tackle this challenge, we propose a hierarchical perception adversarial learning framework (HP-ALF). HP-ALF perceives image information through a hierarchical mechanism: image-level perception and patch-level perception. The former reduces the visual perception difference across the entire image and thus removes aliasing artifacts; the latter reduces this difference within regions of the image and thus recovers fine details. Specifically, HP-ALF achieves the hierarchical mechanism through multilevel perspective discrimination, which provides information from two perspectives (overall and regional) for adversarial learning. It also utilizes a global and local coherent discriminator to provide structural information to the generator during training. In addition, HP-ALF contains a context-aware learning block to effectively exploit slice information between individual images for better reconstruction performance. Experiments on three datasets demonstrate the effectiveness of HP-ALF and its superiority over comparative methods.
58. Usui K, Muro I, Shibukawa S, Goto M, Ogawa K, Sakano Y, Kyogoku S, Daida H. Evaluation of motion artefact reduction depending on the artefacts' directions in head MRI using conditional generative adversarial networks. Sci Rep 2023;13:8526. PMID: 37237139; DOI: 10.1038/s41598-023-35794-1
Abstract
Motion artefacts caused by the patient's body movements affect magnetic resonance imaging (MRI) accuracy. This study aimed to compare and evaluate the accuracy of motion artefact correction using a conditional generative adversarial network (CGAN) against autoencoder and U-net models. The training dataset consisted of motion artefacts generated through simulations. Motion artefacts occur in the phase-encoding direction, which was set to either the horizontal or vertical direction of the image. To create T2-weighted axial images with simulated motion artefacts, 5500 head images were used in each direction. Of these data, 90% were used for training and the remainder for evaluating image quality; the validation data used during model training comprised 10% of the training dataset. The training data were divided by the direction (horizontal or vertical) in which motion artefacts appear, and the effect of combining these subsets in the training dataset was verified. The corrected images were evaluated using the structural similarity index (SSIM) and peak signal-to-noise ratio (PSNR), and the metrics were compared against images without motion artefacts. The best improvements in SSIM and PSNR were observed when the direction of motion-artefact occurrence was consistent between the training and evaluation datasets. Nevertheless, SSIM > 0.9 and PSNR > 29 dB were achieved by the model trained on both image directions, and this model exhibited the highest robustness to actual patient motion in head MRI images. Moreover, the image quality of the CGAN-corrected images was closest to that of the original images, with improvement rates of approximately 26% for SSIM and 7.7% for PSNR. The CGAN model demonstrated high image reproducibility, and the best performance was obtained when the training conditions matched the direction in which motion artefacts appeared.
Affiliation(s)
- Keisuke Usui: Department of Radiological Technology, Faculty of Health Science, Juntendo University, Tokyo, Japan
- Isao Muro: Department of Radiological Technology, Faculty of Health Science, Juntendo University, Tokyo, Japan
- Syuhei Shibukawa: Department of Radiological Technology, Faculty of Health Science, Juntendo University, Tokyo, Japan
- Masami Goto: Department of Radiological Technology, Faculty of Health Science, Juntendo University, Tokyo, Japan
- Koichi Ogawa: Faculty of Science and Engineering, Hosei University, Tokyo, Japan
- Yasuaki Sakano: Department of Radiological Technology, Faculty of Health Science, Juntendo University, Tokyo, Japan
- Shinsuke Kyogoku: Department of Radiological Technology, Faculty of Health Science, Juntendo University, Tokyo, Japan
- Hiroyuki Daida: Department of Radiological Technology, Faculty of Health Science, Juntendo University, Tokyo, Japan
59. Qiao X, Huang Y, Li W. MEDL-Net: A model-based neural network for MRI reconstruction with enhanced deep learned regularizers. Magn Reson Med 2023;89:2062-2075. PMID: 36656129; DOI: 10.1002/mrm.29575
Abstract
PURPOSE To improve the MRI reconstruction performance of model-based networks and to alleviate their large demand for GPU memory. METHODS A model-based neural network with enhanced deep learned regularizers (MEDL-Net) was proposed. The MEDL-Net is separated into several submodules, each of which consists of several cascades that mimic the optimization steps in conventional MRI reconstruction algorithms. Within each submodule, information from shallow cascades is densely connected to later ones to enrich their inputs, and additional revising blocks (RB) are stacked at the end of the submodules to add flexibility. Moreover, a composition loss function was designed to explicitly supervise the RBs. RESULTS Network performance was evaluated on a publicly available dataset. MEDL-Net quantitatively outperforms state-of-the-art methods on different MR image sequences at different acceleration rates (four-fold and six-fold). Moreover, the reconstructed images show that detailed textures are better preserved. In addition, fewer cascades are required to achieve the same reconstruction quality compared with other model-based networks. CONCLUSION In this study, a more efficient model-based deep network was proposed to reconstruct MR images. The experimental results indicate that the proposed method improves reconstruction performance with fewer cascades, alleviating the large demand for GPU memory.
Affiliation(s)
- Xiaoyu Qiao: Chongqing Key Laboratory of Image Cognition, Chongqing University of Posts and Telecommunications, Chongqing, China
- Yuping Huang: Chongqing Key Laboratory of Image Cognition, Chongqing University of Posts and Telecommunications, Chongqing, China
- Weisheng Li: Chongqing Key Laboratory of Image Cognition, Chongqing University of Posts and Telecommunications, Chongqing, China
60. Rezaei SR, Ahmadi A. A GAN-based method for 3D lung tumor reconstruction boosted by a knowledge transfer approach. Multimed Tools Appl 2023:1-27. PMID: 37362675; PMCID: PMC10106883; DOI: 10.1007/s11042-023-15232-0
Abstract
Three-dimensional (3D) image reconstruction of tumors has been one of the most effective techniques for accurately visualizing tumor structures at high resolution for treatment, and it requires a set of two-dimensional medical images such as CT slices. In this paper we propose a novel method based on generative adversarial networks (GANs) for 3D lung tumor reconstruction from CT images. The proposed method consists of three stages: lung segmentation, tumor segmentation, and 3D lung tumor reconstruction. Lung and tumor segmentation are performed using snake optimization and Gustafson-Kessel (GK) clustering. In the 3D reconstruction stage, features are first extracted with a pre-trained VGG model from the tumors detected in the 2D CT slices. A sequence of extracted features is then fed into an LSTM to output compressed features. Finally, the compressed features are used as input to a GAN, whose generator is responsible for high-level reconstruction of the 3D image of the lung tumor. The main novelty of this paper is the use of a GAN to reconstruct a 3D lung tumor model, to the best of our knowledge for the first time. We also used knowledge transfer to extract features from 2D images to speed up the training process. On the LUNA dataset, the proposed model showed better results than the state of the art: according to the HD and ED metrics, it achieved the lowest values, 3.02 and 1.06, respectively, compared with other methods. The experimental results show that the proposed method performs better than previous similar methods and can help practitioners in the treatment process.
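The HD metric quoted in the abstract is commonly the symmetric Hausdorff distance between surface point sets; assuming that reading, a minimal sketch with toy point clouds (not the paper's data):

```python
import numpy as np

def hausdorff_distance(A, B):
    """Symmetric Hausdorff distance between two point sets of shape (n,3), (m,3)."""
    # Pairwise Euclidean distances between every point in A and every point in B.
    d = np.linalg.norm(A[:, None, :] - B[None, :, :], axis=-1)
    return max(d.min(axis=1).max(), d.min(axis=0).max())

# Two toy "tumor surface" point clouds: B is A shifted by 0.5 along x.
A = np.array([[0., 0., 0.], [1., 0., 0.], [0., 1., 0.]])
B = A + np.array([0.5, 0., 0.])

print(hausdorff_distance(A, B))   # every point is 0.5 away from the other set
```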
Collapse
Affiliation(s)
- Seyed Reza Rezaei
- Department of Industrial Engineering and Management Systems, Amirkabir University of Technology, Tehran, Iran
| | - Abbas Ahmadi
- Department of Industrial Engineering and Management Systems, Amirkabir University of Technology, Tehran, Iran
| |
Collapse
|
61
|
Chun IY, Huang Z, Lim H, Fessler JA. Momentum-Net: Fast and Convergent Iterative Neural Network for Inverse Problems. IEEE TRANSACTIONS ON PATTERN ANALYSIS AND MACHINE INTELLIGENCE 2023; 45:4915-4931. [PMID: 32750839 PMCID: PMC8011286 DOI: 10.1109/tpami.2020.3012955] [Citation(s) in RCA: 27] [Impact Index Per Article: 13.5] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 05/25/2023]
Abstract
Iterative neural networks (INNs) are rapidly gaining attention for solving inverse problems in imaging, image processing, and computer vision. INNs combine regression NNs with an iterative model-based image reconstruction (MBIR) algorithm, often leading to both good generalization capability and reconstruction quality that surpasses existing MBIR optimization models. This paper proposes the first fast and convergent INN architecture, Momentum-Net, by generalizing a block-wise MBIR algorithm that uses momentum and majorizers with regression NNs. For fast MBIR, Momentum-Net uses momentum terms in extrapolation modules and noniterative MBIR modules at each iteration by using majorizers, where each iteration of Momentum-Net consists of three core modules: image refining, extrapolation, and MBIR. Momentum-Net guarantees convergence to a fixed point for general differentiable (non)convex MBIR functions (or data-fit terms) and convex feasible sets, under two asymptotic conditions. To account for data-fit variations across training and testing samples, we also propose a regularization parameter selection scheme based on the "spectral spread" of majorization matrices. Numerical experiments for light-field photography using a focal stack and sparse-view computed tomography demonstrate that, given identical regression NN architectures, Momentum-Net significantly improves MBIR speed and accuracy over several existing INNs; it significantly improves reconstruction quality compared to a state-of-the-art MBIR method in each application.
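A toy sketch of the three-module iteration named above (extrapolation, image refining, noniterative MBIR via a quadratic majorizer). The small dense forward model, the shrinkage stand-in for the refining NN, and the momentum schedule are illustrative assumptions, not the paper's settings:

```python
import numpy as np

rng = np.random.default_rng(1)
n, m = 8, 6
A = rng.standard_normal((m, n))                 # toy linear forward model
x_true = rng.standard_normal(n)
y = A @ x_true                                  # consistent measurements

lam = 1.0
H = A.T @ A + lam * np.eye(n)                   # quadratic majorizer of the data-fit

def refine(z):
    # Stand-in for the trained image-refining NN: mild shrinkage toward 0.
    return 0.9 * z

x_prev = np.zeros(n)
x = np.zeros(n)
for k in range(50):
    beta = k / (k + 3)                          # momentum coefficient
    z = x + beta * (x - x_prev)                 # extrapolation module
    r = refine(z)                               # image-refining module
    # Noniterative MBIR module: closed-form minimizer of the majorized problem.
    x_prev, x = x, np.linalg.solve(H, A.T @ y + lam * r)

print(np.linalg.norm(A @ x - y))                # data-fit residual at convergence
```

The closed-form solve is what makes each MBIR module "noniterative"; the momentum extrapolation is what accelerates the overall fixed-point iteration.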
Collapse
|
62
|
Ouchi S, Ito S. Efficient complex-valued image reconstruction for compressed sensing MRI using single real-valued convolutional neural network. Magn Reson Imaging 2023; 101:13-24. [PMID: 36965835 DOI: 10.1016/j.mri.2023.03.011] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/04/2022] [Revised: 03/19/2023] [Accepted: 03/21/2023] [Indexed: 03/27/2023]
Affiliation(s)
- Shohei Ouchi
- Department of Information and Control Systems Science, Graduate School of Engineering, Utsunomiya University, 7-1-2 Yoto, Utsunomiya, Tochigi 321-8585, Japan; Japan Society for the Promotion of Science, Japan.
| | - Satoshi Ito
- Department of Information and Control Systems Science, Graduate School of Engineering, Utsunomiya University, 7-1-2 Yoto, Utsunomiya, Tochigi 321-8585, Japan.
| |
Collapse
|
63
|
Luo G, Blumenthal M, Heide M, Uecker M. Bayesian MRI reconstruction with joint uncertainty estimation using diffusion models. Magn Reson Med 2023; 90:295-311. [PMID: 36912453 DOI: 10.1002/mrm.29624] [Citation(s) in RCA: 23] [Impact Index Per Article: 11.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/28/2022] [Revised: 02/05/2023] [Accepted: 02/08/2023] [Indexed: 03/14/2023]
Abstract
PURPOSE We introduce a framework that enables efficient sampling from learned probability distributions for MRI reconstruction. METHODS Samples are drawn from the posterior distribution given the measured k-space using the Markov chain Monte Carlo (MCMC) method, different from conventional deep learning-based MRI reconstruction techniques. In addition to the maximum a posteriori estimate for the image, which can be obtained by maximizing the log-likelihood indirectly or directly, the minimum mean square error estimate and uncertainty maps can also be computed from those drawn samples. The data-driven Markov chains are constructed with the score-based generative model learned from a given image database and are independent of the forward operator that is used to model the k-space measurement. RESULTS We numerically investigate the framework from these perspectives: (1) the interpretation of the uncertainty of the image reconstructed from undersampled k-space; (2) the effect of the number of noise scales used to train the generative models; (3) using a burn-in phase in MCMC sampling to reduce computation; (4) the comparison to conventional ℓ1-wavelet regularized reconstruction; (5) the transferability of learned information; and (6) the comparison to the fastMRI challenge. CONCLUSION A framework is described that connects the diffusion process and advanced generative models with Markov chains. We demonstrate its flexibility in terms of contrasts and sampling patterns using advanced generative priors and the benefits of also quantifying the uncertainty for every pixel.
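The posterior-sampling idea, combining a learned score with the data likelihood in a Markov chain, can be illustrated on a conjugate 1D toy problem where the exact posterior is known. The Gaussian prior below stands in for the learned score model, and the unadjusted Langevin sampler (with a burn-in phase, as in point (3)) is a simplification of the annealed samplers used in practice:

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy 1D problem: prior x ~ N(0, 1), measurement y = x + noise.
sigma_noise = 0.5
x_true = 1.5
y = x_true + sigma_noise * rng.standard_normal()

def score_prior(x): return -x                         # d/dx log p(x) for N(0,1)
def grad_data(x):   return (y - x) / sigma_noise**2   # d/dx log p(y|x)

# Unadjusted Langevin sampling from the posterior p(x|y).
eps = 0.01
samples = []
x = 0.0
for t in range(20000):
    x = x + 0.5*eps*(score_prior(x) + grad_data(x)) + np.sqrt(eps)*rng.standard_normal()
    if t >= 5000:                                     # discard burn-in
        samples.append(x)
samples = np.asarray(samples)

# For this conjugate toy model the posterior is N(y/(1+sigma^2), sigma^2/(1+sigma^2)).
post_mean = y / (1 + sigma_noise**2)
print(samples.mean(), post_mean)                      # sample mean vs. exact mean
```

The spread of the samples is exactly the per-pixel uncertainty map the abstract refers to, collapsed here to a single scalar.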
Collapse
Affiliation(s)
- Guanxiong Luo
- Institute for Diagnostic and Interventional Radiology, University Medical Center Göttingen, Göttingen, Germany
| | - Moritz Blumenthal
- Institute for Diagnostic and Interventional Radiology, University Medical Center Göttingen, Göttingen, Germany.,Institute of Biomedical Imaging, Graz University of Technology, Graz, Austria
| | - Martin Heide
- Institute for Diagnostic and Interventional Radiology, University Medical Center Göttingen, Göttingen, Germany
| | - Martin Uecker
- Institute for Diagnostic and Interventional Radiology, University Medical Center Göttingen, Göttingen, Germany.,Institute of Biomedical Imaging, Graz University of Technology, Graz, Austria.,German Centre for Cardiovascular Research (DZHK) Partner Site Göttingen, Göttingen, Germany.,Cluster of Excellence "Multiscale Bioimaging: from Molecular Machines to Networks of Excitable Cells" (MBExC), University of Göttingen, Göttingen, Germany
| |
Collapse
|
64
|
Satrya GB, Ramatryana INA, Shin SY. Compressive Sensing of Medical Images Based on HSV Color Space. SENSORS (BASEL, SWITZERLAND) 2023; 23:2616. [PMID: 36904821 PMCID: PMC10006955 DOI: 10.3390/s23052616] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Figures] [Subscribe] [Scholar Register] [Received: 12/06/2022] [Revised: 02/06/2023] [Accepted: 02/21/2023] [Indexed: 06/18/2023]
Abstract
Recently, compressive sensing (CS) schemes have been studied as a new compression modality that exploits the sensing matrix in the measurement scheme and the reconstruction scheme to recover the compressed signal. In addition, CS is exploited in medical imaging (MI) to support efficient sampling, compression, transmission, and storage of large amounts of MI. Although CS of MI has been extensively investigated, the effect of color space in CS of MI has not yet been studied in the literature. To address this gap, this article proposes a novel CS of MI based on the hue-saturation-value (HSV) color space, using spread spectrum Fourier sampling (SSFS) and sparsity averaging with reweighted analysis (SARA). An HSV loop that performs SSFS is proposed to obtain a compressed signal. Next, HSV-SARA is proposed to reconstruct the MI from the compressed signal. A set of color MIs is investigated, including colonoscopy, magnetic resonance imaging of the brain and eye, and wireless capsule endoscopy images. Experiments were performed to show the superiority of HSV-SARA over benchmark methods in terms of signal-to-noise ratio (SNR), structural similarity (SSIM) index, and measurement rate (MR). The experiments showed that a color MI with a resolution of 256×256 pixels could be compressed by the proposed CS at an MR of 0.1, with improvements of 15.17% in SNR and 2.53% in SSIM. The proposed HSV-SARA can be a solution for color medical image compression and sampling that improves the image acquisition of medical devices.
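A rough sketch of channel-wise compressive sampling of an HSV image. The ±1 modulation plus random Fourier mask is a loose stand-in for SSFS, and the zero-filled inverse is a stand-in for the SARA solver; none of this reproduces the paper's actual measurement operator:

```python
import colorsys
import numpy as np

rng = np.random.default_rng(3)

# Toy RGB image and its HSV representation (colorsys works per pixel in [0,1]).
rgb = rng.random((16, 16, 3))
hsv = np.array([[colorsys.rgb_to_hsv(*px) for px in row] for row in rgb])

def measure(channel, mask):
    """Spread-spectrum-style sampling sketch: random ±1 modulation,
    then keep a random subset of Fourier coefficients (the mask)."""
    chirp = rng.choice([-1.0, 1.0], size=channel.shape)
    k = np.fft.fft2(chirp * channel, norm="ortho")
    return k * mask, chirp

mr = 0.3                                           # measurement rate
mask = rng.random((16, 16)) < mr
meas = [measure(hsv[..., c], mask) for c in range(3)]

# Crude zero-filled recovery (demodulate by dividing out the ±1 chirp).
recon = np.stack(
    [np.fft.ifft2(k, norm="ortho").real / chirp for k, chirp in meas], axis=-1)

snr = 10*np.log10(np.sum(hsv**2) / np.sum((hsv - recon)**2))
print(round(snr, 2), "dB at measurement rate", mr)
```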
Collapse
Affiliation(s)
| | - I Nyoman Apraz Ramatryana
- Department of IT Convergence Engineering, Kumoh National Institute of Technology, Gumi 39177, Republic of Korea
| | - Soo Young Shin
- Department of IT Convergence Engineering, Kumoh National Institute of Technology, Gumi 39177, Republic of Korea
| |
Collapse
|
65
|
Zhang J, Yi Z, Zhao Y, Xiao L, Hu J, Man C, Lau V, Su S, Chen F, Leong ATL, Wu EX. Calibrationless reconstruction of uniformly-undersampled multi-channel MR data with deep learning estimated ESPIRiT maps. Magn Reson Med 2023; 90:280-294. [PMID: 37119514 DOI: 10.1002/mrm.29625] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/21/2022] [Revised: 02/06/2023] [Accepted: 02/08/2023] [Indexed: 03/03/2023]
Abstract
PURPOSE To develop a truly calibrationless reconstruction method that derives An Eigenvalue Approach to Autocalibrating Parallel MRI (ESPIRiT) maps from uniformly-undersampled multi-channel MR data by deep learning. METHODS ESPIRiT, a commonly used parallel imaging reconstruction technique, forms images from undersampled MR k-space data using ESPIRiT maps that effectively represent coil sensitivity information. Accurate ESPIRiT map estimation requires quality coil sensitivity calibration or autocalibration data. We present a U-Net based deep learning model to estimate the multi-channel ESPIRiT maps directly from uniformly-undersampled multi-channel multi-slice MR data. The model is trained using fully-sampled multi-slice axial brain datasets from the same MR receiving coil system. To utilize subject-coil geometric parameters available for each dataset, the training imposes a hybrid loss on ESPIRiT maps at the original locations as well as their corresponding locations within the standard reference multi-slice axial stack. The performance of the approach was evaluated using publicly available T1-weighted brain and cardiac data. RESULTS The proposed model robustly predicted multi-channel ESPIRiT maps from uniformly-undersampled k-space data. They were highly comparable to the reference ESPIRiT maps directly computed from 24 consecutive central k-space lines. Further, they led to excellent ESPIRiT reconstruction performance even at high acceleration, exhibiting a level of errors and artifacts similar to that obtained using reference ESPIRiT maps. CONCLUSION A new deep learning approach is developed to estimate ESPIRiT maps directly from uniformly-undersampled MR data. It presents a general strategy for calibrationless parallel imaging reconstruction through learning from coil- and protocol-specific data.
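Once sensitivity-style maps are available (however they are estimated), reconstruction reduces to a coil-combined least-squares problem. A toy 1D SENSE-style sketch with synthetic Gaussian coil profiles standing in for ESPIRiT maps and uniform 2x undersampling:

```python
import numpy as np

rng = np.random.default_rng(4)
n, ncoils = 32, 4

x = rng.random(n)                                   # toy 1D "image"
# Toy smooth coil sensitivity profiles (stand-ins for ESPIRiT maps).
pos = np.arange(n)
S = np.stack([np.exp(-((pos - c*n/ncoils)**2) / (2*(n/2)**2)) for c in range(ncoils)])
S = S / np.sqrt((np.abs(S)**2).sum(axis=0, keepdims=True))   # unit-norm per pixel

mask = np.zeros(n, dtype=bool); mask[::2] = True    # uniform 2x undersampling
y = [mask * np.fft.fft(S[c]*x, norm="ortho") for c in range(ncoils)]

# SENSE-style reconstruction with the maps: least squares over all coils.
def A(v):
    return np.concatenate([mask*np.fft.fft(S[c]*v, norm="ortho") for c in range(ncoils)])
def AH(w):
    w = w.reshape(ncoils, n)
    return sum(np.conj(S[c])*np.fft.ifft(mask*w[c], norm="ortho") for c in range(ncoils))

v = np.zeros(n, dtype=complex)
b = AH(np.concatenate(y))
for _ in range(200):                                # plain gradient descent on ||Av-y||^2
    v = v - 1.0*(AH(A(v)) - b)
print(np.linalg.norm(v.real - x) / np.linalg.norm(x))   # relative error
```

Distinct coil profiles are what make the 2x aliasing pairs separable; with identical coils the normal equations would be singular.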
Collapse
Affiliation(s)
- Junhao Zhang
- Laboratory of Biomedical Imaging and Signal Processing The University of Hong Kong Hong Kong China
- Department of Electrical and Electronic Engineering The University of Hong Kong Hong Kong China
| | - Zheyuan Yi
- Laboratory of Biomedical Imaging and Signal Processing The University of Hong Kong Hong Kong China
- Department of Electrical and Electronic Engineering The University of Hong Kong Hong Kong China
- Department of Electrical and Electronic Engineering Southern University of Science and Technology Shenzhen China
| | - Yujiao Zhao
- Laboratory of Biomedical Imaging and Signal Processing The University of Hong Kong Hong Kong China
- Department of Electrical and Electronic Engineering The University of Hong Kong Hong Kong China
| | - Linfang Xiao
- Laboratory of Biomedical Imaging and Signal Processing The University of Hong Kong Hong Kong China
- Department of Electrical and Electronic Engineering The University of Hong Kong Hong Kong China
| | - Jiahao Hu
- Laboratory of Biomedical Imaging and Signal Processing The University of Hong Kong Hong Kong China
- Department of Electrical and Electronic Engineering The University of Hong Kong Hong Kong China
- Department of Electrical and Electronic Engineering Southern University of Science and Technology Shenzhen China
| | - Christopher Man
- Laboratory of Biomedical Imaging and Signal Processing The University of Hong Kong Hong Kong China
- Department of Electrical and Electronic Engineering The University of Hong Kong Hong Kong China
| | - Vick Lau
- Laboratory of Biomedical Imaging and Signal Processing The University of Hong Kong Hong Kong China
- Department of Electrical and Electronic Engineering The University of Hong Kong Hong Kong China
| | - Shi Su
- Laboratory of Biomedical Imaging and Signal Processing The University of Hong Kong Hong Kong China
- Department of Electrical and Electronic Engineering The University of Hong Kong Hong Kong China
| | - Fei Chen
- Department of Electrical and Electronic Engineering Southern University of Science and Technology Shenzhen China
| | - Alex T. L. Leong
- Laboratory of Biomedical Imaging and Signal Processing The University of Hong Kong Hong Kong China
- Department of Electrical and Electronic Engineering The University of Hong Kong Hong Kong China
| | - Ed X. Wu
- Laboratory of Biomedical Imaging and Signal Processing The University of Hong Kong Hong Kong China
- Department of Electrical and Electronic Engineering The University of Hong Kong Hong Kong China
| |
Collapse
|
66
|
Islam MT, Xing L. Cartography of Genomic Interactions Enables Deep Analysis of Single-Cell Expression Data. Nat Commun 2023; 14:679. [PMID: 36755047 PMCID: PMC9908983 DOI: 10.1038/s41467-023-36383-6] [Citation(s) in RCA: 7] [Impact Index Per Article: 3.5] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/19/2022] [Accepted: 01/30/2023] [Indexed: 02/10/2023] Open
Abstract
Remarkable advances in single cell genomics have presented unique challenges and opportunities for interrogating a wealth of biomedical inquiries. High dimensional genomic data are inherently complex because of intertwined relationships among the genes. Existing methods, including emerging deep learning-based approaches, do not consider the underlying biological characteristics during data processing, which greatly compromises the performance of data analysis and hinders the maximal utilization of state-of-the-art genomic techniques. In this work, we develop an entropy-based cartography strategy to contrive the high dimensional gene expression data into a configured image format, referred to as genomap, with explicit integration of the genomic interactions. This unique cartography casts the gene-gene interactions into the spatial configuration of genomaps and enables us to extract the deep genomic interaction features and discover underlying discriminative patterns of the data. We show that, for a wide variety of applications (cell clustering and recognition, gene signature extraction, single cell data integration, cellular trajectory analysis, dimensionality reduction, and visualization), the proposed approach drastically improves the accuracies of data analyses as compared to the state-of-the-art techniques.
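The core idea, arranging genes on a 2D grid so that interacting genes become spatial neighbors and each cell's expression vector becomes an image, can be sketched with a greedy correlation-based ordering. The paper's actual construction is an entropy/transport-based optimization; the toy below only illustrates the cartography concept:

```python
import numpy as np

rng = np.random.default_rng(5)
n_cells, n_genes = 100, 16                    # 16 genes -> 4x4 genomap

# Toy expression matrix with two correlated gene modules.
base = rng.standard_normal((n_cells, 2))
expr = base[:, rng.integers(0, 2, n_genes)] + 0.3*rng.standard_normal((n_cells, n_genes))

corr = np.corrcoef(expr.T)                    # gene-gene interaction proxy

# Greedy ordering: repeatedly append the gene most correlated with the last
# placed one, so interacting genes land near each other on the grid.
order = [0]
remaining = set(range(1, n_genes))
while remaining:
    last = order[-1]
    nxt = max(remaining, key=lambda g: corr[last, g])
    order.append(nxt); remaining.discard(nxt)

side = int(np.sqrt(n_genes))
genomaps = expr[:, order].reshape(n_cells, side, side)   # one image per cell
print(genomaps.shape)
```

Each 4x4 image can then be fed to an ordinary convolutional network, which is the point of casting interactions into spatial configuration.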
Collapse
Affiliation(s)
- Md Tauhidul Islam
- Department of Radiation Oncology, Stanford University, Stanford, California, 94305, USA
| | - Lei Xing
- Department of Radiation Oncology, Stanford University, Stanford, California, 94305, USA.
| |
Collapse
|
67
|
Chen C, Raymond C, Speier W, Jin X, Cloughesy TF, Enzmann D, Ellingson BM, Arnold CW. Synthesizing MR Image Contrast Enhancement Using 3D High-Resolution ConvNets. IEEE Trans Biomed Eng 2023; 70:401-412. [PMID: 35853075 PMCID: PMC9928432 DOI: 10.1109/tbme.2022.3192309] [Citation(s) in RCA: 16] [Impact Index Per Article: 8.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/09/2022]
Abstract
OBJECTIVE Gadolinium-based contrast agents (GBCAs) have been widely used to better visualize disease in brain magnetic resonance imaging (MRI). However, gadolinium deposition within the brain and body has raised safety concerns about the use of GBCAs. Therefore, the development of novel approaches that can decrease or even eliminate GBCA exposure while providing similar contrast information would be of significant use clinically. METHODS In this work, we present a deep learning based approach for contrast-enhanced T1 synthesis on brain tumor patients. A 3D high-resolution fully convolutional network (FCN), which maintains high resolution information through processing and aggregates multi-scale information in parallel, is designed to map pre-contrast MRI sequences to contrast-enhanced MRI sequences. Specifically, three pre-contrast MRI sequences, T1, T2 and apparent diffusion coefficient map (ADC), are utilized as inputs and the post-contrast T1 sequences are utilized as target output. To alleviate the data imbalance problem between normal tissues and the tumor regions, we introduce a local loss to improve the contribution of the tumor regions, which leads to better enhancement results on tumors. RESULTS Extensive quantitative and visual assessments are performed, with our proposed model achieving a PSNR of 28.24 dB in the brain and 21.2 dB in tumor regions. CONCLUSION AND SIGNIFICANCE Our results suggest the potential of substituting GBCAs with synthetic contrast images generated via deep learning.
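The local loss described above can be sketched as a globally computed MSE plus an extra-weighted MSE restricted to the tumor mask; the weight value below is an illustrative assumption:

```python
import numpy as np

rng = np.random.default_rng(6)

pred   = rng.random((32, 32))                 # synthesized post-contrast T1
target = rng.random((32, 32))                 # acquired post-contrast T1
tumor  = np.zeros((32, 32), dtype=bool); tumor[10:18, 12:20] = True

def local_loss(pred, target, mask, weight=5.0):
    """Global MSE plus an extra-weighted MSE over the (small) tumor region,
    counteracting the normal-tissue/tumor data imbalance."""
    global_mse = np.mean((pred - target)**2)
    region_mse = np.mean((pred[mask] - target[mask])**2)
    return global_mse + weight * region_mse

full = local_loss(pred, target, tumor)
print(full)
```

Because the tumor occupies few pixels, the plain global MSE barely reflects errors there; the weighted region term restores its influence on the gradient.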
Collapse
|
68
|
Zhao X, Yang T, Li B, Zhang X. SwinGAN: A dual-domain Swin Transformer-based generative adversarial network for MRI reconstruction. Comput Biol Med 2023; 153:106513. [PMID: 36603439 DOI: 10.1016/j.compbiomed.2022.106513] [Citation(s) in RCA: 26] [Impact Index Per Article: 13.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/01/2022] [Revised: 12/09/2022] [Accepted: 12/31/2022] [Indexed: 01/02/2023]
Abstract
Magnetic resonance imaging (MRI) is one of the most important modalities for clinical diagnosis. However, its main disadvantages are the long scanning time and the motion artifacts caused by patient movement during prolonged imaging; long scans can also cause patient anxiety and discomfort, so accelerated imaging is indispensable for MRI. Convolutional neural network (CNN)-based methods have become the de facto standard for medical image reconstruction, and generative adversarial networks (GANs) have also been widely used. Nevertheless, the limited ability of CNNs to capture long-distance information may lead to defects in the structure of the reconstructed images, such as blurry contours. In this paper, we propose a novel Swin Transformer-based dual-domain generative adversarial network (SwinGAN) for accelerated MRI reconstruction. The SwinGAN consists of two generators: a frequency-domain generator and an image-domain generator. Both generators utilize the Swin Transformer as a backbone to effectively capture long-distance dependencies. A contextual image relative position encoder (ciRPE) is designed to enhance the ability to capture local information. We extensively evaluate the method on the IXI brain dataset, the MICCAI 2013 dataset, and the MRNet knee dataset. Compared with KIGAN, the peak signal-to-noise ratio (PSNR) and structural similarity index measure (SSIM) on the IXI dataset are improved by 6.1% and 1.49%, to 37.64 dB and 0.98 respectively, which demonstrates that our model can fully utilize the local and global information of the image. The model shows promising performance and robustness under different undersampling masks, different acceleration rates, and different datasets, although its hardware requirements grow as the number of network parameters increases. The code is available at: https://github.com/learnerzx/SwinGAN.
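A dual-domain design pairs a frequency-domain branch with an image-domain branch, tied together by data consistency with the measured k-space. A minimal 1D sketch with trivial stand-ins for the two trained generators (the real ones are Swin Transformer networks):

```python
import numpy as np

rng = np.random.default_rng(7)
n = 64
x_true = np.sin(2*np.pi*np.arange(n)/n) + 0.5
mask = rng.random(n) < 0.4
y = mask * np.fft.fft(x_true, norm="ortho")          # measured k-space

def data_consistency(x_gen, y, mask):
    """Overwrite generated k-space with the measured values where sampled."""
    k = np.fft.fft(x_gen, norm="ortho")
    k = np.where(mask, y, k)
    return np.fft.ifft(k, norm="ortho")

# Stand-ins for the two generators (identity-style refinements):
def kspace_generator(k): return k                        # frequency-domain branch
def image_generator(x):  return 0.5*(x + np.roll(x, 1))  # image-domain branch

x = np.fft.ifft(kspace_generator(y), norm="ortho")       # frequency branch
x = data_consistency(image_generator(x), y, mask)        # image branch + DC

# Sampled frequencies of the output agree exactly with the measurements.
err = np.abs(mask*np.fft.fft(x, norm="ortho") - y).max()
print(err)
```

Whatever the generators hallucinate, the hard data-consistency step pins the sampled k-space coefficients to the acquisition, which is what keeps dual-domain GAN outputs faithful to the measurement.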
Collapse
Affiliation(s)
- Xiang Zhao
- School of Information Science and Engineering, Henan University of Technology, Zhengzhou, 450001, China
| | - Tiejun Yang
- School of Artificial Intelligence and Big Data, Henan University of Technology, Zhengzhou, 450001, China; Key Laboratory of Grain Information Processing and Control (HAUT), Ministry of Education, Zhengzhou, China; Henan Key Laboratory of Grain Photoelectric Detection and Control (HAUT), Zhengzhou, Henan, China.
| | - Bingjie Li
- School of Information Science and Engineering, Henan University of Technology, Zhengzhou, 450001, China
| | - Xin Zhang
- School of Information Science and Engineering, Henan University of Technology, Zhengzhou, 450001, China
| |
Collapse
|
69
|
Li H, Yang M, Kim JH, Zhang C, Liu R, Huang P, Liang D, Zhang X, Li X, Ying L. SuperMAP: Deep ultrafast MR relaxometry with joint spatiotemporal undersampling. Magn Reson Med 2023; 89:64-76. [PMID: 36128884 PMCID: PMC9617769 DOI: 10.1002/mrm.29411] [Citation(s) in RCA: 8] [Impact Index Per Article: 4.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/11/2022] [Revised: 07/19/2022] [Accepted: 07/25/2022] [Indexed: 11/09/2022]
Abstract
PURPOSE To develop an ultrafast and robust MR parameter mapping network using deep learning. THEORY AND METHODS We design a deep learning framework called SuperMAP that directly converts a series of undersampled (both in k-space and parameter-space) parameter-weighted images into several quantitative maps, bypassing the conventional exponential fitting procedure. We also present a novel technique to simultaneously reconstruct T1rho and T2 relaxation maps within a single scan. Full data were acquired and retrospectively undersampled for training and testing using traditional and state-of-the-art techniques for comparison. Prospective data were also collected to evaluate the trained network. The performance of all methods is evaluated using parameter quantification errors and other metrics in the segmented regions of interest. RESULTS SuperMAP achieved accurate T1rho and T2 mapping with high acceleration factors (R = 24 and R = 32). It exploited both spatial and temporal information and yielded low error (normalized mean square error of 2.7% at R = 24 and 2.8% at R = 32) and high resemblance (structural similarity of 97% at R = 24 and 96% at R = 32) to the gold standard. The network trained with retrospectively undersampled data also works well for the prospective data (with a slightly lower acceleration factor). SuperMAP is also superior to conventional methods. CONCLUSION Our results demonstrate the feasibility of generating superfast MR parameter maps from very few undersampled parameter-weighted images. SuperMAP can simultaneously generate T1rho and T2 relaxation maps in a short scan time.
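For context, the "conventional exponential fitting procedure" that SuperMAP bypasses is a per-voxel fit of the relaxation decay; for T2 it can be done in closed form by a log-linear least-squares fit. The echo times and tissue values below are illustrative, noiseless toy numbers:

```python
import numpy as np

# Conventional per-voxel T2 fitting: S(TE) = S0 * exp(-TE / T2).
TEs = np.array([10., 20., 40., 60., 80.])        # echo times in ms
T2_true, S0_true = 45.0, 1000.0
S = S0_true * np.exp(-TEs / T2_true)             # simulated multi-echo signal

# log S = log S0 - TE/T2  ->  a straight-line fit in (TE, log S).
slope, intercept = np.polyfit(TEs, np.log(S), 1)
T2_fit, S0_fit = -1.0/slope, np.exp(intercept)
print(T2_fit, S0_fit)
```

The fit needs a well-sampled echo train per voxel; SuperMAP's point is to map a few undersampled parameter-weighted images directly to the T2 (and T1rho) values without this step.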
Collapse
Affiliation(s)
- Hongyu Li
- Electrical Engineering, University at Buffalo, State University of New York, Buffalo, NY, USA
| | - Mingrui Yang
- Program of Advanced Musculoskeletal Imaging (PAMI), Cleveland Clinic, Cleveland, Ohio, USA
| | - Jee Hun Kim
- Program of Advanced Musculoskeletal Imaging (PAMI), Cleveland Clinic, Cleveland, Ohio, USA
| | - Chaoyi Zhang
- Electrical Engineering, University at Buffalo, State University of New York, Buffalo, NY, USA
| | - Ruiying Liu
- Electrical Engineering, University at Buffalo, State University of New York, Buffalo, NY, USA
| | - Peizhou Huang
- Electrical Engineering, University at Buffalo, State University of New York, Buffalo, NY, USA
| | - Dong Liang
- Paul C. Lauterbur Research Center for Biomedical Imaging, Medical AI research center, SIAT, CAS, Shenzhen, China
| | - Xiaoliang Zhang
- Biomedical Engineering, University at Buffalo, State University at New York, Buffalo, NY, USA
| | - Xiaojuan Li
- Program of Advanced Musculoskeletal Imaging (PAMI), Cleveland Clinic, Cleveland, Ohio, USA
| | - Leslie Ying
- Electrical Engineering, University at Buffalo, State University of New York, Buffalo, NY, USA
- Biomedical Engineering, University at Buffalo, State University at New York, Buffalo, NY, USA
| |
Collapse
|
70
|
Hammernik K, Küstner T, Yaman B, Huang Z, Rueckert D, Knoll F, Akçakaya M. Physics-Driven Deep Learning for Computational Magnetic Resonance Imaging: Combining physics and machine learning for improved medical imaging. IEEE SIGNAL PROCESSING MAGAZINE 2023; 40:98-114. [PMID: 37304755 PMCID: PMC10249732 DOI: 10.1109/msp.2022.3215288] [Citation(s) in RCA: 2] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/13/2023]
Abstract
Physics-driven deep learning methods have emerged as a powerful tool for computational magnetic resonance imaging (MRI) problems, pushing reconstruction performance to new limits. This article provides an overview of the recent developments in incorporating physics information into learning-based MRI reconstruction. We consider inverse problems with both linear and non-linear forward models for computational MRI, and review the classical approaches for solving these. We then focus on physics-driven deep learning approaches, covering physics-driven loss functions, plug-and-play methods, generative models, and unrolled networks. We highlight domain-specific challenges such as real- and complex-valued building blocks of neural networks, and translational applications in MRI with linear and non-linear forward models. Finally, we discuss common issues and open challenges, and draw connections to the importance of physics-driven learning when combined with other downstream tasks in the medical imaging pipeline.
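One of the ingredients surveyed here, the physics-driven loss function, penalizes a reconstruction in measurement space through the known forward model rather than against a fully sampled reference. A minimal sketch with an undersampled-Fourier forward model (the mask and signal are arbitrary toy choices):

```python
import numpy as np

rng = np.random.default_rng(8)
n = 32
mask = rng.random(n) < 0.5

def forward(x):
    # Physics of the acquisition: undersampled Fourier encoding.
    return mask * np.fft.fft(x, norm="ortho")

def physics_driven_loss(recon, y):
    """Loss in measurement space: penalize disagreement with the acquisition
    physics instead of with a fully sampled ground-truth image."""
    return np.sum(np.abs(forward(recon) - y)**2)

x_true = rng.standard_normal(n)
y = forward(x_true)
print(physics_driven_loss(x_true, y))       # zero for a physics-consistent image
```

Because the loss only needs the measured data and the forward operator, it supports self-supervised training when fully sampled references are unavailable, one of the motivations discussed in the article.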
Collapse
Affiliation(s)
- Kerstin Hammernik
- Institute of AI and Informatics in Medicine, Technical University of Munich and the Department of Computing, Imperial College London
| | - Thomas Küstner
- Department of Diagnostic and Interventional Radiology, University Hospital of Tuebingen
| | - Burhaneddin Yaman
- Department of Electrical and Computer Engineering, and Center for Magnetic Resonance Research, University of Minnesota, USA
| | - Zhengnan Huang
- Center for Biomedical Imaging, Department of Radiology, New York University
| | - Daniel Rueckert
- Institute of AI and Informatics in Medicine, Technical University of Munich and the Department of Computing, Imperial College London
| | - Florian Knoll
- Department Artificial Intelligence in Biomedical Engineering, Friedrich-Alexander University Erlangen
| | - Mehmet Akçakaya
- Department of Electrical and Computer Engineering, and Center for Magnetic Resonance Research, University of Minnesota, USA
| |
Collapse
|
71
|
Gao C, Ghodrati V, Shih SF, Wu HH, Liu Y, Nickel MD, Vahle T, Dale B, Sai V, Felker E, Surawech C, Miao Q, Finn JP, Zhong X, Hu P. Undersampling artifact reduction for free-breathing 3D stack-of-radial MRI based on a deep adversarial learning network. Magn Reson Imaging 2023; 95:70-79. [PMID: 36270417 PMCID: PMC10163826 DOI: 10.1016/j.mri.2022.10.010] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/26/2022] [Revised: 10/06/2022] [Accepted: 10/14/2022] [Indexed: 11/05/2022]
Abstract
PURPOSE Stack-of-radial MRI allows free-breathing abdominal scans; however, it requires relatively long acquisition times. Undersampling reduces scan time but can cause streaking artifacts and degrade image quality. This study developed deep learning networks with adversarial loss and evaluated their performance in reducing streaking artifacts while preserving perceptual image sharpness. METHODS A 3D generative adversarial network (GAN) was developed for reducing streaking artifacts in stack-of-radial abdominal scans. Training and validation datasets were self-gated to 5 respiratory states to reduce motion artifacts and to effectively augment the data. The network used a combination of three loss functions to constrain the anatomy and preserve image quality: adversarial loss, mean-squared-error loss and structural similarity index loss. The performance of the network was investigated for 3-5 times undersampled data from 2 institutions. For 5 times accelerated images, the performance of the GAN was compared with a 3D U-Net and evaluated using quantitative NMSE, SSIM and region of interest (ROI) measurements as well as qualitative scores from radiologists. RESULTS The 3D GAN showed similar NMSE (0.0657 vs. 0.0559, p = 0.5217) and significantly higher SSIM (0.841 vs. 0.798, p < 0.0001) compared to the U-Net. ROI analysis showed the GAN removed streaks in both the background air and the tissue, and the results were not significantly different from the reference mean and variations. Radiologists' scores showed the GAN achieved a significant improvement of 1.6 points (p = 0.004) on a 4-point streaking scale, with no significant difference in sharpness score compared to the input. CONCLUSION The 3D GAN removes streaking artifacts and preserves perceptual image details.
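The three-term generator objective described in METHODS (adversarial + MSE + SSIM) can be sketched as a weighted sum. The single-window global SSIM and all weights below are simplifications and assumptions, not the paper's implementation:

```python
import numpy as np

def ssim_global(a, b, c1=1e-4, c2=9e-4):
    """Single-window (global) SSIM, a simplification of the usual windowed form."""
    ma, mb = a.mean(), b.mean()
    va, vb = a.var(), b.var()
    cov = ((a - ma)*(b - mb)).mean()
    return ((2*ma*mb + c1)*(2*cov + c2)) / ((ma**2 + mb**2 + c1)*(va + vb + c2))

def generator_loss(fake, real, d_fake, w_adv=0.01, w_mse=1.0, w_ssim=0.1):
    """Weighted combination of adversarial, MSE and SSIM terms (the weights
    are illustrative; the paper's exact weighting is not reproduced here)."""
    adv  = -np.log(d_fake + 1e-12)          # non-saturating adversarial term
    mse  = np.mean((fake - real)**2)
    ssim = 1.0 - ssim_global(fake, real)
    return w_adv*adv + w_mse*mse + w_ssim*ssim

rng = np.random.default_rng(9)
real = rng.random((8, 8))
# With fake == real, only the adversarial term contributes.
print(generator_loss(real.copy(), real, d_fake=0.5))
```

The MSE and SSIM terms anchor the output to the reference anatomy, while the small adversarial weight pushes toward realistic texture, which is how such networks remove streaks without oversmoothing.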
Collapse
Affiliation(s)
- Chang Gao
- Department of Radiological Sciences, University of California Los Angeles, Los Angeles, CA, United States; Inter-Departmental Graduate Program of Physics and Biology in Medicine, University of California Los Angeles, Los Angeles, CA, United States
| | - Vahid Ghodrati
- Department of Radiological Sciences, University of California Los Angeles, Los Angeles, CA, United States; Inter-Departmental Graduate Program of Physics and Biology in Medicine, University of California Los Angeles, Los Angeles, CA, United States
| | - Shu-Fu Shih
- Department of Radiological Sciences, University of California Los Angeles, Los Angeles, CA, United States; Department of Bioengineering, University of California Los Angeles, Los Angeles, CA, United States
| | - Holden H Wu
- Department of Radiological Sciences, University of California Los Angeles, Los Angeles, CA, United States; Inter-Departmental Graduate Program of Physics and Biology in Medicine, University of California Los Angeles, Los Angeles, CA, United States; Department of Bioengineering, University of California Los Angeles, Los Angeles, CA, United States
| | - Yongkai Liu
- Department of Radiological Sciences, University of California Los Angeles, Los Angeles, CA, United States; Inter-Departmental Graduate Program of Physics and Biology in Medicine, University of California Los Angeles, Los Angeles, CA, United States
| | | | - Thomas Vahle
- MR Application Predevelopment, Siemens Healthcare GmbH, Erlangen, Germany
| | - Brian Dale
- MR R&D Collaborations, Siemens Medical Solutions USA, Inc., Cary, NC, United States
| | - Victor Sai
- Department of Radiological Sciences, University of California Los Angeles, Los Angeles, CA, United States
| | - Ely Felker
- Department of Radiological Sciences, University of California Los Angeles, Los Angeles, CA, United States
| | - Chuthaporn Surawech
- Department of Radiological Sciences, University of California Los Angeles, Los Angeles, CA, United States; Department of Radiology, Division of Diagnostic Radiology, Faculty of Medicine, Chulalongkorn University and King Chulalongkorn Memorial Hospital, Bangkok, Thailand
| | - Qi Miao
- Department of Radiological Sciences, University of California Los Angeles, Los Angeles, CA, United States; Department of Radiology, The First Affiliated Hospital of China Medical University, Shenyang, Liaoning Province, China
| | - J Paul Finn
- Department of Radiological Sciences, University of California Los Angeles, Los Angeles, CA, United States; Inter-Departmental Graduate Program of Physics and Biology in Medicine, University of California Los Angeles, Los Angeles, CA, United States
| | - Xiaodong Zhong
- MR R&D Collaborations, Siemens Medical Solutions USA, Inc., Los Angeles, CA, United States
| | - Peng Hu
- Department of Radiological Sciences, University of California Los Angeles, Los Angeles, CA, United States; Inter-Departmental Graduate Program of Physics and Biology in Medicine, University of California Los Angeles, Los Angeles, CA, United States.
| |
Collapse
|
72
|
Abstract
This article provides a focused overview of emerging technology in musculoskeletal MRI and CT. These technological advances have primarily focused on decreasing examination times, obtaining higher quality images, providing more convenient and economical imaging alternatives, and improving patient safety through lower radiation doses. New MRI acceleration methods using deep learning and novel reconstruction algorithms can reduce scanning times while maintaining high image quality. New synthetic techniques are now available that provide multiple tissue contrasts from a limited amount of MRI and CT data. Modern low-field-strength MRI scanners can provide a more convenient and economical imaging alternative in clinical practice, while clinical 7.0-T scanners have the potential to maximize image quality. Three-dimensional MRI curved planar reformation and cinematic rendering can provide improved methods for image representation. Photon-counting detector CT can provide lower radiation doses, higher spatial resolution, greater tissue contrast, and reduced noise in comparison with currently used energy-integrating detector CT scanners. Technological advances have also been made in challenging areas of musculoskeletal imaging, including MR neurography, imaging around metal, and dual-energy CT. While the preliminary results of these emerging technologies have been encouraging, whether they result in higher diagnostic performance requires further investigation.
Collapse
Affiliation(s)
- Richard Kijowski
- From the Department of Radiology, New York University Grossman School of Medicine, 660 First Ave, 3rd Floor, New York, NY 10016
| | - Jan Fritz
- From the Department of Radiology, New York University Grossman School of Medicine, 660 First Ave, 3rd Floor, New York, NY 10016
| |
Collapse
|
73
|
[Deep parallel MRI reconstruction based on a complex-valued loss function]. NAN FANG YI KE DA XUE XUE BAO = JOURNAL OF SOUTHERN MEDICAL UNIVERSITY 2022; 42:1755-1764. [PMID: 36651242 PMCID: PMC9878414 DOI: 10.12122/j.issn.1673-4254.2022.12.02] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Subscribe] [Scholar Register] [Indexed: 01/19/2023]
Abstract
OBJECTIVE To propose a new method for fast MRI reconstruction based on deep learning in parallel MRI data, using a new loss function defined as the weighted sum of the mean squared errors of the magnitude and phase. METHODS The multicoil image data were combined into single-coil image data, to eliminate the correlation between noises, and used as labels in the training process. Because the phase information, which is important in some applications, is lost when combining multicoil data with the sum-of-squares method, a new loss function was introduced, defined as the weighted sum of the mean squared error (MSE) of the magnitude and phase. A single weight in the loss function balances the importance of magnitude and phase for different applications. To validate the proposed method, real brain and knee data from the fastMRI dataset were used for training and testing. We also compared the proposed method with two other methods that used MSE or mean absolute error (MAE) as the loss function. RESULTS The experimental results showed that the proposed method accurately reconstructed multicoil MR images with significantly reduced artifacts compared with the other two methods. Quantitative analysis showed that the proposed method increased the peak signal-to-noise ratio (PSNR) of the reconstructed images by about 1 dB. CONCLUSION The proposed deep MRI reconstruction method, using a new loss function to fit the noise in parallel MRI data, can accelerate MRI reconstruction and significantly improve the quality of the reconstructed images.
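The loss described above (a weighted sum of magnitude MSE and phase MSE over complex images) can be sketched in numpy; the weight value is illustrative, phase wrapping is ignored, and this is a sketch rather than the paper's training code:

```python
import numpy as np

def mag_phase_loss(pred, target, w=0.5):
    """Weighted sum of magnitude MSE and phase MSE for complex images.
    w (illustrative) trades magnitude fidelity against phase fidelity;
    phase wrapping is ignored in this sketch."""
    mag_mse = np.mean((np.abs(pred) - np.abs(target)) ** 2)
    phase_mse = np.mean((np.angle(pred) - np.angle(target)) ** 2)
    return mag_mse + w * phase_mse

target = np.array([1 + 1j, 2j, -1 + 0.5j])
pred = target * np.exp(1j * 0.1)      # pure 0.1-rad phase error
loss = mag_phase_loss(pred, target)   # magnitude term ~0, phase term 0.01
```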
Collapse
|
74
|
Jone PN, Gearhart A, Lei H, Xing F, Nahar J, Lopez-Jimenez F, Diller GP, Marelli A, Wilson L, Saidi A, Cho D, Chang AC. Artificial Intelligence in Congenital Heart Disease: Current State and Prospects. JACC. ADVANCES 2022; 1:100153. [PMID: 38939457 PMCID: PMC11198540 DOI: 10.1016/j.jacadv.2022.100153] [Citation(s) in RCA: 17] [Impact Index Per Article: 5.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Subscribe] [Scholar Register] [Received: 06/13/2022] [Revised: 10/04/2022] [Accepted: 10/07/2022] [Indexed: 06/29/2024]
Abstract
The current era of big data offers a wealth of new opportunities for clinicians to leverage artificial intelligence to optimize care for pediatric and adult patients with congenital heart disease. At present, artificial intelligence is significantly underutilized in the clinical setting for the diagnosis, prognosis, and management of patients with congenital heart disease. This document is a call to action: it describes the current state of artificial intelligence in congenital heart disease, reviews challenges, discusses opportunities, and focuses on the top priorities for artificial intelligence-based deployment in congenital heart disease.
Collapse
Affiliation(s)
- Pei-Ni Jone
- Section of Pediatric Cardiology, Department of Pediatrics, Lurie Children’s Hospital of Chicago, Northwestern University Feinberg School of Medicine, Chicago, Illinois, USA
| | - Addison Gearhart
- Department of Cardiology, Boston Children's Hospital and Harvard Medical School, Boston, Massachusetts, USA
| | - Howard Lei
- Division of Pediatric Cardiology, Children’s Hospital of Orange County, Orange, California, USA
| | - Fuyong Xing
- Department of Biostatistics and Informatics, University of Colorado Anschutz Medical Campus, Aurora, Colorado, USA
| | - Jai Nahar
- Department of Cardiology, Children's National Hospital, Washington, DC, USA
| | | | - Gerhard-Paul Diller
- Department of Cardiology III-Adult Congenital and Valvular Heart Disease, University Hospital Muenster, Muenster, Germany
- Adult Congenital Heart Centre and National Centre for Pulmonary Hypertension, Royal Brompton and Harefield National Health Service Foundation Trust, Imperial College London, London, UK
- National Register for Congenital Heart Defects, Berlin, Germany
| | - Ariane Marelli
- McGill Adult Unit for Congenital Heart Disease Excellence, Department of Medicine, McGill University, Montréal, Québec, Canada
| | - Laura Wilson
- Department of Pediatrics, University of Florida-Congenital Heart Center, Gainesville, Florida, USA
| | - Arwa Saidi
- Department of Pediatrics, University of Florida-Congenital Heart Center, Gainesville, Florida, USA
| | - David Cho
- Department of Cardiology, University of California at Los Angeles, Los Angeles, California, USA
| | - Anthony C. Chang
- Division of Pediatric Cardiology, Children’s Hospital of Orange County, Orange, California, USA
| |
Collapse
|
75
|
Nath R, Callahan S, Stoddard M, Amini AA. FlowRAU-Net: Accelerated 4D Flow MRI of Aortic Valvular Flows With a Deep 2D Residual Attention Network. IEEE Trans Biomed Eng 2022; 69:3812-3824. [PMID: 35675233 PMCID: PMC10577002 DOI: 10.1109/tbme.2022.3180691] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/10/2022]
Abstract
In this work, we propose a novel deep learning reconstruction framework for rapid and accurate reconstruction of 4D flow MRI data. Reconstruction is performed on a slice-by-slice basis by reducing artifacts in zero-filled reconstructed complex images obtained from undersampled k-space. A deep residual attention network, FlowRAU-Net, is proposed, trained separately for each encoding direction with 2D complex image slices extracted from the complex 4D images at each temporal frame and slice position. The network was trained and tested on 4D flow MRI data of aortic valvular flow in 18 human subjects. Performance was measured in terms of image quality, 3D velocity vector accuracy, and accuracy of hemodynamic parameters. Reconstruction performance was measured for three different k-space undersamplings and compared with one state-of-the-art compressed sensing reconstruction method and three deep learning-based reconstruction methods. The proposed method outperforms the state-of-the-art methods in all performance measures for all three k-space undersamplings. Hemodynamic parameters such as blood flow rate and peak velocity from the proposed technique show good agreement with the reference flow parameters. Visualization of the reconstructed images and velocity magnitude also shows excellent agreement with the fully sampled reference dataset. Moreover, the proposed method is computationally fast: the full 4D flow dataset (all slices in space and time) for a subject can be reconstructed in 69 seconds on a single GPU. Although the proposed method has been applied to 4D flow MRI of aortic valvular flows, given a sufficient number of training samples it should be applicable to other arterial flows.
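The zero-filled input that such a network starts from can be illustrated in a few lines; a numpy sketch with an assumed Cartesian column mask (the paper's actual sampling patterns differ):

```python
import numpy as np

def zero_filled_recon(kspace, mask):
    """Inverse FFT after setting unacquired k-space samples to zero; this
    aliased image is the network input in frameworks like the one above."""
    return np.fft.ifft2(kspace * mask)

rng = np.random.default_rng(1)
img = rng.standard_normal((32, 32))
kspace = np.fft.fft2(img)
mask = np.zeros((32, 32))
mask[:, ::4] = 1                      # keep every 4th column: 4x undersampling
zf = zero_filled_recon(kspace, mask)  # aliasing-corrupted input image
```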
Collapse
|
76
|
Zou J, Li C, Jia S, Wu R, Pei T, Zheng H, Wang S. SelfCoLearn: Self-Supervised Collaborative Learning for Accelerating Dynamic MR Imaging. Bioengineering (Basel) 2022; 9:650. [PMID: 36354561 PMCID: PMC9687509 DOI: 10.3390/bioengineering9110650] [Citation(s) in RCA: 7] [Impact Index Per Article: 2.3] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/13/2022] [Revised: 10/19/2022] [Accepted: 10/26/2022] [Indexed: 08/22/2023] Open
Abstract
Recently, deep learning has been extensively investigated for accelerating dynamic magnetic resonance (MR) imaging, with encouraging progress achieved. However, without fully sampled reference data for training, current approaches may have limited ability to recover fine details or structures. To address this challenge, this paper proposes a self-supervised collaborative learning framework (SelfCoLearn) for accurate dynamic MR image reconstruction directly from undersampled k-space data. SelfCoLearn is equipped with three important components: dual-network collaborative learning, re-undersampling data augmentation, and a specially designed co-training loss. The framework is flexible and can be integrated into various model-based iterative unrolled networks. The proposed method was evaluated on an in vivo dataset and compared to four state-of-the-art methods. The results show that the proposed method possesses strong capabilities in capturing essential and inherent representations for direct reconstruction from undersampled k-space data, and thus enables high-quality and fast dynamic MR imaging.
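The re-undersampling augmentation named above (drawing a further subset of the already-acquired k-space samples so collaborating networks see different views of the same scan) can be sketched as follows; the column-wise pattern and keep fraction are illustrative assumptions, not the paper's settings:

```python
import numpy as np

def reundersample(kspace, mask, keep_frac=0.5, rng=None):
    """Re-undersampling augmentation: select a further subset of the
    acquired k-space columns so that two collaborating networks receive
    different undersampled views of the same scan (column-wise pattern
    and keep_frac are illustrative assumptions)."""
    if rng is None:
        rng = np.random.default_rng(0)
    acquired = np.flatnonzero(mask.any(axis=0))          # acquired columns
    keep = rng.choice(acquired, size=int(len(acquired) * keep_frac),
                      replace=False)
    submask = np.zeros_like(mask)
    submask[:, keep] = mask[:, keep]
    return kspace * submask, submask

rng = np.random.default_rng(2)
kspace = np.fft.fft2(rng.standard_normal((32, 32)))
mask = np.zeros((32, 32))
mask[:, ::2] = 1                                         # 2x undersampling
sub_k, submask = reundersample(kspace * mask, mask, rng=rng)
```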
Collapse
Affiliation(s)
- Juan Zou
- School of Physics and Optoelectronics, Xiangtan University, Xiangtan 411105, China
- Paul C. Lauterbur Research Center for Biomedical Imaging, Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen 518055, China
| | - Cheng Li
- Paul C. Lauterbur Research Center for Biomedical Imaging, Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen 518055, China
| | - Sen Jia
- Paul C. Lauterbur Research Center for Biomedical Imaging, Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen 518055, China
| | - Ruoyou Wu
- Paul C. Lauterbur Research Center for Biomedical Imaging, Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen 518055, China
| | - Tingrui Pei
- School of Physics and Optoelectronics, Xiangtan University, Xiangtan 411105, China
- College of Information Science and Technology, Jinan University, Guangzhou 510631, China
| | - Hairong Zheng
- Paul C. Lauterbur Research Center for Biomedical Imaging, Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen 518055, China
| | - Shanshan Wang
- Paul C. Lauterbur Research Center for Biomedical Imaging, Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen 518055, China
- Guangdong Provincial Key Laboratory of Artificial Intelligence in Medical Image Analysis and Application, Shenzhen 518055, China
| |
Collapse
|
77
|
Cui Y, Zhu J, Duan Z, Liao Z, Wang S, Liu W. Artificial Intelligence in Spinal Imaging: Current Status and Future Directions. INTERNATIONAL JOURNAL OF ENVIRONMENTAL RESEARCH AND PUBLIC HEALTH 2022; 19:11708. [PMID: 36141981 PMCID: PMC9517575 DOI: 10.3390/ijerph191811708] [Citation(s) in RCA: 7] [Impact Index Per Article: 2.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 08/12/2022] [Revised: 09/14/2022] [Accepted: 09/15/2022] [Indexed: 06/16/2023]
Abstract
Spinal maladies are among the most common causes of pain and disability worldwide. Imaging represents an important diagnostic procedure in spinal care. Imaging investigations can provide information and insights that are not visible through ordinary visual inspection. Multiscale in vivo interrogation has the potential to improve the assessment and monitoring of pathologies thanks to the convergence of imaging, artificial intelligence (AI), and radiomic techniques. AI is revolutionizing computer vision, autonomous driving, natural language processing, and speech recognition. These revolutionary technologies are already impacting radiology, diagnostics, and other fields, where automated solutions can increase precision and reproducibility. In the first section of this narrative review, we provide a brief explanation of the many approaches currently being developed, with a particular emphasis on those employed in spinal imaging studies. The previously documented uses of AI for challenges involving spinal imaging, including imaging appropriateness and protocoling, image acquisition and reconstruction, image presentation, image interpretation, and quantitative image analysis, are then detailed. Finally, the future applications of AI to imaging of the spine are discussed. AI has the potential to significantly affect every step in spinal imaging. AI can make images of the spine more useful to patients and doctors by improving image quality, imaging efficiency, and diagnostic accuracy.
Collapse
Affiliation(s)
- Yangyang Cui
- Tsinghua Shenzhen International Graduate School, Tsinghua University, Shenzhen 518055, China
- Department of Mechanical Engineering, Tsinghua University, Beijing 100084, China
- Biomechanics and Biotechnology Lab, Research Institute of Tsinghua University in Shenzhen, Shenzhen 518057, China
| | - Jia Zhu
- Tsinghua Shenzhen International Graduate School, Tsinghua University, Shenzhen 518055, China
- Department of Mechanical Engineering, Tsinghua University, Beijing 100084, China
- Biomechanics and Biotechnology Lab, Research Institute of Tsinghua University in Shenzhen, Shenzhen 518057, China
| | - Zhili Duan
- Tsinghua Shenzhen International Graduate School, Tsinghua University, Shenzhen 518055, China
- Department of Mechanical Engineering, Tsinghua University, Beijing 100084, China
- Biomechanics and Biotechnology Lab, Research Institute of Tsinghua University in Shenzhen, Shenzhen 518057, China
| | - Zhenhua Liao
- Biomechanics and Biotechnology Lab, Research Institute of Tsinghua University in Shenzhen, Shenzhen 518057, China
| | - Song Wang
- Biomechanics and Biotechnology Lab, Research Institute of Tsinghua University in Shenzhen, Shenzhen 518057, China
| | - Weiqiang Liu
- Tsinghua Shenzhen International Graduate School, Tsinghua University, Shenzhen 518055, China
- Department of Mechanical Engineering, Tsinghua University, Beijing 100084, China
- Biomechanics and Biotechnology Lab, Research Institute of Tsinghua University in Shenzhen, Shenzhen 518057, China
| |
Collapse
|
78
|
Oscanoa JA, Middione MJ, Syed AB, Sandino CM, Vasanawala SS, Ennis DB. Accelerated two-dimensional phase-contrast for cardiovascular MRI using deep learning-based reconstruction with complex difference estimation. Magn Reson Med 2022; 89:356-369. [PMID: 36093915 DOI: 10.1002/mrm.29441] [Citation(s) in RCA: 9] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/19/2022] [Revised: 07/16/2022] [Accepted: 08/11/2022] [Indexed: 11/10/2022]
Abstract
PURPOSE To develop and validate a deep learning-based reconstruction framework for highly accelerated two-dimensional (2D) phase contrast (PC-MRI) data with accurate and precise quantitative measurements. METHODS We propose a modified DL-ESPIRiT reconstruction framework for 2D PC-MRI, comprising an unrolled neural network architecture with complex difference estimation (CD-DL). CD-DL was trained on 155 fully sampled 2D PC-MRI pediatric clinical datasets. The fully sampled data (n = 29) were retrospectively undersampled (6-11×) and reconstructed using CD-DL and a parallel imaging and compressed sensing method (PICS). Measurements of peak velocity and total flow were compared to determine the highest acceleration rate that provided accuracy and precision within ±5%. Feasibility of CD-DL was demonstrated on prospectively undersampled datasets acquired in pediatric clinical patients (n = 5) and compared to traditional parallel imaging (PI) and PICS. RESULTS The retrospective evaluation showed that 9× accelerated 2D PC-MRI images reconstructed with CD-DL provided accuracy and precision (bias, [95% confidence intervals]) within ±5%. CD-DL showed higher accuracy and precision compared to PICS for measurements of peak velocity (2.8% [-2.9, 4.5] vs. 3.9% [-11.0, 4.9]) and total flow (1.8% [-3.9, 3.4] vs. 2.9% [-7.1, 6.9]). The prospective feasibility study showed that CD-DL provided higher accuracy and precision than PICS for measurements of peak velocity and total flow. CONCLUSION In a retrospective evaluation, CD-DL produced quantitative measurements of 2D PC-MRI peak velocity and total flow with ≤5% error in both accuracy and precision for up to 9× acceleration. Clinical feasibility was demonstrated using a prospective clinical deployment of our 8× undersampled acquisition and CD-DL reconstruction in a cohort of pediatric patients.
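For orientation, complex difference estimation builds on the standard phase-contrast relation between flow-encoded and flow-compensated complex images; a hedged numpy sketch of the textbook velocity mapping (not the authors' CD-DL network, which estimates the complex difference with a learned model; the venc value is illustrative):

```python
import numpy as np

def velocity_from_phase(z_enc, z_comp, venc):
    """Textbook phase-contrast velocity: v = venc * angle(z_enc * conj(z_comp)) / pi.
    z_enc / z_comp are the flow-encoded / flow-compensated complex images;
    venc is the velocity-encoding limit."""
    return venc * np.angle(z_enc * np.conj(z_comp)) / np.pi

z_comp = np.exp(1j * 0.2) * np.ones((4, 4))               # background phase
z_enc = np.exp(1j * (0.2 + np.pi / 2)) * np.ones((4, 4))  # +pi/2 flow phase
v = velocity_from_phase(z_enc, z_comp, venc=150.0)        # half of venc: 75 cm/s
```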
Collapse
Affiliation(s)
- Julio A Oscanoa
- Department of Bioengineering, Stanford University, Stanford, California, USA.,Department of Radiology, Stanford University, Stanford, California, USA
| | | | - Ali B Syed
- Department of Radiology, Stanford University, Stanford, California, USA.,Cardiovascular Institute, Stanford University, Stanford, California, USA
| | - Christopher M Sandino
- Department of Electrical Engineering, Stanford University, Stanford, California, USA
| | | | - Daniel B Ennis
- Department of Radiology, Stanford University, Stanford, California, USA.,Cardiovascular Institute, Stanford University, Stanford, California, USA
| |
Collapse
|
79
|
Deep learning for compressive sensing: a ubiquitous systems perspective. Artif Intell Rev 2022. [DOI: 10.1007/s10462-022-10259-5] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/26/2022]
Abstract
Compressive sensing (CS) is a mathematically elegant tool for reducing the sensor sampling rate, potentially bringing context-awareness to a wider range of devices. Nevertheless, practical issues with the sampling and reconstruction algorithms prevent further proliferation of CS in real-world domains, especially among heterogeneous ubiquitous devices. Deep learning (DL) naturally complements CS for adapting the sampling matrix, reconstructing the signal, and learning from the compressed samples. While the CS–DL integration has received substantial research interest recently, it has not yet been thoroughly surveyed, nor has any light been shed on the practical issues of bringing CS–DL to real-world implementations in the ubiquitous computing domain. In this paper we identify the main ways in which CS and DL can interplay, extract key ideas for making CS–DL efficient, outline major trends in the CS–DL research space, and derive guidelines for the future evolution of CS–DL within the ubiquitous computing domain.
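The CS building blocks the survey refers to (a sampling matrix, compressed measurements, and a sparse reconstruction that unrolled DL methods imitate) can be illustrated with classical ISTA; sizes, the seed, and the regularization weight are arbitrary choices for this sketch:

```python
import numpy as np

def ista(A, y, lam=0.05, n_iter=200):
    """Iterative shrinkage-thresholding for min_x 0.5*||Ax - y||^2 + lam*||x||_1,
    the classical sparse CS reconstruction that unrolled DL methods imitate."""
    L = np.linalg.norm(A, 2) ** 2                  # Lipschitz const. of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        g = x - A.T @ (A @ x - y) / L              # gradient step on the data term
        x = np.sign(g) * np.maximum(np.abs(g) - lam / L, 0.0)  # soft threshold
    return x

rng = np.random.default_rng(0)
A = rng.standard_normal((32, 64)) / np.sqrt(32)    # random sampling matrix
x_true = np.zeros(64)
x_true[[5, 20, 41]] = [1.0, -1.5, 0.8]             # 3-sparse signal
y = A @ x_true                                     # compressed measurements
x_hat = ista(A, y)                                 # approximate sparse recovery
```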
Collapse
|
80
|
Yaqub M, Jinchao F, Ahmed S, Arshid K, Bilal MA, Akhter MP, Zia MS. GAN-TL: Generative Adversarial Networks with Transfer Learning for MRI Reconstruction. APPLIED SCIENCES 2022; 12:8841. [DOI: 10.3390/app12178841] [Citation(s) in RCA: 9] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 09/22/2024]
Abstract
Generative adversarial networks (GANs), fueled by deep learning, are an efficient technique for image reconstruction from under-sampled MR data. In most cases, improving a model's reconstruction performance requires a substantial proportion of training data. However, gathering tens of thousands of raw patient datasets for training a model in actual clinical applications is difficult, because retaining k-space data is not customary in the clinical workflow. It is therefore imperative to increase, as quickly as possible, the generalizability of a network created from a small number of samples. This research explored two unique applications based on deep learning GANs and transfer learning. For brain and knee MRI reconstruction, the proposed method outperforms current techniques in terms of peak signal-to-noise ratio (PSNR) and structural similarity index (SSIM). Compared to the transfer-learning results for the brain and knee, using a smaller number of training cases produced superior results at acceleration factor (AF) 2 (brain: PSNR 39.33, SSIM 0.97; knee: PSNR 35.48, SSIM 0.90) and AF 4 (brain: PSNR 38.13, SSIM 0.95; knee: PSNR 33.95, SSIM 0.86). The described approach would make it easier to apply future models for MRI reconstruction without necessitating the acquisition of vast imaging datasets.
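The PSNR metric reported above is straightforward to reproduce; a minimal numpy sketch, with the data range as an explicit assumption:

```python
import numpy as np

def psnr(ref, img, data_range=1.0):
    """Peak signal-to-noise ratio in dB; data_range (the maximum possible
    intensity) is an explicit assumption of the comparison."""
    mse = np.mean((ref - img) ** 2)
    return 10.0 * np.log10(data_range ** 2 / mse)

ref = np.zeros((8, 8))
img = np.full((8, 8), 0.1)   # uniform 0.1 error -> MSE = 0.01 -> ~20 dB
value = psnr(ref, img)
```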
Collapse
Affiliation(s)
- Muhammad Yaqub
- Beijing Key Laboratory of Computational Intelligence and Intelligent System, Faculty of Information Technology, Beijing University of Technology, Beijing 100124, China
| | - Feng Jinchao
- Beijing Key Laboratory of Computational Intelligence and Intelligent System, Faculty of Information Technology, Beijing University of Technology, Beijing 100124, China
| | - Shahzad Ahmed
- Beijing Key Laboratory of Computational Intelligence and Intelligent System, Faculty of Information Technology, Beijing University of Technology, Beijing 100124, China
| | - Kaleem Arshid
- Beijing Key Laboratory of Computational Intelligence and Intelligent System, Faculty of Information Technology, Beijing University of Technology, Beijing 100124, China
| | - Muhammad Atif Bilal
- Riphah College of Computing, Faisalabad Campus, Riphah International University, Islamabad 38000, Pakistan
- College of Geoexploration Science and Technology, Jilin University, Changchun 130061, China
| | - Muhammad Pervez Akhter
- Riphah College of Computing, Faisalabad Campus, Riphah International University, Islamabad 38000, Pakistan
| | - Muhammad Sultan Zia
- Department of Computer Science, The University of Chenab, Gujranwala 50250, Pakistan
| |
Collapse
|
81
|
Aghabiglou A, Eksioglu EM. Deep unfolding architecture for MRI reconstruction enhanced by adaptive noise maps. Biomed Signal Process Control 2022. [DOI: 10.1016/j.bspc.2022.104016] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/26/2022]
|
82
|
Liu L, Shen L, Johansson A, Balter JM, Cao Y, Chang D, Xing IL. Real time volumetric MRI for 3D motion tracking via geometry-informed deep learning. Med Phys 2022; 49:6110-6119. [PMID: 35766221 PMCID: PMC10323755 DOI: 10.1002/mp.15822] [Citation(s) in RCA: 6] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/29/2021] [Revised: 04/26/2022] [Accepted: 06/02/2022] [Indexed: 11/07/2022] Open
Abstract
PURPOSE To develop a geometry-informed deep learning framework for volumetric MRI with sub-second acquisition time in support of 3D motion tracking, which is highly desirable for improved radiotherapy precision but hindered by the long image acquisition time. METHODS A 2D-3D deep learning network with an explicitly defined geometry module that embeds geometric priors of the k-space encoding pattern was investigated, where a 2D generation network first augmented the sparsely sampled image dataset by generating new 2D representations of the underlying 3D subject. A geometry module then unfolded the 2D representations to the volumetric space. Finally, a 3D refinement network took the unfolded 3D data and outputted high-resolution volumetric images. Patient-specific models were trained for seven abdominal patients to reconstruct volumetric MRI from both orthogonal cine slices and sparse radial samples. To evaluate the robustness of the proposed method to longitudinal patient anatomy and position changes, we tested the trained model on separate datasets acquired more than one month later and evaluated 3D target motion tracking accuracy using the model-reconstructed images by deforming a reference MRI with gross tumor volume (GTV) contours to a 5-min time series of both ground truth and model-reconstructed volumetric images with a temporal resolution of 340 ms. RESULTS Across the seven patients evaluated, the median distances between model-predicted and ground truth GTV centroids in the superior-inferior direction were 0.4 ± 0.3 mm and 0.5 ± 0.4 mm for cine and radial acquisitions, respectively. The 95-percentile Hausdorff distances between model-predicted and ground truth GTV contours were 4.7 ± 1.1 mm and 3.2 ± 1.5 mm for cine and radial acquisitions, which are of the same scale as cross-plane image resolution. 
CONCLUSION Incorporating geometric priors into a deep learning model enables volumetric imaging with high spatial and temporal resolution, which is particularly valuable for 3D motion tracking and has the potential to greatly improve the precision of MRI-guided radiotherapy.
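The centroid-distance evaluation described above can be sketched directly; treating axis 0 as the superior-inferior direction and the voxel size are illustrative assumptions (the paper reports SI centroid distances in mm):

```python
import numpy as np

def centroid_si_distance(mask_pred, mask_ref, voxel_size_si=1.0):
    """Distance (mm) between binary-mask centroids along the superior-inferior
    axis; axis 0 as SI and the voxel size are illustrative assumptions."""
    c_pred = np.argwhere(mask_pred).mean(axis=0)
    c_ref = np.argwhere(mask_ref).mean(axis=0)
    return abs(c_pred[0] - c_ref[0]) * voxel_size_si

ref = np.zeros((16, 16, 16), dtype=bool)
ref[4:8, 5:9, 5:9] = True             # toy GTV mask
pred = np.roll(ref, 2, axis=0)        # predicted GTV shifted 2 voxels in SI
d = centroid_si_distance(pred, ref, voxel_size_si=1.5)   # 2 voxels * 1.5 mm
```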
Collapse
Affiliation(s)
- Lianli Liu
- Department of Radiation Oncology, Stanford University, Palo Alto, California, USA
| | - Liyue Shen
- Department of Electrical Engineering, Stanford University, Palo Alto, California, USA
| | - Adam Johansson
- Department of Radiation Oncology, University of Michigan, Ann Arbor, Michigan, USA
- Department of Immunology, Genetics and Pathology, Uppsala University, Uppsala, Sweden
- Department of Surgical Sciences, Uppsala University, Uppsala, Sweden
| | - James M. Balter
- Department of Radiation Oncology, University of Michigan, Ann Arbor, Michigan, USA
| | - Yue Cao
- Department of Radiation Oncology, University of Michigan, Ann Arbor, Michigan, USA
| | - Daniel Chang
- Department of Radiation Oncology, Stanford University, Palo Alto, California, USA
| | - I Lei Xing
- Department of Radiation Oncology, Stanford University, Palo Alto, California, USA
- Department of Electrical Engineering, Stanford University, Palo Alto, California, USA
| |
Collapse
|
83
|
Wang G, Luo T, Nielsen JF, Noll DC, Fessler JA. B-Spline Parameterized Joint Optimization of Reconstruction and K-Space Trajectories (BJORK) for Accelerated 2D MRI. IEEE TRANSACTIONS ON MEDICAL IMAGING 2022; 41:2318-2330. [PMID: 35320096 PMCID: PMC9437126 DOI: 10.1109/tmi.2022.3161875] [Citation(s) in RCA: 20] [Impact Index Per Article: 6.7] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 05/09/2023]
Abstract
Optimizing k-space sampling trajectories is a promising yet challenging topic for fast magnetic resonance imaging (MRI). This work proposes to jointly optimize a reconstruction method and the sampling trajectories with respect to image reconstruction quality in a supervised learning manner. We parameterize trajectories with quadratic B-spline kernels to reduce the number of parameters and apply multi-scale optimization, which may help to avoid sub-optimal local minima. The algorithm includes an efficient non-Cartesian unrolled neural network-based reconstruction and an accurate approximation for backpropagation through the non-uniform fast Fourier transform (NUFFT) operator, to accurately reconstruct and back-propagate multi-coil non-Cartesian data. Penalties on slew rate and gradient amplitude enforce hardware constraints. Sampling and reconstruction are trained jointly using large public datasets. To correct for possible eddy-current effects introduced by the curved trajectory, we use a pencil-beam trajectory mapping technique. In both simulations and in vivo experiments, the learned trajectory demonstrates significantly improved image quality compared to previous model-based and learning-based trajectory optimization methods for 10× acceleration factors. Though trained with a neural network-based reconstruction, the proposed trajectory also improves image quality with compressed sensing-based reconstruction.
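The hardware-constraint penalties mentioned above (on gradient amplitude and slew rate) can be sketched with finite differences of a discretized trajectory; units, limits, and the discretization are illustrative assumptions, and the paper applies such penalties to B-spline-parameterized trajectories rather than raw samples:

```python
import numpy as np

def hardware_penalties(k, dt, g_max, s_max):
    """Hinge penalties on gradient amplitude and slew rate derived from a
    discretized k-space trajectory k of shape [n_samples, n_dims].
    Finite differences stand in for the gradient/slew waveforms; units
    and limits here are illustrative."""
    g = np.diff(k, axis=0) / dt                   # gradient-like waveform
    s = np.diff(g, axis=0) / dt                   # slew-rate-like waveform
    pen_g = np.sum(np.maximum(np.abs(g) - g_max, 0.0) ** 2)
    pen_s = np.sum(np.maximum(np.abs(s) - s_max, 0.0) ** 2)
    return pen_g, pen_s

dt = 1e-3
k = np.stack([np.linspace(0.0, 1.0, 100), np.zeros(100)], axis=1)
pen_g, pen_s = hardware_penalties(k, dt, g_max=40.0, s_max=1.0)
# a straight, slowly traversed line violates neither limit
```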
Collapse
|
84
|
DIIK-Net: A Full-resolution Cross-domain Deep Interaction Convolutional Neural Network for MR Image Reconstruction. Neurocomputing 2022. [DOI: 10.1016/j.neucom.2022.09.048] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/19/2022]
|
85
|
Liu X, Pang Y, Jin R, Liu Y, Wang Z. Dual-Domain Reconstruction Network with V-Net and K-Net for Fast MRI. Magn Reson Med 2022; 88:2694-2708. [PMID: 35942977 DOI: 10.1002/mrm.29400] [Citation(s) in RCA: 15] [Impact Index Per Article: 5.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/23/2022] [Revised: 07/05/2022] [Accepted: 07/08/2022] [Indexed: 11/10/2022]
Abstract
PURPOSE To introduce a dual-domain reconstruction network with V-Net and K-Net for accurate MR image reconstruction from undersampled k-space data. METHODS Most state-of-the-art reconstruction methods apply U-Net or cascaded U-Nets in the image domain and/or the k-space domain. Nevertheless, these methods have the following problems: (1) directly applying U-Net in the k-space domain is not optimal for extracting features; (2) the classical image-domain-oriented U-Net is heavyweight and hence inefficient when cascaded many times to achieve good reconstruction accuracy; (3) the classical image-domain-oriented U-Net does not make full use of the encoder network's information when extracting features in the decoder network; and (4) existing methods are ineffective at simultaneously extracting and fusing features in the image domain and its dual k-space domain. To tackle these problems, we present three components: (1) V-Net, an image-domain encoder-decoder subnetwork that is more lightweight for cascading and fully utilizes encoder features for decoding; (2) K-Net, a k-space-domain subnetwork that is better suited to extracting hierarchical features in the k-space domain; and (3) KV-Net, a dual-domain reconstruction network in which V-Nets and K-Nets are effectively combined and cascaded. RESULTS Extensive experiments on the fastMRI dataset demonstrate that the proposed KV-Net reconstructs high-quality images and outperforms state-of-the-art approaches with fewer parameters. CONCLUSIONS To reconstruct images effectively and efficiently from incomplete k-space data, we have presented a dual-domain KV-Net that combines K-Nets and V-Nets. The KV-Net achieves better results with only 9% and 5% of the parameters of the comparable methods XPD-Net and i-RIM, respectively.
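The dual-domain cascade can be sketched generically. The minimal NumPy code below is not the paper's implementation: `img_nets` and `k_nets` are hypothetical callables standing in for the trained V-Net and K-Net stages. It alternates image-domain and k-space-domain refinements, each anchored to the acquired samples by a standard data-consistency step:

```python
import numpy as np

def data_consistency(image, measured_k, mask):
    """Enforce consistency with acquired k-space: keep network predictions
    only where k-space was not sampled."""
    k = np.fft.fft2(image)
    k = np.where(mask, measured_k, k)   # overwrite sampled locations
    return np.fft.ifft2(k)

def cascade(image0, measured_k, mask, img_nets, k_nets):
    """Alternate image-domain and k-space-domain refinements (hypothetical
    stand-ins for V-Net/K-Net stages), each followed by data consistency."""
    x = image0
    for f_img, f_k in zip(img_nets, k_nets):
        x = f_img(x)                               # image-domain refinement
        k = f_k(np.fft.fft2(x))                    # k-space-domain refinement
        x = np.fft.ifft2(k)
        x = data_consistency(x, measured_k, mask)  # anchor to measurements
    return x
```

With fully sampled data and identity networks, the cascade returns the ground-truth image exactly, which is a useful sanity check on the data-consistency step.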
Collapse
Affiliation(s)
- Xiaohan Liu
- Tianjin Key Lab. of Brain Inspired Intelligence Technology, School of Electrical and Information Engineering, Tianjin University, Tianjin, People's Republic of China
| | - Yanwei Pang
- Tianjin Key Lab. of Brain Inspired Intelligence Technology, School of Electrical and Information Engineering, Tianjin University, Tianjin, People's Republic of China
| | - Ruiqi Jin
- Tianjin Key Lab. of Brain Inspired Intelligence Technology, School of Electrical and Information Engineering, Tianjin University, Tianjin, People's Republic of China
| | - Yu Liu
- Tianjin Key Lab. of Brain Inspired Intelligence Technology, School of Electrical and Information Engineering, Tianjin University, Tianjin, People's Republic of China
| | - Zhenchang Wang
- Beijing Friendship Hospital, Capital Medical University, Beijing, People's Republic of China
| |
Collapse
|
86
|
Chen EZ, Wang P, Chen X, Chen T, Sun S. Pyramid Convolutional RNN for MRI Image Reconstruction. IEEE TRANSACTIONS ON MEDICAL IMAGING 2022; 41:2033-2047. [PMID: 35192462 DOI: 10.1109/tmi.2022.3153849] [Citation(s) in RCA: 14] [Impact Index Per Article: 4.7] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/14/2023]
Abstract
Fast and accurate MRI image reconstruction from undersampled data is crucial in clinical practice. Deep-learning-based reconstruction methods have shown promising advances in recent years. However, recovering fine details from undersampled data is still challenging. In this paper, we introduce a novel deep-learning-based method, Pyramid Convolutional RNN (PC-RNN), to reconstruct images from multiple scales. Based on the formulation of MRI reconstruction as an inverse problem, we design the PC-RNN model with three convolutional RNN (ConvRNN) modules to iteratively learn features at multiple scales. Each ConvRNN module reconstructs images at a different scale, and the reconstructed images are combined by a final CNN module in a pyramid fashion. The multi-scale ConvRNN modules learn a coarse-to-fine image reconstruction. Unlike other common reconstruction methods for parallel imaging, PC-RNN does not employ coil sensitivity maps for multi-coil data and directly models the multiple coils as multi-channel inputs. A coil compression technique is applied to standardize data with various coil numbers, leading to more efficient training. We evaluate our model on the fastMRI knee and brain datasets, and the results show that the proposed model outperforms other methods and can recover more details. The proposed method is one of the winning solutions of the 2019 fastMRI competition.
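Coil compression, used above to standardize variable coil counts, is commonly implemented with an SVD across the coil dimension. The sketch below shows that generic technique, not necessarily the paper's exact variant:

```python
import numpy as np

def compress_coils(multicoil_k, n_virtual):
    """SVD-based coil compression (generic approach). multicoil_k:
    (n_coils, ny, nx) k-space; returns (n_virtual, ny, nx) virtual coils."""
    n_coils = multicoil_k.shape[0]
    flat = multicoil_k.reshape(n_coils, -1)            # coils x samples
    u, s, vh = np.linalg.svd(flat, full_matrices=False)
    compress = u[:, :n_virtual].conj().T               # project onto top modes
    return (compress @ flat).reshape((n_virtual,) + multicoil_k.shape[1:])
```

When the coil data effectively lie in a low-dimensional subspace, the top virtual coils retain almost all of the signal energy, which is why a fixed virtual-coil count can standardize heterogeneous acquisitions.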
Collapse
|
87
|
Beauferris Y, Teuwen J, Karkalousos D, Moriakov N, Caan M, Yiasemis G, Rodrigues L, Lopes A, Pedrini H, Rittner L, Dannecker M, Studenyak V, Gröger F, Vyas D, Faghih-Roohi S, Kumar Jethi A, Chandra Raju J, Sivaprakasam M, Lasby M, Nogovitsyn N, Loos W, Frayne R, Souza R. Multi-Coil MRI Reconstruction Challenge-Assessing Brain MRI Reconstruction Models and Their Generalizability to Varying Coil Configurations. Front Neurosci 2022; 16:919186. [PMID: 35873808 PMCID: PMC9298878 DOI: 10.3389/fnins.2022.919186] [Citation(s) in RCA: 17] [Impact Index Per Article: 5.7] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/13/2022] [Accepted: 06/01/2022] [Indexed: 11/13/2022] Open
Abstract
Deep-learning-based brain magnetic resonance imaging (MRI) reconstruction methods have the potential to accelerate the MRI acquisition process. Nevertheless, the scientific community lacks appropriate benchmarks to assess the MRI reconstruction quality of high-resolution brain images, and evaluate how these proposed algorithms will behave in the presence of small, but expected data distribution shifts. The multi-coil MRI (MC-MRI) reconstruction challenge provides a benchmark that aims at addressing these issues, using a large dataset of high-resolution, three-dimensional, T1-weighted MRI scans. The challenge has two primary goals: (1) to compare different MRI reconstruction models on this dataset and (2) to assess the generalizability of these models to data acquired with a different number of receiver coils. In this paper, we describe the challenge experimental design and summarize the results of a set of baseline and state-of-the-art brain MRI reconstruction models. We provide relevant comparative information on the current MRI reconstruction state-of-the-art and highlight the challenges of obtaining generalizable models that are required prior to broader clinical adoption. The MC-MRI benchmark data, evaluation code, and current challenge leaderboard are publicly available. They provide an objective performance assessment for future developments in the field of brain MRI reconstruction.
Collapse
Affiliation(s)
- Youssef Beauferris
- (AI) Lab, Electrical and Software Engineering, University of Calgary, Calgary, AB, Canada
- Biomedical Engineering Graduate Program, University of Calgary, Calgary, AB, Canada
- Hotchkiss Brain Institute, University of Calgary, Calgary, AB, Canada
| | - Jonas Teuwen
- Department of Medical Imaging, Radboud University Medical Center, Nijmegen, Netherlands
- Department of Radiation Oncology, Netherlands Cancer Institute, Amsterdam, Netherlands
- Innovation Centre for Artificial Intelligence – Artificial Intelligence for Oncology, University of Amsterdam, Amsterdam, Netherlands
| | - Dimitrios Karkalousos
- Department of Biomedical Engineering and Physics, Amsterdam University Medical Centre, University of Amsterdam, Amsterdam, Netherlands
| | - Nikita Moriakov
- Department of Medical Imaging, Radboud University Medical Center, Nijmegen, Netherlands
- Department of Radiation Oncology, Netherlands Cancer Institute, Amsterdam, Netherlands
| | - Matthan Caan
- Department of Biomedical Engineering and Physics, Amsterdam University Medical Centre, University of Amsterdam, Amsterdam, Netherlands
| | - George Yiasemis
- Department of Radiation Oncology, Netherlands Cancer Institute, Amsterdam, Netherlands
- Innovation Centre for Artificial Intelligence – Artificial Intelligence for Oncology, University of Amsterdam, Amsterdam, Netherlands
| | - Lívia Rodrigues
- Medical Image Computing Lab, School of Electrical and Computer Engineering, University of Campinas, Campinas, Brazil
| | - Alexandre Lopes
- Institute of Computing, University of Campinas, Campinas, Brazil
| | - Helio Pedrini
- Institute of Computing, University of Campinas, Campinas, Brazil
| | - Letícia Rittner
- Medical Image Computing Lab, School of Electrical and Computer Engineering, University of Campinas, Campinas, Brazil
| | - Maik Dannecker
- Computer Aided Medical Procedures, Technical University of Munich, Munich, Germany
| | - Viktor Studenyak
- Computer Aided Medical Procedures, Technical University of Munich, Munich, Germany
| | - Fabian Gröger
- Computer Aided Medical Procedures, Technical University of Munich, Munich, Germany
| | - Devendra Vyas
- Computer Aided Medical Procedures, Technical University of Munich, Munich, Germany
| | | | - Amrit Kumar Jethi
- Department of Electrical Engineering, Indian Institute of Technology Madras, Chennai, India
| | - Jaya Chandra Raju
- Department of Electrical Engineering, Indian Institute of Technology Madras, Chennai, India
| | - Mohanasankar Sivaprakasam
- Department of Electrical Engineering, Indian Institute of Technology Madras, Chennai, India
- Healthcare Technology Innovation Centre, Indian Institute of Technology Madras, Chennai, India
| | - Mike Lasby
- (AI) Lab, Electrical and Software Engineering, University of Calgary, Calgary, AB, Canada
- Hotchkiss Brain Institute, University of Calgary, Calgary, AB, Canada
| | - Nikita Nogovitsyn
- Centre for Depression and Suicide Studies, St. Michael's Hospital, Toronto, ON, Canada
- Mood Disorders Program, Department of Psychiatry and Behavioural Neurosciences, McMaster University, Hamilton, ON, Canada
| | - Wallace Loos
- Biomedical Engineering Graduate Program, University of Calgary, Calgary, AB, Canada
- Hotchkiss Brain Institute, University of Calgary, Calgary, AB, Canada
- Radiology and Clinical Neurosciences, University of Calgary, Calgary, AB, Canada
- Seaman Family MR Research Centre, Foothills Medical Center, Calgary, AB, Canada
| | - Richard Frayne
- Biomedical Engineering Graduate Program, University of Calgary, Calgary, AB, Canada
- Hotchkiss Brain Institute, University of Calgary, Calgary, AB, Canada
- Radiology and Clinical Neurosciences, University of Calgary, Calgary, AB, Canada
- Seaman Family MR Research Centre, Foothills Medical Center, Calgary, AB, Canada
| | - Roberto Souza
- (AI) Lab, Electrical and Software Engineering, University of Calgary, Calgary, AB, Canada
- Biomedical Engineering Graduate Program, University of Calgary, Calgary, AB, Canada
- Hotchkiss Brain Institute, University of Calgary, Calgary, AB, Canada
| |
Collapse
|
88
|
Korkmaz Y, Dar SUH, Yurt M, Ozbey M, Cukur T. Unsupervised MRI Reconstruction via Zero-Shot Learned Adversarial Transformers. IEEE TRANSACTIONS ON MEDICAL IMAGING 2022; 41:1747-1763. [PMID: 35085076 DOI: 10.1109/tmi.2022.3147426] [Citation(s) in RCA: 89] [Impact Index Per Article: 29.7] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/14/2023]
Abstract
Supervised reconstruction models are characteristically trained on matched pairs of undersampled and fully-sampled data to capture an MRI prior, along with supervision regarding the imaging operator to enforce data consistency. To reduce supervision requirements, the recent deep image prior framework instead conjoins untrained MRI priors with the imaging operator during inference. Yet, canonical convolutional architectures are suboptimal in capturing long-range relationships, and priors based on randomly initialized networks may yield suboptimal performance. To address these limitations, here we introduce a novel unsupervised MRI reconstruction method based on zero-Shot Learned Adversarial TransformERs (SLATER). SLATER embodies a deep adversarial network with cross-attention transformers to map noise and latent variables onto coil-combined MR images. During pre-training, this unconditional network learns a high-quality MRI prior in an unsupervised generative modeling task. During inference, a zero-shot reconstruction is then performed by incorporating the imaging operator and optimizing the prior to maximize consistency to undersampled data. Comprehensive experiments on brain MRI datasets clearly demonstrate the superior performance of SLATER against state-of-the-art unsupervised methods.
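The zero-shot inference stage described above, optimizing a pretrained prior against undersampled measurements, follows a generic pattern. The toy loop below is illustrative only: SLATER uses an adversarial transformer prior and a proper gradient-based optimizer, whereas this sketch uses finite-difference gradients on an arbitrary generator to stay dependency-free:

```python
import numpy as np

def zero_shot_fit(generator, theta0, measured_k, mask, lr=0.01, iters=300, eps=1e-3):
    """Generic zero-shot inference loop (illustrative, not SLATER's exact
    optimizer): adjust generator parameters/latents so the generated image's
    k-space matches the acquired samples."""
    theta = theta0.astype(float).copy()

    def loss(p):
        k = np.fft.fft2(generator(p))
        return np.sum(np.abs((k - measured_k) * mask) ** 2)

    for _ in range(iters):
        grad = np.zeros_like(theta)
        for i in range(theta.size):              # finite-difference gradient
            d = np.zeros_like(theta)
            d.flat[i] = eps
            grad.flat[i] = (loss(theta + d) - loss(theta - d)) / (2 * eps)
        theta -= lr * grad
    return generator(theta)
```

The key property is that only the measured k-space samples and the (fixed or lightly tuned) prior enter the objective; no paired training data is needed at inference time.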
Collapse
|
89
|
Pan B, Qi N, Meng Q, Wang J, Peng S, Qi C, Gong NJ, Zhao J. Ultra high speed SPECT bone imaging enabled by a deep learning enhancement method: a proof of concept. EJNMMI Phys 2022; 9:43. [PMID: 35698006 PMCID: PMC9192886 DOI: 10.1186/s40658-022-00472-0] [Citation(s) in RCA: 10] [Impact Index Per Article: 3.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/18/2022] [Accepted: 05/29/2022] [Indexed: 11/12/2022] Open
Abstract
Background To generate high-quality bone scan SPECT images from SPECT images acquired in only 1/7 of the scan time, using a deep-learning-based enhancement method. Materials and methods Normal-dose (925–1110 MBq) clinical technetium-99m methyl diphosphonate (99mTc-MDP) SPECT/CT images and corresponding SPECT/CT images with 1/7 scan time from 20 adult patients with bone disease and a phantom were collected to develop a lesion-attention weighted U2-Net (Qin et al. in Pattern Recognit 106:107404, 2020), which produces high-quality SPECT images from fast SPECT/CT images. The quality of SPECT images synthesized by different deep learning models was compared using PSNR and SSIM. Clinical evaluation on a 5-point Likert scale (5 = excellent) was performed by two experienced nuclear physicians. Average scores and the Wilcoxon test were used to assess the image quality of 1/7 SPECT, DL-enhanced SPECT, and the standard SPECT. SUVmax, SUVmean, SSIM, and PSNR from each detectable sphere filled with imaging agent were measured and compared across images. Results The U2-Net-based model reached the best PSNR (40.8) and SSIM (0.788) performance compared with other advanced deep learning methods. Clinical evaluation showed that the quality of the synthesized SPECT images is much higher than that of the fast SPECT images (P < 0.05). Compared to the standard SPECT images, enhanced images exhibited the same general image quality (P > 0.999), similar detail of 99mTc-MDP distribution (P = 0.125), and the same diagnostic confidence (P = 0.1875). Four, five, and six spheres could be distinguished on 1/7 SPECT, DL-enhanced SPECT, and the standard SPECT, respectively. The DL-enhanced phantom image outperformed 1/7 SPECT in SUVmax, SUVmean, SSIM, and PSNR in the quantitative assessment. Conclusions Our proposed method yields significant image quality improvement in noise level, anatomical detail, and SUV accuracy, enabling ultra-fast SPECT bone imaging in real clinical settings.
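PSNR, one of the metrics used above, has a standard definition; a minimal sketch follows (the paper's exact data-range convention is not stated, so it is a parameter here):

```python
import numpy as np

def psnr(reference, test, data_range=None):
    """Peak signal-to-noise ratio in dB: 10*log10(data_range^2 / MSE)."""
    mse = np.mean((reference.astype(float) - test.astype(float)) ** 2)
    if data_range is None:
        data_range = reference.max() - reference.min()
    return 10.0 * np.log10(data_range ** 2 / mse)
```

For example, a uniform error of 0.1 on a unit-range image gives an MSE of 0.01 and therefore a PSNR of 20 dB.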
Collapse
Affiliation(s)
- Boyang Pan
- RadioDynamic Healthcare, Shanghai, China
| | - Na Qi
- Department of Nuclear Medicine, Shanghai East Hospital, Tongji University School of Medicine, No. 150 Jimo Road, Pudong New District, Shanghai, China
| | - Qingyuan Meng
- Department of Nuclear Medicine, Shanghai East Hospital, Tongji University School of Medicine, No. 150 Jimo Road, Pudong New District, Shanghai, China
| | | | - Siyue Peng
- RadioDynamic Healthcare, Shanghai, China
| | | | - Nan-Jie Gong
- Vector Lab for Intelligent Medical Imaging and Neural Engineering, International Innovation Center of Tsinghua University, No. 602 Tongpu Street, Putuo District, Shanghai, China.
| | - Jun Zhao
- Department of Nuclear Medicine, Shanghai East Hospital, Tongji University School of Medicine, No. 150 Jimo Road, Pudong New District, Shanghai, China.
| |
Collapse
|
90
|
Shen L, Pauly J, Xing L. NeRP: Implicit Neural Representation Learning With Prior Embedding for Sparsely Sampled Image Reconstruction. IEEE TRANSACTIONS ON NEURAL NETWORKS AND LEARNING SYSTEMS 2022; PP:770-782. [PMID: 35657845 PMCID: PMC10889906 DOI: 10.1109/tnnls.2022.3177134] [Citation(s) in RCA: 38] [Impact Index Per Article: 12.7] [Reference Citation Analysis] [Abstract] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/15/2023]
Abstract
Image reconstruction is an inverse problem that solves for a computational image based on sampled sensor measurements. Sparsely sampled image reconstruction poses additional challenges due to the limited measurements. In this work, we propose a methodology of implicit Neural Representation learning with Prior embedding (NeRP) to reconstruct a computational image from sparsely sampled measurements. The method differs fundamentally from previous deep-learning-based image reconstruction approaches in that NeRP exploits the internal information in an image prior and the physics of the sparsely sampled measurements to produce a representation of the unknown subject. No large-scale data is required to train NeRP, except for a prior image and the sparsely sampled measurements. In addition, we demonstrate that NeRP is a general methodology that generalizes to different imaging modalities such as computed tomography (CT) and magnetic resonance imaging (MRI). We also show that NeRP can robustly capture the subtle yet significant image changes required for assessing tumor progression.
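The core idea of a coordinate-based implicit representation — fit a continuous function of spatial coordinates to sparse measurements, then query it anywhere — can be illustrated with a linear-in-features stand-in for the MLP. Fourier-feature encodings are a common choice for such models, but this is not NeRP's actual architecture:

```python
import numpy as np

def fourier_features(coords, freqs):
    """Map coordinates to Fourier features, a common encoding for
    coordinate-based implicit representations."""
    proj = coords @ freqs
    return np.concatenate([np.sin(proj), np.cos(proj)], axis=1)

def fit_implicit(sample_coords, sample_vals, freqs):
    """Fit a linear-in-features implicit representation to sparse samples
    (a stand-in for the MLP used by coordinate-based methods). Returns a
    function that can be queried at arbitrary coordinates."""
    phi = fourier_features(sample_coords, freqs)
    w, *_ = np.linalg.lstsq(phi, sample_vals, rcond=None)
    return lambda coords: fourier_features(coords, freqs) @ w
```

Because the representation is a function of continuous coordinates rather than a pixel grid, it can be evaluated at locations that were never measured, which is the property sparsely sampled reconstruction exploits.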
Collapse
|
91
|
Zhang C, Moeller S, Demirel OB, Uğurbil K, Akçakaya M. Residual RAKI: A hybrid linear and non-linear approach for scan-specific k-space deep learning. Neuroimage 2022; 256:119248. [PMID: 35487456 PMCID: PMC9179026 DOI: 10.1016/j.neuroimage.2022.119248] [Citation(s) in RCA: 7] [Impact Index Per Article: 2.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/21/2021] [Revised: 04/07/2022] [Accepted: 04/23/2022] [Indexed: 10/31/2022] Open
Abstract
Parallel imaging is the most clinically used acceleration technique for magnetic resonance imaging (MRI), in part due to its easy inclusion in routine acquisitions. In k-space-based parallel imaging reconstruction, sub-sampled k-space data are interpolated using linear convolutions. At high acceleration rates these methods suffer inherent noise amplification and reduced image quality. On the other hand, non-linear deep learning methods provide improved image quality at high acceleration, but the limited availability of training databases for different scans, as well as their poor interpretability, hinder their adoption. In this work, we present an extension of Robust Artificial-neural-networks for k-space Interpolation (RAKI), called residual-RAKI (rRAKI), which achieves scan-specific machine learning reconstruction using a hybrid linear and non-linear methodology. In rRAKI, non-linear CNNs are trained jointly with a linear convolution implemented via a skip connection. In effect, the linear part provides a baseline reconstruction, while the non-linear CNN running in parallel further reduces the artifacts and noise arising from the linear part. The explicit split between the linear and non-linear aspects of the reconstruction also helps improve interpretability compared to purely non-linear methods. Experiments were conducted on the publicly available fastMRI datasets, as well as on high-resolution anatomical imaging, comparing GRAPPA and its variants, compressed sensing, RAKI, Scan-Specific Artifact Reduction in K-space (SPARK), and the proposed rRAKI. Additionally, highly accelerated simultaneous multi-slice (SMS) functional MRI reconstructions were performed, where the proposed rRAKI was compared to Read-out SENSE-GRAPPA and RAKI. Our results show that the proposed rRAKI method substantially improves image quality compared to conventional parallel imaging and offers sharper images than SPARK and ℓ1-SPIRiT. Furthermore, rRAKI shows improved preservation of time-varying dynamics compared to both parallel imaging and RAKI in highly accelerated SMS fMRI.
Collapse
Affiliation(s)
- Chi Zhang
- Electrical and Computer Engineering, University of Minnesota, Minneapolis, MN 55455, USA; Center for Magnetic Resonance Research, University of Minnesota, Minneapolis, MN 55455, USA
| | - Steen Moeller
- Center for Magnetic Resonance Research, University of Minnesota, Minneapolis, MN 55455, USA
| | - Omer Burak Demirel
- Electrical and Computer Engineering, University of Minnesota, Minneapolis, MN 55455, USA; Center for Magnetic Resonance Research, University of Minnesota, Minneapolis, MN 55455, USA
| | - Kâmil Uğurbil
- Center for Magnetic Resonance Research, University of Minnesota, Minneapolis, MN 55455, USA
| | - Mehmet Akçakaya
- Electrical and Computer Engineering, University of Minnesota, Minneapolis, MN 55455, USA; Center for Magnetic Resonance Research, University of Minnesota, Minneapolis, MN 55455, USA.
| |
Collapse
|
92
|
Wang K, Tamir JI, De Goyeneche A, Wollner U, Brada R, Yu SX, Lustig M. High fidelity deep learning-based MRI reconstruction with instance-wise discriminative feature matching loss. Magn Reson Med 2022; 88:476-491. [DOI: 10.1002/mrm.29227] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/25/2021] [Revised: 02/08/2022] [Accepted: 02/22/2022] [Indexed: 11/12/2022]
Affiliation(s)
- Ke Wang
- Electrical Engineering and Computer Sciences University of California at Berkeley Berkeley California USA
- International Computer Science Institute University of California at Berkeley Berkeley California USA
| | - Jonathan I. Tamir
- Electrical and Computer Engineering The University of Texas at Austin Austin Texas USA
| | - Alfredo De Goyeneche
- Electrical Engineering and Computer Sciences University of California at Berkeley Berkeley California USA
| | | | | | - Stella X. Yu
- Electrical Engineering and Computer Sciences University of California at Berkeley Berkeley California USA
- International Computer Science Institute University of California at Berkeley Berkeley California USA
| | - Michael Lustig
- Electrical Engineering and Computer Sciences University of California at Berkeley Berkeley California USA
| |
Collapse
|
93
|
Shen L, Yu L, Zhao W, Pauly J, Xing L. Novel-view X-ray projection synthesis through geometry-integrated deep learning. Med Image Anal 2022; 77:102372. [PMID: 35131701 PMCID: PMC8916089 DOI: 10.1016/j.media.2022.102372] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/11/2021] [Revised: 01/14/2022] [Accepted: 01/16/2022] [Indexed: 01/12/2023]
Abstract
X-ray imaging is a widely used approach to view the internal structure of a subject for clinical diagnosis, image-guided interventions, and decision-making. X-ray projections acquired at different view angles provide complementary information about the patient's anatomy and are required for stereoscopic or volumetric imaging of the subject. In reality, obtaining multiple-view projections inevitably increases radiation dose and complicates the clinical workflow. Here we investigate a strategy for obtaining the X-ray projection image at a novel view angle from a given projection image at a specific view angle, to alleviate the need for an actual projection measurement. Specifically, a Deep Learning-based Geometry-Integrated Projection Synthesis (DL-GIPS) framework is proposed for the generation of novel-view X-ray projections. The proposed deep learning model extracts geometry and texture features from a source-view projection and then applies a geometry transformation to the geometry features to accommodate the change of view angle. At the final stage, the X-ray projection in the target view is synthesized from the transformed geometry features and the shared texture features via an image generator. The feasibility and potential impact of the proposed DL-GIPS model are demonstrated using lung imaging cases. The proposed strategy can be generalized to the case of synthesizing multiple projections from multiple input views, and it potentially provides a new paradigm for various stereoscopic and volumetric imaging applications with substantially reduced data-acquisition effort.
Collapse
Affiliation(s)
- Liyue Shen
- Department of Electrical Engineering, Stanford University, Stanford, CA, USA.
| | - Lequan Yu
- Department of Radiation Oncology, Stanford University, Stanford, CA, USA
| | - Wei Zhao
- Department of Radiation Oncology, Stanford University, Stanford, CA, USA
| | - John Pauly
- Department of Electrical Engineering, Stanford University, Stanford, CA, USA
| | - Lei Xing
- Department of Electrical Engineering, Stanford University, Stanford, CA, USA; Department of Radiation Oncology, Stanford University, Stanford, CA, USA
| |
Collapse
|
94
|
Monsour R, Dutta M, Mohamed AZ, Borkowski A, Viswanadhan NA. Neuroimaging in the Era of Artificial Intelligence: Current Applications. Fed Pract 2022; 39:S14-S20. [PMID: 35765692 PMCID: PMC9227741 DOI: 10.12788/fp.0231] [Citation(s) in RCA: 14] [Impact Index Per Article: 4.7] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 08/17/2023]
Abstract
BACKGROUND Artificial intelligence (AI) in medicine has shown significant promise, particularly in neuroimaging. AI increases efficiency and reduces errors, making it a valuable resource for physicians. With the increasing amount of data processing and image interpretation required, the ability to use AI to augment and aid the radiologist could improve the quality of patient care. OBSERVATIONS AI can predict patient wait times, which may allow more efficient patient scheduling. Additionally, AI can save time on repeat magnetic resonance neuroimaging and reduce the time spent during imaging. AI has the ability to read computed tomography, magnetic resonance imaging, and positron emission tomography with reduced contrast or without contrast, without a significant loss in sensitivity for detecting lesions. AI in neuroimaging does raise important ethical considerations and is subject to bias. It is vital that users understand the practical and ethical implications of the technology. CONCLUSIONS The demonstrated applications of AI in neuroimaging are numerous and varied, and it is reasonable to assume that its implementation will increase as the technology matures. AI's use for detecting neurologic conditions holds promise in combating ever-increasing imaging volumes and providing timely diagnoses.
Collapse
Affiliation(s)
- Robert Monsour
- University of South Florida Morsani College of Medicine, Tampa, Florida
| | - Mudit Dutta
- University of South Florida Morsani College of Medicine, Tampa, Florida
| | | | - Andrew Borkowski
- University of South Florida Morsani College of Medicine, Tampa, Florida
- James A. Haley Veterans’ Hospital, Tampa, Florida
| | - Narayan A. Viswanadhan
- University of South Florida Morsani College of Medicine, Tampa, Florida
- James A. Haley Veterans’ Hospital, Tampa, Florida
| |
Collapse
|
95
|
Shen B, Liu S, Li Y, Pan Y, Lu Y, Hu R, Qu J, Liu L. Deep learning autofluorescence-harmonic microscopy. LIGHT, SCIENCE & APPLICATIONS 2022; 11:76. [PMID: 35351853 PMCID: PMC8964717 DOI: 10.1038/s41377-022-00768-x] [Citation(s) in RCA: 18] [Impact Index Per Article: 6.0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Received: 08/09/2021] [Revised: 03/05/2022] [Accepted: 03/10/2022] [Indexed: 05/28/2023]
Abstract
Laser scanning microscopy has inherent tradeoffs between imaging speed, field of view (FOV), and spatial resolution due to the limitations of sophisticated mechanical and optical setups, and deep learning networks have emerged to overcome these limitations without changing the system. Here, we demonstrate deep learning autofluorescence-harmonic microscopy (DLAM) based on self-alignment attention-guided residual-in-residual dense generative adversarial networks to close the gap between speed, FOV, and quality. Using the framework, we demonstrate label-free large-field multimodal imaging of clinicopathological tissues with enhanced spatial resolution and running time advantages. Statistical quality assessments show that the attention-guided residual dense connections minimize the persistent noise, distortions, and scanning fringes that degrade the autofluorescence-harmonic images and avoid reconstruction artifacts in the output images. With the advantages of high contrast, high fidelity, and high speed in image reconstruction, DLAM can act as a powerful tool for the noninvasive evaluation of diseases, neural activity, and embryogenesis.
Collapse
Affiliation(s)
- Binglin Shen
- Key Laboratory of Optoelectronic Devices and Systems of Guangdong Province and Ministry of Education, College of Physics and Optoelectronic Engineering, Shenzhen University, 518060, Shenzhen, China
| | - Shaowen Liu
- Shenzhen Meitu Innovation Technology LTD, 518060, Shenzhen, China
| | - Yanping Li
- Key Laboratory of Optoelectronic Devices and Systems of Guangdong Province and Ministry of Education, College of Physics and Optoelectronic Engineering, Shenzhen University, 518060, Shenzhen, China
| | - Ying Pan
- China-Japan Union Hospital of Jilin University, 130033, Changchun, China
| | - Yuan Lu
- The Sixth People's Hospital of Shenzhen, 518052, Shenzhen, China
| | - Rui Hu
- Key Laboratory of Optoelectronic Devices and Systems of Guangdong Province and Ministry of Education, College of Physics and Optoelectronic Engineering, Shenzhen University, 518060, Shenzhen, China
| | - Junle Qu
- Key Laboratory of Optoelectronic Devices and Systems of Guangdong Province and Ministry of Education, College of Physics and Optoelectronic Engineering, Shenzhen University, 518060, Shenzhen, China
| | - Liwei Liu
- Key Laboratory of Optoelectronic Devices and Systems of Guangdong Province and Ministry of Education, College of Physics and Optoelectronic Engineering, Shenzhen University, 518060, Shenzhen, China.
| |
Collapse
|
96
|
Prasad S, Almekkawy M. DeepUCT: Complex cascaded deep learning network for improved ultrasound tomography. Phys Med Biol 2022; 67. [DOI: 10.1088/1361-6560/ac5296] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/17/2021] [Accepted: 02/07/2022] [Indexed: 11/12/2022]
Abstract
Ultrasound computed tomography is an inexpensive and radiation-free medical imaging technique used to quantify tissue acoustic properties for advanced clinical diagnosis. Image reconstruction in ultrasound tomography is often modeled as an optimization problem solved by iterative methods such as full-waveform inversion. These iterative methods are computationally expensive, and the optimization problem is ill-posed and nonlinear. To address this, we propose to use deep learning to overcome the computational burden and ill-posedness and achieve near real-time image reconstruction in ultrasound tomography. We aim to directly learn the mapping from the recorded time-series sensor data to a spatial image of acoustic properties. To accomplish this, we develop a deep learning model using two cascaded convolutional neural networks with an encoder–decoder architecture. We achieve a good representation by first extracting intermediate mapping knowledge and later utilizing this knowledge to reconstruct the image. The approach is evaluated on synthetic phantoms, where simulated ultrasound data are acquired from a ring of transducers surrounding the region of interest. The measurement data are generated by forward modeling the wave equation using the k-Wave toolbox. Our simulation results demonstrate that the proposed deep learning method is robust to noise and significantly outperforms the state-of-the-art traditional iterative method, both quantitatively and qualitatively. Furthermore, our model takes substantially less computational time than the conventional full-waveform inversion method.
Collapse
|
97
|
A Hemolysis Image Detection Method Based on GAN-CNN-ELM. COMPUTATIONAL AND MATHEMATICAL METHODS IN MEDICINE 2022; 2022:1558607. [PMID: 35242201 PMCID: PMC8888064 DOI: 10.1155/2022/1558607] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 06/23/2021] [Accepted: 01/18/2022] [Indexed: 11/18/2022]
Abstract
Since manual hemolysis testing relies heavily on practical experience and is costly, the characteristics of hemolysis images are studied, and a hemolysis image detection method based on generative adversarial networks (GANs), convolutional neural networks (CNNs), and an extreme learning machine (ELM) is proposed. First, image enhancement and data augmentation are performed on the sample set, and a GAN is used to expand the sample data volume. Second, a CNN is used to extract feature vectors from the processed images, and the labels are encoded as one-hot eigenvectors. Third, the feature matrix is input to the ELM network, which is trained to minimize the error and obtain the optimal output weights. Finally, the image to be detected is input to the trained model, and the class with the greatest probability is selected as the final category. Comparative experiments show that the GAN-CNN-ELM model outperforms GAN-CNN, GAN-ELM, GAN-ELM-L1, GAN-SVM, GAN-CNN-SVM, and CNN-ELM in both accuracy and speed, reaching an accuracy of 98.91%.
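The ELM stage of the pipeline above has a particularly simple training rule: the hidden-layer weights are drawn at random, and only the output weights are solved in closed form via the Moore–Penrose pseudoinverse, which is what "minimize the error and obtain the optimal weights" amounts to. A minimal sketch on random toy data standing in for the CNN feature vectors (all sizes are illustrative):

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy stand-ins for CNN feature vectors and one-hot labels.
n_samples, n_features, n_hidden, n_classes = 200, 16, 64, 3
X = rng.normal(size=(n_samples, n_features))
y = rng.integers(0, n_classes, size=n_samples)
T = np.eye(n_classes)[y]                    # one-hot label matrix

# ELM: random, fixed input weights and biases; sigmoid hidden layer.
W = rng.normal(size=(n_features, n_hidden))
b = rng.normal(size=n_hidden)
H = 1.0 / (1.0 + np.exp(-(X @ W + b)))      # hidden-layer output matrix

# Output weights solved in closed form via the pseudoinverse
# (least-squares fit of H @ beta to the targets T).
beta = np.linalg.pinv(H) @ T

# Prediction: pick the class with the greatest output score.
pred = np.argmax(H @ beta, axis=1)
print(pred.shape)  # (200,)
```

Because no gradient descent is needed for the output layer, this step is fast, which is consistent with the speed advantage the abstract reports for the combined model.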
Collapse
|
98
|
Yurt M, Özbey M, Dar SUH, Tinaz B, Oguz KK, Çukur T. Progressively Volumetrized Deep Generative Models for Data-Efficient Contextual Learning of MR Image Recovery. Med Image Anal 2022; 78:102429. [DOI: 10.1016/j.media.2022.102429] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/10/2021] [Revised: 03/14/2022] [Accepted: 03/18/2022] [Indexed: 10/18/2022]
|
99
|
Rudie JD, Gleason T, Barkovich MJ, Wilson DM, Shankaranarayanan A, Zhang T, Wang L, Gong E, Zaharchuk G, Villanueva-Meyer JE. Clinical Assessment of Deep Learning-based Super-Resolution for 3D Volumetric Brain MRI. Radiol Artif Intell 2022; 4:e210059. [PMID: 35391765 PMCID: PMC8980882 DOI: 10.1148/ryai.210059] [Citation(s) in RCA: 38] [Impact Index Per Article: 12.7] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/27/2021] [Revised: 12/13/2021] [Accepted: 12/23/2021] [Indexed: 11/11/2022]
Abstract
Artificial intelligence (AI)-based image enhancement has the potential to reduce scan times while improving signal-to-noise ratio (SNR) and maintaining spatial resolution. This study prospectively evaluated AI-based image enhancement in 32 consecutive patients undergoing clinical brain MRI. Standard-of-care (SOC) three-dimensional (3D) T1 precontrast, 3D T2 fluid-attenuated inversion recovery, and 3D T1 postcontrast sequences were performed along with 45% faster versions of these sequences using half the number of phase-encoding steps. Images from the faster sequences were processed by a Food and Drug Administration-cleared AI-based image enhancement software for resolution enhancement. Four board-certified neuroradiologists scored the SOC and AI-enhanced image series independently on a five-point Likert scale for image SNR, anatomic conspicuity, overall image quality, imaging artifacts, and diagnostic confidence. While interrater κ was low to fair, the AI-enhanced scans were noninferior for all metrics and demonstrated a qualitative SNR improvement. Quantitative analyses showed that the AI software restored the high spatial resolution of small structures, such as the septum pellucidum. In conclusion, AI-based software can achieve noninferior image quality for 3D brain MRI sequences with a 45% scan time reduction, potentially improving the patient experience and scanner efficiency without sacrificing diagnostic quality. Keywords: MR Imaging, CNS, Brain/Brain Stem, Reconstruction Algorithms © RSNA, 2022.
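The 45% scan-time reduction above comes from acquiring half the phase-encoding lines, which truncates k-space along one axis and lowers resolution in that direction. The effect can be illustrated on a hypothetical toy slice (not the study's data) with a zero-filled FFT; the AI enhancement step described in the abstract is what would then restore the lost resolution.

```python
import numpy as np

# Toy 2D "slice": a centered bright square on a 64x64 grid.
img = np.zeros((64, 64))
img[24:40, 24:40] = 1.0

# Full k-space, then keep only the central half of the phase-encoding
# (row) direction and zero-fill the rest -- fewer lines, faster scan.
k = np.fft.fftshift(np.fft.fft2(img))
k_half = np.zeros_like(k)
k_half[16:48, :] = k[16:48, :]              # central 32 of 64 rows

# Zero-filled reconstruction: blurred/ringing along the truncated axis.
recon = np.abs(np.fft.ifft2(np.fft.ifftshift(k_half)))
err = np.abs(recon - img).mean()
print(round(float(err), 4))
```

Keeping the central lines preserves contrast (low spatial frequencies) while sacrificing edge sharpness (high spatial frequencies), which matches the study's focus on restoring small, high-resolution structures.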
Collapse
Affiliation(s)
- Jeffrey D. Rudie, Tyler Gleason, Matthew J. Barkovich, David M. Wilson, Ajit Shankaranarayanan, Tao Zhang, Long Wang, Enhao Gong, Greg Zaharchuk, Javier E. Villanueva-Meyer
- From the Department of Radiology & Biomedical Imaging, University of California, San Francisco, 505 Parnassus Ave, L-352, San Francisco, CA 94143 (J.D.R., T.G., M.J.B., D.M.W., J.E.V.M.); Subtle Medical, Menlo Park, Calif (A.S., T.Z., L.W., E.G.); and Department of Radiology, Stanford University, Stanford, Calif (G.Z.)
Collapse
|
100
|
Pal A, Rathi Y. A review and experimental evaluation of deep learning methods for MRI reconstruction. THE JOURNAL OF MACHINE LEARNING FOR BIOMEDICAL IMAGING 2022; 1:001. [PMID: 35722657 PMCID: PMC9202830] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Indexed: 10/25/2022]
Abstract
Following the success of deep learning in a wide range of applications, neural-network-based machine-learning techniques have received significant interest for accelerating magnetic resonance imaging (MRI) acquisition and reconstruction strategies. A number of ideas inspired by deep learning techniques for computer vision and image processing have been successfully applied to nonlinear image reconstruction in the spirit of compressed sensing for accelerated MRI. Given the rapidly growing nature of the field, it is imperative to consolidate and summarize the large number of deep learning methods reported in the literature, to obtain a better understanding of the field in general. This article provides an overview of recent developments in neural-network-based approaches proposed specifically for improving parallel imaging. A general background and introduction to parallel MRI is also given from a classical view of k-space-based reconstruction methods. Image-domain techniques that introduce improved regularizers are covered, along with k-space-based methods that focus on better interpolation strategies using neural networks. While the field is rapidly evolving, with many papers published each year, this review attempts to cover broad categories of methods that have shown good performance on publicly available data sets. Limitations and open problems are also discussed, and recent efforts to produce open data sets and benchmarks for the community are examined.
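The image-domain methods the review covers typically replace the hand-crafted regularizer inside an iterative compressed-sensing scheme with a learned one. The classical baseline those networks build on is a soft-thresholded gradient step (ISTA), sketched here on a toy 1D sparse signal with a random sampling mask; the mask density, threshold, and iteration count are illustrative choices, not from the article.

```python
import numpy as np

rng = np.random.default_rng(2)

# Sparse toy "image" (1D for brevity) and a random k-space sampling mask.
n = 128
x_true = np.zeros(n)
x_true[rng.choice(n, 5, replace=False)] = rng.normal(size=5) + 3.0
mask = rng.random(n) < 0.4                  # ~40% of k-space sampled
y = mask * np.fft.fft(x_true)               # undersampled measurements

def soft(v, t):
    """Soft-thresholding: the proximal operator of the L1 norm."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

# ISTA: a gradient step toward data consistency, then the sparsity prox.
# Learned methods replace soft() with a trained network while keeping
# the data-consistency step.
x = np.zeros(n)
for _ in range(100):
    grad = np.fft.ifft(mask * (np.fft.fft(x) - y)).real
    x = soft(x - grad, 0.05)

print(x.shape)
```

The alternation between data consistency (enforcing agreement with the sampled k-space) and a prior (here, sparsity) is the template that both the image-domain and the k-space interpolation families of methods specialize.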
Collapse
Affiliation(s)
- Arghya Pal, Yogesh Rathi
- Department of Psychiatry and Radiology, Harvard Medical School, Boston, MA, USA
Collapse
|