1. Wang S, Wu R, Jia S, Diakite A, Li C, Liu Q, Zheng H, Ying L. Knowledge-driven deep learning for fast MR imaging: Undersampled MR image reconstruction from supervised to un-supervised learning. Magn Reson Med 2024; 92:496-518. [PMID: 38624162] [DOI: 10.1002/mrm.30105]
Abstract
Deep learning (DL) has emerged as a leading approach to accelerating MRI. It employs deep neural networks to extract knowledge from available datasets and then applies the trained networks to reconstruct accurate images from limited measurements. Unlike natural image restoration problems, MRI involves physics-based imaging processes, unique data properties, and diverse imaging tasks. This domain knowledge needs to be integrated with data-driven approaches. This review introduces the significant challenges faced by such knowledge-driven DL approaches in the context of fast MRI, along with several notable solutions covering both network learning and different imaging application scenarios. The traits and trends of these techniques are also summarized, showing a shift from supervised learning to semi-supervised learning and, finally, to unsupervised learning methods. In addition, MR vendors' choices of DL reconstruction are surveyed, together with discussion of open questions and future directions that are critical for reliable imaging systems.
Affiliation(s)
- Shanshan Wang
- Paul C Lauterbur Research Center for Biomedical Imaging, Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen, China
- Ruoyou Wu
- Paul C Lauterbur Research Center for Biomedical Imaging, Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen, China
- Sen Jia
- Paul C Lauterbur Research Center for Biomedical Imaging, Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen, China
- Alou Diakite
- Paul C Lauterbur Research Center for Biomedical Imaging, Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen, China
- University of Chinese Academy of Sciences, Beijing, China
- Cheng Li
- Paul C Lauterbur Research Center for Biomedical Imaging, Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen, China
- Qiegen Liu
- Department of Electronic Information Engineering, Nanchang University, Nanchang, China
- Hairong Zheng
- Paul C Lauterbur Research Center for Biomedical Imaging, Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen, China
- Leslie Ying
- Department of Biomedical Engineering and Department of Electrical Engineering, The State University of New York, Buffalo, New York, USA
2. Chen J, Chen S, Wee L, Dekker A, Bermejo I. Deep learning based unpaired image-to-image translation applications for medical physics: a systematic review. Phys Med Biol 2023; 68. [PMID: 36753766] [DOI: 10.1088/1361-6560/acba74]
Abstract
Purpose. There is a growing number of publications on the application of unpaired image-to-image (I2I) translation in medical imaging. However, a systematic review covering the current state of this topic for medical physicists is lacking. The aim of this article is to provide a comprehensive review of current challenges and opportunities for medical physicists and engineers to apply I2I translation in practice. Methods and materials. The PubMed electronic database was searched using terms referring to unpaired (unsupervised) I2I translation and medical imaging. This review has been reported in compliance with the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) statement. From each full-text article, we extracted information regarding technical and clinical applications of the methods, Transparent Reporting for Individual Prognosis Or Diagnosis (TRIPOD) study type, performance of the algorithm, and accessibility of source code and pre-trained models. Results. Among 461 unique records, 55 full-text articles were included in the review. The major technical applications described in the selected literature are segmentation (26 studies), unpaired domain adaptation (18 studies), and denoising (8 studies). In terms of clinical applications, unpaired I2I translation has been used for automatic contouring of regions of interest in MRI, CT, x-ray, and ultrasound images; fast MRI and low-dose CT imaging; and CT- or MRI-only radiotherapy planning, among others. Only 5 studies validated their models using an independent test set, and none were externally validated by independent researchers. Finally, 12 articles published their source code, and only one study published their pre-trained models. Conclusion. I2I translation of medical images offers a range of valuable applications for medical physicists. However, the scarcity of external validation studies of I2I models and the shortage of publicly available pre-trained models limit the immediate applicability of the proposed methods in practice.
Affiliation(s)
- Junhua Chen
- Department of Radiation Oncology (MAASTRO), GROW School for Oncology and Developmental Biology, Maastricht University Medical Centre+, Maastricht, 6229 ET, The Netherlands
- Shenlun Chen
- Department of Radiation Oncology (MAASTRO), GROW School for Oncology and Developmental Biology, Maastricht University Medical Centre+, Maastricht, 6229 ET, The Netherlands
- Leonard Wee
- Department of Radiation Oncology (MAASTRO), GROW School for Oncology and Developmental Biology, Maastricht University Medical Centre+, Maastricht, 6229 ET, The Netherlands
- Andre Dekker
- Department of Radiation Oncology (MAASTRO), GROW School for Oncology and Developmental Biology, Maastricht University Medical Centre+, Maastricht, 6229 ET, The Netherlands
- Inigo Bermejo
- Department of Radiation Oncology (MAASTRO), GROW School for Oncology and Developmental Biology, Maastricht University Medical Centre+, Maastricht, 6229 ET, The Netherlands
3. Hong J, Lee SY, Lim JK, Lee J, Park J, Cha JG, Lee HJ, Kim D. Feasibility of Single-Shot Whole Thoracic Time-Resolved MR Angiography to Evaluate Patients with Multiple Pulmonary Arteriovenous Malformations. Korean J Radiol 2022; 23:794-802. [PMID: 35914744] [PMCID: PMC9340233] [DOI: 10.3348/kjr.2022.0140]
Abstract
OBJECTIVE To evaluate the feasibility of single-shot whole thoracic time-resolved MR angiography (TR-MRA) for identifying the feeding arteries of pulmonary arteriovenous malformations (PAVMs) and reperfusion of the lesion after embolization in patients with multiple PAVMs. MATERIALS AND METHODS Nine patients (8 female, 1 male; age range, 23-65 years) with a total of 62 PAVMs who underwent percutaneous embolization for multiple PAVMs and were subsequently followed up using TR-MRA and CT obtained within 6 months of each other were retrospectively reviewed. All imaging analyses were performed by two independent readers blinded to clinical information. The visibility of the feeding arteries on maximum intensity projection (MIP) and multiplanar reconstruction (MPR) TR-MRA images was evaluated against CT as the reference standard. The accuracy of TR-MRA for diagnosing reperfusion of the PAVM after embolization was assessed in a subgroup with angiographic confirmation. Inter-reader reliability in interpreting the TR-MRA results was analyzed using kappa (κ) statistics. RESULTS Feeding arteries were visible on the original MIP images of TR-MRA in 82.3% (51/62) and 85.5% (53/62) of lesions for readers 1 and 2, respectively. With MPR, these rates increased to 93.5% (58/62) and 95.2% (59/62), respectively (κ = 0.760 and 0.792, respectively). Factors associated with invisibility were a feeding-artery course in the anteroposterior plane, proximity to large enhancing vessels, adjacency to the chest wall, cardiac pulsation, and small feeding-artery size. Thirty-seven PAVMs in five patients had angiographic confirmation of reperfusion status after embolization (32 occlusions and 5 reperfusions). TR-MRA showed 100% (5/5) sensitivity and 100% (32/32, including three cases in which the feeding arteries were not visible on TR-MRA) specificity for both readers.
CONCLUSION Single-shot whole thoracic TR-MRA with MPR showed good visibility of the feeding arteries of PAVMs and high accuracy in diagnosing reperfusion after embolization. Single-shot whole thoracic TR-MRA may be a feasible method for the follow-up of patients with multiple PAVMs.
Affiliation(s)
- Jihoon Hong
- Department of Radiology, School of Medicine, Kyungpook National University, Kyungpook National University Hospital, Daegu, Korea
- Sang Yub Lee
- Department of Radiology, School of Medicine, Kyungpook National University, Kyungpook National University Hospital, Daegu, Korea
- Jae-Kwang Lim
- Department of Radiology, School of Medicine, Kyungpook National University, Kyungpook National University Hospital, Daegu, Korea
- Jongmin Lee
- Department of Radiology, School of Medicine, Kyungpook National University, Kyungpook National University Hospital, Daegu, Korea
- Jongmin Park
- Department of Radiology, School of Medicine, Kyungpook National University, Kyungpook National University Hospital, Daegu, Korea
- Jung Guen Cha
- Department of Radiology, School of Medicine, Kyungpook National University, Kyungpook National University Hospital, Daegu, Korea
- Hui Joong Lee
- Department of Radiology, School of Medicine, Kyungpook National University, Kyungpook National University Hospital, Daegu, Korea
- Donghyeon Kim
- Department of Radiology, Gyeongbuk Regional Rehabilitation Hospital, Gyeongsan, Korea
4. Cha E, Chung H, Jang J, Lee J, Lee E, Ye JC. Low-Dose Sparse-View HAADF-STEM-EDX Tomography of Nanocrystals Using Unsupervised Deep Learning. ACS Nano 2022; 16:10314-10326. [PMID: 35729795] [DOI: 10.1021/acsnano.2c00168]
Abstract
High-angle annular dark-field (HAADF) scanning transmission electron microscopy (STEM) can be acquired together with energy-dispersive X-ray (EDX) spectroscopy to give complementary information on the nanoparticles being imaged. Recent deep learning approaches show potential for accurate 3D tomographic reconstruction in these applications, but a large number of high-quality electron micrographs is usually required for supervised training, and such micrographs may be difficult to collect because the electron beam damages the particles. To overcome these limitations and enable tomographic reconstruction even in low-dose, sparse-view conditions, here we present an unsupervised deep learning method for HAADF-STEM-EDX tomography. Specifically, to improve EDX image quality under low-dose conditions, a HAADF-constrained unsupervised denoising approach is proposed. Additionally, to enable extremely sparse-view tomographic reconstruction, an unsupervised view-enrichment scheme is proposed in the projection domain. Extensive experiments with different types of quantum dots show that the proposed method offers high-quality reconstruction even with only three projection views recorded under low-dose conditions.
Affiliation(s)
- Eunju Cha
- Samsung Advanced Institute of Technology, Samsung Electronics, Gyeonggi-do 16678, Republic of Korea
- Hyungjin Chung
- Department of Bio and Brain Engineering, Korea Advanced Institute of Science and Technology (KAIST), Daejeon 34141, Republic of Korea
- Jaeduck Jang
- Samsung Advanced Institute of Technology, Samsung Electronics, Gyeonggi-do 16678, Republic of Korea
- Junho Lee
- Samsung Advanced Institute of Technology, Samsung Electronics, Gyeonggi-do 16678, Republic of Korea
- Eunha Lee
- Samsung Advanced Institute of Technology, Samsung Electronics, Gyeonggi-do 16678, Republic of Korea
- Jong Chul Ye
- Kim Jaechul Graduate School of AI, Korea Advanced Institute of Science and Technology (KAIST), Daejeon 34141, Republic of Korea
5. Festag S, Denzler J, Spreckelsen C. Generative Adversarial Networks for Biomedical Time Series Forecasting and Imputation: A systematic review. J Biomed Inform 2022; 129:104058. [DOI: 10.1016/j.jbi.2022.104058]
6. Oh G, Lee JE, Ye JC. Unpaired MR Motion Artifact Deep Learning Using Outlier-Rejecting Bootstrap Aggregation. IEEE Trans Med Imaging 2021; 40:3125-3139. [PMID: 34133276] [DOI: 10.1109/tmi.2021.3089708]
Abstract
Recently, deep learning approaches for MR motion artifact correction have been extensively studied. Although these approaches show high performance and lower computational complexity compared to classical methods, most of them require supervised training using paired artifact-free and artifact-corrupted images, which may prohibit their use in many important clinical applications. For example, transient severe motion (TSM) due to acute transient dyspnea in Gd-EOB-DTPA-enhanced MR is difficult to control and model for paired data generation. To address this issue, here we propose a novel unpaired deep learning scheme that does not require matched motion-free and motion-artifact images. Specifically, the first step of our method is k-space random subsampling along the phase-encoding direction, which can probabilistically remove some outliers. In the second step, a neural network reconstructs an image at the fully sampled resolution from the downsampled k-space data, reducing motion artifacts. Last, an aggregation step through averaging further improves the results from the reconstruction network. We verify that our method can successfully correct artifacts from simulated motion as well as real motion from TSM, for both single- and multi-coil data with and without k-space raw data, outperforming existing state-of-the-art deep learning methods.
7. Aggarwal HK, Pramanik A, Jacob M. ENSURE: Ensemble Stein's Unbiased Risk Estimator for unsupervised learning. Proc IEEE Int Conf Acoust Speech Signal Process (ICASSP) 2021. [PMID: 34335103] [PMCID: PMC8323317] [DOI: 10.1109/icassp39728.2021.9414513]
Abstract
Deep learning algorithms are emerging as powerful alternatives to compressed sensing methods, offering improved image quality and computational efficiency. Unfortunately, fully sampled training images may not be available or are difficult to acquire in several applications, including high-resolution and dynamic imaging. Previous studies in image reconstruction have utilized Stein's Unbiased Risk Estimator (SURE) as a mean square error (MSE) estimate for the image denoising step in an unrolled network. However, end-to-end training of a network using SURE remains challenging because the projected SURE loss is a poor approximation to the MSE, especially in the heavily undersampled setting. We propose an ENsemble SURE (ENSURE) approach to train a deep network only from undersampled measurements. In particular, we show that training a network using an ensemble of images, each acquired with a different sampling pattern, can closely approximate the MSE. Our preliminary experimental results show that the proposed ENSURE approach gives reconstruction quality comparable to supervised learning and a recent unsupervised learning method.
8. Matsubara K, Ibaraki M, Shinohara Y, Takahashi N, Toyoshima H, Kinoshita T. Prediction of an oxygen extraction fraction map by convolutional neural network: validation of input data among MR and PET images. Int J Comput Assist Radiol Surg 2021; 16:1865-1874. [PMID: 33821419] [PMCID: PMC8589760] [DOI: 10.1007/s11548-021-02356-7]
Abstract
Purpose Oxygen extraction fraction (OEF) is a biomarker for the viability of brain tissue in ischemic stroke. However, acquisition of the OEF map using positron emission tomography (PET) with oxygen-15 gas is uncomfortable for patients because of the long fixation time, invasive arterial sampling, and radiation exposure. We aimed to predict the OEF map from magnetic resonance (MR) and PET images using a deep convolutional neural network (CNN) and to determine which PET and MR images are optimal as inputs for the prediction of OEF maps. Methods Maps of cerebral blood flow at rest (CBF) and during stress (sCBF) and of cerebral blood volume (CBV) acquired from oxygen-15 PET, together with routine MR images (T1-, T2-, and T2*-weighted images), for 113 patients with steno-occlusive disease were used to train a U-Net. MR and PET images acquired from another 25 patients were used as test data. We compared the predicted OEF maps and their intraclass correlation (ICC) with the real OEF values among combinations of MRI, CBF, CBV, and sCBF inputs. Results Among the combinations of input images, OEF maps predicted by the model trained with MRI, CBF, CBV, and sCBF maps were the most similar to the real OEF maps (ICC: 0.597 ± 0.082). However, the contrast of the predicted OEF maps was lower than that of the real OEF maps. Conclusion These results suggest that the deep CNN learned useful features from CBF, sCBF, CBV, and MR images and can predict qualitatively realistic OEF maps, and that the model could shorten the fixation time for oxygen-15 PET by skipping 15O2 scans. Further training with a larger data set is required to predict quantitatively accurate OEF maps.
Affiliation(s)
- Keisuke Matsubara
- Department of Radiology and Nuclear Medicine, Research Institute for Brain and Blood Vessels, Akita Cerebrospinal and Cardiovascular Center, 6-10 Senshu-Kubota-machi, Akita, 010-0874, Japan
- Masanobu Ibaraki
- Department of Radiology and Nuclear Medicine, Research Institute for Brain and Blood Vessels, Akita Cerebrospinal and Cardiovascular Center, 6-10 Senshu-Kubota-machi, Akita, 010-0874, Japan
- Yuki Shinohara
- Department of Radiology and Nuclear Medicine, Research Institute for Brain and Blood Vessels, Akita Cerebrospinal and Cardiovascular Center, 6-10 Senshu-Kubota-machi, Akita, 010-0874, Japan
- Noriyuki Takahashi
- Preparing Section for New Faculty of Medical Science, Fukushima Medical University, Fukushima, Japan
- Hideto Toyoshima
- Department of Radiology and Nuclear Medicine, Research Institute for Brain and Blood Vessels, Akita Cerebrospinal and Cardiovascular Center, 6-10 Senshu-Kubota-machi, Akita, 010-0874, Japan
- Toshibumi Kinoshita
- Department of Radiology and Nuclear Medicine, Research Institute for Brain and Blood Vessels, Akita Cerebrospinal and Cardiovascular Center, 6-10 Senshu-Kubota-machi, Akita, 010-0874, Japan
9. Chung H, Cha E, Sunwoo L, Ye JC. Two-stage deep learning for accelerated 3D time-of-flight MRA without matched training data. Med Image Anal 2021; 71:102047. [PMID: 33895617] [DOI: 10.1016/j.media.2021.102047]
Abstract
Time-of-flight magnetic resonance angiography (TOF-MRA) is one of the most widely used non-contrast MR imaging methods to visualize blood vessels, but because of the 3D volume acquisition, highly accelerated acquisition is necessary. Accordingly, high-quality reconstruction from undersampled TOF-MRA is an important research topic for deep learning. However, most existing deep learning works require matched reference data for supervised training, which are often difficult to obtain. By extending the recent theoretical understanding of cycleGAN from optimal transport theory, here we propose a novel two-stage unsupervised deep learning approach composed of a multi-coil reconstruction network along the coronal plane followed by a multi-planar refinement network along the axial plane. Specifically, the first network is trained in the square root of sum of squares (SSoS) domain to achieve high-quality parallel image reconstruction, whereas the second refinement network is designed to efficiently learn the characteristics of highly activated blood flow using a double-headed projection discriminator. Extensive experiments demonstrate that the proposed learning process without matched reference data exceeds the performance of a state-of-the-art compressed sensing (CS)-based method and provides comparable or even better results than supervised learning approaches.
Affiliation(s)
- Hyungjin Chung
- Department of Bio and Brain Engineering, Korea Advanced Institute of Science and Technology (KAIST), Daejeon 34141, Republic of Korea
- Eunju Cha
- Department of Bio and Brain Engineering, Korea Advanced Institute of Science and Technology (KAIST), Daejeon 34141, Republic of Korea
- Leonard Sunwoo
- Department of Radiology, Seoul National University College of Medicine, Seoul National University Bundang Hospital, Seongnam, Republic of Korea
- Jong Chul Ye
- Department of Bio and Brain Engineering, Korea Advanced Institute of Science and Technology (KAIST), Daejeon 34141, Republic of Korea