1
Feng CM, Yang Z, Fu H, Xu Y, Yang J, Shao L. DONet: Dual-Octave Network for Fast MR Image Reconstruction. IEEE Trans Neural Netw Learn Syst 2025; 36:3965-3975. [PMID: 34197326] [DOI: 10.1109/tnnls.2021.3090303]
Abstract
Magnetic resonance (MR) image acquisition is an inherently prolonged process, whose acceleration has long been the subject of research. This is commonly achieved by obtaining multiple undersampled images, simultaneously, through parallel imaging. In this article, we propose the dual-octave network (DONet), which is capable of learning multiscale spatial-frequency features from both the real and imaginary components of MR data, for parallel fast MR image reconstruction. More specifically, our DONet consists of a series of dual-octave convolutions (Dual-OctConvs), which are connected in a dense manner for better reuse of features. In each Dual-OctConv, the input feature maps and convolutional kernels are first split into two components (i.e., real and imaginary) and then divided into four groups according to their spatial frequencies. Then, our Dual-OctConv conducts intragroup information updating and intergroup information exchange to aggregate the contextual information across different groups. Our framework provides three appealing benefits: 1) it encourages information interaction and fusion between the real and imaginary components at various spatial frequencies to achieve richer representational capacity; 2) the dense connections between the real and imaginary groups in each Dual-OctConv make the propagation of features more efficient by feature reuse; and 3) DONet enlarges the receptive field by learning multiple spatial-frequency features of both the real and imaginary components. Extensive experiments on two popular datasets (i.e., clinical knee and fastMRI), under different undersampling patterns and acceleration factors, demonstrate the superiority of our model in accelerated parallel MR image reconstruction.
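As a rough illustration of the spatial-frequency split described above, the sketch below implements a simplified, single-component octave-style convolution in PyTorch: channels are divided into high- and low-frequency groups, each group is updated, and information is exchanged across groups via pooling and upsampling. The layer sizes, the split ratio, and the fusion-by-addition rule are assumptions for illustration; this is not the DONet implementation, which additionally couples the real and imaginary components.

```python
# Minimal octave-style convolution sketch (hypothetical simplification of Dual-OctConv).
import torch
import torch.nn as nn
import torch.nn.functional as F

class SimpleOctConv(nn.Module):
    """Split channels into high/low spatial-frequency groups, update each group,
    and exchange information between the groups."""
    def __init__(self, ch_in, ch_out, alpha=0.5):
        super().__init__()
        lo_in, lo_out = int(alpha * ch_in), int(alpha * ch_out)
        hi_in, hi_out = ch_in - lo_in, ch_out - lo_out
        self.h2h = nn.Conv2d(hi_in, hi_out, 3, padding=1)
        self.h2l = nn.Conv2d(hi_in, lo_out, 3, padding=1)
        self.l2l = nn.Conv2d(lo_in, lo_out, 3, padding=1)
        self.l2h = nn.Conv2d(lo_in, hi_out, 3, padding=1)

    def forward(self, x_h, x_l):
        y_hh = self.h2h(x_h)                                  # intra-group update (high)
        y_ll = self.l2l(x_l)                                  # intra-group update (low)
        y_hl = self.h2l(F.avg_pool2d(x_h, 2))                 # high -> low exchange
        y_lh = F.interpolate(self.l2h(x_l), scale_factor=2)   # low -> high exchange
        return y_hh + y_lh, y_ll + y_hl

# Toy usage on one component; DONet applies such splits to both the real and the
# imaginary feature maps and also exchanges information between the two components.
x = torch.randn(1, 4, 64, 64)
x_h, x_l = x[:, :2], F.avg_pool2d(x[:, 2:], 2)
out_h, out_l = SimpleOctConv(4, 8)(x_h, x_l)
print(out_h.shape, out_l.shape)   # (1, 4, 64, 64) and (1, 4, 32, 32)
```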
2
Jiang MF, Chen YJ, Ruan DS, Yuan ZH, Zhang JC, Xia L. An improved low-rank plus sparse unrolling network method for dynamic magnetic resonance imaging. Med Phys 2025; 52:388-399. [PMID: 39607945] [DOI: 10.1002/mp.17501]
Abstract
BACKGROUND Recent advances in deep learning have sparked new research interest in dynamic magnetic resonance imaging (MRI) reconstruction. However, existing deep learning-based approaches suffer from insufficient reconstruction efficiency and accuracy due to the lack of time correlation modeling during the reconstruction procedure. PURPOSE Inappropriate tensor processing steps and deep learning models may lead to not only a lack of modeling in the time dimension but also an increase in the overall size of the network. Therefore, this study aims to find suitable tensor processing methods and deep learning models to achieve better reconstruction results and a smaller network size. METHODS We propose a novel unrolling network method that enhances reconstruction quality and reduces parameter redundancy by introducing time correlation modeling into MRI reconstruction with a low-rank core matrix and a convolutional long short-term memory (ConvLSTM) unit. RESULTS We conduct extensive experiments on the AMRG Cardiac MRI dataset to evaluate our proposed approach. The results demonstrate that, compared to other state-of-the-art approaches, our approach achieves higher peak signal-to-noise ratios and structural similarity indices at different acceleration factors with significantly fewer parameters. CONCLUSIONS The improved reconstruction performance demonstrates that our proposed time correlation modeling is simple and effective for accelerating MRI reconstruction. We hope our approach can serve as a reference for future research in dynamic MRI reconstruction.
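For context, the sketch below shows one alternating update of the classical low-rank plus sparse (L+S) decomposition of a dynamic series' Casorati matrix, the kind of step that unrolling networks of this family build on. The thresholds are arbitrary, the data-consistency term is omitted, and the paper's learned components (low-rank core matrix, ConvLSTM) are not reproduced here.

```python
# Hedged NumPy sketch of a classical L+S update (not the proposed network).
import numpy as np

def svt(M, tau):
    """Singular-value thresholding: proximal operator of the nuclear norm."""
    U, s, Vh = np.linalg.svd(M, full_matrices=False)
    return U @ np.diag(np.maximum(s - tau, 0)) @ Vh

def soft(x, lam):
    """Complex soft-thresholding: proximal operator of the l1 norm."""
    return np.exp(1j * np.angle(x)) * np.maximum(np.abs(x) - lam, 0)

def ls_step(S, X, tau=0.05, lam=0.01):
    """One alternating update of the Casorati matrix X ~ L + S (pixels x frames)."""
    L = svt(X - S, tau)                                             # low-rank background
    S = np.fft.ifft(soft(np.fft.fft(X - L, axis=1), lam), axis=1)   # temporal sparsity
    return L, S

# toy dynamic series: a 32x32 image over 8 frames, stacked as a Casorati matrix
X = np.random.randn(32 * 32, 8) + 1j * np.random.randn(32 * 32, 8)
L, S = np.zeros_like(X), np.zeros_like(X)
for _ in range(10):
    L, S = ls_step(S, X)
print(np.linalg.matrix_rank(L), np.abs(S).mean())
```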
Affiliation(s)
- Ming-Feng Jiang
- School of Computer Science and Technology, Zhejiang Sci-Tech University, Hangzhou, Zhejiang, China
- Yun-Jiang Chen
- School of Computer Science and Technology, Zhejiang Sci-Tech University, Hangzhou, Zhejiang, China
- Dong-Sheng Ruan
- School of Computer Science and Technology, Zhejiang Sci-Tech University, Hangzhou, Zhejiang, China
- Zi-Han Yuan
- School of Computer Science and Technology, Zhejiang Sci-Tech University, Hangzhou, Zhejiang, China
- Ju-Cheng Zhang
- The Second Affiliated Hospital, School of Medicine, Zhejiang University, Hangzhou, Zhejiang, China
- Ling Xia
- Department of Biomedical Engineering, Zhejiang University, Hangzhou, Zhejiang, China
3
Yarach U, Chatnuntawech I, Liao C, Teerapittayanon S, Iyer SS, Kim TH, Haldar J, Cho J, Bilgic B, Hu Y, Hargreaves B, Setsompop K. Blip-up blip-down circular EPI (BUDA-cEPI) for distortion-free dMRI with rapid unrolled deep learning reconstruction. Magn Reson Imaging 2025; 115:110277. [PMID: 39566835] [DOI: 10.1016/j.mri.2024.110277]
Abstract
PURPOSE BUDA-cEPI has been shown to achieve high-quality, high-resolution diffusion magnetic resonance imaging (dMRI) with fast acquisition time, particularly when used in conjunction with S-LORAKS reconstruction. However, this comes at the cost of a more complex reconstruction that is computationally prohibitive. In this work we develop a rapid reconstruction pipeline for BUDA-cEPI to pave the way for its deployment in routine clinical and neuroscientific applications. The proposed reconstruction includes the development of an ML-based unrolled reconstruction as well as the rapid ML-based B0 and eddy current estimations that are needed. The architecture of the unrolled network was designed so that it can mimic S-LORAKS regularization well, with the addition of virtual coil channels. METHODS BUDA-cEPI RUN-UP, a model-based framework that incorporates off-resonance and eddy current effects, was unrolled through an artificial neural network with only six gradient updates. The unrolled network alternates between data consistency (i.e., forward BUDA-cEPI and its adjoint) and regularization steps, where a U-Net plays the role of the regularizer. To handle the partial Fourier effect, the virtual coil concept was also introduced into the reconstruction to effectively take advantage of the smooth phase prior, and the network was trained to predict the ground-truth images obtained by BUDA-cEPI with S-LORAKS. RESULTS The introduction of the virtual coil concept into the unrolled network was shown to be key to achieving high-quality reconstruction for BUDA-cEPI. With the inclusion of an additional non-diffusion image (b-value = 0 s/mm2), a slight improvement was observed, with the normalized root mean square error further reduced by approximately 5%. The reconstruction times for S-LORAKS and the proposed unrolled networks were approximately 225 and 3 s per slice, respectively. CONCLUSION BUDA-cEPI RUN-UP was shown to reduce the reconstruction time by ∼88× compared to the state-of-the-art technique, while preserving imaging details, as demonstrated through its DTI application.
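The sketch below illustrates the generic unrolled structure described above, alternating a data-consistency gradient step with a small CNN regularizer, assuming a toy single-coil Fourier sampling operator. The actual RUN-UP forward model additionally encodes off-resonance, eddy currents and virtual coils and uses a U-Net regularizer; the operators and network here are illustrative placeholders only.

```python
# Hedged sketch of an unrolled reconstruction: gradient step on ||Ax - y||^2
# followed by a learned regularization step, repeated for a few iterations.
import torch
import torch.nn as nn

class TinyRegularizer(nn.Module):        # stand-in for the U-Net denoiser
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(nn.Conv2d(2, 16, 3, padding=1), nn.ReLU(),
                                 nn.Conv2d(16, 2, 3, padding=1))
    def forward(self, x):
        return x - self.net(x)           # residual refinement

class Unrolled(nn.Module):
    def __init__(self, n_iters=6):
        super().__init__()
        self.regs = nn.ModuleList(TinyRegularizer() for _ in range(n_iters))
        self.step = nn.Parameter(torch.ones(n_iters) * 0.5)

    def forward(self, x0, A, At, y):
        x = x0
        for k, reg in enumerate(self.regs):
            x = x - self.step[k] * At(A(x) - y)   # data-consistency gradient step
            x = reg(x)                            # learned regularization step
        return x

# toy single-coil operator: under-sampling mask applied in the Fourier domain
mask = (torch.rand(1, 1, 64, 64) > 0.5).float()
def to_complex(x): return torch.complex(x[:, 0], x[:, 1]).unsqueeze(1)
def to_chans(z): return torch.cat([z.real, z.imag], dim=1)
A = lambda x: to_chans(mask * torch.fft.fft2(to_complex(x), norm="ortho"))
At = lambda k: to_chans(torch.fft.ifft2(mask * to_complex(k), norm="ortho"))

y = A(torch.randn(1, 2, 64, 64))
print(Unrolled()(At(y), A, At, y).shape)   # torch.Size([1, 2, 64, 64])
```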
Affiliation(s)
- Uten Yarach
- Radiologic Technology Department, Associated Medical Sciences, Chiang Mai University, Chiang Mai, Thailand
- Itthi Chatnuntawech
- National Nanotechnology Center, National Science and Technology Development Agency, Pathum Thani, Thailand
- Congyu Liao
- Department of Radiology, Stanford University, Stanford, CA, USA; Department of Electrical Engineering, Stanford University, Stanford, CA, USA
- Surat Teerapittayanon
- National Nanotechnology Center, National Science and Technology Development Agency, Pathum Thani, Thailand
- Siddharth Srinivasan Iyer
- Department of Electrical Engineering and Computer Science, Massachusetts Institute of Technology, Cambridge, MA, USA
- Tae Hyung Kim
- Department of Computer Engineering, Hongik University, Seoul, South Korea
- Justin Haldar
- Signal and Image Processing Institute, Ming Hsieh Department of Electrical and Computer Engineering, University of Southern California, Los Angeles, CA, USA
- Jaejin Cho
- Athinoula A. Martinos Center for Biomedical Imaging, Massachusetts General Hospital, Charlestown, MA, USA
- Berkin Bilgic
- Athinoula A. Martinos Center for Biomedical Imaging, Massachusetts General Hospital, Charlestown, MA, USA; Department of Radiology, Harvard Medical School, Boston, MA, USA
- Yuxin Hu
- Department of Bioengineering, Stanford University, Stanford, CA, USA
- Brian Hargreaves
- Department of Radiology, Stanford University, Stanford, CA, USA; Department of Electrical Engineering, Stanford University, Stanford, CA, USA; Department of Bioengineering, Stanford University, Stanford, CA, USA
- Kawin Setsompop
- Department of Radiology, Stanford University, Stanford, CA, USA; Department of Electrical Engineering, Stanford University, Stanford, CA, USA
4
Millard C, Chiew M. Clean Self-Supervised MRI Reconstruction from Noisy, Sub-Sampled Training Data with Robust SSDU. Bioengineering (Basel) 2024; 11:1305. [PMID: 39768122] [PMCID: PMC11726718] [DOI: 10.3390/bioengineering11121305]
Abstract
Most existing methods for magnetic resonance imaging (MRI) reconstruction with deep learning use fully supervised training, which assumes that a fully sampled dataset with a high signal-to-noise ratio (SNR) is available for training. In many circumstances, however, such a dataset is highly impractical or even technically infeasible to acquire. Recently, a number of self-supervised methods for MRI reconstruction have been proposed, which use sub-sampled data only. However, the majority of such methods, such as Self-Supervised Learning via Data Undersampling (SSDU), are susceptible to reconstruction errors arising from noise in the measured data. In response, we propose Robust SSDU, which provably recovers clean images from noisy, sub-sampled training data by simultaneously estimating missing k-space samples and denoising the available samples. Robust SSDU trains the reconstruction network to map from a further noisy and sub-sampled version of the data to the original, singly noisy, and sub-sampled data, and applies an additive Noisier2Noise correction term upon inference. We also present a related method, Noisier2Full, that recovers clean images when noisy, fully sampled data are available for training. Both proposed methods are applicable to any network architecture, are straightforward to implement, and have a similar computational cost to standard training. We evaluate our methods on the multi-coil fastMRI brain dataset with a novel denoising-specific architecture and find that they perform competitively with a benchmark trained on clean, fully sampled data.
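A minimal sketch of how such a training pair could be constructed is shown below: the measured (already noisy, sub-sampled) k-space is further sub-sampled and further corrupted, and the loss is evaluated on the held-out measured samples. The masking probability, noise level, and loss set are illustrative assumptions rather than the paper's exact recipe, and the inference-time Noisier2Noise correction term is omitted.

```python
# Hedged sketch of building a Robust-SSDU-style training pair from noisy, sub-sampled k-space.
import torch

def make_training_pair(y, omega, p_drop=0.4, extra_sigma=0.02):
    """y: measured k-space (complex); omega: boolean sampling mask of y."""
    keep = (torch.rand(omega.shape) > p_drop) & omega   # second, smaller sampling set
    noise = extra_sigma * torch.complex(torch.randn_like(y.real), torch.randn_like(y.real))
    y_tilde = (y + noise) * keep     # further noisy, further sub-sampled network input
    target = y * omega               # original singly-noisy measurement as the target
    loss_mask = omega & ~keep        # loss evaluated on the held-out measured samples
    return y_tilde, target, loss_mask

# toy example
omega = torch.rand(1, 1, 64, 64) > 0.5
y = torch.complex(torch.randn(1, 1, 64, 64), torch.randn(1, 1, 64, 64)) * omega
y_tilde, target, loss_mask = make_training_pair(y, omega)
print(omega.float().mean().item(), loss_mask.float().mean().item())
```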
Affiliation(s)
- Charles Millard
- Wellcome Centre for Integrative Neuroimaging, FMRIB, University of Oxford, Oxford OX3 9DU, UK
- Mark Chiew
- Department of Medical Biophysics, University of Toronto, Toronto, ON M4N 3M5, Canada
- Physical Sciences, Sunnybrook Research Institute, Toronto, ON M4N 3M5, Canada
5
Wang S, Wu R, Jia S, Diakite A, Li C, Liu Q, Zheng H, Ying L. Knowledge-driven deep learning for fast MR imaging: Undersampled MR image reconstruction from supervised to un-supervised learning. Magn Reson Med 2024; 92:496-518. [PMID: 38624162] [DOI: 10.1002/mrm.30105]
Abstract
Deep learning (DL) has emerged as a leading approach in accelerating MRI. It employs deep neural networks to extract knowledge from available datasets and then applies the trained networks to reconstruct accurate images from limited measurements. Unlike natural image restoration problems, MRI involves physics-based imaging processes, unique data properties, and diverse imaging tasks. This domain knowledge needs to be integrated with data-driven approaches. Our review introduces the significant challenges faced by such knowledge-driven DL approaches in the context of fast MRI, along with several notable solutions, which include learning neural networks and addressing different imaging application scenarios. We also describe the traits and trends of these techniques, which have shifted from supervised learning to semi-supervised learning and, finally, to unsupervised learning methods. In addition, we summarize MR vendors' choices of DL reconstruction, along with a discussion of open questions and future directions, which are critical for reliable imaging systems.
Affiliation(s)
- Shanshan Wang
- Paul C Lauterbur Research Center for Biomedical Imaging, Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen, China
- Ruoyou Wu
- Paul C Lauterbur Research Center for Biomedical Imaging, Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen, China
- Sen Jia
- Paul C Lauterbur Research Center for Biomedical Imaging, Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen, China
- Alou Diakite
- Paul C Lauterbur Research Center for Biomedical Imaging, Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen, China
- University of Chinese Academy of Sciences, Beijing, China
- Cheng Li
- Paul C Lauterbur Research Center for Biomedical Imaging, Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen, China
- Qiegen Liu
- Department of Electronic Information Engineering, Nanchang University, Nanchang, China
- Hairong Zheng
- Paul C Lauterbur Research Center for Biomedical Imaging, Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen, China
- Leslie Ying
- Department of Biomedical Engineering and Department of Electrical Engineering, The State University of New York, Buffalo, New York, USA
6
Sharma R, Tsiamyrtzis P, Webb AG, Leiss EL, Tsekos NV. Learning to deep learning: statistics and a paradigm test in selecting a UNet architecture to enhance MRI. MAGMA 2024; 37:507-528. [PMID: 37989921] [DOI: 10.1007/s10334-023-01127-6]
Abstract
OBJECTIVE This study aims to assess the statistical significance of training parameters in 240 dense UNets (DUNets) used for enhancing low Signal-to-Noise Ratio (SNR) and undersampled MRI in various acquisition protocols. The objective is to determine the validity of differences between different DUNet configurations and their impact on image quality metrics. MATERIALS AND METHODS To achieve this, we trained all DUNets using the same learning rate and number of epochs, with variations in 5 acquisition protocols, 24 loss function weightings, and 2 ground truths. We calculated evaluation metrics for two metric regions of interest (ROI). We employed both Analysis of Variance (ANOVA) and Mixed Effects Model (MEM) to assess the statistical significance of the independent parameters, aiming to compare their efficacy in revealing differences and interactions among fixed parameters. RESULTS ANOVA analysis showed that, except for the acquisition protocol, fixed variables were statistically insignificant. In contrast, MEM analysis revealed that all fixed parameters and their interactions held statistical significance. This emphasizes the need for advanced statistical analysis in comparative studies, where MEM can uncover finer distinctions often overlooked by ANOVA. DISCUSSION These findings highlight the importance of utilizing appropriate statistical analysis when comparing different deep learning models. Additionally, the surprising effectiveness of the UNet architecture in enhancing various acquisition protocols underscores the potential for developing improved methods for characterizing and training deep learning models. This study serves as a stepping stone toward enhancing the transparency and comparability of deep learning techniques for medical imaging applications.
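To make the ANOVA-versus-MEM contrast concrete, the sketch below fits both analyses to a synthetic table of image-quality scores with statsmodels, treating the acquisition protocol as the grouping factor of the mixed-effects model. The column names, factor sizes, and toy data are assumptions for illustration and do not reproduce the study's actual design.

```python
# Hedged sketch: fixed-effects ANOVA versus a mixed-effects model on synthetic scores.
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
df = pd.DataFrame({
    "ssim": rng.normal(0.9, 0.02, 240),             # toy image-quality metric
    "loss_weight": np.tile(np.arange(24), 10),      # 24 loss-function weightings
    "ground_truth": np.tile([0, 1], 120),           # 2 ground truths
    "protocol": np.repeat(np.arange(5), 48),        # 5 acquisition protocols
})

# classical ANOVA on a fixed-effects linear model
ols = smf.ols("ssim ~ C(loss_weight) + C(ground_truth) + C(protocol)", data=df).fit()
print(sm.stats.anova_lm(ols, typ=2))

# mixed-effects model with a random intercept per protocol
mem = smf.mixedlm("ssim ~ C(loss_weight) + C(ground_truth)", data=df,
                  groups=df["protocol"]).fit()
print(mem.summary())
```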
Affiliation(s)
- Rishabh Sharma
- Medical Robotics and Imaging Lab, Department of Computer Science, 501, Philip G. Hoffman Hall, University of Houston, 4800 Calhoun Road, Houston, TX, 77204, USA
- Panagiotis Tsiamyrtzis
- Department of Mechanical Engineering, Politecnico Di Milano, Milan, Italy
- Department of Statistics, Athens University of Economics and Business, Athens, Greece
- Andrew G Webb
- C.J. Gorter Center for High Field MRI, Department of Radiology, Leiden University Medical Center, Leiden, The Netherlands
- Ernst L Leiss
- Department of Computer Science, University of Houston, Houston, TX, USA
- Nikolaos V Tsekos
- Medical Robotics and Imaging Lab, Department of Computer Science, 501, Philip G. Hoffman Hall, University of Houston, 4800 Calhoun Road, Houston, TX, 77204, USA
7
Cheng J, Cui ZX, Zhu Q, Wang H, Zhu Y, Liang D. Integrating data distribution prior via Langevin dynamics for end-to-end MR reconstruction. Magn Reson Med 2024; 92:202-214. [PMID: 38469985] [DOI: 10.1002/mrm.30065]
Abstract
PURPOSE To develop a novel deep learning-based method inheriting the advantages of a data distribution prior and end-to-end training for accelerating MRI. METHODS Langevin dynamics is used to formulate image reconstruction with a data distribution prior to facilitate image reconstruction. The data distribution prior is learned implicitly through end-to-end adversarial training to mitigate hyper-parameter selection and shorten the testing time compared to traditional probabilistic reconstruction. By seamlessly integrating the deep equilibrium model, the iteration of Langevin dynamics culminates in convergence to a fixed point, ensuring the stability of the learned distribution. RESULTS The feasibility of the proposed method is evaluated on brain and knee datasets. Retrospective results with uniform and random masks show that the proposed method demonstrates superior performance, both quantitatively and qualitatively, compared to the state-of-the-art. CONCLUSION The proposed method, incorporating Langevin dynamics with end-to-end adversarial training, facilitates efficient and robust reconstruction for MRI. Empirical evaluations conducted on brain and knee datasets compellingly demonstrate the superior performance of the proposed method in terms of artifact removal and detail preservation.
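The sketch below shows the basic Langevin update such a formulation builds on: each step combines a data-consistency gradient, a prior score, and injected noise. The "score" here is a placeholder (the score of a simple Gaussian prior), whereas the paper learns the data-distribution prior through end-to-end adversarial training and stabilizes the iteration with a deep equilibrium formulation.

```python
# Hedged sketch of a Langevin-dynamics reconstruction step with a placeholder prior score.
import torch

def data_consistency_grad(x, y, mask):
    """Gradient of 0.5 * ||mask * F(x) - y||^2 with respect to the image x."""
    resid = mask * torch.fft.fft2(x, norm="ortho") - y
    return torch.fft.ifft2(mask * resid, norm="ortho")

def toy_score(x, lam=0.1):
    """Placeholder prior score: gradient of -lam * ||x||^2 (a Gaussian prior)."""
    return -2 * lam * x

def langevin_step(x, y, mask, eta=0.05):
    noise = torch.complex(torch.randn_like(x.real), torch.randn_like(x.real))
    grad = -data_consistency_grad(x, y, mask) + toy_score(x)
    return x + eta * grad + (2 * eta) ** 0.5 * noise

mask = (torch.rand(64, 64) > 0.7).float()
y = mask * torch.fft.fft2(torch.randn(64, 64, dtype=torch.complex64), norm="ortho")
x = torch.fft.ifft2(y, norm="ortho")
for _ in range(10):
    x = langevin_step(x, y, mask)
print(x.abs().mean())
```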
Affiliation(s)
- Jing Cheng
- Paul C. Lauterbur Research Center for Biomedical Imaging, Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen, China
- Key Laboratory of Biomedical Imaging Science and System, Chinese Academy of Sciences, Shenzhen, China
- Zhuo-Xu Cui
- Research Center for Medical AI, Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen, China
- Qingyong Zhu
- Research Center for Medical AI, Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen, China
- Haifeng Wang
- Paul C. Lauterbur Research Center for Biomedical Imaging, Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen, China
- Key Laboratory of Biomedical Imaging Science and System, Chinese Academy of Sciences, Shenzhen, China
- Yanjie Zhu
- Paul C. Lauterbur Research Center for Biomedical Imaging, Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen, China
- Key Laboratory of Biomedical Imaging Science and System, Chinese Academy of Sciences, Shenzhen, China
- Dong Liang
- Paul C. Lauterbur Research Center for Biomedical Imaging, Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen, China
- Key Laboratory of Biomedical Imaging Science and System, Chinese Academy of Sciences, Shenzhen, China
- Research Center for Medical AI, Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen, China
8
Fang K, Zheng X, Lin X, Dai Z. A comprehensive approach for osteoporosis detection through chest CT analysis and bone turnover markers: harnessing radiomics and deep learning techniques. Front Endocrinol (Lausanne) 2024; 15:1296047. [PMID: 38894742] [PMCID: PMC11183288] [DOI: 10.3389/fendo.2024.1296047]
Abstract
Purpose The main objective of this study is to assess the possibility of using radiomics, deep learning, and transfer learning methods for the analysis of chest CT scans. An additional aim is to combine these techniques with bone turnover markers to identify and screen for osteoporosis in patients. Method A total of 488 patients who had undergone chest CT and bone turnover marker testing, and had known bone mineral density, were included in this study. ITK-SNAP software was used to delineate regions of interest, while radiomics features were extracted using Python. Multiple 2D and 3D deep learning models were trained to identify these regions of interest. The effectiveness of these techniques in screening for osteoporosis in patients was compared. Result Clinical models based on gender, age, and β-cross achieved an accuracy of 0.698 and an AUC of 0.665. Radiomics models, which utilized 14 selected radiomics features, achieved a maximum accuracy of 0.750 and an AUC of 0.739. The test group yielded promising results: the 2D Deep Learning model achieved an accuracy of 0.812 and an AUC of 0.855, while the 3D Deep Learning model performed even better with an accuracy of 0.854 and an AUC of 0.906. Similarly, the 2D Transfer Learning model achieved an accuracy of 0.854 and an AUC of 0.880, whereas the 3D Transfer Learning model exhibited an accuracy of 0.740 and an AUC of 0.737. Overall, the application of 3D deep learning and 2D transfer learning techniques on chest CT scans showed excellent screening performance in the context of osteoporosis. Conclusion Bone turnover markers may not be necessary for osteoporosis screening, as 3D deep learning and 2D transfer learning techniques utilizing chest CT scans proved to be equally effective alternatives.
Affiliation(s)
- Kaibin Fang
- Department of Orthopaedic Surgery, The Second Affiliated Hospital of Fujian Medical University, Quanzhou, China
- Xiaoling Zheng
- Aviation College, Liming Vocational University, Quanzhou, China
- Xiaocong Lin
- Department of Orthopaedic Surgery, The Second Affiliated Hospital of Fujian Medical University, Quanzhou, China
- Zhangsheng Dai
- Department of Orthopaedic Surgery, The Second Affiliated Hospital of Fujian Medical University, Quanzhou, China
9
Kim J, Lee W, Kang B, Seo H, Park H. A noise robust image reconstruction using slice aware cycle interpolator network for parallel imaging in MRI. Med Phys 2024; 51:4143-4157. [PMID: 38598259] [DOI: 10.1002/mp.17066]
Abstract
BACKGROUND Reducing Magnetic resonance imaging (MRI) scan time has been an important issue for clinical applications. In order to reduce MRI scan time, imaging acceleration was made possible by undersampling k-space data. This is achieved by leveraging additional spatial information from multiple, independent receiver coils, thereby reducing the number of sampled k-space lines. PURPOSE The aim of this study is to develop a deep-learning method for parallel imaging with a reduced number of auto-calibration signals (ACS) lines in noisy environments. METHODS A cycle interpolator network is developed for robust reconstruction of parallel MRI with a small number of ACS lines in noisy environments. The network estimates missing (unsampled) lines of each coil data, and these estimated missing lines are then utilized to re-estimate the sampled k-space lines. In addition, a slice aware reconstruction technique is developed for noise-robust reconstruction while reducing the number of ACS lines. We conducted an evaluation study using retrospectively subsampled data obtained from three healthy volunteers at 3T MRI, involving three different slice thicknesses (1.5, 3.0, and 4.5 mm) and three different image contrasts (T1w, T2w, and FLAIR). RESULTS Despite the challenges posed by substantial noise in cases with a limited number of ACS lines and thinner slices, the slice aware cycle interpolator network reconstructs the enhanced parallel images. It outperforms RAKI, effectively eliminating aliasing artifacts. Moreover, the proposed network outperforms GRAPPA and demonstrates the ability to successfully reconstruct brain images even under severe noisy conditions. CONCLUSIONS The slice aware cycle interpolator network has the potential to improve reconstruction accuracy for a reduced number of ACS lines in noisy environments.
Affiliation(s)
- Jeewon Kim
- School of Electrical Engineering, Korea Advanced Institute of Science and Technology (KAIST), Daejeon, Republic of Korea
- Bionics Research Center, Korea Institute of Science and Technology (KIST), Seoul, Republic of Korea
- Wonil Lee
- School of Electrical Engineering, Korea Advanced Institute of Science and Technology (KAIST), Daejeon, Republic of Korea
- Beomgu Kang
- School of Electrical Engineering, Korea Advanced Institute of Science and Technology (KAIST), Daejeon, Republic of Korea
- Hyunseok Seo
- Bionics Research Center, Korea Institute of Science and Technology (KIST), Seoul, Republic of Korea
- HyunWook Park
- School of Electrical Engineering, Korea Advanced Institute of Science and Technology (KAIST), Daejeon, Republic of Korea
10
Yarach U, Chatnuntawech I, Setsompop K, Suwannasak A, Angkurawaranon S, Madla C, Hanprasertpong C, Sangpin P. Improved reconstruction for highly accelerated propeller diffusion 1.5 T clinical MRI. MAGMA 2024; 37:283-294. [PMID: 38386154] [DOI: 10.1007/s10334-023-01142-7]
Abstract
PURPOSE Propeller fast-spin-echo diffusion magnetic resonance imaging (FSE-dMRI) is essential for the diagnosis of cholesteatoma. However, at clinical 1.5 T MRI, its signal-to-noise ratio (SNR) remains relatively low. To gain sufficient SNR, signal averaging (number of excitations, NEX) is usually used, at the cost of prolonged scan time. In this work, we leveraged the benefits of Locally Low Rank (LLR) constrained reconstruction to enhance the SNR. Furthermore, we enhanced both the speed and SNR by employing Convolutional Neural Networks (CNNs) for accelerated PROPELLER FSE-dMRI on a 1.5 T clinical scanner. METHODS A Residual U-Net (RU-Net) was found to be efficient for propeller FSE-dMRI data. It was trained to predict 2-NEX images obtained by Locally Low Rank (LLR) constrained reconstruction, using 1-NEX images obtained via simplified reconstruction as the inputs. Brain scans from healthy volunteers and patients with cholesteatoma were performed for model training and testing. The performance of the trained networks was evaluated with the normalized root-mean-square error (NRMSE), structural similarity index measure (SSIM), and peak SNR (PSNR). RESULTS For 4× under-sampled data with 7 blades, online reconstruction appears to provide suboptimal images: some small details are missing due to high noise interference. Offline LLR reconstruction enables suppression of noise and recovery of some small structures. RU-Net demonstrated further improvement over LLR, increasing PSNR by 18.87%, increasing SSIM by 2.11%, and reducing NRMSE by 53.84%. Moreover, RU-Net is about 1500× faster than LLR (0.03 vs. 47.59 s/slice). CONCLUSION LLR remarkably enhances the SNR compared to online reconstruction. Moreover, RU-Net improves propeller FSE-dMRI, as reflected in PSNR, SSIM, and NRMSE. It requires only 1-NEX data, which allows a 2× scan time reduction. In addition, it is approximately 1500 times faster than LLR-constrained reconstruction.
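As a rough illustration of the LLR step, the sketch below thresholds the singular values of patch-wise Casorati matrices formed across NEX repetitions. The patch size and threshold are arbitrary, and the PROPELLER forward model and iterative data consistency used in the actual reconstruction are omitted.

```python
# Hedged NumPy sketch of a locally-low-rank (LLR) thresholding pass.
import numpy as np

def llr_threshold(stack, patch=8, tau=0.05):
    """stack: (n_rep, ny, nx) complex images sharing anatomy but with independent noise."""
    n_rep, ny, nx = stack.shape
    out = np.zeros_like(stack)
    for y0 in range(0, ny, patch):
        for x0 in range(0, nx, patch):
            block = stack[:, y0:y0 + patch, x0:x0 + patch]
            casorati = block.reshape(n_rep, -1)                # repetitions x pixels
            U, s, Vh = np.linalg.svd(casorati, full_matrices=False)
            s = np.maximum(s - tau, 0)                         # singular-value thresholding
            out[:, y0:y0 + patch, x0:x0 + patch] = (U @ np.diag(s) @ Vh).reshape(block.shape)
    return out

stack = np.random.randn(2, 64, 64) + 1j * np.random.randn(2, 64, 64)   # e.g. 2-NEX data
print(llr_threshold(stack).shape)
```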
Affiliation(s)
- Uten Yarach
- Department of Radiologic Technology, Faculty of Associated Medical Sciences, Chiang Mai University, Chiang Mai, Thailand
- Itthi Chatnuntawech
- National Nanotechnology Center, National Science and Technology Development Agency, Pathum Thani, Thailand
- Kawin Setsompop
- Department of Radiology, Stanford University, Stanford, CA, USA
- Atita Suwannasak
- Department of Radiologic Technology, Faculty of Associated Medical Sciences, Chiang Mai University, Chiang Mai, Thailand
- Salita Angkurawaranon
- Department of Radiology, Faculty of Medicine, Chiang Mai University, Chiang Mai, Thailand
- Chakri Madla
- Department of Radiology, Faculty of Medicine, Chiang Mai University, Chiang Mai, Thailand
- Charuk Hanprasertpong
- Department of Otolaryngology, Faculty of Medicine, Chiang Mai University, Chiang Mai, Thailand
11
Feng R, Wu Q, Feng J, She H, Liu C, Zhang Y, Wei H. IMJENSE: Scan-Specific Implicit Representation for Joint Coil Sensitivity and Image Estimation in Parallel MRI. IEEE Trans Med Imaging 2024; 43:1539-1553. [PMID: 38090839] [DOI: 10.1109/tmi.2023.3342156]
Abstract
Parallel imaging is a commonly used technique to accelerate magnetic resonance imaging (MRI) data acquisition. Mathematically, parallel MRI reconstruction can be formulated as an inverse problem relating the sparsely sampled k-space measurements to the desired MRI image. Despite the success of many existing reconstruction algorithms, it remains a challenge to reliably reconstruct a high-quality image from highly reduced k-space measurements. Recently, implicit neural representation has emerged as a powerful paradigm to exploit the internal information and the physics of partially acquired data to generate the desired object. In this study, we introduced IMJENSE, a scan-specific implicit neural representation-based method for improving parallel MRI reconstruction. Specifically, the underlying MRI image and coil sensitivities were modeled as continuous functions of spatial coordinates, parameterized by neural networks and polynomials, respectively. The weights in the networks and coefficients in the polynomials were simultaneously learned directly from sparsely acquired k-space measurements, without fully sampled ground truth data for training. Benefiting from the powerful continuous representation and joint estimation of the MRI image and coil sensitivities, IMJENSE outperforms conventional image or k-space domain reconstruction algorithms. With extremely limited calibration data, IMJENSE is more stable than supervised calibrationless and calibration-based deep-learning methods. Results show that IMJENSE robustly reconstructs the images acquired at 5× and 6× accelerations with only 4 or 8 calibration lines in 2D Cartesian acquisitions, corresponding to 22.0% and 19.5% undersampling rates. The high-quality results and scanning specificity make the proposed method hold the potential for further accelerating the data acquisition of parallel MRI.
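The sketch below conveys the scan-specific idea in a stripped-down, single-coil form: a small coordinate MLP renders the image, which is pushed through a toy Fourier sampling model and fitted directly to the measured k-space with no ground-truth images. The multi-coil model and polynomial coil sensitivities of IMJENSE are omitted, and the network size, sampling pattern, and optimizer settings are assumptions.

```python
# Hedged sketch of a scan-specific implicit representation fitted to under-sampled k-space.
import torch
import torch.nn as nn

class CoordMLP(nn.Module):
    def __init__(self, hidden=64):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(2, hidden), nn.ReLU(),
                                 nn.Linear(hidden, hidden), nn.ReLU(),
                                 nn.Linear(hidden, 2))          # real and imaginary parts
    def forward(self, coords):
        out = self.net(coords)
        return torch.complex(out[..., 0], out[..., 1])

n = 32
ys, xs = torch.meshgrid(torch.linspace(-1, 1, n), torch.linspace(-1, 1, n), indexing="ij")
coords = torch.stack([ys, xs], dim=-1).reshape(-1, 2)

mask = torch.rand(n, n) > 0.6                                   # under-sampling pattern
k_meas = mask * torch.fft.fft2(torch.randn(n, n, dtype=torch.complex64), norm="ortho")

model = CoordMLP()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
for step in range(200):
    img = model(coords).reshape(n, n)                           # rendered image
    k_pred = mask * torch.fft.fft2(img, norm="ortho")
    loss = (k_pred - k_meas).abs().pow(2).mean()                # k-space data consistency
    opt.zero_grad(); loss.backward(); opt.step()
print(loss.item())
```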
12
Oh C, Chung JY, Han Y. Domain transformation learning for MR image reconstruction from dual domain input. Comput Biol Med 2024; 170:108098. [PMID: 38330825] [DOI: 10.1016/j.compbiomed.2024.108098]
Abstract
Medical images are acquired through diverse imaging systems, with each system employing specific image reconstruction techniques to transform sensor data into images. In MRI, sensor data (i.e., k-space data) is encoded in the frequency domain, and fully sampled k-space data is transformed into an image using the inverse Fourier transform. However, in efforts to reduce acquisition time, k-space is often subsampled, necessitating a sophisticated image reconstruction method beyond a simple transform. The proposed approach addresses this challenge by training a model to learn the domain transform, generating the final image directly from undersampled k-space input. Significantly, to improve the stability of reconstruction from randomly subsampled k-space data, folded images are incorporated as supplementary inputs in the dual-input ETER-net. Moreover, modifications are made to the formation of inputs for the bi-RNN stages to accommodate non-fixed k-space trajectories. Experimental validation, encompassing both regular and irregular sampling trajectories, confirms the method's effectiveness. The results demonstrated superior performance, measured by PSNR, SSIM, and VIF, across acceleration factors of 4 and 8. In summary, the dual-input ETER-net emerges as an effective reconstruction method for both regular and irregular sampling trajectories, accommodating diverse acceleration factors.
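A toy stand-in for direct domain-transform learning is sketched below: a small fully connected network maps flattened under-sampled k-space to the image domain. ETER-net itself uses recurrent (bi-RNN) stages and a second, folded-image input, so this dense mapping only illustrates the general idea under assumed toy dimensions.

```python
# Hedged sketch of learning a k-space-to-image domain transform with a dense network.
import torch
import torch.nn as nn

n = 32                                                # toy matrix size
net = nn.Sequential(nn.Linear(2 * n * n, 2 * n * n), nn.Tanh(),
                    nn.Linear(2 * n * n, n * n))

x = torch.randn(8, n, n, dtype=torch.cfloat)          # a batch of target images
k = torch.fft.fft2(x, norm="ortho")
k[:, :, 1::2] = 0                                     # drop every other k-space column
k_in = torch.cat([k.real, k.imag], dim=-1).reshape(8, -1)   # flattened real/imag input

opt = torch.optim.Adam(net.parameters(), lr=1e-3)
for step in range(200):
    pred = net(k_in).reshape(8, n, n)
    loss = (pred - x.abs()).pow(2).mean()             # regress toward magnitude images
    opt.zero_grad(); loss.backward(); opt.step()
print(loss.item())
```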
Affiliation(s)
- Changheun Oh
- Neuroscience Research Institute, Gachon University Gil Medical Center, Incheon, 21565, Republic of Korea
- Jun-Young Chung
- Neuroscience Research Institute, Gachon University Gil Medical Center, Incheon, 21565, Republic of Korea; Department of Neuroscience, College of Medicine, Gachon University, Incheon, 21565, Republic of Korea
- Yeji Han
- Neuroscience Research Institute, Gachon University Gil Medical Center, Incheon, 21565, Republic of Korea; Department of Biomedical Engineering, Gachon University, Seongnam, 13120, Republic of Korea
13
Cao C, Cui ZX, Zhu Q, Liu C, Liang D, Zhu Y. Annihilation-Net: Learned annihilation relation for dynamic MR imaging. Med Phys 2024; 51:1883-1898. [PMID: 37665786] [DOI: 10.1002/mp.16723]
Abstract
BACKGROUND Deep learning methods driven by low-rank regularization have achieved attractive performance in dynamic magnetic resonance (MR) imaging. The effectiveness of existing methods lies mainly in their ability to capture interframe relationships using network modules, which lack interpretability. PURPOSE This study aims to design an interpretable methodology for modeling interframe relationships using convolutional networks, namely Annihilation-Net, and to use it for accelerating dynamic MRI. METHODS Based on the equivalence between the Hankel matrix product and convolution, we utilize convolutional networks to learn the null space transform for characterizing low-rankness. We employ low-rankness to represent interframe correlations in dynamic MR imaging, while combining it with sparse constraints in the compressed sensing framework. The corresponding optimization problem is solved in an iterative form with the half-quadratic splitting (HQS) method. The iterative steps are unrolled into a network, dubbed Annihilation-Net. All the regularization parameters and null space transforms are set as learnable in the Annihilation-Net. RESULTS Experiments on the cardiac cine dataset show that the proposed model outperforms other competing methods both quantitatively and qualitatively. The training set and test set have 800 and 118 images, respectively. CONCLUSIONS The proposed Annihilation-Net improves the reconstruction quality of accelerated dynamic MRI with better interpretability.
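The half-quadratic splitting structure that is unrolled here can be sketched as below for a toy Fourier sampling operator: a proximal (regularization) update alternates with a closed-form data-consistency update in k-space. The soft-thresholding stands in for the learned annihilation/null-space transform and is only an assumption for illustration.

```python
# Hedged NumPy sketch of half-quadratic splitting (HQS) for ||Ax - y||^2 + lam * R(x).
import numpy as np

def hqs(y, mask, n_iter=10, rho=1.0, lam=0.05):
    x = np.fft.ifft2(mask * y, norm="ortho")
    for _ in range(n_iter):
        # z-update: toy proximal step (soft-threshold) standing in for the learned prior
        z = np.exp(1j * np.angle(x)) * np.maximum(np.abs(x) - lam / rho, 0)
        # x-update: closed form in k-space for a diagonal (masked Fourier) operator
        Z = np.fft.fft2(z, norm="ortho")
        x = np.fft.ifft2((mask * y + rho * Z) / (mask + rho), norm="ortho")
    return x

mask = (np.random.rand(64, 64) > 0.7).astype(float)
y = mask * np.fft.fft2(np.random.randn(64, 64) + 0j, norm="ortho")
print(np.abs(hqs(y, mask)).mean())
```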
Affiliation(s)
- Chentao Cao
- Paul C. Lauterbur Research Center for Biomedical Imaging, Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen, China
- University of Chinese Academy of Sciences, Beijing, China
- Zhuo-Xu Cui
- Research Center for Medical AI, Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen, China
- Qingyong Zhu
- Research Center for Medical AI, Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen, China
- Congcong Liu
- Paul C. Lauterbur Research Center for Biomedical Imaging, Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen, China
- University of Chinese Academy of Sciences, Beijing, China
- Dong Liang
- Research Center for Medical AI, Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen, China
- Yanjie Zhu
- Paul C. Lauterbur Research Center for Biomedical Imaging, Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen, China
14
Yang Q, Wang X, Cao X, Liu S, Xie F, Li Y. Multi-classification of national fitness test grades based on statistical analysis and machine learning. PLoS One 2023; 18:e0295674. [PMID: 38134133] [PMCID: PMC10745189] [DOI: 10.1371/journal.pone.0295674]
Abstract
Physical fitness is a key element of a healthy life, and being overweight or lacking physical exercise will lead to health problems. Therefore, assessing an individual's physical health status from a non-medical, cost-effective perspective is essential. This paper aimed to evaluate national physical health status through national physical examination data, selecting 12 indicators to divide physical health status into four levels: excellent, good, pass, and fail. The existing challenge lies in the fact that most literature on physical fitness assessment focuses mainly on two major groups: sports athletes and school students. Unfortunately, no reasonable index system has been constructed, and the existing evaluation methods have limitations and cannot be applied to other groups. This paper builds a reasonable health indicator system based on national physical examination data, breaks these group restrictions, studies the national population, and uses machine learning models to provide helpful health suggestions for citizens to measure their physical status. We analyzed the significance of the selected indicators through nonparametric tests and exploratory statistical analysis. We used seven machine learning models to obtain the best multi-classification model for the physical fitness test level. Comprehensive research showed that the MLP has the best classification performance, with macro-precision reaching 74.4% and micro-precision reaching 72.8%. Furthermore, the recall rates are also above 70%, and the Hamming loss is the smallest, i.e., 0.272. The practical implications of these findings are significant. Individuals can use the classification model to understand their physical fitness level and status, exercise appropriately according to the measurement indicators, and adjust their lifestyle, which is an important aspect of health management.
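A minimal sketch of the grading step is given below: an MLP classifier maps 12 fitness indicators to the four grades and reports macro/micro precision and the Hamming loss. The data are synthetic and the hyperparameters are assumptions, not the study's tuned configuration.

```python
# Hedged sketch of four-class fitness grading with an MLP on synthetic indicators.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.metrics import precision_score, hamming_loss

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 12))                    # 12 physical-fitness indicators
y = rng.integers(0, 4, size=1000)                  # 0=fail, 1=pass, 2=good, 3=excellent
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)

clf = MLPClassifier(hidden_layer_sizes=(64, 32), max_iter=500, random_state=0)
clf.fit(X_tr, y_tr)
pred = clf.predict(X_te)
print(precision_score(y_te, pred, average="macro", zero_division=0),
      precision_score(y_te, pred, average="micro", zero_division=0),
      hamming_loss(y_te, pred))
```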
Affiliation(s)
- Qian Yang
- School of Mathematics and Statistics, Beijing Technology and Business University, Beijing, China
- Xueli Wang
- School of Mathematics and Statistics, Beijing Technology and Business University, Beijing, China
- Xianbing Cao
- School of Mathematics and Statistics, Beijing Technology and Business University, Beijing, China
- Shuai Liu
- School of Mathematics and Statistics, Beijing Technology and Business University, Beijing, China
- Feng Xie
- School of Mathematics and Statistics, Beijing Technology and Business University, Beijing, China
- Yumei Li
- School of Mathematics and Statistics, Beijing Technology and Business University, Beijing, China
15
Dar SUH, Öztürk Ş, Özbey M, Oguz KK, Çukur T. Parallel-stream fusion of scan-specific and scan-general priors for learning deep MRI reconstruction in low-data regimes. Comput Biol Med 2023; 167:107610. [PMID: 37883853] [DOI: 10.1016/j.compbiomed.2023.107610]
Abstract
Magnetic resonance imaging (MRI) is an essential diagnostic tool that suffers from prolonged scan times. Reconstruction methods can alleviate this limitation by recovering clinically usable images from accelerated acquisitions. In particular, learning-based methods promise performance leaps by employing deep neural networks as data-driven priors. A powerful approach uses scan-specific (SS) priors that leverage information regarding the underlying physical signal model for reconstruction. SS priors are learned on each individual test scan without the need for a training dataset, albeit they suffer from computationally burdening inference with nonlinear networks. An alternative approach uses scan-general (SG) priors that instead leverage information regarding the latent features of MRI images for reconstruction. SG priors are frozen at test time for efficiency, albeit they require learning from a large training dataset. Here, we introduce a novel parallel-stream fusion model (PSFNet) that synergistically fuses SS and SG priors for performant MRI reconstruction in low-data regimes, while maintaining competitive inference times to SG methods. PSFNet implements its SG prior based on a nonlinear network, yet it forms its SS prior based on a linear network to maintain efficiency. A pervasive framework for combining multiple priors in MRI reconstruction is algorithmic unrolling that uses serially alternated projections, causing error propagation under low-data regimes. To alleviate error propagation, PSFNet combines its SS and SG priors via a novel parallel-stream architecture with learnable fusion parameters. Demonstrations are performed on multi-coil brain MRI for varying amounts of training data. PSFNet outperforms SG methods in low-data regimes, and surpasses SS methods with few tens of training samples. On average across tasks, PSFNet achieves 3.1 dB higher PSNR, 2.8% higher SSIM, and 0.3 × lower RMSE than baselines. Furthermore, in both supervised and unsupervised setups, PSFNet requires an order of magnitude lower samples compared to SG methods, and enables an order of magnitude faster inference compared to SS methods. Thus, the proposed model improves deep MRI reconstruction with elevated learning and computational efficiency.
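The parallel-stream idea can be sketched as below: a linear branch (standing in for the scan-specific prior) and a nonlinear branch (standing in for the scan-general prior) process the same input and are mixed by a learnable fusion weight. The branch definitions and fusion rule are illustrative assumptions, not the PSFNet architecture.

```python
# Hedged sketch of fusing a linear and a nonlinear stream with a learnable weight.
import torch
import torch.nn as nn

class ParallelFusion(nn.Module):
    def __init__(self, ch=2):
        super().__init__()
        self.linear_branch = nn.Conv2d(ch, ch, 5, padding=2, bias=False)   # linear stream
        self.nonlinear_branch = nn.Sequential(
            nn.Conv2d(ch, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, ch, 3, padding=1))                                # nonlinear stream
        self.alpha = nn.Parameter(torch.tensor(0.5))                        # learnable fusion

    def forward(self, x):
        w = torch.sigmoid(self.alpha)
        return w * self.linear_branch(x) + (1 - w) * self.nonlinear_branch(x)

x = torch.randn(1, 2, 64, 64)        # real/imaginary channels of a zero-filled image
print(ParallelFusion()(x).shape)     # torch.Size([1, 2, 64, 64])
```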
Affiliation(s)
- Salman Ul Hassan Dar
- Department of Internal Medicine III, Heidelberg University Hospital, 69120, Heidelberg, Germany; AI Health Innovation Cluster, Heidelberg, Germany
- Şaban Öztürk
- Department of Electrical and Electronics Engineering, Bilkent University, Ankara 06800, Turkey; Department of Electrical-Electronics Engineering, Amasya University, Amasya 05100, Turkey
- Muzaffer Özbey
- Department of Electrical and Computer Engineering, University of Illinois Urbana-Champaign, IL 61820, United States
- Kader Karli Oguz
- Department of Radiology, University of California, Davis, CA 95616, United States; Department of Radiology, Hacettepe University, Ankara, Turkey
- Tolga Çukur
- Department of Electrical and Electronics Engineering, Bilkent University, Ankara 06800, Turkey; Department of Radiology, Hacettepe University, Ankara, Turkey; National Magnetic Resonance Research Center (UMRAM), Bilkent University, Ankara 06800, Turkey; Neuroscience Graduate Program, Bilkent University, Ankara 06800, Turkey
16
Yu C, Guan Y, Ke Z, Lei K, Liang D, Liu Q. Universal generative modeling in dual domains for dynamic MRI. NMR Biomed 2023; 36:e5011. [PMID: 37528575] [DOI: 10.1002/nbm.5011]
Abstract
Dynamic magnetic resonance image reconstruction from incomplete k-space data has generated great research interest due to its ability to reduce scan time. Nevertheless, the reconstruction problem remains a thorny issue due to its ill-posed nature. Recently, diffusion models, especially score-based generative models, have demonstrated great potential in terms of algorithmic robustness and flexibility of utilization. Moreover, a unified framework through the variance-exploding stochastic differential equation has been proposed to enable new sampling methods and further extend the capabilities of score-based generative models. Therefore, by taking advantage of this unified framework, we propose a k-space and image dual-domain collaborative universal generative model (DD-UGM), which combines the score-based prior with a low-rank regularization penalty to reconstruct highly under-sampled measurements. More precisely, we extract prior components from both the image and k-space domains via a universal generative model and adaptively handle these prior components for faster processing while maintaining good generation quality. Experimental comparisons demonstrate the noise reduction and detail preservation abilities of the proposed method. Moreover, DD-UGM can reconstruct data of different frames by training on only a single frame image, which reflects the flexibility of the proposed model.
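For context, the sketch below runs a simple reverse-diffusion (predictor) step under a variance-exploding SDE, the sampler framework referenced above. The score function is a placeholder for a Gaussian prior rather than a trained network, and the dual-domain priors and low-rank penalty of DD-UGM are not included.

```python
# Hedged sketch of variance-exploding (VE) reverse-diffusion sampling with a toy score.
import torch

def ve_sigma(t, sigma_min=0.01, sigma_max=50.0):
    return sigma_min * (sigma_max / sigma_min) ** t

def reverse_step(x, t, dt, score_fn):
    sigma, sigma_next = ve_sigma(t), ve_sigma(t - dt)
    g2 = sigma ** 2 - sigma_next ** 2                 # discrete diffusion increment
    return x + g2 * score_fn(x, sigma) + g2 ** 0.5 * torch.randn_like(x)

score_fn = lambda x, sigma: -x / (1 + sigma ** 2)     # score of a unit Gaussian prior
x = torch.randn(1, 1, 64, 64) * ve_sigma(1.0)         # start from pure noise
t, dt = 1.0, 1.0 / 100
for _ in range(100):
    x = reverse_step(x, t, dt, score_fn)
    t -= dt
print(x.std())                                        # should approach the prior scale (~1)
```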
Affiliation(s)
- Chuanming Yu
- Department of Mathematics and Computer Sciences, Nanchang University, Nanchang, China
- Yu Guan
- Department of Mathematics and Computer Sciences, Nanchang University, Nanchang, China
- Ziwen Ke
- Institute for Medical Imaging Technology, School of Biomedical Engineering, Shanghai Jiao Tong University, Shanghai, China
- Ke Lei
- Department of Electrical Engineering, Stanford University, Stanford, California, USA
- Dong Liang
- Paul C. Lauterbur Research Center for Biomedical Imaging, SIAT, Chinese Academy of Sciences, Shenzhen, China
- Qiegen Liu
- Department of Electronic Information Engineering, Nanchang University, Nanchang, China
17
Jung K, Mandija S, Cui C, Kim J, Al-masni MA, Meerbothe TG, Park M, van den Berg CAT, Kim D. Data-driven electrical conductivity brain imaging using 3 T MRI. Hum Brain Mapp 2023; 44:4986-5001. [PMID: 37466309] [PMCID: PMC10502651] [DOI: 10.1002/hbm.26421]
Abstract
Magnetic resonance electrical properties tomography (MR-EPT) is a non-invasive measurement technique that derives the electrical properties (EPs, e.g., conductivity or permittivity) of tissues in the radiofrequency range (64 MHz for 1.5 T and 128 MHz for 3 T MR systems). Clinical studies have shown the potential of tissue conductivity as a biomarker. To date, model-based conductivity reconstructions rely on numerical assumptions and approximations, leading to inaccuracies in the reconstructed maps. To address such limitations, we propose an artificial neural network (ANN)-based non-linear conductivity estimator trained on simulated data for conductivity brain imaging. Network training was performed on 201 synthesized T2-weighted spin-echo (SE) datasets obtained from finite-difference time-domain (FDTD) electromagnetic (EM) simulation. The dataset was composed of an approximated T2-w SE magnitude and transceive phase information. The proposed method was tested on three in-silico phantoms and, in-vivo, on two volunteers' and three patients' data. For comparison purposes, various conventional phase-based EPT reconstruction methods that ignore B1+ magnitude information were used, such as the Savitzky-Golay kernel combined with a Gaussian filter (S-G Kernel), phase-based convection-reaction EPT (cr-EPT), magnitude-weighted polynomial-fitting phase-based EPT (Poly-Fit), and integral-based phase-based EPT (Integral-based). From the in-silico experiments, quantitative analysis showed that the proposed method provides more accurate and improved-quality (e.g., high structural preservation) conductivity maps compared to conventional reconstruction methods. Representatively, in the healthy-brain in-silico phantom experiment, the proposed method yielded mean conductivity values of 1.97 ± 0.20 S/m for CSF, 0.33 ± 0.04 S/m for WM, and 0.52 ± 0.08 S/m for GM, which were closer to the ground-truth conductivity (2.00, 0.30, 0.50 S/m) than the integral-based method (2.56 ± 2.31, 0.39 ± 0.12, 0.68 ± 0.33 S/m). In-vivo ANN-based conductivity reconstructions were also of improved quality compared to conventional reconstructions and demonstrated network generalizability and robustness to in-vivo data and pathologies. The reported in-vivo brain conductivity values were in agreement with the literature. In addition, the proposed method was evaluated at various SNR levels (SNR = 10, 20, 40, and 58) and under repeatability conditions (eight acquisitions with the number of signal averages = 1). The preliminary investigations on brain tumor patient datasets suggest that the network trained on a simulated dataset can generalize to unforeseen in-vivo pathologies, thus demonstrating its potential for clinical applications.
Affiliation(s)
- Kyu-Jin Jung
- Department of Electrical and Electronic Engineering, Yonsei University, Seoul, Republic of Korea
- Stefano Mandija
- Computational Imaging Group for MR Therapy and Diagnostics, University Medical Center Utrecht, Utrecht, The Netherlands
- Department of Radiotherapy, University Medical Center Utrecht, Utrecht, The Netherlands
- Chuanjiang Cui
- Department of Electrical and Electronic Engineering, Yonsei University, Seoul, Republic of Korea
- Jun-Hyeong Kim
- Department of Electrical and Electronic Engineering, Yonsei University, Seoul, Republic of Korea
- Mohammed A. Al-masni
- Department of Artificial Intelligence, College of Software & Convergence Technology, Daeyang AI Center, Sejong University, Seoul, Republic of Korea
- Thierry G. Meerbothe
- Computational Imaging Group for MR Therapy and Diagnostics, University Medical Center Utrecht, Utrecht, The Netherlands
- Department of Radiotherapy, University Medical Center Utrecht, Utrecht, The Netherlands
- Mina Park
- Department of Radiology, Gangnam Severance Hospital, Yonsei University College of Medicine, Seoul, Republic of Korea
- Cornelis A. T. van den Berg
- Computational Imaging Group for MR Therapy and Diagnostics, University Medical Center Utrecht, Utrecht, The Netherlands
- Department of Radiotherapy, University Medical Center Utrecht, Utrecht, The Netherlands
- Dong-Hyun Kim
- Department of Electrical and Electronic Engineering, Yonsei University, Seoul, Republic of Korea
18
Wang S, Wu R, Li C, Zou J, Zhang Z, Liu Q, Xi Y, Zheng H. PARCEL: Physics-Based Unsupervised Contrastive Representation Learning for Multi-Coil MR Imaging. IEEE/ACM Trans Comput Biol Bioinform 2023; 20:2659-2670. [PMID: 36219669] [DOI: 10.1109/tcbb.2022.3213669]
Abstract
With the successful application of deep learning to magnetic resonance (MR) imaging, parallel imaging techniques based on neural networks have attracted wide attention. However, in the absence of high-quality, fully sampled datasets for training, the performance of these methods is limited, and the interpretability of the models is not strong enough. To tackle this issue, this paper proposes a Physics-bAsed unsupeRvised Contrastive rEpresentation Learning (PARCEL) method to speed up parallel MR imaging. Specifically, PARCEL has a parallel framework to contrastively learn two branches of model-based unrolling networks from augmented undersampled multi-coil k-space data. A sophisticated co-training loss with three essential components has been designed to guide the two networks in capturing the inherent features and representations of MR images. The final MR image is reconstructed with the trained contrastive networks. PARCEL was evaluated on two in vivo datasets and compared to five state-of-the-art methods. The results show that PARCEL is able to learn essential representations for accurate MR reconstruction without relying on fully sampled datasets. The code will be made available at https://github.com/ternencewu123/PARCEL.
19
Millard C, Chiew M. A Theoretical Framework for Self-Supervised MR Image Reconstruction Using Sub-Sampling via Variable Density Noisier2Noise. IEEE Trans Comput Imaging 2023; 9:707-720. [PMID: 37600280] [PMCID: PMC7614963] [DOI: 10.1109/tci.2023.3299212]
Abstract
In recent years, there has been attention on leveraging the statistical modeling capabilities of neural networks for reconstructing sub-sampled Magnetic Resonance Imaging (MRI) data. Most proposed methods assume the existence of a representative fully-sampled dataset and use fully-supervised training. However, for many applications, fully sampled training data is not available, and may be highly impractical to acquire. The development and understanding of self-supervised methods, which use only sub-sampled data for training, are therefore highly desirable. This work extends the Noisier2Noise framework, which was originally constructed for self-supervised denoising tasks, to variable density sub-sampled MRI data. We use the Noisier2Noise framework to analytically explain the performance of Self-Supervised Learning via Data Undersampling (SSDU), a recently proposed method that performs well in practice but until now lacked theoretical justification. Further, we propose two modifications of SSDU that arise as a consequence of the theoretical developments. Firstly, we propose partitioning the sampling set so that the subsets have the same type of distribution as the original sampling mask. Secondly, we propose a loss weighting that compensates for the sampling and partitioning densities. On the fastMRI dataset we show that these changes significantly improve SSDU's image restoration quality and robustness to the partitioning parameters.
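The two proposed modifications can be illustrated as below: the acquired variable-density sampling set is partitioned with a second mask drawn from the same kind of density, and the training loss is weighted to compensate for the sampling and partitioning densities. The density law and the weighting formula are stand-ins, not the paper's exact expressions.

```python
# Hedged sketch of variable-density partitioning and density-compensated loss weighting.
import numpy as np

def variable_density_prob(n, power=2.0, center_frac=0.08):
    """1D probability of sampling each column, denser near the k-space centre."""
    k = np.abs(np.arange(n) - n // 2) / (n // 2)
    p = (1 - k) ** power
    p[np.abs(np.arange(n) - n // 2) < center_frac * n / 2] = 1.0   # fully sampled centre
    return p

n = 128
p_omega = variable_density_prob(n)                  # acquisition density
omega = np.random.rand(n) < p_omega                 # acquired columns
p_lambda = variable_density_prob(n, power=1.0)      # partition drawn from the same family
lam = omega & (np.random.rand(n) < p_lambda)        # subset shown to the network

loss_set = omega & ~lam                             # loss computed on omega \ lambda
weights = 1.0 / np.maximum(p_omega * (1 - p_lambda), 1e-3)   # density compensation
print(omega.sum(), lam.sum(), loss_set.sum(), weights[loss_set].mean())
```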
Collapse
Affiliation(s)
- Charles Millard
- the Wellcome Centre for Integrative Neuroimaging, FMRIB, Nuffield Department of Clinical Neurosciences, University of Oxford, OX3 9DU Oxford, U.K
| | - Mark Chiew
- the Wellcome Centre for Integrative Neuroimaging, FMRIB, Nuffield Department of Clinical Neurosciences, University of Oxford, OX3 9DU Oxford, U.K., and with the Department of Medical Biophysics, University of Toronto, Toronto, ON M5S 1A1, Canada, and also with Physical Sciences, Sunnybrook Research Institute, Toronto, ON M4N 3M5, Canada
| |
Collapse
|
20
|
Güngör A, Dar SU, Öztürk Ş, Korkmaz Y, Bedel HA, Elmas G, Ozbey M, Çukur T. Adaptive diffusion priors for accelerated MRI reconstruction. Med Image Anal 2023; 88:102872. [PMID: 37384951 DOI: 10.1016/j.media.2023.102872] [Citation(s) in RCA: 41] [Impact Index Per Article: 20.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/31/2022] [Revised: 04/13/2023] [Accepted: 06/12/2023] [Indexed: 07/01/2023]
Abstract
Deep MRI reconstruction is commonly performed with conditional models that de-alias undersampled acquisitions to recover images consistent with fully-sampled data. Since conditional models are trained with knowledge of the imaging operator, they can show poor generalization across variable operators. Unconditional models instead learn generative image priors decoupled from the operator to improve reliability against domain shifts related to the imaging operator. Recent diffusion models are particularly promising given their high sample fidelity. Nevertheless, inference with a static image prior can perform suboptimally. Here we propose the first adaptive diffusion prior for MRI reconstruction, AdaDiff, to improve performance and reliability against domain shifts. AdaDiff leverages an efficient diffusion prior trained via adversarial mapping over large reverse diffusion steps. A two-phase reconstruction is executed following training: a rapid-diffusion phase that produces an initial reconstruction with the trained prior, and an adaptation phase that further refines the result by updating the prior to minimize data-consistency loss. Demonstrations on multi-contrast brain MRI clearly indicate that AdaDiff outperforms competing conditional and unconditional methods under domain shifts, and achieves superior or on par within-domain performance.
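As a rough illustration of the two-phase inference described above, the adaptation phase can be viewed as gradient-based refinement against a data-consistency loss. For brevity the sketch below optimizes the image estimate itself rather than the weights of the diffusion prior, which is a simplification of the paper's procedure; names and the simple Adam loop are assumptions.

```python
# Illustrative sketch (not the released AdaDiff code) of refining an
# initial diffusion-prior reconstruction against the acquired k-space.
import torch

def adapt_to_measurements(x_init, ksp_meas, mask, n_steps=50, lr=1e-3):
    """x_init: complex image from the prior, (H, W); ksp_meas: measured
    undersampled k-space; mask: float sampling mask."""
    # optimise real/imaginary channels as a real tensor for broad compatibility
    xr = torch.view_as_real(x_init).clone().requires_grad_(True)
    opt = torch.optim.Adam([xr], lr=lr)
    for _ in range(n_steps):
        opt.zero_grad()
        x = torch.view_as_complex(xr)
        pred_ksp = torch.fft.fft2(x, norm="ortho") * mask
        loss = torch.mean(torch.abs(pred_ksp - ksp_meas) ** 2)  # data consistency
        loss.backward()
        opt.step()
    return torch.view_as_complex(xr.detach())

# toy usage
H = W = 64
x0 = torch.randn(H, W, dtype=torch.complex64)
mask = (torch.rand(H, W) < 0.25).float()
ksp = torch.fft.fft2(x0, norm="ortho") * mask
print(adapt_to_measurements(x0, ksp, mask).shape)
```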
Collapse
Affiliation(s)
- Alper Güngör
- Department of Electrical and Electronics Engineering, Bilkent University, Ankara 06800, Turkey; National Magnetic Resonance Research Center (UMRAM), Bilkent University, Ankara 06800, Turkey; ASELSAN Research Center, Ankara 06200, Turkey
| | - Salman Uh Dar
- Department of Electrical and Electronics Engineering, Bilkent University, Ankara 06800, Turkey; National Magnetic Resonance Research Center (UMRAM), Bilkent University, Ankara 06800, Turkey; Department of Internal Medicine III, Heidelberg University Hospital, Heidelberg 69120, Germany
| | - Şaban Öztürk
- Department of Electrical and Electronics Engineering, Bilkent University, Ankara 06800, Turkey; National Magnetic Resonance Research Center (UMRAM), Bilkent University, Ankara 06800, Turkey; Department of Electrical and Electronics Engineering, Amasya University, Amasya 05100, Turkey
| | - Yilmaz Korkmaz
- Department of Electrical and Electronics Engineering, Bilkent University, Ankara 06800, Turkey; National Magnetic Resonance Research Center (UMRAM), Bilkent University, Ankara 06800, Turkey
| | - Hasan A Bedel
- Department of Electrical and Electronics Engineering, Bilkent University, Ankara 06800, Turkey; National Magnetic Resonance Research Center (UMRAM), Bilkent University, Ankara 06800, Turkey
| | - Gokberk Elmas
- Department of Electrical and Electronics Engineering, Bilkent University, Ankara 06800, Turkey; National Magnetic Resonance Research Center (UMRAM), Bilkent University, Ankara 06800, Turkey
| | - Muzaffer Ozbey
- Department of Electrical and Electronics Engineering, Bilkent University, Ankara 06800, Turkey; National Magnetic Resonance Research Center (UMRAM), Bilkent University, Ankara 06800, Turkey
| | - Tolga Çukur
- Department of Electrical and Electronics Engineering, Bilkent University, Ankara 06800, Turkey; National Magnetic Resonance Research Center (UMRAM), Bilkent University, Ankara 06800, Turkey; Neuroscience Program, Bilkent University, Ankara 06800, Turkey.
| |
Collapse
|
21
|
Huang MX, Angeles-Quinto A, Robb-Swan A, De-la-Garza BG, Huang CW, Cheng CK, Hesselink JR, Bigler ED, Wilde EA, Vaida F, Troyer EA, Max JE. Assessing Pediatric Mild Traumatic Brain Injury and Its Recovery Using Resting-State Magnetoencephalography Source Magnitude Imaging and Machine Learning. J Neurotrauma 2023; 40:1112-1129. [PMID: 36884305 PMCID: PMC10259613 DOI: 10.1089/neu.2022.0220] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 03/09/2023] Open
Abstract
The objectives of this machine-learning (ML) resting-state magnetoencephalography (rs-MEG) study involving children with mild traumatic brain injury (mTBI) and orthopedic injury (OI) controls were to define a neural injury signature of mTBI and to delineate the pattern(s) of neural injury that determine behavioral recovery. Children ages 8-15 years with mTBI (n = 59) and OI (n = 39) from consecutive admissions to an emergency department were studied prospectively for parent-rated post-concussion symptoms (PCS) at: 1) baseline (average of 3 weeks post-injury) to measure pre-injury symptoms and also concurrent symptoms; and 2) 3 months post-injury. rs-MEG was conducted at the baseline assessment. The ML algorithm predicted cases of mTBI versus OI with sensitivity of 95.5 ± 1.6% and specificity of 90.2 ± 2.7% at 3 weeks post-injury for the combined delta-gamma frequencies. The sensitivity and specificity were significantly better (p < 0.0001) for the combined delta-gamma frequencies compared with the delta-only and gamma-only frequencies. There were also spatial differences in rs-MEG activity between mTBI and OI groups in both delta and gamma bands in the frontal and temporal lobes, as well as more widespread differences in the brain. The ML algorithm accounted for 84.5% of the variance in predicting recovery measured by PCS changes between 3 weeks and 3 months post-injury in the mTBI group, and this was significantly lower (p < 10^-4) in the OI group (65.6%). Frontal lobe pole (higher) gamma activity was significantly (p < 0.001) associated with (worse) PCS recovery exclusively in the mTBI group. These findings demonstrate a neural injury signature of pediatric mTBI and patterns of mTBI-induced neural injury related to behavioral recovery.
Collapse
Affiliation(s)
- Ming-Xiong Huang
- Department of Radiology, University of California, San Diego, California, USA
- Radiology and Research Services, VA San Diego Healthcare System, San Diego, California, USA
| | - Annemarie Angeles-Quinto
- Department of Radiology, University of California, San Diego, California, USA
- Radiology and Research Services, VA San Diego Healthcare System, San Diego, California, USA
| | - Ashley Robb-Swan
- Department of Radiology, University of California, San Diego, California, USA
- Radiology and Research Services, VA San Diego Healthcare System, San Diego, California, USA
| | | | - Charles W. Huang
- Department of Bioengineering, Stanford University, Stanford, California, USA
| | - Chung-Kuan Cheng
- Department of Computer Science and Engineering, University of California, San Diego, California, USA
| | - John R. Hesselink
- Department of Radiology, University of California, San Diego, California, USA
| | - Erin D. Bigler
- Department of Neurology, University of Utah, Salt Lake City, Utah, USA
| | | | - Florin Vaida
- Herbert Wertheim School of Public Health, Division of Biostatistics and Bioinformatics, University of California, San Diego, California, USA
| | - Emily A. Troyer
- Department of Psychiatry, University of California, San Diego, California, USA
| | - Jeffrey E. Max
- Department of Psychiatry, University of California, San Diego, California, USA
- Department of Psychiatry, Rady Children's Hospital, San Diego, California, USA
| |
Collapse
|
22
|
Baik SM, Hong KS, Park DJ. Application and utility of boosting machine learning model based on laboratory test in the differential diagnosis of non-COVID-19 pneumonia and COVID-19. Clin Biochem 2023; 118:110584. [PMID: 37211061 PMCID: PMC10197431 DOI: 10.1016/j.clinbiochem.2023.05.003] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/08/2023] [Revised: 05/06/2023] [Accepted: 05/17/2023] [Indexed: 05/23/2023]
Abstract
BACKGROUND Non-Coronavirus disease 2019 (COVID-19) pneumonia and COVID-19 have similar clinical features but last for different periods, and consequently, require different treatment protocols. Therefore, they must be differentially diagnosed. This study uses artificial intelligence (AI) to classify the two forms of pneumonia using mainly laboratory test data. METHODS Various AI models are applied, including boosting models known for deftly solving classification problems. In addition, important features that affect the classification prediction performance are identified using the feature importance technique and SHapley Additive exPlanations method. Despite the data imbalance, the developed model exhibits robust performance. RESULTS eXtreme gradient boosting, category boosting, and light gradient boosted machine yield an area under the receiver operating characteristic curve of 0.99 or more, accuracy of 0.96-0.97, and F1-score of 0.96-0.97. In addition, D-dimer, eosinophil, glucose, aspartate aminotransferase, and basophil, which are rather nonspecific laboratory test results, are demonstrated to be important features in differentiating the two disease groups. CONCLUSIONS Boosting models, which are known to excel at classification with categorical data, also excel at building classification models from numerical data such as laboratory test results. Finally, the proposed model can be applied in various fields to solve classification problems.
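A minimal stand-in for the kind of pipeline described above, using scikit-learn's gradient boosting and permutation importance in place of the XGBoost/CatBoost/LightGBM models and SHAP analysis used in the study; the data and feature meanings are synthetic placeholders.

```python
# Sketch of a boosted-tree classifier on tabular laboratory-test-like data,
# with a feature-importance step to rank the inputs. Illustrative only.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score, f1_score

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 5))                      # stand-ins for lab values
y = (X[:, 0] + 0.5 * X[:, 2] + rng.normal(scale=0.5, size=500) > 0).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)
clf = GradientBoostingClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)

proba = clf.predict_proba(X_te)[:, 1]
print("AUC:", roc_auc_score(y_te, proba),
      "F1:", f1_score(y_te, (proba > 0.5).astype(int)))

# rank features by how much shuffling each one degrades performance
imp = permutation_importance(clf, X_te, y_te, n_repeats=10, random_state=0)
print("importance per feature:", imp.importances_mean)
```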
Collapse
Affiliation(s)
- Seung Min Baik
- Division of Critical Care Medicine, Department of Surgery, Ewha Womans University Mokdong Hospital, Ewha Womans University College of Medicine, Seoul, Korea; Department of Surgery, Korea University College of Medicine, Seoul, Korea
| | - Kyung Sook Hong
- Division of Critical Care Medicine, Department of Surgery, Ewha Womans University Seoul Hospital, Ewha Womans University College of Medicine, Seoul, Korea
| | - Dong Jin Park
- Department of Laboratory Medicine, Eunpyeong St. Mary's Hospital, College of Medicine, The Catholic University of Korea, Korea.
| |
Collapse
|
23
|
Wu Z, Liao W, Yan C, Zhao M, Liu G, Ma N, Li X. Deep learning based MRI reconstruction with transformer. COMPUTER METHODS AND PROGRAMS IN BIOMEDICINE 2023; 233:107452. [PMID: 36924533 DOI: 10.1016/j.cmpb.2023.107452] [Citation(s) in RCA: 14] [Impact Index Per Article: 7.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 10/17/2022] [Revised: 02/19/2023] [Accepted: 02/27/2023] [Indexed: 06/18/2023]
Abstract
Magnetic resonance imaging (MRI) has become one of the most powerful imaging techniques in medical diagnosis, yet the prolonged scanning time remains a bottleneck for its application. Reconstruction methods based on compressed sensing (CS) have made progress in reducing this cost by acquiring fewer points in k-space. Traditional CS methods impose restrictions from different sparse domains to regularize the optimization, which always requires balancing time against accuracy. Neural network techniques enable learning a better prior from sample pairs and generating the results in an analytic way. In this paper, we propose a deep learning based reconstruction method to restore high-quality MRI images from undersampled k-space data in an end-to-end style. Unlike prior work adopting convolutional neural networks (CNNs), the advanced Swin Transformer is used as the backbone of our model, which has proven powerful in extracting deep image features. In addition, we incorporate k-space consistency into the output to further improve quality. We compared our model with several reconstruction methods and variants, and the experimental results show that our model achieves the best results at low sampling rates. The source code of KTMR can be acquired at https://github.com/BITwzl/KTMR.
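The k-space consistency step mentioned above is, in its standard form, a replacement (or soft blend) of network-predicted frequencies with the measured ones wherever data were actually acquired. The sketch below shows that standard operation and is not taken from the KTMR repository.

```python
# Minimal sketch of a k-space data-consistency step after a network output.
import torch

def kspace_consistency(x_net, ksp_meas, mask, lam=1.0):
    """x_net: network-reconstructed complex image (H, W);
    ksp_meas: measured undersampled k-space; mask: 1 where measured.
    lam=1 gives hard replacement; 0 < lam < 1 gives a soft blend."""
    ksp_net = torch.fft.fft2(x_net, norm="ortho")
    ksp_dc = (1 - mask) * ksp_net + mask * ((1 - lam) * ksp_net + lam * ksp_meas)
    return torch.fft.ifft2(ksp_dc, norm="ortho")

# toy usage
x = torch.randn(64, 64, dtype=torch.complex64)
mask = (torch.rand(64, 64) < 0.3).float()
ksp = torch.fft.fft2(x, norm="ortho") * mask
print(kspace_consistency(x, ksp, mask).dtype)
```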
Collapse
Affiliation(s)
- Zhengliang Wu
- School of Computer Science & Technology, Beijing Institute of Technology, No. 5, South Street, Zhongguancun, Beijing, 100081, China.
| | - Weibin Liao
- School of Computer Science & Technology, Beijing Institute of Technology, No. 5, South Street, Zhongguancun, Beijing, 100081, China
| | - Chao Yan
- School of Computer Science & Technology, Beijing Institute of Technology, No. 5, South Street, Zhongguancun, Beijing, 100081, China
| | - Mangsuo Zhao
- Department of Neurology, Yuquan Hospital, School of Clinical Medicine, Tsinghua University, Beijing, 100039, China
| | - Guowen Liu
- Big Data and Engineering Research Center, Beijing Children's Hospital, Capital Medical University, Department of Echocardiography, Beijing, 100045, China; Beijing Advanced Innovation Center for Big Data-Based Precision Medicine, Beihang University & Capital Medical University, Beijing, 100083, China
| | - Ning Ma
- Big Data and Engineering Research Center, Beijing Children's Hospital, Capital Medical University, Department of Echocardiography, Beijing, 100045, China; Beijing Advanced Innovation Center for Big Data-Based Precision Medicine, Beihang University & Capital Medical University, Beijing, 100083, China.
| | - Xuesong Li
- School of Computer Science & Technology, Beijing Institute of Technology, No. 5, South Street, Zhongguancun, Beijing, 100081, China.
| |
Collapse
|
24
|
Hossain MB, Kwon KC, Shinde RK, Imtiaz SM, Kim N. A Hybrid Residual Attention Convolutional Neural Network for Compressed Sensing Magnetic Resonance Image Reconstruction. Diagnostics (Basel) 2023; 13:diagnostics13071306. [PMID: 37046524 PMCID: PMC10093476 DOI: 10.3390/diagnostics13071306] [Citation(s) in RCA: 2] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/20/2023] [Revised: 03/20/2023] [Accepted: 03/29/2023] [Indexed: 04/03/2023] Open
Abstract
We propose a dual-domain deep learning technique for accelerating compressed sensing magnetic resonance image reconstruction. An advanced convolutional neural network with residual connectivity and an attention mechanism was developed for the frequency and image domains. First, the sensor domain subnetwork estimates the unmeasured frequencies of k-space to reduce aliasing artifacts. Second, the image domain subnetwork performs a pixel-wise operation to remove blur and noisy artifacts. The skip connections efficiently concatenate the feature maps to alleviate the vanishing gradient problem. An attention gate in each decoder layer enhances network generalizability and speeds up image reconstruction by eliminating irrelevant activations. The proposed technique reconstructs real-valued clinical images from sparsely sampled k-space data that are identical to the reference images. The performance of this novel approach was compared with state-of-the-art direct mapping, single-domain, and multi-domain methods. With acceleration factors (AFs) of 4 and 5, our method improved the mean peak signal-to-noise ratio (PSNR) by 8.67 and 9.23, respectively, compared with the single-domain Unet model; similarly, our approach increased the average PSNR by 3.72 and 4.61, respectively, compared with the multi-domain W-net. Remarkably, using an AF of 6, it enhanced the PSNR by 9.87 ± 1.55 and 6.60 ± 0.38 compared with Unet and W-net, respectively.
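A schematic of the dual-domain flow described above (k-space sub-network, inverse FFT, image sub-network), using tiny placeholder networks and assumed shapes rather than the published architecture.

```python
# Sketch of a dual-domain reconstruction pass: a sensor-domain network
# estimates missing frequencies, then an image-domain network refines the
# image after the inverse FFT. Illustrative placeholder networks only.
import torch
import torch.nn as nn

class TinyConv(nn.Module):
    def __init__(self, ch=2):
        super().__init__()
        self.net = nn.Sequential(nn.Conv2d(ch, 32, 3, padding=1), nn.ReLU(),
                                 nn.Conv2d(32, ch, 3, padding=1))
    def forward(self, x):
        return x + self.net(x)          # residual connection

def dual_domain_forward(ksp_under, mask, knet, inet):
    """ksp_under: (B, H, W) complex undersampled k-space; mask: (B, H, W)."""
    k2c = lambda k: torch.view_as_real(k).permute(0, 3, 1, 2)               # (B,2,H,W)
    c2k = lambda c: torch.view_as_complex(c.permute(0, 2, 3, 1).contiguous())
    ksp_filled = c2k(knet(k2c(ksp_under)))                      # sensor-domain step
    ksp_filled = mask * ksp_under + (1 - mask) * ksp_filled     # keep measured samples
    img = torch.fft.ifft2(ksp_filled, norm="ortho")
    return c2k(inet(k2c(img)))                                  # image-domain step

knet, inet = TinyConv(), TinyConv()
ksp = torch.randn(1, 64, 64, dtype=torch.complex64)
mask = (torch.rand(1, 64, 64) < 0.3).float()
print(dual_domain_forward(ksp * mask, mask, knet, inet).shape)
```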
Collapse
|
25
|
Lyu J, Li Y, Yan F, Chen W, Wang C, Li R. Multi-channel GAN-based calibration-free diffusion-weighted liver imaging with simultaneous coil sensitivity estimation and reconstruction. Front Oncol 2023; 13:1095637. [PMID: 36845688 PMCID: PMC9945270 DOI: 10.3389/fonc.2023.1095637] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/11/2022] [Accepted: 01/09/2023] [Indexed: 02/10/2023] Open
Abstract
INTRODUCTION Diffusion-weighted imaging (DWI) with parallel reconstruction may suffer from a mismatch between the coil calibration scan and the imaging scan due to motion, especially for abdominal imaging. METHODS This study aimed to construct an iterative multichannel generative adversarial network (iMCGAN)-based framework for simultaneous sensitivity map estimation and calibration-free image reconstruction. The study included 106 healthy volunteers and 10 patients with tumors. RESULTS The performance of iMCGAN was evaluated in healthy participants and patients and compared with the SAKE, ALOHA-net, and DeepcomplexMRI reconstructions. The peak signal-to-noise ratio (PSNR), structural similarity index measure (SSIM), root mean squared error (RMSE), and histograms of apparent diffusion coefficient (ADC) maps were calculated for assessing image qualities. The proposed iMCGAN outperformed the other methods in terms of the PSNR (iMCGAN: 41.82 ± 2.14; SAKE: 17.38 ± 1.78; ALOHA-net: 20.43 ± 2.11 and DeepcomplexMRI: 39.78 ± 2.78) for b = 800 DWI with an acceleration factor of 4. Moreover, the ghosting artifacts that arise in SENSE reconstruction from the mismatch between the DW image and the sensitivity maps were avoided using the iMCGAN model. DISCUSSION The current model iteratively refined the sensitivity maps and the reconstructed images without additional acquisitions. Thus, the quality of the reconstructed image was improved, and the aliasing artifact was alleviated when motion occurred during the imaging procedure.
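The joint estimation above is built around the multi-coil (SENSE-type) forward model, sketched below with toy sensitivities; the shapes and the synthetic phase-ramp coil maps are assumptions for illustration only.

```python
# Minimal sketch of the multi-coil forward model: image times coil
# sensitivities, Fourier transform, then undersampling.
import torch

def multicoil_forward(img, sens, mask):
    """img: (H, W) complex image; sens: (C, H, W) coil sensitivities;
    mask: (H, W) sampling mask. Returns (C, H, W) undersampled k-space."""
    coil_imgs = sens * img                      # per-coil weighted image
    ksp = torch.fft.fft2(coil_imgs, norm="ortho")
    return mask * ksp

H = W = 64
img = torch.randn(H, W, dtype=torch.complex64)
phase = torch.linspace(0, 3.14, H).view(-1, 1).expand(H, W)
sens = torch.stack([torch.exp(1j * c * phase) for c in range(4)]).to(torch.complex64)
mask = (torch.rand(H, W) < 0.25).float()
print(multicoil_forward(img, sens, mask).shape)   # (4, 64, 64)
```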
Collapse
Affiliation(s)
- Jun Lyu
- School of Computer and Control Engineering, Yantai University, Yantai, Shandong, China
| | - Yan Li
- Department of Radiology, Ruijin Hospital, Shanghai Jiao Tong University School of Medicine, Shanghai, China
- College of Health Science and Technology, Shanghai Jiao Tong University School of Medicine, Shanghai, China
| | - Fuhua Yan
- Department of Radiology, Ruijin Hospital, Shanghai Jiao Tong University School of Medicine, Shanghai, China
- College of Health Science and Technology, Shanghai Jiao Tong University School of Medicine, Shanghai, China
| | - Weibo Chen
- Philips Healthcare (China), Shanghai, China
| | - Chengyan Wang
- Human Phenome Institute, Fudan University, Shanghai, China
| | - Ruokun Li
- Department of Radiology, Ruijin Hospital, Shanghai Jiao Tong University School of Medicine, Shanghai, China
- College of Health Science and Technology, Shanghai Jiao Tong University School of Medicine, Shanghai, China
| |
Collapse
|
26
|
Hossain MB, Kwon KC, Imtiaz SM, Nam OS, Jeon SH, Kim N. De-Aliasing and Accelerated Sparse Magnetic Resonance Image Reconstruction Using Fully Dense CNN with Attention Gates. Bioengineering (Basel) 2022; 10:22. [PMID: 36671594 PMCID: PMC9854709 DOI: 10.3390/bioengineering10010022] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.7] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/25/2022] [Revised: 12/19/2022] [Accepted: 12/19/2022] [Indexed: 12/24/2022] Open
Abstract
When sparsely sampled data are used to accelerate magnetic resonance imaging (MRI), conventional reconstruction approaches produce significant artifacts that obscure the content of the image. To remove aliasing artifacts, we propose an advanced convolutional neural network (CNN) called fully dense attention CNN (FDA-CNN). We updated the Unet model with fully dense connectivity and an attention mechanism for MRI reconstruction. The main benefit of FDA-CNN is that an attention gate in each decoder layer enhances learning by focusing on the relevant image features and improves network generalization by reducing irrelevant activations. Moreover, densely interconnected convolutional layers reuse the feature maps and prevent the vanishing gradient problem. Additionally, we implement a new, efficient under-sampling pattern in the phase-encoding direction that takes low and high frequencies from k-space both randomly and non-randomly. The performance of FDA-CNN was evaluated quantitatively and qualitatively with three different sub-sampling masks and datasets. Compared with five current deep learning-based and two compressed sensing MRI reconstruction techniques, the proposed method performed better as it reconstructed smoother and brighter images. Furthermore, FDA-CNN improved the mean PSNR by 2 dB, SSIM by 0.35, and VIFP by 0.37 compared with Unet for the acceleration factor of 5.
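A minimal sketch of the sampling idea described above, i.e. a fully sampled low-frequency block plus randomly chosen high-frequency lines along the phase-encoding direction; the parameters below are assumptions, not the authors' exact mask generator.

```python
# Sketch of a 1D phase-encoding undersampling mask combining non-random
# low frequencies with randomly selected high frequencies.
import numpy as np

def phase_undersampling_mask(n_pe=256, accel=5, center_frac=0.08, seed=0):
    rng = np.random.default_rng(seed)
    n_keep = n_pe // accel
    n_center = int(round(center_frac * n_pe))
    mask = np.zeros(n_pe, dtype=bool)
    c = n_pe // 2
    mask[c - n_center // 2: c + (n_center + 1) // 2] = True     # non-random low freqs
    remaining = max(n_keep - mask.sum(), 0)
    candidates = np.flatnonzero(~mask)
    mask[rng.choice(candidates, size=remaining, replace=False)] = True  # random high freqs
    return mask                                                  # applied along phase direction

m = phase_undersampling_mask()
print(m.sum(), "of", m.size, "phase-encoding lines acquired")
```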
Collapse
Affiliation(s)
- Md. Biddut Hossain
- School of Information and Communication Engineering, Chungbuk National University, Cheongju-si 28644, Chungcheongbuk-do, Republic of Korea
| | - Ki-Chul Kwon
- School of Information and Communication Engineering, Chungbuk National University, Cheongju-si 28644, Chungcheongbuk-do, Republic of Korea
| | - Shariar Md Imtiaz
- School of Information and Communication Engineering, Chungbuk National University, Cheongju-si 28644, Chungcheongbuk-do, Republic of Korea
| | - Oh-Seung Nam
- School of Information and Communication Engineering, Chungbuk National University, Cheongju-si 28644, Chungcheongbuk-do, Republic of Korea
| | - Seok-Hee Jeon
- Department of Electronics Engineering, Incheon National University, 119 Academy-ro, Yeonsu-gu, Incheon 22012, Gyeonggi-do, Republic of Korea
| | - Nam Kim
- School of Information and Communication Engineering, Chungbuk National University, Cheongju-si 28644, Chungcheongbuk-do, Republic of Korea
| |
Collapse
|
27
|
Wave-Encoded Model-Based Deep Learning for Highly Accelerated Imaging with Joint Reconstruction. BIOENGINEERING (BASEL, SWITZERLAND) 2022; 9:bioengineering9120736. [PMID: 36550942 PMCID: PMC9774601 DOI: 10.3390/bioengineering9120736] [Citation(s) in RCA: 8] [Impact Index Per Article: 2.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Subscribe] [Scholar Register] [Received: 10/09/2022] [Revised: 11/21/2022] [Accepted: 11/25/2022] [Indexed: 12/03/2022]
Abstract
A recently introduced model-based deep learning (MoDL) technique successfully incorporates convolutional neural network (CNN)-based regularizers into physics-based parallel imaging reconstruction using a small number of network parameters. Wave-controlled aliasing in parallel imaging (CAIPI) is an emerging parallel imaging method that accelerates imaging acquisition by employing sinusoidal gradients in the phase- and slice/partition-encoding directions during the readout to take better advantage of 3D coil sensitivity profiles. We propose wave-encoded MoDL (wave-MoDL) combining the wave-encoding strategy with unrolled network constraints for highly accelerated 3D imaging while enforcing data consistency. We extend wave-MoDL to reconstruct multicontrast data with CAIPI sampling patterns to leverage similarity between multiple images to improve the reconstruction quality. We further exploit this to enable rapid quantitative imaging using an interleaved look-locker acquisition sequence with T2 preparation pulse (3D-QALAS). Wave-MoDL enables a 40 s MPRAGE acquisition at 1 mm resolution at 16-fold acceleration. For quantitative imaging, wave-MoDL permits a 1:50 min acquisition for T1, T2, and proton density mapping at 1 mm resolution at 12-fold acceleration, from which contrast-weighted images can be synthesized as well. In conclusion, wave-MoDL allows rapid MR acquisition and high-fidelity image reconstruction and may facilitate clinical and neuroscientific applications by incorporating unrolled neural networks into wave-CAIPI reconstruction.
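One MoDL-style unrolled iteration alternates a CNN regularizer with a data-consistency sub-problem. The sketch below uses a single-coil Cartesian stand-in where that sub-problem has a closed-form solution in k-space; the wave-encoded, multi-coil operator in the paper requires an iterative (e.g. conjugate-gradient) solver instead.

```python
# Simplified sketch of one unrolled MoDL iteration:
#   x_{k+1} = argmin_x ||M F x - b||^2 + lam * ||x - D(x_k)||^2
# with D a CNN denoiser; closed form because M F is diagonal after the FFT.
import torch

def modl_iteration(x_k, ksp_meas, mask, denoiser, lam=0.05):
    z = denoiser(x_k)                               # CNN regulariser output
    k_z = torch.fft.fft2(z, norm="ortho")
    k_dc = (mask * ksp_meas + lam * k_z) / (mask + lam)  # per-frequency solution
    return torch.fft.ifft2(k_dc, norm="ortho")

# toy usage with an identity "denoiser"
x = torch.randn(64, 64, dtype=torch.complex64)
mask = (torch.rand(64, 64) < 0.25).float()
ksp = torch.fft.fft2(x, norm="ortho") * mask
print(modl_iteration(x, ksp, mask, denoiser=lambda v: v).shape)
```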
Collapse
|
28
|
Grandinetti J, Gao Y, Gonzalez Y, Deng J, Shen C, Jia X. MR image reconstruction from undersampled data for image-guided radiation therapy using a patient-specific deep manifold image prior. Front Oncol 2022; 12:1013783. [PMID: 36479074 PMCID: PMC9720169 DOI: 10.3389/fonc.2022.1013783] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/07/2022] [Accepted: 10/31/2022] [Indexed: 06/13/2024] Open
Abstract
Introduction Recent advancements in radiotherapy (RT) have allowed for the integration of a Magnetic Resonance (MR) imaging scanner with a medical linear accelerator to use MR images for image guidance to position tumors against the treatment beam. Undersampling in MR acquisition is desired to accelerate the imaging process, but unavoidably deteriorates the reconstructed image quality. In RT, a high-quality MR image of a patient is available for treatment planning. In light of this unique clinical scenario, we proposed to exploit the patient-specific image prior to facilitate high-quality MR image reconstruction. Methods Utilizing the planning MR image, we established a deep auto-encoder to form a manifold of image patches of the patient. The trained manifold was then incorporated as a regularization to restore MR images of the same patient from undersampled data. We performed a simulation study using a patient case, a real patient study with three liver cancer patient cases, and a phantom experimental study using data acquired on an in-house small animal MR scanner. We compared the performance of the proposed method with those of the Fourier transform method, a tight-frame based Compressive Sensing method, and a deep learning method with a patient-generic manifold as the image prior. Results In the simulation study with 12.5% radial undersampling and 15% increase in noise, our method improved peak-signal-to-noise ratio by 4.46dB and structural similarity index measure by 28% compared to the patient-generic manifold method. In the experimental study, our method outperformed others by producing reconstructions of visually improved image quality.
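A conceptual sketch of reconstruction with a patient-specific manifold prior: patches of the current estimate are penalized by their distance to an auto-encoder trained on the planning image, alongside the usual k-space fidelity term. The formulation and names below are assumptions for illustration, not the authors' implementation.

```python
# Sketch of a reconstruction objective with an auto-encoder manifold prior.
import torch
import torch.nn.functional as F

def manifold_recon_loss(x, ksp_meas, mask, autoencoder, patch=16, mu=0.1):
    """x: real-valued image estimate (1, 1, H, W); autoencoder maps a batch
    of flattened patches to their projection on the learned manifold."""
    ksp = torch.fft.fft2(x.squeeze(1).to(torch.complex64), norm="ortho") * mask
    fidelity = torch.mean(torch.abs(ksp - ksp_meas) ** 2)          # k-space fidelity
    patches = F.unfold(x, kernel_size=patch, stride=patch).transpose(1, 2)
    prior = torch.mean((patches - autoencoder(patches)) ** 2)      # distance to manifold
    return fidelity + mu * prior

# toy usage with an identity "auto-encoder"
x = torch.rand(1, 1, 64, 64)
mask = (torch.rand(1, 64, 64) < 0.25).float()
ksp = torch.fft.fft2(x.squeeze(1).to(torch.complex64), norm="ortho") * mask
print(manifold_recon_loss(x, ksp, mask, autoencoder=lambda p: p))
```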
Collapse
Affiliation(s)
| | | | | | | | | | - Xun Jia
- Innovative Technology of Radiotherapy Computations and Hardware (iTORCH) Laboratory, Department of Radiation Oncology, University of Texas Southwestern Medical Center, Dallas, TX, United States
| |
Collapse
|
29
|
Oh C, Chung JY, Han Y. An End-to-End Recurrent Neural Network for Radial MR Image Reconstruction. SENSORS (BASEL, SWITZERLAND) 2022; 22:7277. [PMID: 36236376 PMCID: PMC9572393 DOI: 10.3390/s22197277] [Citation(s) in RCA: 4] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Figures] [Subscribe] [Scholar Register] [Received: 08/25/2022] [Revised: 09/20/2022] [Accepted: 09/23/2022] [Indexed: 06/16/2023]
Abstract
Recent advances in deep learning have contributed greatly to the field of parallel MR imaging, where a reduced amount of k-space data are acquired to accelerate imaging time. In our previous work, we have proposed a deep learning method to reconstruct MR images directly from k-space data acquired with Cartesian trajectories. However, MRI utilizes various non-Cartesian trajectories, such as radial trajectories, with various numbers of multi-channel RF coils according to the purpose of an MRI scan. Thus, it is important for a reconstruction network to efficiently unfold aliasing artifacts due to undersampling and to combine multi-channel k-space data into single-channel data. In this work, a neural network named 'ETER-net' is utilized to reconstruct an MR image directly from k-space data acquired with Cartesian and non-Cartesian trajectories and multi-channel RF coils. In the proposed image reconstruction network, the domain transform network converts k-space data into a rough image, which is then refined in the following network to reconstruct a final image. We also analyze loss functions including adversarial and perceptual losses to improve the network performance. For experiments, we acquired k-space data at a 3T MRI scanner with Cartesian and radial trajectories to show the learning mechanism of the direct mapping relationship between the k-space and the corresponding image by the proposed network and to demonstrate the practical applications. According to our experiments, the proposed method showed satisfactory performance in reconstructing images from undersampled single- or multi-channel k-space data with reduced image artifacts. In conclusion, the proposed method is a deep-learning-based MR reconstruction network, which can be used as a unified solution for parallel MRI, where k-space data are acquired with various scanning trajectories.
Collapse
Affiliation(s)
- Changheun Oh
- Neuroscience Research Institute, Gachon University, Incheon 21565, Korea
| | - Jun-Young Chung
- Department of Neuroscience, College of Medicine, Gachon University, Incheon 21565, Korea
| | - Yeji Han
- Department of Biomedical Engineering, Gachon University, Incheon 21936, Korea
| |
Collapse
|
30
|
Lu G, Zhang Y, Wang W, Miao L, Mou W. Machine Learning and Deep Learning CT-Based Models for Predicting the Primary Central Nervous System Lymphoma and Glioma Types: A Multicenter Retrospective Study. Front Neurol 2022; 13:905227. [PMID: 36110392 PMCID: PMC9469735 DOI: 10.3389/fneur.2022.905227] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.7] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/26/2022] [Accepted: 06/23/2022] [Indexed: 11/13/2022] Open
Abstract
Purpose and Background Distinguishing primary central nervous system lymphoma (PCNSL) and glioma on computed tomography (CT) is an important task since treatment options differ vastly between the two diseases. This study aims to explore various machine learning and deep learning methods based on radiomic features extracted from CT scans and an end-to-end convolutional neural network (CNN) model to predict PCNSL and glioma types and to compare the performance of the different models. Methods A total of 101 patients from five Chinese medical centers with pathologically confirmed PCNSL and glioma were analyzed retrospectively, including 50 PCNSL and 51 glioma cases. After manual segmentation of the region of interest (ROI) on CT scans, 293 radiomic features were extracted for each patient. The radiomic features were used as input, and six machine learning models and one deep learning model were established, with three readers also identifying the two types of tumors. We additionally established a 2D CNN model using raw CT scans as input. The area under the receiver operating characteristic curve (AUC) and accuracy (ACC) were used to evaluate the different models. Results The cohort was split into a training cohort (70, 70% of patients) and a validation cohort (31, 30% of patients) according to a stratified sampling strategy. Among all models, the MLP performed best, with an accuracy of 0.886 and 0.903, sensitivity of 0.914 and 0.867, specificity of 0.857 and 0.937, and AUC of 0.957 and 0.908 in the training and validation cohorts, respectively, which was significantly higher than the three primary physicians' diagnoses (ACCs ranged from 0.710 to 0.742, p < 0.001 for all) and comparable with the senior radiologist (ACC 0.839, p = 0.988). Among all the machine learning models, the AUC ranged from 0.605 to 0.821 in the validation cohort. The end-to-end CNN model achieved an AUC of 0.839 and an ACC of 0.840 in the validation cohort, with no significant difference in accuracy compared to the MLP model (p = 0.472) and the senior radiologist (p = 0.470). Conclusion The established PCNSL and glioma prediction models based on deep neural network methods applied to CT scans or radiomic features are feasible and provide high performance, showing the potential to assist clinical decision-making.
Collapse
Affiliation(s)
- Guang Lu
- Department of Hematology, Shengli Oilfield Central Hospital, Dongying, China
| | - Yuxin Zhang
- Department of Neurosurgery, Guangrao County People's Hospital, Dongying, China
| | | | - Lixin Miao
- Department of Medical Imaging Center, Shengli Oilfield Central Hospital, Dongying, China
- *Correspondence: Lixin Miao
| | - Weiwei Mou
- Department of Pediatrics, Shengli Oilfield Central Hospital, Dongying, China
- Weiwei Mou
| |
Collapse
|
31
|
Beauferris Y, Teuwen J, Karkalousos D, Moriakov N, Caan M, Yiasemis G, Rodrigues L, Lopes A, Pedrini H, Rittner L, Dannecker M, Studenyak V, Gröger F, Vyas D, Faghih-Roohi S, Kumar Jethi A, Chandra Raju J, Sivaprakasam M, Lasby M, Nogovitsyn N, Loos W, Frayne R, Souza R. Multi-Coil MRI Reconstruction Challenge-Assessing Brain MRI Reconstruction Models and Their Generalizability to Varying Coil Configurations. Front Neurosci 2022; 16:919186. [PMID: 35873808 PMCID: PMC9298878 DOI: 10.3389/fnins.2022.919186] [Citation(s) in RCA: 4] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/13/2022] [Accepted: 06/01/2022] [Indexed: 11/13/2022] Open
Abstract
Deep-learning-based brain magnetic resonance imaging (MRI) reconstruction methods have the potential to accelerate the MRI acquisition process. Nevertheless, the scientific community lacks appropriate benchmarks to assess the MRI reconstruction quality of high-resolution brain images, and evaluate how these proposed algorithms will behave in the presence of small, but expected data distribution shifts. The multi-coil MRI (MC-MRI) reconstruction challenge provides a benchmark that aims at addressing these issues, using a large dataset of high-resolution, three-dimensional, T1-weighted MRI scans. The challenge has two primary goals: (1) to compare different MRI reconstruction models on this dataset and (2) to assess the generalizability of these models to data acquired with a different number of receiver coils. In this paper, we describe the challenge experimental design and summarize the results of a set of baseline and state-of-the-art brain MRI reconstruction models. We provide relevant comparative information on the current MRI reconstruction state-of-the-art and highlight the challenges of obtaining generalizable models that are required prior to broader clinical adoption. The MC-MRI benchmark data, evaluation code, and current challenge leaderboard are publicly available. They provide an objective performance assessment for future developments in the field of brain MRI reconstruction.
Collapse
Affiliation(s)
- Youssef Beauferris
- (AI) Lab, Electrical and Software Engineering, University of Calgary, Calgary, AB, Canada
- Biomedical Engineering Graduate Program, University of Calgary, Calgary, AB, Canada
- Hotchkiss Brain Institute, University of Calgary, Calgary, AB, Canada
| | - Jonas Teuwen
- Department of Medical Imaging, Radboud University Medical Center, Nijmegen, Netherlands
- Department of Radiation Oncology, Netherlands Cancer Institute, Amsterdam, Netherlands
- Innovation Centre for Artificial Intelligence – Artificial Intelligence for Oncology, University of Amsterdam, Amsterdam, Netherlands
| | - Dimitrios Karkalousos
- Department of Biomedical Engineering and Physics, Amsterdam University Medical Centre, University of Amsterdam, Amsterdam, Netherlands
| | - Nikita Moriakov
- Department of Medical Imaging, Radboud University Medical Center, Nijmegen, Netherlands
- Department of Radiation Oncology, Netherlands Cancer Institute, Amsterdam, Netherlands
| | - Matthan Caan
- Department of Biomedical Engineering and Physics, Amsterdam University Medical Centre, University of Amsterdam, Amsterdam, Netherlands
| | - George Yiasemis
- Department of Radiation Oncology, Netherlands Cancer Institute, Amsterdam, Netherlands
- Innovation Centre for Artificial Intelligence – Artificial Intelligence for Oncology, University of Amsterdam, Amsterdam, Netherlands
| | - Lívia Rodrigues
- Medical Image Computing Lab, School of Electrical and Computer Engineering, University of Campinas, Campinas, Brazil
| | - Alexandre Lopes
- Institute of Computing, University of Campinas, Campinas, Brazil
| | - Helio Pedrini
- Institute of Computing, University of Campinas, Campinas, Brazil
| | - Letícia Rittner
- Medical Image Computing Lab, School of Electrical and Computer Engineering, University of Campinas, Campinas, Brazil
| | - Maik Dannecker
- Computer Aided Medical Procedures, Technical University of Munich, Munich, Germany
| | - Viktor Studenyak
- Computer Aided Medical Procedures, Technical University of Munich, Munich, Germany
| | - Fabian Gröger
- Computer Aided Medical Procedures, Technical University of Munich, Munich, Germany
| | - Devendra Vyas
- Computer Aided Medical Procedures, Technical University of Munich, Munich, Germany
| | | | - Amrit Kumar Jethi
- Department of Electrical Engineering, Indian Institute of Technology Madras, Chennai, India
| | - Jaya Chandra Raju
- Department of Electrical Engineering, Indian Institute of Technology Madras, Chennai, India
| | - Mohanasankar Sivaprakasam
- Department of Electrical Engineering, Indian Institute of Technology Madras, Chennai, India
- Healthcare Technology Innovation Centre, Indian Institute of Technology Madras, Chennai, India
| | - Mike Lasby
- (AI) Lab, Electrical and Software Engineering, University of Calgary, Calgary, AB, Canada
- Hotchkiss Brain Institute, University of Calgary, Calgary, AB, Canada
| | - Nikita Nogovitsyn
- Centre for Depression and Suicide Studies, St. Michael's Hospital, Toronto, ON, Canada
- Mood Disorders Program, Department of Psychiatry and Behavioural Neurosciences, McMaster University, Hamilton, ON, Canada
| | - Wallace Loos
- Biomedical Engineering Graduate Program, University of Calgary, Calgary, AB, Canada
- Hotchkiss Brain Institute, University of Calgary, Calgary, AB, Canada
- Radiology and Clinical Neurosciences, University of Calgary, Calgary, AB, Canada
- Seaman Family MR Research Centre, Foothills Medical Center, Calgary, AB, Canada
| | - Richard Frayne
- Biomedical Engineering Graduate Program, University of Calgary, Calgary, AB, Canada
- Hotchkiss Brain Institute, University of Calgary, Calgary, AB, Canada
- Radiology and Clinical Neurosciences, University of Calgary, Calgary, AB, Canada
- Seaman Family MR Research Centre, Foothills Medical Center, Calgary, AB, Canada
| | - Roberto Souza
- (AI) Lab, Electrical and Software Engineering, University of Calgary, Calgary, AB, Canada
- Biomedical Engineering Graduate Program, University of Calgary, Calgary, AB, Canada
- Hotchkiss Brain Institute, University of Calgary, Calgary, AB, Canada
| |
Collapse
|
32
|
Korkmaz Y, Dar SUH, Yurt M, Ozbey M, Cukur T. Unsupervised MRI Reconstruction via Zero-Shot Learned Adversarial Transformers. IEEE TRANSACTIONS ON MEDICAL IMAGING 2022; 41:1747-1763. [PMID: 35085076 DOI: 10.1109/tmi.2022.3147426] [Citation(s) in RCA: 88] [Impact Index Per Article: 29.3] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/14/2023]
Abstract
Supervised reconstruction models are characteristically trained on matched pairs of undersampled and fully-sampled data to capture an MRI prior, along with supervision regarding the imaging operator to enforce data consistency. To reduce supervision requirements, the recent deep image prior framework instead conjoins untrained MRI priors with the imaging operator during inference. Yet, canonical convolutional architectures are suboptimal in capturing long-range relationships, and priors based on randomly initialized networks may yield suboptimal performance. To address these limitations, here we introduce a novel unsupervised MRI reconstruction method based on zero-Shot Learned Adversarial TransformERs (SLATER). SLATER embodies a deep adversarial network with cross-attention transformers to map noise and latent variables onto coil-combined MR images. During pre-training, this unconditional network learns a high-quality MRI prior in an unsupervised generative modeling task. During inference, a zero-shot reconstruction is then performed by incorporating the imaging operator and optimizing the prior to maximize consistency to undersampled data. Comprehensive experiments on brain MRI datasets clearly demonstrate the superior performance of SLATER against state-of-the-art unsupervised methods.
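A conceptual sketch of the zero-shot inference step described above: with the pre-trained generative prior frozen, only the latent variables are optimized so that the generated image agrees with the acquired undersampled k-space. The tiny linear generator below stands in for the adversarial transformer prior and is purely illustrative.

```python
# Sketch of zero-shot reconstruction by latent optimisation against a
# frozen generative prior; placeholder generator, assumed names.
import torch

G = torch.nn.Linear(128, 64 * 64 * 2)              # placeholder for the trained prior
for p in G.parameters():
    p.requires_grad_(False)

def zero_shot_recon(ksp_meas, mask, n_steps=100, lr=1e-2):
    z = torch.zeros(1, 128, requires_grad=True)    # latent variables to optimise
    opt = torch.optim.Adam([z], lr=lr)
    for _ in range(n_steps):
        opt.zero_grad()
        x = torch.view_as_complex(G(z).view(64, 64, 2))
        loss = torch.mean(torch.abs(mask * torch.fft.fft2(x, norm="ortho") - ksp_meas) ** 2)
        loss.backward()
        opt.step()
    return torch.view_as_complex(G(z).view(64, 64, 2)).detach()

mask = (torch.rand(64, 64) < 0.25).float()
ksp = torch.fft.fft2(torch.randn(64, 64, dtype=torch.complex64), norm="ortho") * mask
print(zero_shot_recon(ksp, mask).shape)
```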
Collapse
|
33
|
Jung W, Bollmann S, Lee J. Overview of quantitative susceptibility mapping using deep learning: Current status, challenges and opportunities. NMR IN BIOMEDICINE 2022; 35:e4292. [PMID: 32207195 DOI: 10.1002/nbm.4292] [Citation(s) in RCA: 40] [Impact Index Per Article: 13.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 12/04/2019] [Revised: 02/04/2020] [Accepted: 02/25/2020] [Indexed: 06/10/2023]
Abstract
Quantitative susceptibility mapping (QSM) has gained broad interest in the field by extracting bulk tissue magnetic susceptibility, predominantly determined by myelin, iron and calcium from magnetic resonance imaging (MRI) phase measurements in vivo. Thereby, QSM can reveal pathological changes of these key components in a variety of diseases. QSM requires multiple processing steps such as phase unwrapping, background field removal and field-to-source inversion. Current state-of-the-art techniques utilize iterative optimization procedures to solve the inversion and background field correction, which are computationally expensive and require a careful choice of regularization parameters. With the recent success of deep learning using convolutional neural networks for solving ill-posed reconstruction problems, the QSM community also adapted these techniques and demonstrated that the QSM processing steps can be solved by efficient feed forward multiplications not requiring either iterative optimization or the choice of regularization parameters. Here, we review the current status of deep learning-based approaches for processing QSM, highlighting limitations and potential pitfalls, and discuss the future directions the field may take to exploit the latest advances in deep learning for QSM.
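The field-to-source inversion mentioned above is ill-posed because the dipole kernel vanishes on a cone in k-space; the short sketch below builds that standard kernel and applies the forward model (textbook formula, with simplified grid conventions).

```python
# Sketch of the k-space dipole kernel underlying QSM's inversion problem.
import numpy as np

def dipole_kernel(shape, voxel_size=(1.0, 1.0, 1.0)):
    kz, ky, kx = [np.fft.fftfreq(n, d) for n, d in zip(shape, voxel_size)]
    KZ, KY, KX = np.meshgrid(kz, ky, kx, indexing="ij")
    k2 = KX**2 + KY**2 + KZ**2
    with np.errstate(divide="ignore", invalid="ignore"):
        D = 1.0 / 3.0 - KZ**2 / k2          # B0 assumed along z
    D[k2 == 0] = 0.0                        # handle the k = 0 singularity
    return D

D = dipole_kernel((64, 64, 64))
# forward model: field = IFFT( D * FFT(susceptibility) )
chi = np.random.rand(64, 64, 64)
field = np.real(np.fft.ifftn(D * np.fft.fftn(chi)))
print(field.shape)
```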
Collapse
Affiliation(s)
- Woojin Jung
- Laboratory for Imaging Science and Technology, Department of Electrical and Computer Engineering, Seoul National University, Seoul, South Korea
| | - Steffen Bollmann
- ARC Training Centre for Innovation in Biomedical Imaging Technology, The University of Queensland, Brisbane, Australia
- Centre for Advanced Imaging, The University of Queensland, Brisbane, Australia
| | - Jongho Lee
- Laboratory for Imaging Science and Technology, Department of Electrical and Computer Engineering, Seoul National University, Seoul, South Korea
| |
Collapse
|
34
|
Chatterjee S, Breitkopf M, Sarasaen C, Yassin H, Rose G, Nürnberger A, Speck O. ReconResNet: Regularised residual learning for MR image reconstruction of Undersampled Cartesian and Radial data. Comput Biol Med 2022; 143:105321. [PMID: 35219188 DOI: 10.1016/j.compbiomed.2022.105321] [Citation(s) in RCA: 9] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/21/2021] [Revised: 01/29/2022] [Accepted: 02/11/2022] [Indexed: 11/03/2022]
Abstract
MRI is an inherently slow process, which leads to long scan times for high-resolution imaging. The speed of acquisition can be increased by ignoring parts of the data (undersampling). Consequently, this leads to the degradation of image quality, such as loss of resolution or introduction of image artefacts. This work aims to reconstruct highly undersampled Cartesian or radial MR acquisitions, with better resolution and with little to no artefact compared to conventional techniques like compressed sensing. In recent times, deep learning has emerged as a very important area of research and has shown immense potential in solving inverse problems, e.g. MR image reconstruction. In this paper, a deep learning based MR image reconstruction framework is proposed, which includes a modified regularised version of ResNet as the network backbone to remove artefacts from the undersampled image, followed by data consistency steps that fuse the network output with the data already available from the undersampled k-space in order to further improve reconstruction quality. The performance of this framework has been tested for various undersampling patterns, and it has been observed that the framework is robust to different sampling patterns, even when they are mixed together during training, and yields very high quality reconstruction in terms of SSIM (highest being 0.990 ± 0.006 for an acceleration factor of 3.5) when compared with the fully sampled reconstruction. It has been shown that the proposed framework can successfully reconstruct even at an acceleration factor of 20 for Cartesian (0.968 ± 0.005) and 17 for radially sampled (0.962 ± 0.012) data. Furthermore, it has been shown that the framework preserves brain pathology during reconstruction despite being trained on healthy subjects.
Collapse
Affiliation(s)
- Soumick Chatterjee
- Faculty of Computer Science, Otto von Guericke University, Magdeburg, Germany; Data and Knowledge Engineering Group, Otto von Guericke University, Magdeburg, Germany; Biomedical Magnetic Resonance, Otto von Guericke University, Magdeburg, Germany; Research Campus STIMULATE, Otto von Guericke University, Magdeburg, Germany.
| | - Mario Breitkopf
- Biomedical Magnetic Resonance, Otto von Guericke University, Magdeburg, Germany; Research Campus STIMULATE, Otto von Guericke University, Magdeburg, Germany
| | - Chompunuch Sarasaen
- Biomedical Magnetic Resonance, Otto von Guericke University, Magdeburg, Germany; Research Campus STIMULATE, Otto von Guericke University, Magdeburg, Germany; Institute for Medical Engineering, Otto von Guericke University, Magdeburg, Germany
| | - Hadya Yassin
- Institute for Medical Engineering, Otto von Guericke University, Magdeburg, Germany
| | - Georg Rose
- Research Campus STIMULATE, Otto von Guericke University, Magdeburg, Germany; Institute for Medical Engineering, Otto von Guericke University, Magdeburg, Germany
| | - Andreas Nürnberger
- Faculty of Computer Science, Otto von Guericke University, Magdeburg, Germany; Data and Knowledge Engineering Group, Otto von Guericke University, Magdeburg, Germany; Center for Behavioral Brain Sciences, Magdeburg, Germany
| | - Oliver Speck
- Biomedical Magnetic Resonance, Otto von Guericke University, Magdeburg, Germany; Research Campus STIMULATE, Otto von Guericke University, Magdeburg, Germany; German Center for Neurodegenerative Disease, Magdeburg, Germany; Center for Behavioral Brain Sciences, Magdeburg, Germany; Leibniz Institute for Neurobiology, Magdeburg, Germany
| |
Collapse
|
35
|
Wang S, Ke Z, Cheng H, Jia S, Ying L, Zheng H, Liang D. DIMENSION: Dynamic MR imaging with both k-space and spatial prior knowledge obtained via multi-supervised network training. NMR IN BIOMEDICINE 2022; 35:e4131. [PMID: 31482598 DOI: 10.1002/nbm.4131] [Citation(s) in RCA: 40] [Impact Index Per Article: 13.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 01/16/2019] [Revised: 05/21/2019] [Accepted: 05/22/2019] [Indexed: 06/10/2023]
Abstract
Dynamic MR image reconstruction from incomplete k-space data has generated great research interest due to its capability in reducing scan time. Nevertheless, the reconstruction problem is still challenging due to its ill-posed nature. Most existing methods either suffer from long iterative reconstruction time or explore limited prior knowledge. This paper proposes a dynamic MR imaging method with both k-space and spatial prior knowledge integrated via multi-supervised network training, dubbed DIMENSION. Specifically, the DIMENSION architecture consists of a frequential prior network for updating the k-space with its network prediction and a spatial prior network for capturing image structures and details. Furthermore, a multi-supervised network training technique is developed to constrain the frequency domain information and the spatial domain information. Comparisons with classical k-t FOCUSS, k-t SLR, L+S and a state-of-the-art CNN-based method on in vivo datasets show that our method achieves improved reconstruction results in a shorter time.
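A minimal sketch of the multi-supervised idea above, combining a k-space supervision term for the frequential prior network with an image-domain term for the spatial prior network; the exact weighting and any additional terms used in the paper are not reproduced here.

```python
# Sketch of a multi-supervised loss with both k-space and image-domain terms.
import torch
import torch.nn.functional as F

def dimension_loss(ksp_pred, ksp_full, img_pred, img_full, alpha=1.0, beta=1.0):
    loss_freq = F.mse_loss(torch.view_as_real(ksp_pred), torch.view_as_real(ksp_full))
    loss_spatial = F.mse_loss(torch.view_as_real(img_pred), torch.view_as_real(img_full))
    return alpha * loss_freq + beta * loss_spatial

# toy usage: 8 cardiac frames of 64 x 64 k-space
ksp_full = torch.randn(8, 64, 64, dtype=torch.complex64)
img_full = torch.fft.ifft2(ksp_full, norm="ortho")
print(dimension_loss(ksp_full * 0.9, ksp_full, img_full * 0.9, img_full))
```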
Collapse
Affiliation(s)
- Shanshan Wang
- Paul C. Lauterbur Research Center for Biomedical Imaging, Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen, China
| | - Ziwen Ke
- Paul C. Lauterbur Research Center for Biomedical Imaging, Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen, China
- Shenzhen College of Advanced Technology, University of Chinese Academy of Sciences, Shenzhen, China
| | - Huitao Cheng
- Paul C. Lauterbur Research Center for Biomedical Imaging, Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen, China
| | - Sen Jia
- Paul C. Lauterbur Research Center for Biomedical Imaging, Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen, China
| | - Leslie Ying
- Department of Biomedical Engineering and the Department of Electrical Engineering, The State University of New York, Buffalo, NY, USA
| | - Hairong Zheng
- Paul C. Lauterbur Research Center for Biomedical Imaging, Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen, China
| | - Dong Liang
- Paul C. Lauterbur Research Center for Biomedical Imaging, Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen, China
| |
Collapse
|
36
|
Arefeen Y, Beker O, Cho J, Yu H, Adalsteinsson E, Bilgic B. Scan-specific artifact reduction in k-space (SPARK) neural networks synergize with physics-based reconstruction to accelerate MRI. Magn Reson Med 2022; 87:764-780. [PMID: 34601751 PMCID: PMC8627503 DOI: 10.1002/mrm.29036] [Citation(s) in RCA: 21] [Impact Index Per Article: 7.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/07/2021] [Revised: 09/19/2021] [Accepted: 09/20/2021] [Indexed: 02/03/2023]
Abstract
PURPOSE To develop a scan-specific model that estimates and corrects k-space errors made when reconstructing accelerated MRI data. METHODS Scan-specific artifact reduction in k-space (SPARK) trains a convolutional-neural-network to estimate and correct k-space errors made by an input reconstruction technique by back-propagating from the mean-squared-error loss between an auto-calibration signal (ACS) and the input technique's reconstructed ACS. First, SPARK is applied to generalized autocalibrating partially parallel acquisitions (GRAPPA) and demonstrates improved robustness over other scan-specific models, such as robust artificial-neural-networks for k-space interpolation (RAKI) and residual-RAKI. Subsequent experiments demonstrate that SPARK synergizes with residual-RAKI to improve reconstruction performance. SPARK also improves reconstruction quality when applied to advanced acquisition and reconstruction techniques like 2D virtual coil (VC-) GRAPPA, 2D LORAKS, 3D GRAPPA without an integrated ACS region, and 2D/3D wave-encoded imaging. RESULTS SPARK yields SSIM improvement and 1.5 - 2× root mean squared error (RMSE) reduction when applied to GRAPPA and improves robustness to ACS size for various acceleration rates in comparison to other scan-specific techniques. When applied to advanced reconstruction techniques such as residual-RAKI, 2D VC-GRAPPA and LORAKS, SPARK achieves up to 20% RMSE improvement. SPARK with 3D GRAPPA also improves RMSE performance by ~2×, SSIM performance, and perceived image quality without a fully sampled ACS region. Finally, SPARK synergizes with non-Cartesian, 2D and 3D wave-encoding imaging by reducing RMSE between 20% and 25% and providing qualitative improvements. CONCLUSION SPARK synergizes with physics-based acquisition and reconstruction techniques to improve accelerated MRI by training scan-specific models to estimate and correct reconstruction errors in k-space.
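A conceptual sketch of the scan-specific training loop described above: a small CNN is fitted, for a single scan, to the k-space error of an input reconstruction on the ACS region and then applied to the whole k-space. The shapes, the toy GRAPPA-like input and the training schedule are assumptions, not the authors' code.

```python
# Sketch of SPARK-style scan-specific k-space error correction.
import torch
import torch.nn as nn

net = nn.Sequential(nn.Conv2d(2, 32, 3, padding=1), nn.ReLU(),
                    nn.Conv2d(32, 2, 3, padding=1))
opt = torch.optim.Adam(net.parameters(), lr=1e-3)

def to_chan(k):   # complex (H, W) -> (1, 2, H, W) real
    return torch.view_as_real(k).permute(2, 0, 1).unsqueeze(0)

# ksp_grappa: k-space from the input technique (e.g. GRAPPA);
# ksp_acs: measured ACS lines; acs_mask: 1 on the ACS region.
ksp_grappa = torch.randn(64, 64, dtype=torch.complex64)
acs_mask = torch.zeros(64, 64); acs_mask[28:36, :] = 1
ksp_acs = (ksp_grappa + 0.05 * torch.randn(64, 64, dtype=torch.complex64)) * acs_mask

for _ in range(200):                                   # scan-specific training
    opt.zero_grad()
    err_pred = net(to_chan(ksp_grappa))                # predicted k-space error
    target = to_chan(ksp_acs - ksp_grappa * acs_mask)  # true error, known on ACS only
    loss = torch.mean((acs_mask * (err_pred - target)) ** 2)
    loss.backward()
    opt.step()

# apply the learned correction over the whole k-space
ksp_corrected = ksp_grappa + torch.view_as_complex(
    net(to_chan(ksp_grappa))[0].permute(1, 2, 0).contiguous())
print(ksp_corrected.shape)
```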
Collapse
Affiliation(s)
- Yamin Arefeen
- Department of Electrical Engineering and Computer Science, Massachusetts Institute of Technology, Cambridge, MA, USA
| | - Onur Beker
- Computer and Communication Sciences, École Polytechnique Fédérale de Lausanne, Lausanne, Switzerland
| | - Jaejin Cho
- Athinoula A. Martinos Center for Biomedical Imaging, Charlestown, MA, USA
| | - Heng Yu
- Department of Automation, Tsinghua University, Beijing, China
| | - Elfar Adalsteinsson
- Department of Electrical Engineering and Computer Science, Massachusetts Institute of Technology, Cambridge, MA, USA
- Harvard-MIT Health Sciences and Technology, Massachusetts Institute of Technology, Cambridge, MA, USA
- Institute for Medical Engineering and Science, Massachusetts Institute of Technology, Cambridge, MA, USA
| | - Berkin Bilgic
- Athinoula A. Martinos Center for Biomedical Imaging, Charlestown, MA, USA
- Harvard-MIT Health Sciences and Technology, Massachusetts Institute of Technology, Cambridge, MA, USA
- Department of Radiology, Harvard Medical School, Boston, MA, USA
| |
Collapse
|
37
|
Shan X, Yang J, Xu P, Hu L, Ge H. Deep neural networks for magnetic resonance elastography acceleration in thermal ablation monitoring. Med Phys 2022; 49:1803-1813. [PMID: 35061250 DOI: 10.1002/mp.15471] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/18/2021] [Revised: 12/03/2021] [Accepted: 12/31/2021] [Indexed: 11/12/2022] Open
Abstract
PURPOSE To develop a deep neural network for accelerating magnetic resonance elastography (MRE) acquisition, to validate the ability to generate reliable MRE results with the down-sampled k-space data, and to demonstrate the feasibility of the proposed method in monitoring the stiffness changes during thermal ablation in a phantom study. MATERIALS AND METHODS MRE scans were performed with 60 Hz excitation on porcine ex-vivo liver gel phantoms in a 0.36T MRI scanner to generate the training dataset. The acquisition protocol was based on a spin-echo MRE pulse sequence with tailored motion-sensitive gradients to reduce echo time (TE). A U-Net based deep neural network was developed and trained to interpolate the missing data from down-sampled k-space. We calculated the errors of 80 sets of magnitude/phase images reconstructed from the zero-filled, compressive sensing (CS) and deep learning (DL) methods for comparison. The peak signal-to-noise ratio (PSNR) and structural similarity (SSIM) of the magnitude/phase images were also calculated for comparison. The stiffness changes were recorded before, during, and after ablation. The mean stiffness values over the region of interest (ROI) were compared between the elastograms reconstructed from the fully-sampled k-space and interpolated k-space after thermal ablation. RESULTS The mean absolute error (MAE), PSNR, and SSIM of the proposed deep learning approach were significantly better than the results from the zero-filled method (p<0.0001) and CS (p<0.0001). The stiffness changes before and after thermal ablation assessed by the proposed approach (before: 7.7±1.1 kPa, after: 11.9±4.0 kPa, 4.2-kPa increase) gave close agreement with the values calculated from the fully-sampled data (before: 8.0±1.0 kPa, after: 12.6±4.2 kPa, 4.6-kPa increase). In contrast, the stiffness changes computed from the zero-filled method (before: 4.9±1.4 kPa, after: 5.6±2.8 kPa, 0.7-kPa increase) were substantially underestimated. CONCLUSION This study demonstrated the capability of the proposed deep learning method for rapid MRE acquisition and provided a promising solution for monitoring MRI-guided thermal ablation.
Collapse
Affiliation(s)
- Xiang Shan
- School of Medical Imaging, Xuzhou Medical University, Xuzhou, Jiangsu, China
| | - Jinying Yang
- Laboratory Center for Information Science, University of Science and Technology of China, Hefei, Anhui, China
| | - Peng Xu
- Department of Radiology, Affiliated Hospital of Xuzhou Medical University, Xuzhou, Jiangsu, China
| | - Liangliang Hu
- School of Instrument Science and Opto-electronics Engineering, Hefei University of Technology, Hefei, Anhui, China
| | - Haitao Ge
- School of Medical Imaging, Xuzhou Medical University, Xuzhou, Jiangsu, China
| |
Collapse
|
38
|
Evaluation on the generalization of a learned convolutional neural network for MRI reconstruction. Magn Reson Imaging 2021; 87:38-46. [PMID: 34968699 DOI: 10.1016/j.mri.2021.12.003] [Citation(s) in RCA: 13] [Impact Index Per Article: 3.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/08/2021] [Revised: 11/25/2021] [Accepted: 12/22/2021] [Indexed: 02/01/2023]
Abstract
Recently, deep learning approaches with various network architectures have drawn significant attention from the magnetic resonance imaging (MRI) community because of their great potential for image reconstruction from undersampled k-space data in fast MRI. However, the robustness of a trained network when applied to test data that deviate from the training data remains an important open question. In this work, we quantitatively evaluate the influence of image contrast, human anatomy, sampling pattern, undersampling factor, and noise level on the generalization of a trained network composed of a cascade of several CNNs and a data consistency layer, called a deep cascade of convolutional neural networks (DC-CNN). The DC-CNN is trained on datasets with different image contrasts, human anatomies, sampling patterns, undersampling factors, and noise levels, and then applied to test datasets that are either consistent or inconsistent with the training datasets to assess the generalizability of the learned network. The results of our experiments show that reconstruction quality from the DC-CNN network is highly sensitive to sampling pattern, undersampling factor, and noise level, which are closely related to signal-to-noise ratio (SNR), and is relatively less sensitive to image contrast. We also show that a deviation in human anatomy between training and test data leads to a substantial reduction in image quality for the brain dataset, whereas performance is comparable for the chest and knee datasets, which have fewer anatomical details than brain images. This work provides further empirical understanding of the generalizability of trained networks when there are deviations between training and test data. It also demonstrates the potential of transfer learning for image reconstruction from datasets different from those used to train the network.
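The data consistency layer referred to here can be summarized, in a simplified single-coil Cartesian form (a sketch under those assumptions, not the DC-CNN implementation), as re-inserting the measured k-space samples into the network output:

import numpy as np

def data_consistency(x_cnn, y_measured, mask):
    """Enforce hard data consistency for a single-coil Cartesian acquisition.
    x_cnn      : complex image predicted by the CNN
    y_measured : undersampled k-space measurements (zeros where not sampled)
    mask       : boolean sampling mask in k-space
    """
    k_cnn = np.fft.fft2(x_cnn)                 # predicted k-space
    k_dc = np.where(mask, y_measured, k_cnn)   # keep measured samples, fill the rest from the CNN
    return np.fft.ifft2(k_dc)                  # back to the image domain

A noisy-data variant blends k_cnn and y_measured at the sampled locations with a (possibly learned) weight instead of replacing them outright.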
Collapse
|
39
|
Chang Y, Saritac M. Group feature selection for enhancing information gain in MRI reconstruction. Phys Med Biol 2021; 67. [PMID: 34933300 DOI: 10.1088/1361-6560/ac4561] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/06/2021] [Accepted: 12/21/2021] [Indexed: 11/12/2022]
Abstract
Magnetic resonance imaging (MRI) has revolutionized radiology. As a leading medical imaging modality, MRI not only visualizes structures inside the body but also provides functional imaging. However, because imaging speed is constrained by MR physics, MRI is expensive and patients may feel uncomfortable during a long scan. Parallel MRI accelerates imaging through sub-Nyquist sampling, with the missing data interpolated from the data acquired by multiple coils. Kernel learning has been used in parallel MRI reconstruction to learn the interpolation weights and reconstruct the undersampled data. However, noise and aliasing artifacts still exist in the reconstructed image, and a large number of auto-calibration signal lines are needed. To further improve kernel learning-based MRI reconstruction and accelerate the reconstruction, this paper proposes a group feature selection strategy that improves the learning performance and enhances the reconstruction quality. An explicit kernel mapping is used to select a subset of features that contribute most to estimating the missing k-space data. The experimental results show that the learning behaviour can be better predicted and the reconstructed image quality is therefore improved.
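For context on the kernel-learning idea in this abstract, a much-simplified sketch of fitting interpolation weights from auto-calibration data and then keeping only the strongest features is shown below; this uses a plain linear kernel and magnitude-based pruning, not the grouped, explicitly mapped features of the paper.

import numpy as np

def fit_interp_weights(acs_neighbors, acs_targets, lam=1e-3):
    """Ridge-regularized least-squares fit of linear interpolation weights.
    acs_neighbors : (n_samples, n_features) acquired neighborhood values, all coils flattened
    acs_targets   : (n_samples,) values of the k-space point to be estimated
    """
    A = acs_neighbors
    return np.linalg.solve(A.conj().T @ A + lam * np.eye(A.shape[1]),
                           A.conj().T @ acs_targets)

def select_top_features(weights, keep=16):
    """Keep the features with the largest weight magnitudes (a crude stand-in
    for the group feature selection described in the paper)."""
    order = np.argsort(np.abs(weights))[::-1]
    return order[:keep]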
Collapse
Affiliation(s)
- Yuchou Chang
- Computer and Information Science, University of Massachusetts Dartmouth, Dartmouth, Massachusetts, 02747, UNITED STATES
| | - Mert Saritac
- Computer and Information Science, University of Massachusetts Dartmouth, Dartmouth, Massachusetts, 02747, UNITED STATES
| |
Collapse
|
40
|
Wang S, Cao G, Wang Y, Liao S, Wang Q, Shi J, Li C, Shen D. Review and Prospect: Artificial Intelligence in Advanced Medical Imaging. FRONTIERS IN RADIOLOGY 2021; 1:781868. [PMID: 37492170 PMCID: PMC10365109 DOI: 10.3389/fradi.2021.781868] [Citation(s) in RCA: 34] [Impact Index Per Article: 8.5] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Figures] [Subscribe] [Scholar Register] [Received: 09/23/2021] [Accepted: 11/08/2021] [Indexed: 07/27/2023]
Abstract
Artificial intelligence (AI) as an emerging technology is gaining momentum in medical imaging. Recently, deep learning-based AI techniques have been actively investigated in medical imaging, and their potential applications range from data acquisition and image reconstruction to image analysis and understanding. In this review, we focus on the use of deep learning in image reconstruction for advanced medical imaging modalities including magnetic resonance imaging (MRI), computed tomography (CT), and positron emission tomography (PET). In particular, recent deep learning-based methods for image reconstruction are emphasized and organized according to their methodological designs and their performance in handling volumetric imaging data. We expect this review to help relevant researchers understand how to adapt AI for medical imaging and what advantages can be achieved with the assistance of AI.
Collapse
Affiliation(s)
- Shanshan Wang
- Paul C. Lauterbur Research Center for Biomedical Imaging, Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences (CAS), Shenzhen, China
- Pengcheng Laboratory, Shenzhen, China
| | - Guohua Cao
- School of Biomedical Engineering, ShanghaiTech University, Shanghai, China
| | - Yan Wang
- School of Computer Science, Sichuan University, Chengdu, China
| | - Shu Liao
- Shanghai United Imaging Intelligence Co., Ltd., Shanghai, China
| | - Qian Wang
- School of Biomedical Engineering, ShanghaiTech University, Shanghai, China
| | - Jun Shi
- School of Communication and Information Engineering, Shanghai University, Shanghai, China
| | - Cheng Li
- Paul C. Lauterbur Research Center for Biomedical Imaging, Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences (CAS), Shenzhen, China
| | - Dinggang Shen
- School of Biomedical Engineering, ShanghaiTech University, Shanghai, China
- Shanghai United Imaging Intelligence Co., Ltd., Shanghai, China
| |
Collapse
|
41
|
Cheng J, Cui ZX, Huang W, Ke Z, Ying L, Wang H, Zhu Y, Liang D. Learning Data Consistency and its Application to Dynamic MR Imaging. IEEE TRANSACTIONS ON MEDICAL IMAGING 2021; 40:3140-3153. [PMID: 34252025 DOI: 10.1109/tmi.2021.3096232] [Citation(s) in RCA: 15] [Impact Index Per Article: 3.8] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/13/2023]
Abstract
Magnetic resonance (MR) image reconstruction from undersampled k-space data can be formulated as a minimization problem involving data consistency and an image prior. Existing deep learning (DL)-based methods for MR reconstruction employ deep networks to exploit the prior information and integrate that knowledge into the reconstruction under an explicit data-consistency constraint, without considering the real distribution of the noise. In this work, we propose a new DL-based approach, termed Learned DC, that implicitly learns the data consistency with deep networks, corresponding to the actual probability distribution of the system noise. The data consistency term and the prior knowledge are both embedded in the weights of the networks, which provides a fully implicit way of learning the reconstruction model. We evaluated the proposed approach on highly undersampled dynamic data, including dynamic cardiac cine data with up to 24-fold acceleration and dynamic rectum data with an acceleration factor equal to the number of phases. Experimental results demonstrate that Learned DC outperforms the state-of-the-art both quantitatively and qualitatively.
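For reference, the explicit formulation that this abstract contrasts against is usually written as the regularized least-squares problem below (notation assumed here: A is the undersampled encoding operator, y the acquired k-space data, R the learned prior, and lambda a balancing weight); Learned DC effectively replaces the fixed l2 fidelity term with a learned network so that the noise model is not assumed to be Gaussian.

$$\hat{x} = \arg\min_{x} \; \tfrac{1}{2}\,\lVert A x - y \rVert_2^2 + \lambda\, R(x)$$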
Collapse
|
42
|
Deep low-Rank plus sparse network for dynamic MR imaging. Med Image Anal 2021; 73:102190. [PMID: 34340107 DOI: 10.1016/j.media.2021.102190] [Citation(s) in RCA: 23] [Impact Index Per Article: 5.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/28/2020] [Revised: 07/15/2021] [Accepted: 07/16/2021] [Indexed: 11/22/2022]
Abstract
In dynamic magnetic resonance (MR) imaging, low-rank plus sparse (L+S) decomposition, or robust principal component analysis (PCA), has achieved stunning performance. However, the selection of the parameters of L+S is empirical, and the acceleration rate is limited, which are common failings of iterative compressed sensing MR imaging (CS-MRI) reconstruction methods. Many deep learning approaches have been proposed to address these issues, but few of them use a low-rank prior. In this paper, a model-based low-rank plus sparse network, dubbed L+S-Net, is proposed for dynamic MR reconstruction. In particular, we use an alternating linearized minimization method to solve the optimization problem with low-rank and sparse regularization. Learned soft singular value thresholding is introduced to ensure the clear separation of the L component and S component. Then, the iterative steps are unrolled into a network in which the regularization parameters are learnable. We prove that the proposed L+S-Net achieves global convergence under two standard assumptions. Experiments on retrospective and prospective cardiac cine datasets show that the proposed model outperforms state-of-the-art CS and existing deep learning methods and has great potential for extremely high acceleration factors (up to 24×).
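The learned soft singular value thresholding mentioned above builds on the classical operation, which (with a fixed rather than learned threshold) can be sketched as follows; the Casorati-matrix layout is an assumption about how the dynamic series is arranged.

import numpy as np

def soft_threshold_singular_values(X, tau):
    """Singular value soft-thresholding, the proximal operator of the nuclear norm.
    X   : (space, time) Casorati matrix of the dynamic image series
    tau : threshold; in L+S-Net this parameter is made learnable
    """
    U, s, Vh = np.linalg.svd(X, full_matrices=False)
    s_shrunk = np.maximum(s - tau, 0.0)   # shrink singular values toward zero
    return (U * s_shrunk) @ Vh            # low-rank (L) component estimate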
Collapse
|
43
|
Ali MM, Paul BK, Ahmed K, Bui FM, Quinn JMW, Moni MA. Heart disease prediction using supervised machine learning algorithms: Performance analysis and comparison. Comput Biol Med 2021; 136:104672. [PMID: 34315030 DOI: 10.1016/j.compbiomed.2021.104672] [Citation(s) in RCA: 63] [Impact Index Per Article: 15.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/01/2021] [Revised: 07/19/2021] [Accepted: 07/19/2021] [Indexed: 10/20/2022]
Abstract
Machine learning and data mining-based approaches to the prediction and detection of heart disease would be of great clinical utility, but are highly challenging to develop. Most countries have a shortage of cardiovascular expertise and a significant rate of incorrectly diagnosed cases, which could be addressed by accurate and efficient early-stage heart disease prediction that supports clinical decision-making with digital patient records. This study aimed to identify the machine learning classifiers with the highest accuracy for such diagnostic purposes. Several supervised machine-learning algorithms were applied and compared for performance and accuracy in heart disease prediction. Feature importance scores were estimated for all applied algorithms except MLP and KNN, and all features were ranked by importance score to identify those most predictive of heart disease. Using a heart disease dataset collected from Kaggle and three classifiers based on the k-nearest neighbor (KNN), decision tree (DT), and random forest (RF) algorithms, this study found that the RF method achieved 100% accuracy along with 100% sensitivity and specificity. Thus, a relatively simple supervised machine learning algorithm can be used to make heart disease predictions with very high accuracy and excellent potential utility.
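A minimal sketch of the kind of classifier comparison reported here, using scikit-learn and a synthetic stand-in for the Kaggle table (the feature count and split are assumptions, not the paper's setup):

import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score, recall_score

# synthetic stand-in for the heart-disease table (13 clinical attributes is typical)
X, y = make_classification(n_samples=1000, n_features=13, n_informative=8, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)

models = {
    "KNN": KNeighborsClassifier(n_neighbors=5),
    "DT": DecisionTreeClassifier(random_state=0),
    "RF": RandomForestClassifier(n_estimators=100, random_state=0),
}
for name, clf in models.items():
    clf.fit(X_tr, y_tr)
    pred = clf.predict(X_te)
    print(name, "accuracy:", accuracy_score(y_te, pred), "sensitivity:", recall_score(y_te, pred))

# feature importance ranking is available for the tree-based models
top = np.argsort(models["RF"].feature_importances_)[::-1][:5]
print("most important feature indices:", top)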
Collapse
Affiliation(s)
- Md Mamun Ali
- Department of Software Engineering (SWE), Daffodil International University (DIU), Sukrabad, Dhaka, 1207, Bangladesh
| | - Bikash Kumar Paul
- Department of Software Engineering (SWE), Daffodil International University (DIU), Sukrabad, Dhaka, 1207, Bangladesh; Group of Biophotomatiχ, Department of Information and Communication Technology, Mawlana Bhashani Science and Technology University, Santosh, Tangail-1902, Bangladesh; Department of Information and Communication Technology, Mawlana Bhashani Science and Technology University, Santosh, Tangail, 1902, Bangladesh
| | - Kawsar Ahmed
- Group of Biophotomatiχ, Department of Information and Communication Technology, Mawlana Bhashani Science and Technology University, Santosh, Tangail-1902, Bangladesh; Department of Information and Communication Technology, Mawlana Bhashani Science and Technology University, Santosh, Tangail, 1902, Bangladesh.
| | - Francis M Bui
- Department of Electrical and Computer Engineering, University of Saskatchewan, 57 Campus Drive, Saskatoon, SK S7N 5A9, Canada
| | - Julian M W Quinn
- Healthy Ageing Theme, Garvan Institute of Medical Research, Darlinghurst, NSW, 2010, Australia
| | - Mohammad Ali Moni
- Healthy Ageing Theme, Garvan Institute of Medical Research, Darlinghurst, NSW, 2010, Australia; WHO Collaborating Centre on EHealth, UNSW Digital Health, School of Public Health and Community Medicine, Faculty of Medicine, UNSW Sydney, NSW 2052, Australia.
| |
Collapse
|
44
|
Qin C, Duan J, Hammernik K, Schlemper J, Küstner T, Botnar R, Prieto C, Price AN, Hajnal JV, Rueckert D. Complementary time-frequency domain networks for dynamic parallel MR image reconstruction. Magn Reson Med 2021; 86:3274-3291. [PMID: 34254355 DOI: 10.1002/mrm.28917] [Citation(s) in RCA: 19] [Impact Index Per Article: 4.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/10/2020] [Revised: 06/10/2021] [Accepted: 06/14/2021] [Indexed: 11/11/2022]
Abstract
PURPOSE To introduce a novel deep learning-based approach for fast and high-quality dynamic multicoil MR reconstruction by learning a complementary time-frequency domain network that exploits spatiotemporal correlations simultaneously from complementary domains. THEORY AND METHODS Dynamic parallel MR image reconstruction is formulated as a multivariable minimization problem, where the data are regularized in the combined temporal Fourier and spatial (x-f) domain as well as in the spatiotemporal image (x-t) domain. An iterative algorithm based on a variable splitting technique is derived, which alternates among signal de-aliasing steps in x-f and x-t space, a closed-form point-wise data consistency step, and a weighted coupling step. The iterative model is embedded into a deep recurrent neural network which learns to recover the image by exploiting spatiotemporal redundancies in complementary domains. RESULTS Experiments were performed on two datasets of highly undersampled multicoil short-axis cardiac cine MRI scans. Results demonstrate that our proposed method outperforms the current state-of-the-art approaches both quantitatively and qualitatively. The proposed model also generalizes well to data acquired from a different scanner and to data with pathologies not seen in the training set. CONCLUSION The work shows the benefit of reconstructing dynamic parallel MRI in complementary time-frequency domains with deep neural networks. The method can effectively and robustly reconstruct high-quality images from highly undersampled dynamic multicoil data (16× and 24× acceleration, yielding 15-s and 10-s scan times, respectively) with fast reconstruction speed (2.8 seconds). This could potentially facilitate fast single-breath-hold clinical 2D cardiac cine imaging.
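The x-f representation used here is simply the temporal Fourier transform of the dynamic image series; a minimal sketch of moving between the two domains, assuming a (time, ny, nx) complex array, is shown below.

import numpy as np

def to_xf(x_t):
    """x-t -> x-f: centered Fourier transform along the temporal axis (axis 0)."""
    return np.fft.fftshift(np.fft.fft(np.fft.ifftshift(x_t, axes=0), axis=0), axes=0)

def to_xt(x_f):
    """x-f -> x-t: exact inverse of the transform above."""
    return np.fft.fftshift(np.fft.ifft(np.fft.ifftshift(x_f, axes=0), axis=0), axes=0)

# regularizing in both domains exploits temporal sparsity (x-f) and spatial structure (x-t)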
Collapse
Affiliation(s)
- Chen Qin
- Institute for Digital Communications, School of Engineering, University of Edinburgh, Edinburgh, UK.,Department of Computing, Imperial College London, London, UK
| | - Jinming Duan
- School of Computer Science, University of Birmingham, Birmingham, UK
| | - Kerstin Hammernik
- Department of Computing, Imperial College London, London, UK.,Institute for AI and Informatics, Klinikum rechts der Isar, Technical University of Munich, Munich, Germany
| | - Jo Schlemper
- Department of Computing, Imperial College London, London, UK.,Hyperfine Research Inc., Guilford, CT, USA
| | - Thomas Küstner
- School of Biomedical Engineering and Imaging Sciences, King's College London, London, UK.,Department of Diagnostic and Interventional Radiology, Medical Image and Data Analysis, University Hospital of Tuebingen, Tuebingen, Germany
| | - René Botnar
- School of Biomedical Engineering and Imaging Sciences, King's College London, London, UK
| | - Claudia Prieto
- School of Biomedical Engineering and Imaging Sciences, King's College London, London, UK
| | - Anthony N Price
- School of Biomedical Engineering and Imaging Sciences, King's College London, London, UK
| | - Joseph V Hajnal
- School of Biomedical Engineering and Imaging Sciences, King's College London, London, UK
| | - Daniel Rueckert
- Department of Computing, Imperial College London, London, UK.,Institute for AI and Informatics, Klinikum rechts der Isar, Technical University of Munich, Munich, Germany
| |
Collapse
|
45
|
Ouchi S, Ito S. Reconstruction of Compressed-sensing MR Imaging Using Deep Residual Learning in the Image Domain. Magn Reson Med Sci 2021; 20:190-203. [PMID: 32611937 PMCID: PMC8203484 DOI: 10.2463/mrms.mp.2019-0139] [Citation(s) in RCA: 4] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/24/2022] Open
Abstract
Purpose: A deep residual learning convolutional neural network (DRL-CNN) was applied to improve image quality and speed up the reconstruction of compressed sensing magnetic resonance imaging. The reconstruction performance of the proposed method was compared with that of iterative reconstruction methods. Methods: The proposed method adopts a DRL-CNN that learns the residual component between the input and output images (i.e., the aliasing artifacts) for image reconstruction. The CNN-based reconstruction was compared with iterative reconstruction methods. To clarify the reconstruction performance of the proposed method, reconstruction experiments were performed using 1D- and 2D-random under-sampling as well as sampling patterns that mix random and non-random under-sampling. The peak signal-to-noise ratio (PSNR) and the structural similarity index (SSIM) were examined for various numbers of training images, sampling rates, and numbers of training epochs. Results: The experimental results demonstrate that reconstruction time is drastically reduced to 0.022 s per image compared with conventional iterative reconstruction. The PSNR and SSIM improved as the coherence of the sampling pattern increased. These results indicate that a deep CNN can learn coherent artifacts and is especially effective when the randomness of k-space sampling is rather low. Simulation studies showed that variable-density non-random under-sampling is a promising sampling pattern for 1D-random under-sampling of 2D image acquisition. Conclusion: A DRL-CNN can recognize and predict aliasing artifacts with low incoherence. Reconstruction time is significantly reduced, and the improvement in PSNR and SSIM is higher for 1D-random under-sampling than for 2D. The incoherence requirement for aliasing artifacts differs from that of iterative reconstruction.
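The residual learning scheme described here predicts the aliasing artifact rather than the clean image itself; a toy PyTorch sketch of that idea follows, with layer sizes that are assumptions and not the paper's architecture.

import torch
import torch.nn as nn

class ResidualArtifactCNN(nn.Module):
    """Predicts the aliasing-artifact component; the de-aliased image is the input minus that residual."""
    def __init__(self, channels=64, layers=5):
        super().__init__()
        blocks = [nn.Conv2d(1, channels, 3, padding=1), nn.ReLU(inplace=True)]
        for _ in range(layers - 2):
            blocks += [nn.Conv2d(channels, channels, 3, padding=1), nn.ReLU(inplace=True)]
        blocks += [nn.Conv2d(channels, 1, 3, padding=1)]
        self.body = nn.Sequential(*blocks)

    def forward(self, zero_filled):
        artifact = self.body(zero_filled)   # learned residual (the aliasing pattern)
        return zero_filled - artifact       # de-aliased image estimate

Training then minimizes, for example, an L2 loss between this output and the fully sampled reference image.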
Collapse
Affiliation(s)
- Shohei Ouchi
- Department of Innovation Systems Engineering, Graduate School of Engineering, Utsunomiya University
| | - Satoshi Ito
- Department of Innovation Systems Engineering, Graduate School of Engineering, Utsunomiya University
| |
Collapse
|
46
|
Development of machine learning model for diagnostic disease prediction based on laboratory tests. Sci Rep 2021; 11:7567. [PMID: 33828178 PMCID: PMC8026627 DOI: 10.1038/s41598-021-87171-5] [Citation(s) in RCA: 49] [Impact Index Per Article: 12.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/31/2020] [Accepted: 03/19/2021] [Indexed: 01/16/2023] Open
Abstract
The use of deep learning and machine learning (ML) in medical science is increasing, particularly in the visual, audio, and language data fields. We aimed to build a new optimized ensemble model by blending a DNN (deep neural network) model with two ML models for disease prediction using laboratory test results. A total of 86 attributes (laboratory tests) were selected from the datasets based on value counts, clinical importance-related features, and missing values. We collected sample datasets on 5145 cases, including 326,686 laboratory test results, and investigated a total of 39 specific diseases based on International Classification of Diseases, 10th revision (ICD-10) codes. These datasets were used to construct light gradient boosting machine (LightGBM) and extreme gradient boosting (XGBoost) ML models and a DNN model using TensorFlow. The optimized ensemble model achieved an F1-score of 81% and a prediction accuracy of 92% for the five most common diseases. The deep learning and ML models showed differences in predictive power and disease classification patterns. We used a confusion matrix and analyzed feature importance using the SHAP value method. Our new ensemble model achieved highly efficient disease prediction through disease classification. This study will be useful for the prediction and diagnosis of diseases.
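A minimal soft-blending sketch of the ensemble idea described here; scikit-learn estimators are substituted for the paper's LightGBM, XGBoost, and TensorFlow DNN so the example stays self-contained, and the synthetic data is only a stand-in for the laboratory-test matrix.

import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import HistGradientBoostingClassifier, GradientBoostingClassifier
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split

# synthetic stand-in for the laboratory-test feature matrix (86 attributes in the paper)
X, y = make_classification(n_samples=2000, n_features=86, n_informative=20, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)

models = [HistGradientBoostingClassifier(),       # stand-in for LightGBM
          GradientBoostingClassifier(),           # stand-in for XGBoost
          MLPClassifier(max_iter=500)]            # stand-in for the TensorFlow DNN
probas = []
for m in models:
    m.fit(X_tr, y_tr)
    probas.append(m.predict_proba(X_te))

blended = np.mean(probas, axis=0)                 # simple average blend of class probabilities
accuracy = (blended.argmax(axis=1) == y_te).mean()
print(f"blended accuracy: {accuracy:.3f}")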
Collapse
|
47
|
Lee J, Kim B, Park H. MC 2 -Net: motion correction network for multi-contrast brain MRI. Magn Reson Med 2021; 86:1077-1092. [PMID: 33720462 DOI: 10.1002/mrm.28719] [Citation(s) in RCA: 11] [Impact Index Per Article: 2.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/06/2020] [Revised: 12/29/2020] [Accepted: 01/15/2021] [Indexed: 11/06/2022]
Abstract
PURPOSE A motion-correction network for multi-contrast brain MRI is proposed to correct in-plane rigid motion artifacts in brain MR images using deep learning. METHODS The proposed method consists of two parts: image alignment and motion correction. Alignment of the multi-contrast MR images is performed in an unsupervised manner by a CNN, which yields transformation parameters that align the input images so as to minimize the normalized cross-correlation loss among the multi-contrast images. Fine-tuning of the image alignment is then performed by maximizing the normalized mutual information. The motion correction network corrects motion artifacts in the aligned multi-contrast images and is trained in a supervised manner to minimize the structural similarity loss and the VGG loss. All datasets of motion-corrupted images are generated by motion simulation based on MR physics. RESULTS The motion-correction network for multi-contrast brain MRI successfully corrected simulated motion artifacts for 4 test subjects, showing 0.96%, 7.63%, and 5.03% increases in the average structural similarity and 5.19%, 10.2%, and 7.48% increases in the average normalized mutual information for T1-weighted, T2-weighted, and T2-weighted fluid-attenuated inversion recovery images, respectively. The experimental setting with image alignment and artifact-free input images for the other contrasts shows better performance in correcting simulated motion artifacts. Furthermore, the proposed method quantitatively outperforms recent deep learning motion correction and synthesis methods. Real motion experiments on 5 healthy subjects demonstrate the potential of the proposed method for use in a clinical environment. CONCLUSION A deep learning-based motion correction method for multi-contrast MRI was successfully developed, and experimental results demonstrate the validity of the proposed method.
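The normalized cross-correlation loss used for the unsupervised alignment step can be sketched as a generic global-NCC form in PyTorch (the paper's exact loss configuration is not reproduced here):

import torch

def ncc_loss(a, b, eps=1e-8):
    """Negative normalized cross-correlation between image batches of shape (N, 1, H, W).
    Minimizing this loss drives the two images toward maximal correlation."""
    a = a - a.mean(dim=(2, 3), keepdim=True)
    b = b - b.mean(dim=(2, 3), keepdim=True)
    num = (a * b).sum(dim=(2, 3))
    den = torch.sqrt((a ** 2).sum(dim=(2, 3)) * (b ** 2).sum(dim=(2, 3)) + eps)
    return -(num / den).mean()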
Collapse
Affiliation(s)
- Jongyeon Lee
- Department of Electrical Engineering, Korea Advanced Institute of Science and Technology (KAIST), Daejeon, Republic of Korea
| | - Byungjai Kim
- Department of Electrical Engineering, Korea Advanced Institute of Science and Technology (KAIST), Daejeon, Republic of Korea
| | - HyunWook Park
- Department of Electrical Engineering, Korea Advanced Institute of Science and Technology (KAIST), Daejeon, Republic of Korea
| |
Collapse
|
48
|
Montalt-Tordera J, Muthurangu V, Hauptmann A, Steeden JA. Machine learning in Magnetic Resonance Imaging: Image reconstruction. Phys Med 2021; 83:79-87. [DOI: 10.1016/j.ejmp.2021.02.020] [Citation(s) in RCA: 8] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 11/30/2020] [Accepted: 02/23/2021] [Indexed: 12/27/2022] Open
|
49
|
Huang MX, Huang CW, Harrington DL, Robb-Swan A, Angeles-Quinto A, Nichols S, Huang JW, Le L, Rimmele C, Matthews S, Drake A, Song T, Ji Z, Cheng CK, Shen Q, Foote E, Lerman I, Yurgil KA, Hansen HB, Naviaux RK, Dynes R, Baker DG, Lee RR. Resting-state magnetoencephalography source magnitude imaging with deep-learning neural network for classification of symptomatic combat-related mild traumatic brain injury. Hum Brain Mapp 2021; 42:1987-2004. [PMID: 33449442 PMCID: PMC8046098 DOI: 10.1002/hbm.25340] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/29/2020] [Revised: 11/16/2020] [Accepted: 12/23/2020] [Indexed: 12/20/2022] Open
Abstract
Combat‐related mild traumatic brain injury (cmTBI) is a leading cause of sustained physical, cognitive, emotional, and behavioral disabilities in Veterans and active‐duty military personnel. Accurate diagnosis of cmTBI is challenging since the symptom spectrum is broad and conventional neuroimaging techniques are insensitive to the underlying neuropathology. The present study developed a novel deep‐learning neural network method, 3D‐MEGNET, and applied it to resting‐state magnetoencephalography (rs‐MEG) source‐magnitude imaging data from 59 symptomatic cmTBI individuals and 42 combat‐deployed healthy controls (HCs). Analytic models of individual frequency bands and all bands together were tested. The All‐frequency model, which combined delta‐theta (1–7 Hz), alpha (8–12 Hz), beta (15–30 Hz), and gamma (30–80 Hz) frequency bands, outperformed models based on individual bands. The optimized 3D‐MEGNET method distinguished cmTBI individuals from HCs with excellent sensitivity (99.9 ± 0.38%) and specificity (98.9 ± 1.54%). Receiver‐operator‐characteristic curve analysis showed that diagnostic accuracy was 0.99. The gamma and delta‐theta band models outperformed alpha and beta band models. Among cmTBI individuals, but not controls, hyper delta‐theta and gamma‐band activity correlated with lower performance on neuropsychological tests, whereas hypo alpha and beta‐band activity also correlated with lower neuropsychological test performance. This study provides an integrated framework for condensing large source‐imaging variable sets into optimal combinations of regions and frequencies with high diagnostic accuracy and cognitive relevance in cmTBI. The all‐frequency model offered more discriminative power than each frequency‐band model alone. This approach offers an effective path for optimal characterization of behaviorally relevant neuroimaging features in neurological and psychiatric disorders.
Collapse
Affiliation(s)
- Ming-Xiong Huang
- Radiology, Research, and Psychiatry Services, VA San Diego Healthcare System, San Diego, California, USA.,Department of Radiology, University of California, San Diego, California, USA
| | - Charles W Huang
- Department of Bioengineering, Stanford University, Stanford, California, USA
| | - Deborah L Harrington
- Radiology, Research, and Psychiatry Services, VA San Diego Healthcare System, San Diego, California, USA.,Department of Radiology, University of California, San Diego, California, USA
| | - Ashley Robb-Swan
- Radiology, Research, and Psychiatry Services, VA San Diego Healthcare System, San Diego, California, USA.,Department of Radiology, University of California, San Diego, California, USA
| | - Annemarie Angeles-Quinto
- Radiology, Research, and Psychiatry Services, VA San Diego Healthcare System, San Diego, California, USA.,Department of Radiology, University of California, San Diego, California, USA
| | - Sharon Nichols
- Department of Neurosciences, University of California, San Diego, California, USA
| | - Jeffrey W Huang
- Department of Computer Science, Columbia University, New York, New York, USA
| | - Lu Le
- ASPIRE Center, VASDHS Residential Rehabilitation Treatment Program, San Diego, California, USA
| | - Carl Rimmele
- ASPIRE Center, VASDHS Residential Rehabilitation Treatment Program, San Diego, California, USA
| | - Scott Matthews
- ASPIRE Center, VASDHS Residential Rehabilitation Treatment Program, San Diego, California, USA
| | - Angela Drake
- Cedar Sinai Medical Group Chronic Pain Program, Beverly Hills, California, USA
| | - Tao Song
- Department of Radiology, University of California, San Diego, California, USA
| | - Zhengwei Ji
- Department of Radiology, University of California, San Diego, California, USA
| | - Chung-Kuan Cheng
- Department of Computer Science and Engineering, University of California, San Diego, California, USA
| | - Qian Shen
- Department of Radiology, University of California, San Diego, California, USA
| | - Ericka Foote
- Radiology, Research, and Psychiatry Services, VA San Diego Healthcare System, San Diego, California, USA
| | - Imanuel Lerman
- Radiology, Research, and Psychiatry Services, VA San Diego Healthcare System, San Diego, California, USA
| | - Kate A Yurgil
- Radiology, Research, and Psychiatry Services, VA San Diego Healthcare System, San Diego, California, USA.,Department of Psychological Sciences, Loyola University New Orleans, Louisiana, USA
| | - Hayden B Hansen
- Radiology, Research, and Psychiatry Services, VA San Diego Healthcare System, San Diego, California, USA
| | - Robert K Naviaux
- Department of Medicine, University of California, San Diego, California, USA.,Department of Pediatrics, University of California, San Diego, California, USA.,Department of Pathology, University of California, San Diego, California, USA
| | - Robert Dynes
- Department of Physics, University of California, San Diego, California, USA
| | - Dewleen G Baker
- Radiology, Research, and Psychiatry Services, VA San Diego Healthcare System, San Diego, California, USA.,VA Center of Excellence for Stress and Mental Health, San Diego, California, USA.,Department of Psychiatry, University of California, San Diego, California, USA
| | - Roland R Lee
- Radiology, Research, and Psychiatry Services, VA San Diego Healthcare System, San Diego, California, USA.,Department of Radiology, University of California, San Diego, California, USA
| |
Collapse
|
50
|
Cha E, Chung H, Kim EY, Ye JC. Unpaired Training of Deep Learning tMRA for Flexible Spatio-Temporal Resolution. IEEE TRANSACTIONS ON MEDICAL IMAGING 2021; 40:166-179. [PMID: 32915733 DOI: 10.1109/tmi.2020.3023620] [Citation(s) in RCA: 6] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/11/2023]
Abstract
Time-resolved MR angiography (tMRA) has been widely used for dynamic contrast-enhanced MRI (DCE-MRI) due to its highly accelerated acquisition. In tMRA, the periphery of k-space is sparsely sampled so that neighbouring frames can be merged to construct one temporal frame. However, this view-sharing scheme fundamentally limits the temporal resolution, and it is not possible to change the view-sharing number to achieve different spatio-temporal resolution trade-offs. Although many deep learning approaches have recently been proposed for MR reconstruction from sparse samples, the existing approaches usually require matched fully sampled k-space reference data for supervised training, which is not suitable for tMRA due to the lack of high spatio-temporal resolution ground-truth images. To address this problem, we propose a novel unpaired training scheme for deep learning using an optimal transport driven cycle-consistent generative adversarial network (cycleGAN). In contrast to the conventional cycleGAN with two pairs of generator and discriminator, the new architecture requires just a single generator-discriminator pair, which makes training much simpler yet still improves performance. Reconstruction results using in vivo tMRA and simulated datasets confirm that the proposed method can immediately generate high-quality reconstructions at various choices of view-sharing number, allowing a better trade-off between spatial and temporal resolution in time-resolved MR angiography.
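View sharing, as described at the start of this abstract, merges peripheral k-space samples from neighbouring frames into one reconstruction frame; the following schematic numpy sketch assumes a Cartesian layout and a nearest-frame-wins merging rule, which are illustrative choices rather than the tMRA protocol itself.

import numpy as np

def view_share(kspace_frames, masks, center):
    """Merge undersampled k-space from adjacent temporal frames.
    kspace_frames : (n_frames, ny, nx) measured data, zero where not sampled
    masks         : (n_frames, ny, nx) boolean sampling masks
    center        : index of the frame being reconstructed
    Frames closer to `center` overwrite samples contributed by farther frames.
    """
    merged = np.zeros_like(kspace_frames[0])
    filled = np.zeros(merged.shape, dtype=bool)
    order = np.argsort(np.abs(np.arange(len(kspace_frames)) - center))  # nearest frames first
    for idx in order:
        take = masks[idx] & ~filled
        merged[take] = kspace_frames[idx][take]
        filled |= take
    return merged

# increasing the number of shared neighbours improves spatial coverage but blurs
# temporal information, which is the trade-off the unpaired cycleGAN approach relaxes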
Collapse
|