51
Ghodrati V, Bydder M, Bedayat A, Prosper A, Yoshida T, Nguyen KL, Finn JP, Hu P. Temporally aware volumetric generative adversarial network-based MR image reconstruction with simultaneous respiratory motion compensation: Initial feasibility in 3D dynamic cine cardiac MRI. Magn Reson Med 2021; 86:2666-2683. [PMID: 34254363] [PMCID: PMC10172149] [DOI: 10.1002/mrm.28912]
Abstract
PURPOSE Develop a novel three-dimensional (3D) generative adversarial network (GAN)-based technique for simultaneous image reconstruction and respiratory motion compensation of 4D MRI. Our goal was to enable high acceleration factors (10.7X-15.8X) while maintaining robust and diagnostic image quality superior to state-of-the-art self-gating (SG) compressed sensing wavelet (CS-WV) reconstruction at lower acceleration factors (3.5X-7.9X). METHODS Our GAN was trained based on pixel-wise content loss functions, an adversarial loss function, and a novel data-driven temporally aware loss function to maintain anatomical accuracy and temporal coherence. Besides image reconstruction, our network also performs respiratory motion compensation for free-breathing scans. A novel progressive growing-based strategy was adapted to make the training process possible for the proposed GAN-based structure. The proposed method was developed and thoroughly evaluated qualitatively and quantitatively based on 3D cardiac cine data from 42 patients. RESULTS Our proposed method achieved significantly better scores in general image quality and image artifacts at 10.7X-15.8X acceleration than the SG CS-WV approach at 3.5X-7.9X acceleration (4.53 ± 0.540 vs. 3.13 ± 0.681 for general image quality, 4.12 ± 0.429 vs. 2.97 ± 0.434 for image artifacts, P < .05 for both). No spurious anatomical structures were observed in our images. The proposed method enabled similar cardiac-function quantification as conventional SG CS-WV. The proposed method achieved faster central processing unit-based image reconstruction (6 s/cardiac phase) than the SG CS-WV (312 s/cardiac phase). CONCLUSION The proposed method showed promising potential for high-resolution (1 mm³) free-breathing 4D MR data acquisition with simultaneous respiratory motion compensation and fast reconstruction time.
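The loss design summarized above (pixel-wise content terms, an adversarial term, and a temporally aware term) can be illustrated with a small PyTorch sketch. This is only a schematic of the general idea under assumed loss weightings and a finite-difference temporal term; it is not the authors' implementation.

```python
# Minimal sketch of a composite GAN reconstruction loss with a temporal-coherence
# term, in the spirit of the abstract above. Weights and the finite-difference
# formulation are illustrative assumptions, not the authors' implementation.
import torch
import torch.nn.functional as F

def composite_loss(recon, target, disc_scores,
                   w_content=1.0, w_adv=0.01, w_temporal=0.1):
    """recon, target: (batch, phases, x, y, z) cine series; disc_scores: discriminator output for recon."""
    # Pixel-wise content loss keeps anatomy close to the reference reconstruction.
    content = F.l1_loss(recon, target)
    # Non-saturating adversarial loss: the generator wants the discriminator to output "real".
    adversarial = F.binary_cross_entropy_with_logits(
        disc_scores, torch.ones_like(disc_scores))
    # Temporal term: phase-to-phase differences of the reconstruction should match
    # those of the reference, encouraging coherent cardiac motion.
    d_recon = recon[:, 1:] - recon[:, :-1]
    d_target = target[:, 1:] - target[:, :-1]
    temporal = F.l1_loss(d_recon, d_target)
    return w_content * content + w_adv * adversarial + w_temporal * temporal

# Toy usage with random tensors standing in for a 3D cine series.
recon = torch.rand(1, 8, 16, 16, 16, requires_grad=True)
target = torch.rand(1, 8, 16, 16, 16)
disc_scores = torch.randn(1, 1)
print(composite_loss(recon, target, disc_scores).item())
```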
Affiliation(s)
- Vahid Ghodrati
- Department of Radiological Sciences, David Geffen School of Medicine, University of California, Los Angeles, California, USA; Biomedical Physics Inter-Departmental Graduate Program, University of California, Los Angeles, California, USA
- Mark Bydder
- Department of Radiological Sciences, David Geffen School of Medicine, University of California, Los Angeles, California, USA
- Arash Bedayat
- Department of Radiological Sciences, David Geffen School of Medicine, University of California, Los Angeles, California, USA
- Ashley Prosper
- Department of Radiological Sciences, David Geffen School of Medicine, University of California, Los Angeles, California, USA
- Takegawa Yoshida
- Department of Radiological Sciences, David Geffen School of Medicine, University of California, Los Angeles, California, USA
- Kim-Lien Nguyen
- Department of Radiological Sciences, David Geffen School of Medicine, University of California, Los Angeles, California, USA; Biomedical Physics Inter-Departmental Graduate Program, University of California, Los Angeles, California, USA; Department of Medicine (Cardiology), David Geffen School of Medicine, University of California, Los Angeles, California, USA
- J Paul Finn
- Department of Radiological Sciences, David Geffen School of Medicine, University of California, Los Angeles, California, USA
- Peng Hu
- Department of Radiological Sciences, David Geffen School of Medicine, University of California, Los Angeles, California, USA; Biomedical Physics Inter-Departmental Graduate Program, University of California, Los Angeles, California, USA
52
Abstract
Clinical MRI systems have improved continually since their introduction in the 1980s. Advances in each MRI system component, including data acquisition, image reconstruction, and hardware, have influenced the others, and progress in one component has opened new development opportunities in the rest. New technologies from outside the MRI field, for example in computer science, data processing, and semiconductors, have been rapidly incorporated into MRI development, resulting in innovative applications. With high-performance computing and MR technology innovations, MRI can now provide large volumes of functional and anatomical image data that serve as important tools in many research fields. MRI systems are also combined with other modalities, such as positron emission tomography (PET), or with therapeutic devices, and these hybrid systems provide additional capabilities. In this review, we consider MRI advances over the last two decades and discuss the progress of MRI systems, the enabling technologies, established applications, current trends, and the future outlook.
Affiliation(s)
- Hiroyuki Kabasawa
- Department of Radiological Sciences, School of Health Sciences at Narita, International University of Health and Welfare
53
Khalid WB, Farhat N, Lavery L, Jarnagin J, Delany JP, Kim K. Non-invasive Assessment of Liver Fat in ob/ob Mice Using Ultrasound-Induced Thermal Strain Imaging and Its Correlation with Hepatic Triglyceride Content. Ultrasound Med Biol 2021; 47:1067-1076. [PMID: 33468357] [PMCID: PMC7936391] [DOI: 10.1016/j.ultrasmedbio.2020.12.014]
Abstract
Non-alcoholic fatty liver disease is the accumulation of triglycerides in the liver. In its progressive form, it can advance to steatohepatitis, fibrosis, cirrhosis, cancer and ultimately liver failure requiring transplantation. In a previous study, ultrasound-induced thermal strain imaging (US-TSI) was used to distinguish between excised fatty livers from obese mice and non-fatty livers from control mice. In this study, US-TSI was used to quantify the lipid composition of fatty livers in ob/ob mice (n = 28) at various steatosis stages. A strong correlation (R² = 0.85) was observed between lipid composition measured with US-TSI and hepatic triglyceride content, the reference measure used to quantify fat in the liver. The ob/ob mice were divided into three groups based on the degrees of steatosis used clinically: none, mild and moderate. A non-parametric Kruskal-Wallis test was conducted to determine whether US-TSI can differentiate among the steatosis grades in non-alcoholic fatty liver disease.
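The two statistics reported here, a coefficient of determination between the US-TSI lipid estimate and hepatic triglyceride content and a Kruskal-Wallis test across steatosis grades, can be computed with SciPy as in the sketch below. All numbers are synthetic stand-ins, not the study data.

```python
# Toy illustration of the statistics reported in the abstract: R^2 between an
# imaging-based lipid estimate and hepatic triglyceride content, and a
# Kruskal-Wallis test across steatosis grades. All values are synthetic.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
triglyceride = rng.uniform(5, 60, size=28)                 # reference measurement (synthetic)
us_tsi = 0.8 * triglyceride + rng.normal(0, 4, size=28)    # imaging estimate with noise

slope, intercept, r_value, p_value, stderr = stats.linregress(triglyceride, us_tsi)
print(f"R^2 = {r_value**2:.2f}, p = {p_value:.3g}")

# Group the imaging estimates by (synthetic) steatosis grade and test for differences.
grades = np.repeat([0, 1, 2], [10, 9, 9])                  # 0 = none, 1 = mild, 2 = moderate
groups = [us_tsi[grades == g] for g in range(3)]
h_stat, kw_p = stats.kruskal(*groups)
print(f"Kruskal-Wallis H = {h_stat:.2f}, p = {kw_p:.3g}")
```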
Affiliation(s)
- Waqas B Khalid
- Department of Bioengineering, University of Pittsburgh School of Engineering, Pittsburgh, Pennsylvania, USA
- Nadim Farhat
- Department of Bioengineering, University of Pittsburgh School of Engineering, Pittsburgh, Pennsylvania, USA
- Linda Lavery
- Center for Ultrasound Molecular Imaging and Therapeutics, Department of Medicine, University of Pittsburgh School of Medicine, Heart and Vascular Institute, University of Pittsburgh Medical Center
- Josh Jarnagin
- Department of Medicine, University of Pittsburgh School of Medicine, Pittsburgh, Pennsylvania, USA
- James P Delany
- Department of Medicine, University of Pittsburgh School of Medicine, Pittsburgh, Pennsylvania, USA
- Kang Kim
- Department of Bioengineering, University of Pittsburgh School of Engineering, Pittsburgh, Pennsylvania, USA; Center for Ultrasound Molecular Imaging and Therapeutics, Department of Medicine, University of Pittsburgh School of Medicine, Heart and Vascular Institute, University of Pittsburgh Medical Center; Department of Medicine, University of Pittsburgh School of Medicine, Pittsburgh, Pennsylvania, USA; Department of Mechanical Engineering and Materials Science, University of Pittsburgh School of Engineering, Pittsburgh, Pennsylvania, USA; McGowan Institute for Regenerative Medicine, University of Pittsburgh and University of Pittsburgh Medical Center, Pittsburgh, Pennsylvania, USA
54
Liu S, Thung KH, Qu L, Lin W, Shen D, Yap PT. Learning MRI artefact removal with unpaired data. Nat Mach Intell 2021. [DOI: 10.1038/s42256-020-00270-2]
55
Shimohira M, Kiyosue H, Osuga K, Gobara H, Kondo H, Nakazawa T, Matsui Y, Hamamoto K, Ishiguro T, Maruno M, Sugimoto K, Koganemaru M, Kitagawa A, Yamakado K. Location of embolization affects patency after coil embolization for pulmonary arteriovenous malformations: importance of time-resolved magnetic resonance angiography for diagnosis of patency. Eur Radiol 2021; 31:5409-5420. [PMID: 33449178] [DOI: 10.1007/s00330-020-07669-w]
Abstract
OBJECTIVES This study aimed to assess the diagnostic accuracy of computed tomography (CT) and time-resolved magnetic resonance angiography (TR-MRA) for patency after coil embolization of pulmonary arteriovenous malformations (PAVMs) and identify factors affecting patency. METHODS Data from the records of 205 patients with 378 untreated PAVMs were retrospectively analyzed. Differences in proportional reduction of the sac or draining vein on CT between occluded and patent PAVMs were examined, and receiver operating characteristic analysis was performed to assess the accuracy of CT using digital subtraction angiography (DSA) as the definitive diagnostic modality. The accuracy of TR-MRA was also assessed in comparison to DSA. Potential factors affecting patency, including sex, age, number of PAVMs, location of PAVMs, type of PAVM, and location of embolization, were evaluated. RESULTS The sensitivity, specificity, positive predictive value (PPV), negative predictive value (NPV), and accuracy of CT were 82%, 81%, 77%, 85%, and 82%, respectively, when the reduction rate threshold was set to 55%, which led to the highest diagnostic accuracy. The sensitivity, specificity, PPV, NPV, and accuracy of TR-MRA were 89%, 95%, 89%, 95%, and 93%, respectively. On both univariable and multivariable analyses, embolization distal to the last normal branch of the pulmonary artery was a factor significantly associated with prevention of patency. CONCLUSIONS TR-MRA appears to be an appropriate method for follow-up examinations due to its high accuracy for the diagnosis of patency after coil embolization of PAVMs. The location of embolization is a factor affecting patency. KEY POINTS • Diagnosis of patency after coil embolization for pulmonary arteriovenous malformations (PAVMs) is important because a patent PAVM can lead to neurologic complications. • The diagnostic accuracies of CT with a cutoff value of 55% and TR-MRA were 82% and 93%, respectively. • The positioning of the coils relative to the sac and the last normal branch of the artery was significant for preventing PAVM patency.
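The diagnostic performance figures quoted (sensitivity, specificity, PPV, NPV, and accuracy) all follow from a 2 x 2 confusion matrix against the DSA reference standard. The snippet below shows that arithmetic on hypothetical counts, not the study's actual data.

```python
# Diagnostic performance from a 2x2 table (patent vs. occluded on DSA).
# The counts below are hypothetical and only illustrate the arithmetic.
def diagnostic_metrics(tp, fp, fn, tn):
    sensitivity = tp / (tp + fn)          # patent PAVMs correctly called patent
    specificity = tn / (tn + fp)          # occluded PAVMs correctly called occluded
    ppv = tp / (tp + fp)                  # positive predictive value
    npv = tn / (tn + fn)                  # negative predictive value
    accuracy = (tp + tn) / (tp + fp + fn + tn)
    return sensitivity, specificity, ppv, npv, accuracy

sens, spec, ppv, npv, acc = diagnostic_metrics(tp=40, fp=12, fn=9, tn=50)
print(f"Sens {sens:.0%}, Spec {spec:.0%}, PPV {ppv:.0%}, NPV {npv:.0%}, Acc {acc:.0%}")
```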
Affiliation(s)
- Masashi Shimohira
- Department of Radiology, Nagoya City University Graduate School of Medical Sciences, Nagoya, 467-8601, Japan.
- Hiro Kiyosue
- Department of Radiology, Oita University, Yufu, Japan
- Keigo Osuga
- Department of Diagnostic and Interventional Radiology, Osaka University Graduate School of Medicine, Suita, Japan
- Department of Diagnostic Radiology, Osaka Medical College, Takatsuki, Japan
- Hideo Gobara
- Department of Radiology, Okayama University Medical School, Okayama, Japan
- Hiroshi Kondo
- Department of Radiology, Teikyo University School of Medicine, Itabashi, Tokyo, Japan
- Tetsuro Nakazawa
- Department of Diagnostic and Interventional Radiology, Osaka University Graduate School of Medicine, Suita, Japan
- Department of Diagnostic Imaging, Osaka General Medical Center, Osaka, Japan
- Yusuke Matsui
- Department of Radiology, Okayama University Medical School, Okayama, Japan
- Kohei Hamamoto
- Department of Radiology, Jichi Medical University, Saitama Medical Center, Saitama, Japan
- Tomoya Ishiguro
- Department of Neuro-Intervention, Osaka City General Hospital, Osaka, Japan
- Miyuki Maruno
- Department of Radiology, Oita University, Yufu, Japan
- Koji Sugimoto
- Department of Radiology, Kobe University Graduate School of Medicine, Kobe, Japan
- Akira Kitagawa
- Department of Radiology, Aichi Medical University, Nagakute, Japan
- Koichiro Yamakado
- Department of Radiology, Hyogo College of Medicine, Nishinomiya, Japan
56
Masoudi S, Harmon SA, Mehralivand S, Walker SM, Raviprakash H, Bagci U, Choyke PL, Turkbey B. Quick guide on radiology image pre-processing for deep learning applications in prostate cancer research. J Med Imaging (Bellingham) 2021; 8:010901. [PMID: 33426151] [PMCID: PMC7790158] [DOI: 10.1117/1.jmi.8.1.010901]
Abstract
Purpose: Deep learning has achieved major breakthroughs during the past decade in almost every field. There are plenty of publicly available algorithms, each designed to address a different task of computer vision in general. However, most of these algorithms cannot be directly applied to images in the medical domain. Herein, we focus on the pre-processing steps that should be applied to medical images before they are used with deep neural networks. Approach: To be able to employ the publicly available algorithms for clinical purposes, we must create a meaningful pixel/voxel representation of medical images that facilitates the learning process. Based on the ultimate goal expected from an algorithm (classification, detection, or segmentation), one may infer the required pre-processing steps that can ideally improve the performance of that algorithm. Required pre-processing steps for computed tomography (CT) and magnetic resonance (MR) images in their correct order are discussed in detail. We further support our discussion with relevant experiments that investigate the efficiency of the listed pre-processing steps. Results: Our experiments confirmed that applying appropriate image pre-processing in the right order can improve the performance of deep neural networks in terms of better classification and segmentation. Conclusions: This work investigates the appropriate pre-processing steps for CT and MR images of prostate cancer patients, supported by several experiments that can be useful for educating those new to the field (https://github.com/NIH-MIP/Radiology_Image_Preprocessing_for_Deep_Learning).
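A typical ordering of such pre-processing, resampling to a common voxel spacing, restricting the intensity range, then normalizing, is sketched below. The spacing, HU window, and percentile values are illustrative assumptions, not recommendations taken from the paper or its repository.

```python
# Sketch of a common pre-processing order for CT/MR volumes before a deep network:
# resample to a common spacing, clip the intensity range, then normalize.
# Spacing, clipping window and percentiles are illustrative choices only.
import numpy as np
from scipy.ndimage import zoom

def preprocess(volume, spacing, new_spacing=(1.0, 1.0, 1.0), modality="CT"):
    # 1) Resample to a common voxel spacing so the network sees consistent geometry.
    factors = [s / ns for s, ns in zip(spacing, new_spacing)]
    volume = zoom(volume, factors, order=1)

    # 2) Restrict the dynamic range: an HU window for CT, percentile clipping for MR
    #    (MR intensities are not calibrated across scanners).
    if modality == "CT":
        volume = np.clip(volume, -150, 250)
    else:
        lo, hi = np.percentile(volume, [1, 99])
        volume = np.clip(volume, lo, hi)

    # 3) Normalize to zero mean / unit variance before feeding the network.
    return (volume - volume.mean()) / (volume.std() + 1e-8)

vol = np.random.randn(40, 128, 128) * 100          # stand-in volume
out = preprocess(vol, spacing=(3.0, 0.8, 0.8), modality="MR")
print(out.shape, out.mean().round(3), out.std().round(3))
```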
Affiliation(s)
- Samira Masoudi
- National Cancer Institute, National Institutes of Health, Molecular Imaging Branch, Bethesda, Maryland, United States
- Stephanie A. Harmon
- National Cancer Institute, National Institutes of Health, Molecular Imaging Branch, Bethesda, Maryland, United States
- Sherif Mehralivand
- National Cancer Institute, National Institutes of Health, Molecular Imaging Branch, Bethesda, Maryland, United States
- Stephanie M. Walker
- National Cancer Institute, National Institutes of Health, Molecular Imaging Branch, Bethesda, Maryland, United States
- Harish Raviprakash
- National Institutes of Health, Department of Radiology and Imaging Sciences, Bethesda, Maryland, United States
- Ulas Bagci
- University of Central Florida, Orlando, Florida, United States
- Peter L. Choyke
- National Cancer Institute, National Institutes of Health, Molecular Imaging Branch, Bethesda, Maryland, United States
- Baris Turkbey
- National Cancer Institute, National Institutes of Health, Molecular Imaging Branch, Bethesda, Maryland, United States
57
van der Voort SR, Smits M, Klein S. DeepDicomSort: An Automatic Sorting Algorithm for Brain Magnetic Resonance Imaging Data. Neuroinformatics 2021; 19:159-184. [PMID: 32627144] [PMCID: PMC7782469] [DOI: 10.1007/s12021-020-09475-7]
Abstract
With the increasing size of datasets used in medical imaging research, the need for automated data curation is arising. One important data curation task is the structured organization of a dataset for preserving integrity and ensuring reusability. Therefore, we investigated whether this data organization step can be automated. To this end, we designed a convolutional neural network (CNN) that automatically recognizes eight different brain magnetic resonance imaging (MRI) scan types based on visual appearance. Thus, our method is unaffected by inconsistent or missing scan metadata. It can recognize pre-contrast T1-weighted (T1w), post-contrast T1-weighted (T1wC), T2-weighted (T2w), proton density-weighted (PDw) and derived maps (e.g. apparent diffusion coefficient and cerebral blood flow). In a first experiment, we used scans of subjects with brain tumors: 11065 scans of 719 subjects for training, and 2369 scans of 192 subjects for testing. The CNN achieved an overall accuracy of 98.7%. In a second experiment, we trained the CNN on all 13434 scans from the first experiment and tested it on 7227 scans of 1318 Alzheimer's subjects. Here, the CNN achieved an overall accuracy of 98.5%. In conclusion, our method can accurately predict scan type, and can quickly and automatically sort a brain MRI dataset virtually without the need for manual verification. In this way, our method can assist with properly organizing a dataset, which maximizes the shareability and integrity of the data.
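A toy version of this kind of slice-based scan-type classifier is sketched below in PyTorch; the layer sizes and the assumption of eight output classes on 2D slices are illustrative and do not reproduce the published DeepDicomSort architecture.

```python
# Minimal 2D CNN that maps an MRI slice to one of eight scan-type classes,
# loosely in the spirit of the sorting task described above. Architecture and
# sizes are illustrative, not the published DeepDicomSort network.
import torch
import torch.nn as nn

class ScanTypeCNN(nn.Module):
    def __init__(self, n_classes=8):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(1),
        )
        self.classifier = nn.Linear(64, n_classes)

    def forward(self, x):
        return self.classifier(self.features(x).flatten(1))

model = ScanTypeCNN()
slices = torch.randn(4, 1, 128, 128)      # batch of normalized slices
logits = model(slices)
print(logits.argmax(dim=1))               # predicted scan-type index per slice
```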
Affiliation(s)
- Sebastian R van der Voort
- Biomedical Imaging Group Rotterdam, Departments of Radiology and Nuclear Medicine and Medical Informatics, Erasmus MC - University Medical Centre Rotterdam, Rotterdam, The Netherlands.
- Marion Smits
- Department of Radiology and Nuclear Medicine, Erasmus MC - University Medical Centre Rotterdam, Rotterdam, The Netherlands
- Stefan Klein
- Biomedical Imaging Group Rotterdam, Departments of Radiology and Nuclear Medicine and Medical Informatics, Erasmus MC - University Medical Centre Rotterdam, Rotterdam, The Netherlands
58
Deep Convolutional Encoder-Decoder algorithm for MRI brain reconstruction. Med Biol Eng Comput 2020; 59:85-106. [PMID: 33231848] [DOI: 10.1007/s11517-020-02285-8]
Abstract
Compressed sensing magnetic resonance imaging (CS-MRI) is a challenging task, but it is an efficient technique for fast MRI acquisition that can be highly beneficial for several clinical routines. It can grant better scan quality by reducing the amount of motion artifacts as well as the contrast washout effect, and it also offers the possibility of reducing examination cost and patient anxiety. Recently, deep learning (DL) has been suggested for reconstructing MRI scans while conserving structural details and improving parallel imaging-based fast MRI. In this paper, we propose a deep convolutional encoder-decoder architecture for CS-MRI reconstruction. Such an architecture bridges the gap between non-learning techniques, which use data from only one image, and approaches that use large training data. The proposed approach is based on an autoencoder architecture divided into two parts: an encoder and a decoder, each consisting essentially of three convolutional blocks. The proposed architecture was evaluated on two databases: the Hammersmith dataset (for normal scans) and MICCAI 2018 (for pathological MRI). Moreover, we extend our model to cope with noisy pathological MRI scans. The normalized mean square error (NMSE), peak signal-to-noise ratio (PSNR), and structural similarity index (SSIM) were adopted as evaluation metrics to assess the performance of the proposed architecture and to compare it with state-of-the-art reconstruction algorithms. The higher PSNR and SSIM values and the lower NMSE values attest that the proposed architecture offers better reconstruction and preserves textural image details. Furthermore, the running time is about 0.8 s, which is suitable for real-time processing. Such results could encourage neurologists to adopt it in their clinical routines.
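A minimal sketch of a convolutional encoder-decoder with three convolutional blocks on each side, in the spirit of the architecture outlined above, is given below; the channel counts, pooling and upsampling choices, and input size are assumptions, not the published model.

```python
# Sketch of a convolutional encoder-decoder with three convolutional blocks on
# each side, echoing the architecture outlined above. Channel counts and kernel
# sizes are illustrative assumptions, not the published model.
import torch
import torch.nn as nn

def conv_block(in_ch, out_ch):
    return nn.Sequential(nn.Conv2d(in_ch, out_ch, 3, padding=1),
                         nn.BatchNorm2d(out_ch), nn.ReLU())

class EncoderDecoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            conv_block(1, 32), nn.MaxPool2d(2),
            conv_block(32, 64), nn.MaxPool2d(2),
            conv_block(64, 128), nn.MaxPool2d(2),
        )
        self.decoder = nn.Sequential(
            nn.Upsample(scale_factor=2), conv_block(128, 64),
            nn.Upsample(scale_factor=2), conv_block(64, 32),
            nn.Upsample(scale_factor=2), conv_block(32, 1),
        )

    def forward(self, x):
        return self.decoder(self.encoder(x))

undersampled = torch.randn(1, 1, 256, 256)   # zero-filled CS-MRI input slice
reconstructed = EncoderDecoder()(undersampled)
print(reconstructed.shape)
```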
59
An H, Shin HG, Ji S, Jung W, Oh S, Shin D, Park J, Lee J. DeepResp: Deep learning solution for respiration-induced B0 fluctuation artifacts in multi-slice GRE. Neuroimage 2020; 224:117432. [PMID: 33038539] [DOI: 10.1016/j.neuroimage.2020.117432]
Abstract
Respiration-induced B0 fluctuation corrupts MRI images by inducing phase errors in k-space. A few approaches, such as navigators, have been proposed to correct for the artifacts at the expense of sequence modification. In this study, a new deep learning method, which is referred to as DeepResp, is proposed for reducing the respiration artifacts in multi-slice gradient echo (GRE) images. DeepResp is designed to extract the respiration-induced phase errors from a complex image using deep neural networks. Then, the network-generated phase errors are applied to the k-space data, creating an artifact-corrected image. For network training, the computer-simulated images were generated using artifact-free images and respiration data. When evaluated, both simulated images and in-vivo images of two different breathing conditions (deep breathing and natural breathing) show improvements (simulation: normalized root-mean-square error (NRMSE) from 7.8 ± 5.2% to 1.3 ± 0.6%; structural similarity (SSIM) from 0.88 ± 0.08 to 0.99 ± 0.01; ghost-to-signal-ratio (GSR) from 7.9 ± 7.2% to 0.6 ± 0.6%; deep breathing: NRMSE from 13.9 ± 4.6% to 5.8 ± 1.4%; SSIM from 0.86 ± 0.03 to 0.95 ± 0.01; GSR 20.2 ± 10.2% to 5.7 ± 2.3%; natural breathing: NRMSE from 5.2 ± 3.3% to 4.0 ± 2.5%; SSIM from 0.94 ± 0.04 to 0.97 ± 0.02; GSR 5.7 ± 5.0% to 2.8 ± 1.1%). Our approach does not require any modification of the sequence or additional hardware, and may therefore find useful applications. Furthermore, the deep neural networks extract respiration-induced phase errors, which is more interpretable and reliable than the results of end-to-end trained networks.
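The correction step described, applying the network-estimated per-line phase errors to the acquired k-space before the inverse Fourier transform, can be written compactly in NumPy. In the sketch below the "estimated" phases are random stand-ins for a network output; this illustrates the principle, not the DeepResp pipeline itself.

```python
# Sketch of the correction step: remove per-phase-encoding-line phase errors
# (here random stand-ins for the network's estimates) from k-space, then
# inverse-FFT back to an image.
import numpy as np

rng = np.random.default_rng(1)
image = rng.random((256, 256))                          # stand-in artifact-free slice
kspace = np.fft.fftshift(np.fft.fft2(image))

# Respiration-induced B0 fluctuation adds one phase offset per phase-encoding line.
true_phase = 0.5 * np.sin(np.linspace(0, 8 * np.pi, 256))   # simulated breathing pattern (rad)
corrupted = kspace * np.exp(1j * true_phase)[:, None]       # phase-encoding along axis 0

# A DeepResp-like network would estimate these phases from the corrupted image;
# here we pretend the estimate is the truth plus a small error.
estimated_phase = true_phase + rng.normal(0, 0.02, size=256)
corrected = corrupted * np.exp(-1j * estimated_phase)[:, None]

artifact_img = np.abs(np.fft.ifft2(np.fft.ifftshift(corrupted)))
corrected_img = np.abs(np.fft.ifft2(np.fft.ifftshift(corrected)))
print("RMSE before:", np.sqrt(np.mean((artifact_img - image) ** 2)).round(4))
print("RMSE after: ", np.sqrt(np.mean((corrected_img - image) ** 2)).round(4))
```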
Affiliation(s)
- Hongjun An
- Laboratory for Imaging Science and Technology, Department of Electrical and Computer Engineering, Seoul National University, Seoul, South Korea
- Hyeong-Geol Shin
- Laboratory for Imaging Science and Technology, Department of Electrical and Computer Engineering, Seoul National University, Seoul, South Korea
- Sooyeon Ji
- Laboratory for Imaging Science and Technology, Department of Electrical and Computer Engineering, Seoul National University, Seoul, South Korea
- Woojin Jung
- Laboratory for Imaging Science and Technology, Department of Electrical and Computer Engineering, Seoul National University, Seoul, South Korea
- Sehong Oh
- Division of Biomedical Engineering, Hankuk University of Foreign Studies, Gyeonggi-do, South Korea
- Dongmyung Shin
- Laboratory for Imaging Science and Technology, Department of Electrical and Computer Engineering, Seoul National University, Seoul, South Korea
- Juhyung Park
- Laboratory for Imaging Science and Technology, Department of Electrical and Computer Engineering, Seoul National University, Seoul, South Korea
- Jongho Lee
- Laboratory for Imaging Science and Technology, Department of Electrical and Computer Engineering, Seoul National University, Seoul, South Korea.
60
Steeden JA, Quail M, Gotschy A, Mortensen KH, Hauptmann A, Arridge S, Jones R, Muthurangu V. Rapid whole-heart CMR with single volume super-resolution. J Cardiovasc Magn Reson 2020; 22:56. [PMID: 32753047] [PMCID: PMC7405461] [DOI: 10.1186/s12968-020-00651-x]
Abstract
BACKGROUND Three-dimensional, whole heart, balanced steady state free precession (WH-bSSFP) sequences provide delineation of intra-cardiac and vascular anatomy. However, they have long acquisition times. Here, we propose significant speed-ups using a deep-learning single volume super-resolution reconstruction, to recover high-resolution features from rapidly acquired low-resolution WH-bSSFP images. METHODS A 3D residual U-Net was trained using synthetic data, created from a library of 500 high-resolution WH-bSSFP images by simulating 50% slice resolution and 50% phase resolution. The trained network was validated with 25 synthetic test data sets. Additionally, prospective low-resolution data and high-resolution data were acquired in 40 patients. In the prospective data, vessel diameters, quantitative and qualitative image quality, and diagnostic scoring were compared between the low-resolution, super-resolution and reference high-resolution WH-bSSFP data. RESULTS The synthetic test data showed a significant increase in image quality of the low-resolution images after super-resolution reconstruction. Prospective low-resolution data were acquired ~3× faster than the prospective high-resolution data (173 s vs 488 s). Super-resolution reconstruction of the low-resolution data took < 1 s per volume. Qualitative image scores showed super-resolved images had better edge sharpness, fewer residual artefacts and less image distortion than low-resolution images, with similar scores to high-resolution data. Quantitative image scores showed super-resolved images had significantly better edge sharpness than low-resolution or high-resolution images, with significantly better signal-to-noise ratio than high-resolution data. Vessel diameter measurements showed over-estimation in the low-resolution measurements, compared to the high-resolution data. No significant differences and no bias were found in the super-resolution measurements in any of the great vessels. However, a small but significant underestimation was found in the proximal left coronary artery diameter measurement from super-resolution data. Diagnostic scoring showed that although super-resolution did not improve accuracy of diagnosis, it did improve diagnostic confidence compared to low-resolution imaging. CONCLUSION This paper demonstrates the potential of using a residual U-Net for super-resolution reconstruction of rapidly acquired low-resolution whole heart bSSFP data within a clinical setting. We were able to train the network using synthetic training data from retrospective high-resolution whole heart data. The resulting network can be applied very quickly, making these techniques particularly appealing within busy clinical workflow. Thus, we believe that this technique may help speed up whole heart CMR in clinical practice.
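One common way to emulate the 50% slice- and phase-resolution degradation used here for synthetic training data is to zero the outer k-space along those directions and reconstruct with zero filling, as sketched below. This is an assumed illustration; the authors' exact simulation may differ.

```python
# Sketch of generating a synthetic low-resolution training input from a
# high-resolution volume by zeroing the outer half of k-space along the
# slice (z) and phase-encode (y) directions, then zero-filled reconstruction.
import numpy as np

def simulate_low_res(volume, keep_fraction=0.5):
    k = np.fft.fftshift(np.fft.fftn(volume))
    nz, ny, nx = k.shape
    mask = np.zeros_like(k)
    kz = int(nz * keep_fraction / 2)
    ky = int(ny * keep_fraction / 2)
    # Keep only the central k-space in the slice and phase directions.
    mask[nz // 2 - kz: nz // 2 + kz, ny // 2 - ky: ny // 2 + ky, :] = 1
    return np.abs(np.fft.ifftn(np.fft.ifftshift(k * mask)))

hi_res = np.random.rand(64, 128, 128)          # stand-in whole-heart volume
lo_res = simulate_low_res(hi_res)              # blurred along z and y, same matrix size
print(hi_res.shape, lo_res.shape)
```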
Affiliation(s)
- Jennifer A Steeden
- UCL Centre for Cardiovascular Imaging, Institute of Cardiovascular Science, University College London, 30 Guildford Street, London, WC1N 1EH, UK.
- Michael Quail
- UCL Centre for Cardiovascular Imaging, Institute of Cardiovascular Science, University College London, 30 Guildford Street, London, WC1N 1EH, UK
- Great Ormond Street Hospital, London, WC1N 3JH, UK
- Alexander Gotschy
- Great Ormond Street Hospital, London, WC1N 3JH, UK
- Institute for Biomedical Engineering, University and ETH Zurich, Zurich, Switzerland
- Andreas Hauptmann
- Department of Computer Science, University College London, London, WC1E 6BT, UK
- Research Unit of Mathematical Sciences, University of Oulu, Oulu, Finland
- Simon Arridge
- Department of Computer Science, University College London, London, WC1E 6BT, UK
- Rodney Jones
- UCL Centre for Cardiovascular Imaging, Institute of Cardiovascular Science, University College London, 30 Guildford Street, London, WC1N 1EH, UK
- Vivek Muthurangu
- UCL Centre for Cardiovascular Imaging, Institute of Cardiovascular Science, University College London, 30 Guildford Street, London, WC1N 1EH, UK
61
Nguyen XV, Oztek MA, Nelakurti DD, Brunnquell CL, Mossa-Basha M, Haynor DR, Prevedello LM. Applying Artificial Intelligence to Mitigate Effects of Patient Motion or Other Complicating Factors on Image Quality. Top Magn Reson Imaging 2020; 29:175-180. [PMID: 32511198] [DOI: 10.1097/rmr.0000000000000249]
Abstract
Artificial intelligence, particularly deep learning, offers several possibilities to improve the quality or speed of image acquisition in magnetic resonance imaging (MRI). In this article, we briefly review basic machine learning concepts and discuss commonly used neural network architectures for image-to-image translation. Recent examples in the literature describing application of machine learning techniques to clinical MR image acquisition or postprocessing are discussed. Machine learning can contribute to better image quality by improving spatial resolution, reducing image noise, and removing undesired motion or other artifacts. As patients occasionally are unable to tolerate lengthy acquisition times or gadolinium agents, machine learning can potentially assist MRI workflow and patient comfort by facilitating faster acquisitions or reducing exogenous contrast dosage. Although artificial intelligence approaches often have limitations, such as problems with generalizability or explainability, there is potential for these techniques to improve diagnostic utility, throughput, and patient experience in clinical MRI practice.
Affiliation(s)
- Xuan V Nguyen
- Department of Radiology, The Ohio State University Wexner Medical Center, Columbus, OH
- Murat Alp Oztek
- Department of Radiology, University of Washington School of Medicine, Seattle, WA
- Seattle Children's Hospital, Seattle, WA
- Devi D Nelakurti
- Metro Early College High School, The Ohio State University, Columbus, OH
- Mahmud Mossa-Basha
- Department of Radiology, University of Washington School of Medicine, Seattle, WA
- David R Haynor
- Department of Radiology, University of Washington School of Medicine, Seattle, WA
- Luciano M Prevedello
- Department of Radiology, The Ohio State University Wexner Medical Center, Columbus, OH
62
Werth K, Ledbetter L. Artificial Intelligence in Head and Neck Imaging: A Glimpse into the Future. Neuroimaging Clin N Am 2020; 30:359-368. [PMID: 32600636] [DOI: 10.1016/j.nic.2020.04.004]
Abstract
Artificial intelligence, specifically machine learning and deep learning, is a rapidly developing field in imaging sciences with the potential to improve the efficiency and effectiveness of radiologists. This review covers common technical terms and basic concepts in imaging artificial intelligence and briefly reviews the application of these techniques to general imaging as well as head and neck imaging. Artificial intelligence has the potential to contribute improvements to all areas of patient care, including image acquisition, processing, segmentation, automated detection of findings, integration of clinical information, quality improvement, and research. Numerous challenges remain, however, before widespread imaging clinical adoption and integration occur.
Affiliation(s)
- Kyle Werth
- Department of Radiology, University of Kansas Medical Center, 3901 Rainbow Boulevard, Mailstop 4032, Kansas City, KS 66160, USA
- Luke Ledbetter
- Department of Radiology, David Geffen School of Medicine at UCLA, 757 Westwood Plaza, Suite 1621D, Los Angeles, CA 90095, USA.
63
Kromrey ML, Tamada D, Johno H, Funayama S, Nagata N, Ichikawa S, Kühn JP, Onishi H, Motosugi U. Reduction of respiratory motion artifacts in gadoxetate-enhanced MR with a deep learning-based filter using convolutional neural network. Eur Radiol 2020; 30:5923-5932. [PMID: 32556463] [PMCID: PMC7651696] [DOI: 10.1007/s00330-020-07006-1]
Abstract
Objectives To reveal the utility of motion artifact reduction with convolutional neural network (MARC) in gadoxetate disodium–enhanced multi-arterial phase MRI of the liver. Methods This retrospective study included 192 patients (131 men, 68.7 ± 10.3 years) receiving gadoxetate disodium–enhanced liver MRI in 2017. Datasets were submitted to a newly developed filter (MARC), which consists of 7 convolutional layers and was trained on 14,190 cropped images generated from abdominal MR images. Motion artifact for training was simulated by adding periodic k-space domain noise to the images. Original and filtered images of pre-contrast and 6 arterial phases (7 image sets per patient resulting in 1344 sets in total) were evaluated regarding motion artifacts on a 4-point scale. Lesion conspicuity in original and filtered images was ranked by side-by-side comparison. Results Of the 1344 original image sets, motion artifact score was 2 in 597, 3 in 165, and 4 in 54 sets. MARC significantly improved image quality over all phases showing an average motion artifact score of 1.97 ± 0.72 compared to 2.53 ± 0.71 in original MR images (p < 0.001). MARC improved motion scores from 2 to 1 in 177/596 (29.65%), from 3 to 2 in 119/165 (72.12%), and from 4 to 3 in 34/54 sets (62.96%). Lesion conspicuity was significantly improved (p < 0.001) without removing anatomical details. Conclusions Motion artifacts and lesion conspicuity of gadoxetate disodium–enhanced arterial phase liver MRI were significantly improved by the MARC filter, especially in cases with substantial artifacts. This method can be of high clinical value in subjects with failed breath-holds during the scan. Key Points • This study presents a newly developed deep learning–based filter for artifact reduction using convolutional neural network (motion artifact reduction with convolutional neural network, MARC). • MARC significantly improved MR image quality after gadoxetate disodium administration by reducing motion artifacts, especially in cases with severely degraded images. • Postprocessing with MARC led to better lesion conspicuity without removing anatomical details.
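The training-data generation described, adding periodic k-space domain noise to artifact-free images, can be mimicked with a few lines of NumPy; the modulation amplitude and period below are arbitrary choices, not the values used for MARC.

```python
# Minimal sketch of simulating motion artifacts for training by adding a
# periodic phase modulation to the k-space lines of a clean image, in the
# spirit of the training-data generation described above. Parameters are arbitrary.
import numpy as np

def add_periodic_kspace_noise(image, amplitude=1.0, period=24):
    k = np.fft.fftshift(np.fft.fft2(image))
    lines = np.arange(image.shape[0])
    phase = amplitude * np.sin(2 * np.pi * lines / period)   # periodic error per PE line
    k_corrupted = k * np.exp(1j * phase)[:, None]
    return np.abs(np.fft.ifft2(np.fft.ifftshift(k_corrupted)))

clean = np.random.rand(256, 256)               # stand-in liver slice (training target)
corrupted = add_periodic_kspace_noise(clean)   # ghosted input paired with the clean target
print(np.abs(corrupted - clean).mean().round(4))
```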
Affiliation(s)
- M-L Kromrey
- Department of Radiology, University of Yamanashi, 1110 Shimokato, Chuo, Yamanashi, 409-3898, Japan.
- Department of Diagnostic Radiology and Neuroradiology, University Medicine Greifswald, Greifswald, Germany.
- D Tamada
- Department of Radiology, University of Yamanashi, 1110 Shimokato, Chuo, Yamanashi, 409-3898, Japan
- H Johno
- Department of Radiology, University of Yamanashi, 1110 Shimokato, Chuo, Yamanashi, 409-3898, Japan
- S Funayama
- Department of Radiology, University of Yamanashi, 1110 Shimokato, Chuo, Yamanashi, 409-3898, Japan
- N Nagata
- Department of Radiology, University of Yamanashi, 1110 Shimokato, Chuo, Yamanashi, 409-3898, Japan
- S Ichikawa
- Department of Radiology, University of Yamanashi, 1110 Shimokato, Chuo, Yamanashi, 409-3898, Japan
- J-P Kühn
- Institute of Diagnostic and Interventional Radiology, University Medicine, Carl-Gustav Carus University, Dresden, Germany
- H Onishi
- Department of Radiology, University of Yamanashi, 1110 Shimokato, Chuo, Yamanashi, 409-3898, Japan
- U Motosugi
- Department of Radiology, University of Yamanashi, 1110 Shimokato, Chuo, Yamanashi, 409-3898, Japan
64
Liu J, Kocak M, Supanich M, Deng J. Motion artifacts reduction in brain MRI by means of a deep residual network with densely connected multi-resolution blocks (DRN-DCMB). Magn Reson Imaging 2020; 71:69-79. [PMID: 32428549] [DOI: 10.1016/j.mri.2020.05.002]
Abstract
OBJECTIVE Magnetic resonance imaging (MRI) acquisition is inherently sensitive to motion, and motion artifact reduction is essential for improving image quality in MRI. METHODS We developed a deep residual network with densely connected multi-resolution blocks (DRN-DCMB) model to reduce the motion artifacts in T1 weighted (T1W) spin echo images acquired on different imaging planes before and after contrast injection. The DRN-DCMB network consisted of multiple multi-resolution blocks connected with dense connections in a feedforward manner. A single residual unit was used to connect the input and output of the entire network with one shortcut connection to predict a residual image (i.e. artifact image). The model was trained with five motion-free T1W image stacks (pre-contrast axial and sagittal, and post-contrast axial, coronal, and sagittal images) with simulated motion artifacts. RESULTS In 86 other testing image stacks with simulated artifacts, our DRN-DCMB model outperformed other state-of-the-art deep learning models with significantly higher structural similarity index (SSIM) and improvement in signal-to-noise ratio (ISNR). The DRN-DCMB model was also applied to 121 testing image stacks that presented various degrees of real motion artifacts. The acquired images and the images processed by the DRN-DCMB model were randomly mixed, and image quality was blindly evaluated by a neuroradiologist. The DRN-DCMB model significantly improved the overall image quality, reduced the severity of the motion artifacts, and improved the image sharpness, while preserving the image contrast. CONCLUSION Our DRN-DCMB model provided an effective method for reducing motion artifacts and improving the overall clinical image quality of brain MRI.
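The global residual design mentioned above, a single shortcut so the network predicts the artifact image that is then subtracted from the input, can be expressed in a few lines of PyTorch. The inner layers here are a trivial placeholder, not the densely connected multi-resolution blocks of DRN-DCMB.

```python
# Sketch of the global residual idea: the network predicts the artifact image
# and the corrected image is input minus prediction. The inner layers are a
# trivial placeholder, not the DRN-DCMB multi-resolution blocks.
import torch
import torch.nn as nn

class ResidualArtifactNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.body = nn.Sequential(                 # placeholder for the dense multi-resolution blocks
            nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 1, 3, padding=1),
        )

    def forward(self, x):
        artifact = self.body(x)                    # network output = estimated artifact image
        return x - artifact                        # shortcut connection yields the corrected image

corrupted = torch.randn(1, 1, 256, 256)
corrected = ResidualArtifactNet()(corrupted)
print(corrected.shape)
```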
Affiliation(s)
- Junchi Liu
- Department of Electrical and Computer Engineering, Illinois Institute of Technology, 10 W 35th St, Chicago, IL 60616, USA
- Mehmet Kocak
- Department of Diagnostic Radiology and Nuclear Medicine, Rush University Medical Center, 1653 W. Congress Pkwy, Chicago, IL 60612, USA
- Mark Supanich
- Department of Diagnostic Radiology and Nuclear Medicine, Rush University Medical Center, 1653 W. Congress Pkwy, Chicago, IL 60612, USA
- Jie Deng
- Department of Diagnostic Radiology and Nuclear Medicine, Rush University Medical Center, 1653 W. Congress Pkwy, Chicago, IL 60612, USA.
65
Park HJ, Park B, Lee SS. Radiomics and Deep Learning: Hepatic Applications. Korean J Radiol 2020; 21:387-401. [PMID: 32193887] [PMCID: PMC7082656] [DOI: 10.3348/kjr.2019.0752]
Abstract
Radiomics and deep learning have recently gained attention in the imaging assessment of various liver diseases. Recent research has demonstrated the potential utility of radiomics and deep learning in staging liver fibrosis, detecting portal hypertension, characterizing focal hepatic lesions, prognosticating malignant hepatic tumors, and segmenting the liver and liver tumors. In this review, we outline the basic technical aspects of radiomics and deep learning and summarize recent investigations of the application of these techniques in liver disease.
Affiliation(s)
- Hyo Jung Park
- Department of Radiology and Research Institute of Radiology, Asan Medical Center, University of Ulsan College of Medicine, Seoul, Korea
- Bumwoo Park
- Health Innovation Big Data Center, Asan Institute for Life Sciences, Asan Medical Center, Seoul, Korea
- Seung Soo Lee
- Department of Radiology and Research Institute of Radiology, Asan Medical Center, University of Ulsan College of Medicine, Seoul, Korea.
66
Kojima S. [2. Programming for Magnetic Resonance Imaging]. Nihon Hoshasen Gijutsu Gakkai Zasshi 2020; 76:613-619. [PMID: 32565520] [DOI: 10.6009/jjrt.2020_jsrt_76.6.613]
Affiliation(s)
- Shinya Kojima
- Department of Radiology, Tokyo Women's Medical University Medical Center East