101
Tong Q, Gong T, He H, Wang Z, Yu W, Zhang J, Zhai L, Cui H, Meng X, Tax CWM, Zhong J. A deep learning-based method for improving reliability of multicenter diffusion kurtosis imaging with varied acquisition protocols. Magn Reson Imaging 2020;73:31-44. [PMID: 32822818] [DOI: 10.1016/j.mri.2020.08.001]
Abstract
Multicenter magnetic resonance imaging is gaining popularity in large-sample projects. Because variations in hardware and software across centers cause unavoidable data heterogeneity, the impact of this heterogeneity on the reliability of study outcomes has drawn much attention recently. One fundamental issue is how to derive model parameters reliably from image data of varying quality. This issue is even more challenging for advanced diffusion methods such as diffusion kurtosis imaging (DKI). Recently, deep learning-based methods have demonstrated their potential for robust and efficient computation of diffusion-derived measures. Inspired by these approaches, the current study designed a framework based on a three-dimensional hierarchical convolutional neural network to jointly reconstruct and harmonize DKI measures from multicenter acquisitions, mapping them to a state-of-the-art reference scanner using data from traveling subjects. The results from the harmonized data acquired with different protocols show that: 1) the inter-scanner variation of DKI measures within white matter was reduced by 51.5% in mean kurtosis, 65.9% in axial kurtosis, 53.7% in radial kurtosis, and 61.5% in kurtosis fractional anisotropy; 2) the data reliability of each single scanner was enhanced and brought to the level of the reference scanner; and 3) the harmonization network was able to reconstruct reliable DKI values from highly variable data. Overall, the results demonstrate the feasibility of the proposed deep learning-based method for DKI harmonization and help to simplify the protocol setup procedure for multicenter scanners with different hardware and software configurations.
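The variation reductions reported above boil down to comparing the inter-scanner coefficient of variation (CV) of a DKI measure before and after harmonization in a traveling-subject design. A minimal sketch of that comparison (the function names and synthetic scanner biases are illustrative, not the paper's actual ROI analysis):

```python
import numpy as np

def inter_scanner_cv(measures):
    """Per-voxel coefficient of variation of a DKI measure across scanners.

    measures: array of shape (n_scanners, n_voxels), the same white-matter
    voxels measured on each scanner (traveling-subject design).
    """
    return np.std(measures, axis=0) / np.mean(measures, axis=0)

def variation_reduction(before, after):
    """Percent reduction in mean inter-scanner CV after harmonization."""
    cv_b = inter_scanner_cv(before).mean()
    cv_a = inter_scanner_cv(after).mean()
    return 100.0 * (cv_b - cv_a) / cv_b

# Synthetic example: three scanners whose multiplicative bias is shrunk
# by harmonization (all values hypothetical)
rng = np.random.default_rng(0)
truth = rng.uniform(0.8, 1.2, size=1000)        # "true" mean kurtosis per voxel
bias = np.array([0.95, 1.00, 1.10])[:, None]    # scanner-dependent bias
before = truth * bias + rng.normal(0, 0.01, (3, 1000))
after = truth * (1 + 0.3 * (bias - 1)) + rng.normal(0, 0.01, (3, 1000))
print(f"{variation_reduction(before, after):.1f}% reduction")
```

In the paper's setting, `before` and `after` would hold co-registered white-matter voxel values of, e.g., mean kurtosis from each scanner.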
Affiliation(s)
- Qiqi Tong
- Center for Brain Imaging Science and Technology, Key Laboratory for Biomedical Engineering of Ministry of Education, College of Biomedical Engineering and Instrumental Science, Zhejiang University, Hangzhou, Zhejiang, China; Research Center for Healthcare Data Science, Zhejiang Lab, Hangzhou, Zhejiang, China.
- Ting Gong
- Center for Brain Imaging Science and Technology, Key Laboratory for Biomedical Engineering of Ministry of Education, College of Biomedical Engineering and Instrumental Science, Zhejiang University, Hangzhou, Zhejiang, China.
- Hongjian He
- Center for Brain Imaging Science and Technology, Key Laboratory for Biomedical Engineering of Ministry of Education, College of Biomedical Engineering and Instrumental Science, Zhejiang University, Hangzhou, Zhejiang, China.
- Zheng Wang
- Institute of Neuroscience, CAS Center for Excellence in Brain Science and Intelligence Technology, State Key Laboratory of Neuroscience, CAS Key Laboratory of Primate Neurobiology, Chinese Academy of Sciences, Shanghai, China.
- Wenwen Yu
- Institute of Neuroscience, CAS Center for Excellence in Brain Science and Intelligence Technology, State Key Laboratory of Neuroscience, CAS Key Laboratory of Primate Neurobiology, Chinese Academy of Sciences, Shanghai, China.
- Jianjun Zhang
- Department of Radiology, Zhejiang Hospital, Hangzhou, Zhejiang, China.
- Lihao Zhai
- Department of Radiology, Zhejiang Hospital, Hangzhou, Zhejiang, China.
- Hongsheng Cui
- Department of Radiology, The Third Affiliated Hospital of Qiqihar Medical University, Qiqihar, Heilongjiang, China.
- Xin Meng
- Department of Radiology, The Third Affiliated Hospital of Qiqihar Medical University, Qiqihar, Heilongjiang, China.
- Chantal W M Tax
- Cardiff University Brain Research Imaging Centre (CUBRIC), School of Physics and Astronomy, Cardiff University, Cardiff, United Kingdom.
- Jianhui Zhong
- Center for Brain Imaging Science and Technology, Key Laboratory for Biomedical Engineering of Ministry of Education, College of Biomedical Engineering and Instrumental Science, Zhejiang University, Hangzhou, Zhejiang, China; Department of Imaging Sciences, University of Rochester, Rochester, NY, USA.
102
Jung S, Lee H, Ryu K, Song JE, Park M, Moon WJ, Kim DH. Artificial neural network for multi-echo gradient echo-based myelin water fraction estimation. Magn Reson Med 2020;85:380-389. [PMID: 32686208] [DOI: 10.1002/mrm.28407]
Abstract
PURPOSE To demonstrate robust myelin water fraction (MWF) mapping using an artificial neural network (ANN) with the multi-echo gradient-echo (GRE) signal. METHODS Multi-echo gradient-echo signals simulated with a three-pool exponential model were used to generate the training data set for the ANN, which was designed to yield the MWF. We investigated the performance of the proposed ANN under various conditions using both numerical simulations and in vivo data. Simulations were conducted at various SNRs to investigate the performance of the ANN. In vivo data with high spatial resolution were used in the analyses, and results were compared with MWFs derived by a nonlinear least-squares algorithm using a complex three-pool exponential model. RESULTS The network results for the simulations show greater robustness to noise than the nonlinear least-squares MWFs: an RMS error of 5.46 for the nonlinear least-squares MWF versus 3.56 for the ANN MWF at an SNR of 150 (relative gain = 34.80%). These effects were also found in the in vivo data, with reduced SDs in the region-of-interest analyses, and demonstrate the feasibility of acquiring high-resolution myelin water images. CONCLUSION The simulation results and in vivo data suggest that the ANN facilitates more robust MWF mapping from multi-echo gradient-echo sequences than the conventional nonlinear least-squares method.
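The training-data generation described in METHODS can be sketched as follows; the pool amplitudes, T2* values, echo times, and function names below are illustrative stand-ins, not the paper's exact settings:

```python
import numpy as np

def three_pool_signal(tes, amps, t2s):
    """Magnitude multi-echo GRE signal from a three-pool exponential model.

    tes  : echo times in ms
    amps : (A_myelin, A_axonal, A_extracellular) pool amplitudes
    t2s  : corresponding T2* values in ms
    """
    tes = np.asarray(tes)[:, None]
    return np.sum(np.asarray(amps) * np.exp(-tes / np.asarray(t2s)), axis=1)

def make_training_pair(rng, tes, snr=150.0):
    """One (signal, MWF) training sample with Gaussian noise at a given SNR."""
    a_my = rng.uniform(0.02, 0.3)               # myelin water amplitude
    a_ax, a_ex = rng.uniform(0.3, 0.7, size=2)  # axonal / extracellular amplitudes
    amps = (a_my, a_ax, a_ex)
    t2s = (10.0, 64.0, 48.0)                    # ms, illustrative pool T2* values
    sig = three_pool_signal(tes, amps, t2s)
    sig += rng.normal(0, sig[0] / snr, size=sig.shape)
    mwf = a_my / sum(amps)                      # label: myelin water fraction
    return sig / sig[0], mwf                    # normalize by the first echo

rng = np.random.default_rng(42)
tes = np.arange(2.0, 32.0, 2.0)                 # 15 echoes, 2-30 ms
sig, mwf = make_training_pair(rng, tes)
```

Pairs like `(sig, mwf)` would then train a small regression network that maps the echo decay to the MWF.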
Affiliation(s)
- Soozy Jung
- Department of Electrical and Electronic Engineering, Yonsei University, Seoul, Republic of Korea
- Hongpyo Lee
- Department of Electrical and Electronic Engineering, Yonsei University, Seoul, Republic of Korea.
- Kanghyun Ryu
- Department of Electrical and Electronic Engineering, Yonsei University, Seoul, Republic of Korea.
- Jae Eun Song
- Department of Electrical and Electronic Engineering, Yonsei University, Seoul, Republic of Korea.
- Mina Park
- Department of Radiology, Gangnam Severance Hospital, Yonsei University College of Medicine, Seoul, Republic of Korea.
- Won-Jin Moon
- Department of Radiology, Konkuk University Medical Center, Konkuk University School of Medicine, Seoul, Republic of Korea.
- Dong-Hyun Kim
- Department of Electrical and Electronic Engineering, Yonsei University, Seoul, Republic of Korea.
103
Werth K, Ledbetter L. Artificial Intelligence in Head and Neck Imaging: A Glimpse into the Future. Neuroimaging Clin N Am 2020;30:359-368. [PMID: 32600636] [DOI: 10.1016/j.nic.2020.04.004]
Abstract
Artificial intelligence, specifically machine learning and deep learning, is a rapidly developing field in imaging sciences with the potential to improve the efficiency and effectiveness of radiologists. This review covers common technical terms and basic concepts in imaging artificial intelligence and briefly reviews the application of these techniques to general imaging as well as head and neck imaging. Artificial intelligence has the potential to contribute improvements to all areas of patient care, including image acquisition, processing, segmentation, automated detection of findings, integration of clinical information, quality improvement, and research. Numerous challenges remain, however, before widespread clinical adoption and integration of imaging artificial intelligence occur.
Affiliation(s)
- Kyle Werth
- Department of Radiology, University of Kansas Medical Center, 3901 Rainbow Boulevard, Mailstop 4032, Kansas City, KS 66160, USA
- Luke Ledbetter
- Department of Radiology, David Geffen School of Medicine at UCLA, 757 Westwood Plaza, Suite 1621D, Los Angeles, CA 90095, USA.
104
Moeller S, Pisharady Kumar P, Andersson J, Akcakaya M, Harel N, Ma RE, Wu X, Yacoub E, Lenglet C, Ugurbil K. Diffusion Imaging in the Post HCP Era. J Magn Reson Imaging 2020;54:36-57. [PMID: 32562456] [DOI: 10.1002/jmri.27247]
Abstract
Diffusion imaging is a critical component in the pursuit of a better understanding of the human brain. Recent technical advances promise to improve the quality of the data that can be obtained. In this review, different approaches are compared in the context of the Human Connectome Project. Significant new gains are anticipated from the use of high-performance head gradients. These gains can be particularly large when high-performance gradients are employed together with ultrahigh magnetic fields. Transmit array designs are critical for realizing high accelerations in diffusion-weighted MRI (dMRI) acquisitions while maintaining large field-of-view (FOV) coverage, and several techniques for optimal signal encoding are now available. Reconstruction and processing pipelines that precisely disentangle the acquired neuroanatomical information are established and provide the foundation for the application of deep learning in the advancement of dMRI for complex tissues. Level of Evidence: 3. Technical Efficacy: Stage 3.
Affiliation(s)
- Steen Moeller
- Center for Magnetic Resonance Research; Department of Radiology, University of Minnesota, Minneapolis, Minnesota, USA
- Pramod Pisharady Kumar
- Center for Magnetic Resonance Research, Department of Radiology, University of Minnesota, Minneapolis, Minnesota, USA.
- Jesper Andersson
- Wellcome Centre for Integrative Neuroimaging, FMRIB, Nuffield Department of Clinical Neurosciences, University of Oxford, Oxford, UK.
- Mehmet Akcakaya
- Center for Magnetic Resonance Research, Department of Radiology, University of Minnesota, Minneapolis, Minnesota, USA; Electrical and Computer Engineering, University of Minnesota, Minneapolis, Minnesota, USA.
- Noam Harel
- Center for Magnetic Resonance Research, Department of Radiology, University of Minnesota, Minneapolis, Minnesota, USA.
- Ruoyun Emily Ma
- Center for Magnetic Resonance Research, Department of Radiology, University of Minnesota, Minneapolis, Minnesota, USA.
- Xiaoping Wu
- Center for Magnetic Resonance Research, Department of Radiology, University of Minnesota, Minneapolis, Minnesota, USA.
- Essa Yacoub
- Center for Magnetic Resonance Research, Department of Radiology, University of Minnesota, Minneapolis, Minnesota, USA.
- Christophe Lenglet
- Center for Magnetic Resonance Research, Department of Radiology, University of Minnesota, Minneapolis, Minnesota, USA.
- Kamil Ugurbil
- Center for Magnetic Resonance Research, Department of Radiology, University of Minnesota, Minneapolis, Minnesota, USA.
105
Mani MP, Aggarwal HK, Ghosh S, Jacob M. Model-Based Deep Learning for Reconstruction of Joint k-q Under-sampled High Resolution Diffusion MRI. Proc IEEE Int Symp Biomed Imaging 2020;2020:913-916. [PMID: 33574989] [DOI: 10.1109/isbi45749.2020.9098593]
Abstract
We propose a model-based deep learning architecture for the reconstruction of highly accelerated diffusion magnetic resonance imaging (MRI) that enables high-resolution imaging. The proposed reconstruction jointly recovers all the diffusion-weighted images in a single step from a joint k-q under-sampled acquisition in a parallel MRI setting. We propose the novel use of a pre-trained denoiser as a regularizer in a model-based reconstruction for the recovery of highly under-sampled data. Specifically, we designed the denoiser based on a general diffusion MRI tissue microstructure model for multi-compartmental modeling. Using a wide range of biologically plausible parameter values for the multi-compartmental microstructure model, we simulated diffusion signals that span the entire microstructure parameter space. A neural network was trained in an unsupervised manner using an autoencoder to learn the diffusion MRI signal subspace. We employed the autoencoder in a model-based reconstruction and show that it provides a strong denoising prior for the recovery of the q-space signal. Reconstruction results on a simulated brain dataset demonstrate the high acceleration capabilities of the proposed method.
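The reconstruction strategy described here, a data-fidelity term regularized by a learned signal-subspace denoiser, can be illustrated with a toy plug-and-play iteration. The linear forward model, the subspace "autoencoder", and the step size below are illustrative stand-ins for the paper's k-q sampling operator and trained network:

```python
import numpy as np

rng = np.random.default_rng(1)
n, r, m = 60, 4, 20                       # signal length, subspace rank, measurements

# "Training": learn the signal subspace from clean examples
# (stand-in for the paper's autoencoder learning the signal manifold)
train = rng.normal(size=(n, r)) @ rng.normal(size=(r, 500))
U, _, _ = np.linalg.svd(train, full_matrices=False)
V = U[:, :r]                              # learned basis
denoise = lambda x: V @ (V.T @ x)         # projection = idealized denoiser

# Under-sampled linear forward model (stand-in for joint k-q sampling)
A = rng.normal(size=(m, n)) / np.sqrt(m)
x_true = V @ rng.normal(size=r)           # ground truth lies on the learned subspace
b = A @ x_true                            # under-sampled measurements

# Model-based reconstruction: gradient step on data fidelity, then denoise
x = np.zeros(n)
eta = 0.5                                 # step size
for _ in range(200):
    x = denoise(x - eta * A.T @ (A @ x - b))

rel_err = np.linalg.norm(x - x_true) / np.linalg.norm(x_true)
print("relative error:", rel_err)
```

Here the denoiser is an exact projection onto a linear subspace; the paper's autoencoder plays the same role for the nonlinear manifold of microstructure-model signals.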
106
Gong T, Tong Q, He H, Sun Y, Zhong J, Zhang H. MTE-NODDI: Multi-TE NODDI for disentangling non-T2-weighted signal fractions from compartment-specific T2 relaxation times. Neuroimage 2020;217:116906. [PMID: 32387626] [DOI: 10.1016/j.neuroimage.2020.116906]
Abstract
Neurite orientation dispersion and density imaging (NODDI) has become a popular diffusion MRI technique for investigating microstructural alterations during brain development, maturation and aging in health and disease. However, the NODDI model of diffusion does not explicitly account for compartment-specific T2 relaxation, and its model parameters are usually estimated from data acquired with a single echo time (TE). Thus, the NODDI-derived measures, such as the intra-neurite signal fraction, also known as the neurite density index, could be T2-weighted and TE-dependent. This may confound the interpretation of studies, as one cannot disentangle differences in diffusion from those in T2 relaxation. To address this challenge, we propose a multi-TE NODDI (MTE-NODDI) technique, inspired by recent studies exploiting the synergy between diffusion and T2 relaxation. MTE-NODDI gives robust estimates of the non-T2-weighted signal fractions and compartment-specific T2 values, as demonstrated by both simulation and in vivo data experiments. Results showed that the estimated non-T2-weighted intra-neurite fraction and compartment-specific T2 values in white matter were consistent with previous studies. The T2-weighted intra-neurite fractions from the original NODDI were found to be overestimated compared to their non-T2-weighted estimates; the overestimation increases with TE, consistent with the reported intra-neurite T2 being larger than the extra-neurite T2. Finally, the inclusion of the free water compartment reduces the estimation error in intra-neurite T2 in the presence of cerebrospinal fluid contamination. With the ability to disentangle non-T2-weighted signal fractions from compartment-specific T2 relaxation, MTE-NODDI could help improve the interpretability of future neuroimaging studies, especially those in brain development, maturation and aging.
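The TE dependence that MTE-NODDI removes can be made explicit. Ignoring the free-water compartment for brevity (a simplification of the model described above), the apparent intra-neurite fraction measured at a single echo time is

```latex
f_{\mathrm{in}}^{\mathrm{app}}(T_E)
  = \frac{f_{\mathrm{in}}\, e^{-T_E/T_{2,\mathrm{in}}}}
         {f_{\mathrm{in}}\, e^{-T_E/T_{2,\mathrm{in}}}
          + \left(1 - f_{\mathrm{in}}\right) e^{-T_E/T_{2,\mathrm{ex}}}}
```

where $f_{\mathrm{in}}$ is the non-T2-weighted intra-neurite fraction. Because $T_{2,\mathrm{in}} > T_{2,\mathrm{ex}}$ in white matter, $f_{\mathrm{in}}^{\mathrm{app}}$ increases monotonically with TE, which is exactly the TE-dependent overestimation reported above; fitting data acquired at multiple TEs allows $f_{\mathrm{in}}$, $T_{2,\mathrm{in}}$ and $T_{2,\mathrm{ex}}$ to be separated.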
Affiliation(s)
- Ting Gong
- Center for Brain Imaging Science and Technology, College of Biomedical Engineering and Instrumental Science, Zhejiang University, Hangzhou, China; Department of Computer Science & Centre for Medical Image Computing, University College London, UK
- Qiqi Tong
- Center for Brain Imaging Science and Technology, College of Biomedical Engineering and Instrumental Science, Zhejiang University, Hangzhou, China.
- Hongjian He
- Center for Brain Imaging Science and Technology, College of Biomedical Engineering and Instrumental Science, Zhejiang University, Hangzhou, China.
- Yi Sun
- MR Collaboration, Siemens Healthcare, Shanghai, China.
- Jianhui Zhong
- Center for Brain Imaging Science and Technology, College of Biomedical Engineering and Instrumental Science, Zhejiang University, Hangzhou, China; Department of Imaging Sciences, University of Rochester, Rochester, NY, United States.
- Hui Zhang
- Department of Computer Science & Centre for Medical Image Computing, University College London, UK.
107
Pinto MS, Paolella R, Billiet T, Van Dyck P, Guns PJ, Jeurissen B, Ribbens A, den Dekker AJ, Sijbers J. Harmonization of Brain Diffusion MRI: Concepts and Methods. Front Neurosci 2020;14:396. [PMID: 32435181] [PMCID: PMC7218137] [DOI: 10.3389/fnins.2020.00396]
Abstract
Diffusion MRI data suffer from significant inter- and intra-site variability, which hinders multi-site and/or longitudinal diffusion studies. This variability may arise from a range of factors, such as hardware, reconstruction algorithms and acquisition settings. To allow a reliable comparison and joint analysis of diffusion data across sites and over time, there is a clear need for robust data harmonization methods. This review article provides a comprehensive overview of diffusion data harmonization concepts and methods, and their limitations. Overall, the methods for the harmonization of multi-site diffusion images can be categorized into two main groups: diffusion parametric map harmonization (DPMH) and diffusion weighted image harmonization (DWIH). Whereas DPMH harmonizes the diffusion parametric maps (e.g., FA, MD, and MK), DWIH harmonizes the diffusion-weighted images. Defining a gold-standard harmonization technique for dMRI data is still an ongoing challenge. Nevertheless, in this paper we provide two classification tools, namely a feature table and a flowchart, which aim to guide readers in selecting an appropriate harmonization method for their study.
Affiliation(s)
- Maíra Siqueira Pinto
- Department of Radiology, Antwerp University Hospital, University of Antwerp, Antwerp, Belgium; imec-Vision Lab, University of Antwerp, Antwerp, Belgium.
- Roberto Paolella
- imec-Vision Lab, University of Antwerp, Antwerp, Belgium; Icometrix, Leuven, Belgium.
- Pieter Van Dyck
- Department of Radiology, Antwerp University Hospital, University of Antwerp, Antwerp, Belgium.
- Ben Jeurissen
- imec-Vision Lab, University of Antwerp, Antwerp, Belgium.
- Jan Sijbers
- imec-Vision Lab, University of Antwerp, Antwerp, Belgium.
108
A review of deep learning with special emphasis on architectures, applications and recent trends. Knowl Based Syst 2020. [DOI: 10.1016/j.knosys.2020.105596]
109
Artificial Intelligence Pertaining to Cardiothoracic Imaging and Patient Care: Beyond Image Interpretation. J Thorac Imaging 2020;35:137-142. [PMID: 32141963] [DOI: 10.1097/rti.0000000000000486]
Abstract
Artificial intelligence (AI) is a broad field of computational science that includes many subsets. Today the most widely used subset in medical imaging is machine learning (ML). Many articles have focused on the use of ML for pattern recognition to detect and potentially diagnose various pathologies. However, AI algorithm development is now also directed toward workflow management. AI can impact patient care at multiple stages of the imaging experience and assist in efficient and effective scheduling, imaging performance, worklist prioritization, image interpretation, and quality assurance. The purpose of this manuscript was to review potential AI applications in radiology, focusing on workflow management, and to discuss how ML will affect cardiothoracic imaging.
110
Ye C, Li Y, Zeng X. An improved deep network for tissue microstructure estimation with uncertainty quantification. Med Image Anal 2020;61:101650. [PMID: 32007700] [DOI: 10.1016/j.media.2020.101650]
Abstract
Deep learning based methods have improved the estimation of tissue microstructure from diffusion magnetic resonance imaging (dMRI) scans acquired with a reduced number of diffusion gradients. These methods learn the mapping from diffusion signals in a voxel or patch to tissue microstructure measures. In particular, it is beneficial to exploit the sparsity of diffusion signals jointly in the spatial and angular domains, and the deep network can be designed by unfolding iterative processes that adaptively incorporate historical information for sparse reconstruction. However, such a network design has a huge number of parameters, which could increase the difficulty of network training and limit the estimation performance. In addition, existing deep learning based approaches to tissue microstructure estimation do not provide the important information about the uncertainty of estimates. In this work, we continue the exploration of tissue microstructure estimation using a deep network and seek to address these limitations. First, we explore the sparse representation of diffusion signals with a separable spatial-angular dictionary and design an improved deep network for tissue microstructure estimation. The procedure for updating the sparse code associated with the separable dictionary is derived and unfolded to construct the deep network. Second, with the formulation of sparse representation of diffusion signals, we propose to quantify the uncertainty of network outputs with a residual bootstrap strategy. Specifically, because of the sparsity constraint in the signal representation, we perform a Lasso bootstrap strategy for uncertainty quantification. Experiments were performed on brain dMRI scans with a reduced number of diffusion gradients, where the proposed method was applied to two representative biophysical models for describing tissue microstructure and compared with state-of-the-art methods of tissue microstructure estimation. The results show that our approach compares favorably with the competing methods in terms of estimation accuracy. In addition, the uncertainty measures provided by our method correlate with estimation errors and produce reasonable confidence intervals; these results suggest potential application of the proposed uncertainty quantification method in brain studies.
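The "unfolding" idea referenced above starts from classical iterative sparse coding: each network layer mimics one iteration, with the dictionary and thresholds made learnable. A generic single-dictionary ISTA sketch (not the paper's separable spatial-angular version; dimensions and values are illustrative):

```python
import numpy as np

def soft_threshold(x, t):
    """Proximal operator of the l1 norm."""
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def ista(D, y, lam, n_iter=300):
    """Sparse coding min_c 0.5*||y - Dc||^2 + lam*||c||_1 via ISTA.

    Unfolding these iterations, with D and the threshold made learnable
    per layer, yields the kind of network architecture the paper builds on.
    """
    L = np.linalg.norm(D, 2) ** 2          # Lipschitz constant of the gradient
    c = np.zeros(D.shape[1])
    for _ in range(n_iter):
        c = soft_threshold(c + D.T @ (y - D @ c) / L, lam / L)
    return c

rng = np.random.default_rng(7)
D = rng.normal(size=(64, 128))
D /= np.linalg.norm(D, axis=0)             # unit-norm dictionary atoms
c_true = np.zeros(128)
c_true[[3, 40, 99]] = [1.0, -0.5, 0.8]     # 3-sparse ground-truth code
y = D @ c_true
c_hat = ista(D, y, lam=0.01)
```

With a small `lam` and noiseless data, `c_hat` recovers the support of `c_true` up to a slight shrinkage bias.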
Affiliation(s)
- Chuyang Ye
- School of Information and Electronics, Beijing Institute of Technology, Room 316, Building 4, 5 Zhongguancun South Street, Beijing 100081, China.
- Yuxing Li
- School of Information and Electronics, Beijing Institute of Technology, Room 316, Building 4, 5 Zhongguancun South Street, Beijing 100081, China.
- Xiangzhu Zeng
- Department of Radiology, Peking University Third Hospital, Beijing, China.
111
Cheon W, Kim SJ, Kim K, Lee M, Lee J, Jo K, Cho S, Cho H, Han Y. Feasibility of two-dimensional dose distribution deconvolution using convolution neural networks. Med Phys 2019;46:5833-5847. [PMID: 31621917] [DOI: 10.1002/mp.13869]
Abstract
PURPOSE The purpose of this study was to investigate the feasibility of two-dimensional (2D) dose distribution deconvolution using convolutional neural networks (CNNs) instead of an analytical approach for an in-house scintillation detector that has a detector-interface artifact in the penumbra region. METHODS Datasets of 2D dose distributions were acquired from a Novalis Tx medical linear accelerator. The datasets comprise two different sizes of square radiation fields and 13 clinical intensity-modulated radiation treatment (IMRT) plans. These datasets were divided into training and test sets to train and validate the developed network, called PenumbraNet, which is a shallow linear CNN. The PenumbraNet was trained to transform the measured dose distribution [M(x, y)] to the distribution calculated by the treatment planning system [D(x, y)]. After training of the PenumbraNet was completed, its performance was evaluated using the test data, which were a 10 × 10 cm2 open field and ten clinical IMRT cases. The corrected dose distribution [C(x, y)] was evaluated against D(x, y) with 2%/2 mm and 3%/3 mm gamma-index criteria for each field. M(x, y) and the dose distribution deconvolved with the analytically obtained kernel using Wiener filtering [A(x, y)] were also evaluated for comparison. In addition, we compared the performance of the shallow linear PenumbraNet with those of a nonlinear PenumbraNet and a deep nonlinear PenumbraNet within the same number of training epochs. RESULTS The mean gamma passing rates with the 3%/3 mm gamma criteria were 84.77% for A(x, y) and 95.81% for C(x, y) of the PenumbraNet. The mean gamma passing rates of the nonlinear PenumbraNet and the deep nonlinear PenumbraNet were 96.62% and 93.42% with the 3%/3 mm gamma criteria, respectively. CONCLUSIONS We demonstrated the feasibility of the PenumbraNets for 2D dose distribution deconvolution. The nonlinear PenumbraNet, which had the best performance, improved the gamma passing rate by 11.85% over M(x, y) at the 3%/3 mm gamma criteria.
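For reference, the gamma analysis used to score the networks compares each reference dose point against all evaluated points under combined dose-difference and distance-to-agreement (DTA) criteria. A simplified 1D global-gamma sketch (clinical tools interpolate to finer grids and work in 2D/3D; the profiles below are synthetic):

```python
import numpy as np

def gamma_pass_rate(ref, eval_, spacing, dd=0.03, dta=3.0):
    """1D global gamma analysis, e.g. dd=0.03, dta=3.0 for 3%/3 mm.

    ref, eval_ : reference and evaluated dose profiles on the same grid
    spacing    : grid spacing in mm
    dd         : dose-difference criterion as a fraction of max reference dose
    dta        : distance-to-agreement criterion in mm
    """
    x = np.arange(len(ref)) * spacing
    dd_abs = dd * ref.max()
    # generalized distance from each reference point to every evaluated point
    dist2 = (x[None, :] - x[:, None]) ** 2 / dta ** 2
    dose2 = (eval_[None, :] - ref[:, None]) ** 2 / dd_abs ** 2
    gamma = np.sqrt((dist2 + dose2).min(axis=1))   # min over evaluated points
    return 100.0 * np.mean(gamma <= 1.0)

x = np.linspace(-50, 50, 201)                      # 0.5 mm grid
ref = 1.0 / (1.0 + np.exp((np.abs(x) - 25) / 3.0))        # idealized field profile
shifted = 1.0 / (1.0 + np.exp((np.abs(x - 1.0) - 25) / 3.0))  # 1 mm shift
rate = gamma_pass_rate(ref, shifted, spacing=0.5)  # well within 3%/3 mm
```

A 1 mm shift passes the 3%/3 mm criteria everywhere, while it would fail a pure dose-difference test in the steep penumbra.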
Affiliation(s)
- Wonjoong Cheon
- Department of Health Sciences and Technology, SAIHST, Sungkyunkwan University, Seoul, 06351, Korea
- Sung Jin Kim
- Department of Radiation Oncology, Samsung Medical Center, Seoul, 06351, Korea.
- Kyuseok Kim
- Department of Radiation Convergence Engineering, Yonsei University, Wonju, 26493, Korea.
- Moonhee Lee
- Department of Health Sciences and Technology, SAIHST, Sungkyunkwan University, Seoul, 06351, Korea.
- Jinhyeop Lee
- Department of Health Sciences and Technology, SAIHST, Sungkyunkwan University, Seoul, 06351, Korea.
- Kwanghyun Jo
- Department of Radiation Oncology, Samsung Medical Center, Seoul, 06351, Korea.
- Sungkoo Cho
- Department of Radiation Oncology, Samsung Medical Center, Seoul, 06351, Korea.
- Hyosung Cho
- Department of Radiation Convergence Engineering, Yonsei University, Wonju, 26493, Korea.
- Youngyih Han
- Department of Health Sciences and Technology, SAIHST, Sungkyunkwan University, Seoul, 06351, Korea; Department of Radiation Oncology, School of Medicine, Samsung Medical Center, Sungkyunkwan University, Seoul, 06351, Korea.
112
113
Chen G, Dong B, Zhang Y, Lin W, Shen D, Yap PT. XQ-SR: Joint x-q space super-resolution with application to infant diffusion MRI. Med Image Anal 2019;57:44-55. [PMID: 31279215] [PMCID: PMC6764426] [DOI: 10.1016/j.media.2019.06.010]
Abstract
Diffusion MRI (DMRI) is a powerful tool for studying early brain development and disorders. However, the typically low spatio-angular resolution of DMRI diminishes structural details and limits quantitative analysis to simple diffusion models. This problem is aggravated for infant DMRI since (i) the infant brain is significantly smaller than that of an adult, demanding higher spatial resolution to capture subtle structures; and (ii) the typically limited scan time of unsedated infants poses significant challenges to DMRI acquisition with high spatio-angular resolution. Post-acquisition super-resolution (SR) is an important alternative for increasing the resolution of DMRI data without prolonging acquisition times. However, most existing methods focus on the SR of only either the spatial domain (x-space) or the diffusion wavevector domain (q-space). For more effective resolution enhancement, we propose a framework for joint SR in both spatial and wavevector domains. More specifically, we first establish the signal relationships in x-q space using a robust neighborhood matching technique. We then harness the signal relationships to regularize the ill-posed inverse problem associated with the recovery of high-resolution data from their low-resolution counterpart. Extensive experiments on synthetic, adult, and infant DMRI data demonstrate that our method is able to recover high-resolution DMRI data with remarkably improved quality.
Affiliation(s)
- Geng Chen
- Department of Radiology and Biomedical Research Imaging Center (BRIC), University of North Carolina at Chapel Hill, NC, USA.
- Bin Dong
- Beijing International Center for Mathematical Research, Peking University, Beijing, China.
- Yong Zhang
- Vancouver Research Center, Huawei, Burnaby, Canada.
- Weili Lin
- Department of Radiology and Biomedical Research Imaging Center (BRIC), University of North Carolina at Chapel Hill, NC, USA.
- Dinggang Shen
- Department of Radiology and Biomedical Research Imaging Center (BRIC), University of North Carolina at Chapel Hill, NC, USA; Department of Brain and Cognitive Engineering, Korea University, Seoul, South Korea.
- Pew-Thian Yap
- Department of Radiology and Biomedical Research Imaging Center (BRIC), University of North Carolina at Chapel Hill, NC, USA.
114
Zhu G, Jiang B, Tong L, Xie Y, Zaharchuk G, Wintermark M. Applications of Deep Learning to Neuro-Imaging Techniques. Front Neurol 2019;10:869. [PMID: 31474928] [PMCID: PMC6702308] [DOI: 10.3389/fneur.2019.00869]
Abstract
Many deep learning-based clinical applications have been proposed and studied in radiology for classification, risk assessment, segmentation tasks, diagnosis, prognosis, and even prediction of therapy responses. There are many other innovative applications of AI in various technical aspects of medical imaging, particularly in image acquisition, including removing image artifacts, normalizing/harmonizing images, improving image quality, lowering radiation and contrast dose, and shortening the duration of imaging studies. This article addresses this topic and seeks to present an overview of deep learning applied to neuroimaging techniques.
Affiliation(s)
- Max Wintermark
- Neuroradiology Section, Department of Radiology, Stanford Healthcare, Stanford, CA, United States
115
Cuocolo R, Cipullo MB, Stanzione A, Ugga L, Romeo V, Radice L, Brunetti A, Imbriaco M. Machine learning applications in prostate cancer magnetic resonance imaging. Eur Radiol Exp 2019; 3:35. PMID: 31392526; PMCID: PMC6686027; DOI: 10.1186/s41747-019-0109-2.
Abstract
With this review, we aimed to provide a synopsis of recently proposed applications of machine learning (ML) in radiology, focusing on prostate magnetic resonance imaging (MRI). After defining the difference between ML and classical rule-based algorithms and the distinction among supervised, unsupervised, and reinforcement learning, we explain the characteristics of deep learning (DL), a new subtype of ML, including its structure mimicking human neural networks and its 'black box' nature. Differences in the pipeline for applying ML and DL to prostate MRI are highlighted. The following potential clinical applications in different settings are outlined, many of them based only on unenhanced MRI sequences: gland segmentation; assessment of lesion aggressiveness to distinguish between clinically significant and indolent cancers, allowing for active surveillance; cancer detection/diagnosis and localisation (transition versus peripheral zone, use of the Prostate Imaging Reporting and Data System (PI-RADS) version 2), reading reproducibility, and differentiation of cancers from prostatitis and benign hyperplasia; local staging and pre-treatment assessment (detection of extraprostatic disease extension, planning of radiation therapy); and prediction of biochemical recurrence. Results are promising, but clinical applicability still requires more robust validation across scanner vendors, field strengths, and institutions.
Affiliation(s)
- Renato Cuocolo
- Department of Advanced Biomedical Sciences, University of Naples "Federico II", Via S. Pansini, 5, 80131, Naples, Italy
- Maria Brunella Cipullo
- Department of Advanced Biomedical Sciences, University of Naples "Federico II", Via S. Pansini, 5, 80131, Naples, Italy
- Arnaldo Stanzione
- Department of Advanced Biomedical Sciences, University of Naples "Federico II", Via S. Pansini, 5, 80131, Naples, Italy
- Lorenzo Ugga
- Department of Advanced Biomedical Sciences, University of Naples "Federico II", Via S. Pansini, 5, 80131, Naples, Italy
- Valeria Romeo
- Department of Advanced Biomedical Sciences, University of Naples "Federico II", Via S. Pansini, 5, 80131, Naples, Italy
- Leonardo Radice
- Department of Advanced Biomedical Sciences, University of Naples "Federico II", Via S. Pansini, 5, 80131, Naples, Italy
- Arturo Brunetti
- Department of Advanced Biomedical Sciences, University of Naples "Federico II", Via S. Pansini, 5, 80131, Naples, Italy
- Massimo Imbriaco
- Department of Advanced Biomedical Sciences, University of Naples "Federico II", Via S. Pansini, 5, 80131, Naples, Italy
116
Bollmann S, Rasmussen KGB, Kristensen M, Blendal RG, Østergaard LR, Plocharski M, O'Brien K, Langkammer C, Janke A, Barth M. DeepQSM - using deep learning to solve the dipole inversion for quantitative susceptibility mapping. Neuroimage 2019; 195:373-383. PMID: 30935908; DOI: 10.1016/j.neuroimage.2019.03.060.
Abstract
Quantitative susceptibility mapping (QSM) is based on magnetic resonance imaging (MRI) phase measurements and has gained broad interest because it yields relevant information on biological tissue properties in vivo, predominantly myelin, iron, and calcium. QSM can thereby also reveal pathological changes of these key components in widespread diseases such as Parkinson's disease, multiple sclerosis, or hepatic iron overload. While the ill-posed field-to-source inversion problem underlying QSM is conventionally addressed by means of regularization techniques, we trained a fully convolutional deep neural network - DeepQSM - to directly invert the magnetic dipole kernel convolution. DeepQSM learned the physical forward problem using purely synthetic data and is capable of solving the ill-posed field-to-source inversion on in vivo MRI phase data. The magnetic susceptibility maps reconstructed by DeepQSM enable identification of deep brain substructures and provide information on their respective magnetic tissue properties. In summary, DeepQSM can invert the magnetic dipole kernel convolution and delivers robust solutions to this ill-posed problem.
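The forward problem that DeepQSM learns to invert is a voxelwise multiplication in k-space with the unit dipole kernel D(k) = 1/3 - kz²/|k|². A minimal numpy sketch of this forward operation (field perturbation from a susceptibility map) follows; the grid size and the zeroing of the undefined k-space origin are illustrative choices, not taken from the paper:

```python
import numpy as np

def dipole_kernel(shape):
    """Unit dipole kernel D(k) = 1/3 - kz^2/|k|^2 on an FFT grid."""
    kx, ky, kz = np.meshgrid(
        *(np.fft.fftfreq(n) for n in shape), indexing="ij")
    k2 = kx**2 + ky**2 + kz**2
    k2[0, 0, 0] = 1.0          # avoid 0/0 at the k-space origin
    d = 1.0 / 3.0 - kz**2 / k2
    d[0, 0, 0] = 0.0           # mean field is undefined; set to zero
    return d

def forward_field(chi):
    """Simulate the measured field perturbation from a susceptibility map."""
    d = dipole_kernel(chi.shape)
    return np.real(np.fft.ifftn(d * np.fft.fftn(chi)))

# A point source reproduces the characteristic dipole field pattern.
chi = np.zeros((32, 32, 32))
chi[16, 16, 16] = 1.0
field = forward_field(chi)
```

Inverting this operation is ill-posed because D(k) vanishes on a conical surface in k-space, which is why regularization, or a learned prior as in DeepQSM, is required.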
Affiliation(s)
- Steffen Bollmann
- Centre for Advanced Imaging, The University of Queensland, Building 57 of University Dr, St Lucia, QLD, 4072, Brisbane, Australia.
- Mads Kristensen
- Department of Health Science and Technology, Aalborg University, Fredrik Bajers Vej 7, 9000, Aalborg, Denmark
- Rasmus Guldhammer Blendal
- Department of Health Science and Technology, Aalborg University, Fredrik Bajers Vej 7, 9000, Aalborg, Denmark
- Lasse Riis Østergaard
- Department of Health Science and Technology, Aalborg University, Fredrik Bajers Vej 7, 9000, Aalborg, Denmark
- Maciej Plocharski
- Department of Health Science and Technology, Aalborg University, Fredrik Bajers Vej 7, 9000, Aalborg, Denmark
- Kieran O'Brien
- Centre for Advanced Imaging, The University of Queensland, Building 57 of University Dr, St Lucia, QLD, 4072, Brisbane, Australia; Siemens Healthcare Pty Ltd, Brisbane, Australia
- Christian Langkammer
- Department of Neurology, Medical University of Graz, Auenbruggerplatz 22, 8036, Graz, Austria
- Andrew Janke
- Centre for Advanced Imaging, The University of Queensland, Building 57 of University Dr, St Lucia, QLD, 4072, Brisbane, Australia
- Markus Barth
- Centre for Advanced Imaging, The University of Queensland, Building 57 of University Dr, St Lucia, QLD, 4072, Brisbane, Australia
117
Lin Z, Gong T, Wang K, Li Z, He H, Tong Q, Yu F, Zhong J. Fast learning of fiber orientation distribution function for MR tractography using convolutional neural network. Med Phys 2019; 46:3101-3116. PMID: 31009085; DOI: 10.1002/mp.13555.
Abstract
PURPOSE In diffusion-weighted magnetic resonance imaging (DW-MRI), the fiber orientation distribution function (fODF) is of great importance for resolving complex fiber configurations and achieving reliable tractography throughout the brain, which ultimately facilitates the understanding of brain connectivity and the exploration of neurological dysfunction. Recently, the multi-shell multi-tissue constrained spherical deconvolution (MSMT-CSD) method has been explored for reconstructing full fODFs. To achieve a reliable fit, similar to other model-based approaches, the MSMT-CSD method typically requires a large number of diffusion measurements. The prolonged acquisition is, however, not feasible in routine clinical practice and is prone to motion artifacts. To accelerate the acquisition, we proposed a method to reconstruct the fODF from downsampled diffusion-weighted images (DWIs) by leveraging the strong inference ability of a deep convolutional neural network (CNN). METHODS The method treats spherical harmonics (SH)-represented DWI signals and fODF coefficients as inputs and outputs, respectively. To compensate for the reduced number of gradient directions acquired for each voxel, the network incorporates the surrounding voxels to exploit their spatial continuity. The fODF coefficients are then fitted by applying the CNN in a multi-target regression model. The network is composed of two convolutional layers and three fully connected layers. To obtain an initial evaluation of the method, we quantitatively measured its performance on a simulated dataset. For in vivo tests, we employed data from 24 subjects from the Human Connectome Project (HCP) as the training set and six subjects as the test set. The performance of the proposed method was primarily compared to that of super-resolved MSMT-CSD with a decreasing number of DWIs. The fODFs reconstructed by MSMT-CSD from all available 288 DWIs were used as training labels and as the reference standard. Performance was quantitatively measured by the angular correlation coefficient (ACC) and the mean angular error (MAE). RESULTS For the simulated dataset, the proposed method exhibited a potential advantage over the model reconstruction. For the in vivo dataset, it achieved superior results over MSMT-CSD in all investigated cases, with its advantage more obvious when a limited number of DWIs was used. As the number of DWIs was reduced from 95 to 25, the median ACC ranged from 0.96 to 0.91 for the CNN but from 0.93 to 0.77 for MSMT-CSD (with a perfect score of 1). The angular error in typical regions of interest (ROIs) was also much lower, especially in multi-fiber regions. The average MAE for the CNN method in regions containing one, two, and three fibers was, respectively, 1.09°, 2.75°, and 8.35° smaller than for the MSMT-CSD method. Visual inspection of the fODFs further confirmed this superiority. Moreover, the tractography results validated the effectiveness of the learned fODF in preserving known major branching fibers with only 25 DWIs. CONCLUSION Experiments on HCP datasets demonstrated the feasibility of the proposed method in recovering fODFs from an up to 11-fold reduced number of DWIs. The proposed method offers a new streamlined reconstruction procedure and exhibits promising potential for accelerating the acquisition underlying fODF reconstruction with good accuracy.
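As a concrete illustration of the reported ACC scores: one common definition of the angular correlation coefficient operates directly on the spherical-harmonic coefficient vectors with the l = 0 term excluded, so the score reflects angular structure only. The exact variant used in the paper is not reproduced here; treat this as a hedged sketch under that assumption:

```python
import numpy as np

def acc(u, v):
    """Angular correlation coefficient between two fODFs represented by
    real spherical-harmonic coefficient vectors. The l = 0 (isotropic)
    coefficient, assumed to be the first entry, is excluded so that only
    the angular profile is compared. Returns 1.0 for identical profiles."""
    u = np.asarray(u, float)[1:]
    v = np.asarray(v, float)[1:]
    denom = np.linalg.norm(u) * np.linalg.norm(v)
    return float(np.dot(u, v) / denom) if denom > 0 else 0.0
```

Because the measure is a normalized inner product, it is invariant to overall scaling of either fODF, which makes it suitable for comparing reconstructions with different global intensity.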
Affiliation(s)
- Zhichao Lin
- Department of Instrument Science & Technology, Zhejiang University, Hangzhou, 310027, China
- Ting Gong
- Center for Brain Imaging Science and Technology, Key Laboratory for Biomedical Engineering of Ministry of Education, College of Biomedical Engineering and Instrumental Science, Zhejiang University, Hangzhou, Zhejiang, China
- Kewen Wang
- College of Natural Science, Computer Science, The University of Texas at Austin, Austin, TX, USA
- Zhiwei Li
- Department of Instrument Science & Technology, Zhejiang University, Hangzhou, 310027, China
- Hongjian He
- Center for Brain Imaging Science and Technology, Key Laboratory for Biomedical Engineering of Ministry of Education, College of Biomedical Engineering and Instrumental Science, Zhejiang University, Hangzhou, Zhejiang, China
- Qiqi Tong
- Center for Brain Imaging Science and Technology, Key Laboratory for Biomedical Engineering of Ministry of Education, College of Biomedical Engineering and Instrumental Science, Zhejiang University, Hangzhou, Zhejiang, China
- Feng Yu
- Department of Instrument Science & Technology, Zhejiang University, Hangzhou, 310027, China
- Jianhui Zhong
- Center for Brain Imaging Science and Technology, Key Laboratory for Biomedical Engineering of Ministry of Education, College of Biomedical Engineering and Instrumental Science, Zhejiang University, Hangzhou, Zhejiang, China; University of Rochester, Rochester, NY, USA
118
Efficient Deep Network Architectures for Fast Chest X-Ray Tuberculosis Screening and Visualization. Sci Rep 2019; 9:6268. PMID: 31000728; PMCID: PMC6472370; DOI: 10.1038/s41598-019-42557-4.
Abstract
Automated diagnosis of tuberculosis (TB) from chest X-rays (CXR) has been tackled with either hand-crafted algorithms or machine learning approaches such as support vector machines (SVMs) and convolutional neural networks (CNNs). Most deep neural networks applied to the task of tuberculosis diagnosis have been adapted from natural image classification. These models have a large number of parameters as well as high hardware requirements, which makes them prone to overfitting and harder to deploy in mobile settings. We propose a simple convolutional neural network optimized for the problem, which is faster and more efficient than previous models while preserving their accuracy. Moreover, the visualization capabilities of CNNs have not been fully investigated. We test saliency maps and grad-CAMs as tuberculosis visualization methods and discuss them from a radiological perspective.
119
Ye C, Li X, Chen J. A deep network for tissue microstructure estimation using modified LSTM units. Med Image Anal 2019; 55:49-64. PMID: 31022640; DOI: 10.1016/j.media.2019.04.006.
Abstract
Diffusion magnetic resonance imaging (dMRI) offers a unique tool for noninvasively assessing tissue microstructure. However, accurate estimation of tissue microstructure described by complicated signal models can be challenging when a reduced number of diffusion gradients is used. Deep learning based microstructure estimation has recently been developed and has achieved promising results. In particular, optimization-based learning, where deep network structures are constructed by unfolding the iterative processes performed for solving optimization problems, has demonstrated great potential for accurate microstructure estimation with a reduced number of diffusion gradients. In this work, using the optimization-based learning strategy, we propose a deep network structure motivated by the use of historical information in iterative optimization for tissue microstructure estimation; such incorporation of historical information has not previously been explored in the design of deep networks for microstructure estimation. We assume that (1) diffusion signals can be sparsely represented by a dictionary and its coefficients jointly in the spatial and angular domain, and (2) tissue microstructure can be computed from the sparse representation. Following these assumptions, our network comprises two cascaded stages. The first stage takes image patches as input and computes the spatial-angular sparse representation of the input with learned weights. Specifically, the network structure in the first stage is constructed by unfolding an iterative process for solving sparse reconstruction problems, in which historical information is incorporated. The components in this network can be shown to correspond to modified long short-term memory (LSTM) units. In the second stage, fully connected layers are added to compute the mapping from the sparse representation to tissue microstructure. The weights in the two stages are learned jointly by minimizing the mean squared error of microstructure estimation. Experiments were performed on dMRI scans with a reduced number of diffusion gradients. For demonstration, we evaluated the estimation of tissue microstructure described by three signal models: the neurite orientation dispersion and density imaging (NODDI) model, the spherical mean technique (SMT) model, and the ensemble average propagator (EAP) model. The results indicate that the proposed approach outperforms competing methods.
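The unfolding idea, including the use of historical information, can be illustrated with a classical sparse-coding iteration: ISTA with a momentum-style combination of past iterates (as in FISTA) is the kind of history-aware update that unfolded, LSTM-like network stages imitate. The toy dictionary, step size, and sparsity level below are illustrative assumptions, not the paper's spatial-angular dictionary:

```python
import numpy as np

def soft(x, t):
    """Soft-thresholding, the proximal operator of the l1 norm."""
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def fista(A, y, lam=0.01, n_iter=300):
    """ISTA with momentum: each iteration combines the new iterate with
    history from the previous one, the scheme that history-aware unfolded
    networks generalize with learned weights."""
    L = np.linalg.norm(A, 2) ** 2        # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    z = np.zeros(A.shape[1])
    t = 1.0
    for _ in range(n_iter):
        x_new = soft(z - A.T @ (A @ z - y) / L, lam / L)
        t_new = (1 + np.sqrt(1 + 4 * t * t)) / 2
        z = x_new + ((t - 1) / t_new) * (x_new - x)   # reuse historical iterate
        x, t = x_new, t_new
    return x

# Toy sparse-recovery problem: 40 measurements, 100-dim 3-sparse signal.
rng = np.random.default_rng(0)
A = rng.standard_normal((40, 100)) / np.sqrt(40)
x_true = np.zeros(100)
x_true[[5, 40, 77]] = [1.0, -0.8, 0.6]
y = A @ x_true
x_hat = fista(A, y, lam=0.01, n_iter=300)
```

An unfolded network replaces the fixed dictionary, step size, and momentum coefficients of this loop with parameters learned per stage.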
Affiliation(s)
- Chuyang Ye
- School of Information and Electronics, Beijing Institute of Technology, Beijing, China
- Xiuli Li
- Deepwise AI Lab, Beijing, China; Peng Cheng Laboratory, Shenzhen, China
- Jingnan Chen
- School of Economics and Management, Beihang University, 37 Xueyuan Road, Beijing, 100191, China
120
Glaser JI, Benjamin AS, Farhoodi R, Kording KP. The roles of supervised machine learning in systems neuroscience. Prog Neurobiol 2019; 175:126-137. PMID: 30738835; PMCID: PMC8454059; DOI: 10.1016/j.pneurobio.2019.01.008.
Abstract
Over the last several years, the use of machine learning (ML) in neuroscience has been rapidly increasing. Here, we review ML's contributions, both realized and potential, across several areas of systems neuroscience. We describe four primary roles of ML within neuroscience: (1) creating solutions to engineering problems, (2) identifying predictive variables, (3) setting benchmarks for simple models of the brain, and (4) serving itself as a model for the brain. The breadth and ease of its applicability suggest that machine learning should be in the toolbox of most systems neuroscientists.
Affiliation(s)
- Joshua I Glaser
- Department of Bioengineering, University of Pennsylvania, United States
- Ari S Benjamin
- Department of Bioengineering, University of Pennsylvania, United States
- Roozbeh Farhoodi
- Department of Bioengineering, University of Pennsylvania, United States
- Konrad P Kording
- Department of Bioengineering, University of Pennsylvania, United States; Department of Neuroscience, University of Pennsylvania, United States; Canadian Institute for Advanced Research, Canada
121
Ryu K, Nam Y, Gho SM, Jang J, Lee HJ, Cha J, Baek HJ, Park J, Kim DH. Data-driven synthetic MRI FLAIR artifact correction via deep neural network. J Magn Reson Imaging 2019; 50:1413-1423. PMID: 30884007; DOI: 10.1002/jmri.26712.
Abstract
BACKGROUND FLAIR (fluid-attenuated inversion recovery) imaging via synthetic MRI methods leads to artifacts in the brain, which can cause diagnostic limitations. The main sources of the artifacts are attributed to the partial volume effect and flow, which are difficult to correct by analytical modeling. In this study, a deep learning (DL)-based synthetic FLAIR method was developed that does not require analytical modeling of the signal. PURPOSE To correct artifacts in synthetic FLAIR using a DL method. STUDY TYPE Retrospective. SUBJECTS A total of 80 subjects with clinical indications (60.6 ± 16.7 years, 38 males, 42 females) were divided into three groups: a training set (56 subjects, 62.1 ± 14.8 years, 25 males, 31 females), a validation set (1 subject, 62 years, male), and a test set (23 subjects, 57.3 ± 20.4 years, 13 males, 10 females). FIELD STRENGTH/SEQUENCE 3 T MRI using a multiple-dynamic multiple-echo (MDME) sequence for synthetic MRI and a conventional FLAIR sequence. ASSESSMENT Normalized root mean square error (NRMSE) and structural similarity (SSIM) were computed for uncorrected synthetic FLAIR and DL-corrected FLAIR. In addition, three neuroradiologists blindly scored the three FLAIR datasets, evaluating image quality and artifacts in sulci/periventricular and intraventricular/cistern space regions. STATISTICAL TESTS Pairwise Student's t-tests and a Wilcoxon test were performed. RESULTS For quantitative assessment, NRMSE improved from 4.2% to 2.9% (P < 0.0001) and SSIM improved from 0.85 to 0.93 (P < 0.0001). Additionally, NRMSE values significantly improved from 1.58% to 1.26% (P < 0.001), 3.1% to 1.5% (P < 0.0001), and 2.7% to 1.4% (P < 0.0001) in white matter, gray matter, and cerebrospinal fluid (CSF) regions, respectively, when using DL-corrected FLAIR. For qualitative assessment, DL correction achieved improved overall quality and fewer artifacts both in sulci and periventricular regions and in intraventricular and cistern space regions. DATA CONCLUSION The DL approach provides a promising method to correct artifacts in synthetic FLAIR. LEVEL OF EVIDENCE 4 TECHNICAL EFFICACY Stage 1 J. Magn. Reson. Imaging 2019;50:1413-1423.
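For reference, NRMSE figures of the kind reported above can be computed as follows. Several normalization conventions exist (by reference range, mean, or maximum), and the one used in the paper is not stated here, so the range-based version below is only an assumption:

```python
import numpy as np

def nrmse(ref, est):
    """Root-mean-square error normalized by the reference intensity range,
    reported in percent. Range normalization is one common convention;
    the paper's exact normalization is assumed, not known."""
    ref = np.asarray(ref, float)
    est = np.asarray(est, float)
    rmse = np.sqrt(np.mean((est - ref) ** 2))
    return 100.0 * rmse / (ref.max() - ref.min())
```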
Affiliation(s)
- Kanghyun Ryu
- Department of Electrical and Electronic Engineering, Yonsei University, Seoul, Republic of Korea
- Yoonho Nam
- Department of Radiology, Seoul St. Mary's Hospital, College of Medicine, Catholic University of Korea, Seoul, Republic of Korea
- Sung-Min Gho
- MR Clinical Research and Development, GE Healthcare, Seoul, Republic of Korea
- Jinhee Jang
- Department of Radiology, Seoul St. Mary's Hospital, College of Medicine, Catholic University of Korea, Seoul, Republic of Korea
- Ho-Joon Lee
- Department of Radiology and Research Institute of Radiological Science, Severance Hospital, Yonsei University College of Medicine, Seoul, Republic of Korea; Department of Radiology, Inje University College of Medicine, Haeundae Paik Hospital, Busan, Republic of Korea
- Jihoon Cha
- Department of Radiology and Research Institute of Radiological Science, Severance Hospital, Yonsei University College of Medicine, Seoul, Republic of Korea
- Hye Jin Baek
- Department of Radiology, Gyeongsang National University School of Medicine and Gyeongsang National University Changwon Hospital, Changwon, Republic of Korea
- Jiyong Park
- Department of Electrical and Electronic Engineering, Yonsei University, Seoul, Republic of Korea
- Dong-Hyun Kim
- Department of Electrical and Electronic Engineering, Yonsei University, Seoul, Republic of Korea
122
Liu F, Feng L, Kijowski R. MANTIS: Model-Augmented Neural neTwork with Incoherent k-space Sampling for efficient MR parameter mapping. Magn Reson Med 2019; 82:174-188. PMID: 30860285; DOI: 10.1002/mrm.27707.
Abstract
PURPOSE To develop and evaluate a novel deep learning-based image reconstruction approach called MANTIS (Model-Augmented Neural neTwork with Incoherent k-space Sampling) for efficient MR parameter mapping. METHODS MANTIS combines end-to-end convolutional neural network (CNN) mapping, incoherent k-space undersampling, and a physical model in a synergistic framework. The CNN mapping directly converts a series of undersampled images into MR parameter maps using supervised training. Signal model fidelity is enforced by adding a pathway between the undersampled k-space and the estimated parameter maps, ensuring that the parameter maps produce synthesized k-space consistent with the acquired undersampled measurements. The MANTIS framework was evaluated on T2 mapping of the knee at different acceleration rates and was compared with two other CNN mapping methods and conventional sparsity-based iterative reconstruction approaches. Global quantitative assessment and regional T2 analysis of the cartilage and meniscus were performed to demonstrate the reconstruction performance of MANTIS. RESULTS MANTIS achieved high-quality T2 mapping at both moderate (R = 5) and high (R = 8) acceleration rates. Compared to conventional reconstruction approaches that exploit image sparsity, MANTIS yielded lower errors (normalized root mean square error of 6.1% for R = 5 and 7.1% for R = 8) and higher similarity (structural similarity index of 86.2% at R = 5 and 82.1% at R = 8) to the reference in the T2 estimation. MANTIS also achieved superior performance compared to direct CNN mapping and a two-step CNN method. CONCLUSION The MANTIS framework, combining end-to-end CNN mapping, signal model-augmented data consistency, and incoherent k-space sampling, is a promising approach for efficient and robust estimation of quantitative MR parameters.
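The model-augmented data-consistency idea can be sketched independently of the network: given candidate parameter maps, synthesize images through the signal model, Fourier transform them, and compare against the acquired undersampled k-space samples. The numpy sketch below uses a 1D object, a mono-exponential T2 model, and a random sampling mask purely for illustration; it is not the paper's implementation:

```python
import numpy as np

def t2_signal(s0, t2, tes):
    """Mono-exponential decay model: S(TE) = S0 * exp(-TE / T2)."""
    return s0[:, None] * np.exp(-tes[None, :] / t2[:, None])

def model_loss(s0, t2, tes, kspace, mask):
    """Model-augmented data-consistency term: the estimated parameter maps
    must resynthesize k-space that matches the acquired (masked) samples.
    A 1D 'image' keeps the sketch small; MANTIS operates on 2D images."""
    ksyn = np.fft.fft(t2_signal(s0, t2, tes), axis=0) * mask
    return float(np.sum(np.abs(ksyn - kspace * mask) ** 2))

# Toy acquisition: 64-voxel object, 4 echo times, ~40% incoherent sampling.
nx = 64
tes = np.array([10.0, 30.0, 50.0, 70.0])      # echo times, ms
s0_true = np.ones(nx)
t2_true = np.linspace(20.0, 80.0, nx)         # spatially varying T2, ms
kspace = np.fft.fft(t2_signal(s0_true, t2_true, tes), axis=0)
mask = np.random.default_rng(1).random((nx, 1)) < 0.4
```

In MANTIS this term is added to the supervised training loss, so the CNN's parameter maps are penalized whenever they disagree with the physics of the acquisition.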
Affiliation(s)
- Fang Liu
- Department of Radiology, University of Wisconsin School of Medicine and Public Health, Madison, Wisconsin
- Li Feng
- Department of Medical Physics, Memorial Sloan Kettering Cancer Center, New York, New York
- Richard Kijowski
- Department of Radiology, University of Wisconsin School of Medicine and Public Health, Madison, Wisconsin
123
Aliotta E, Nourzadeh H, Sanders J, Muller D, Ennis DB. Highly accelerated, model-free diffusion tensor MRI reconstruction using neural networks. Med Phys 2019; 46:1581-1591. PMID: 30677141; DOI: 10.1002/mp.13400.
Abstract
PURPOSE The purpose of this study was to develop a neural network that accurately performs diffusion tensor imaging (DTI) reconstruction from highly accelerated scans. MATERIALS AND METHODS This retrospective study was conducted using data acquired between 2013 and 2018 and was approved by the local institutional review board. DTI acquired in healthy volunteers (N = 10) was used to train a neural network, DiffNet, to reconstruct fractional anisotropy (FA) and mean diffusivity (MD) maps from small subsets of acquired DTI data with between 3 and 20 diffusion-encoding directions. FA and MD maps were then reconstructed in volunteers and in patients with glioblastoma multiforme (GBM, N = 12) using both DiffNet and conventional reconstructions. Accuracy and precision were quantified in volunteer scans and compared between reconstructions. The accuracy of tumor delineation was compared between reconstructed patient data by evaluating agreement between DTI-derived tumor volumes and volumes defined by contrast-enhanced T1-weighted MRI. Comparisons were performed using areas under the receiver operating characteristic curves (AUC). RESULTS DiffNet FA reconstructions were more accurate and precise than conventional reconstructions at all acceleration factors. DiffNet permitted reconstruction with only three diffusion-encoding directions with significantly lower bias than the conventional method using six directions (0.01 ± 0.01 vs 0.06 ± 0.01, P < 0.001). While MD-based tumor delineation was not substantially different with DiffNet (AUC range: 0.888-0.902), DiffNet FA had a higher AUC than conventional reconstructions for fixed scan time and achieved similar performance with shorter scans (conventional, six directions: AUC = 0.926; DiffNet, three directions: AUC = 0.920). CONCLUSION DiffNet improved DTI reconstruction accuracy, precision, and tumor delineation performance in GBM while permitting reconstruction from only three diffusion-encoding directions.
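For context, the conventional model-based reconstruction that DiffNet accelerates is the log-linear diffusion tensor fit, which needs at least six diffusion-encoding directions to determine the six unique tensor elements; with only three directions this system is underdetermined, which is the gap the network fills with learned priors. A numpy sketch with an illustrative b-value and direction set (not the study's protocol):

```python
import numpy as np

def dti_fit(signals, s0, bvecs, bval):
    """Conventional log-linear DTI fit: solve log(S/S0) = -b g^T D g
    for the 6 unique tensor elements by least squares."""
    G = np.array([[gx*gx, gy*gy, gz*gz, 2*gx*gy, 2*gx*gz, 2*gy*gz]
                  for gx, gy, gz in bvecs])
    d6, *_ = np.linalg.lstsq(-bval * G, np.log(signals / s0), rcond=None)
    dxx, dyy, dzz, dxy, dxz, dyz = d6
    return np.array([[dxx, dxy, dxz], [dxy, dyy, dyz], [dxz, dyz, dzz]])

def fa(D):
    """Fractional anisotropy from the tensor eigenvalues."""
    ev = np.linalg.eigvalsh(D)
    md = ev.mean()
    return float(np.sqrt(1.5 * np.sum((ev - md) ** 2) / np.sum(ev ** 2)))

# Synthetic single voxel: prolate tensor, 6 directions, b = 1000 s/mm^2.
D_true = np.diag([1.7e-3, 0.3e-3, 0.3e-3])
bvecs = np.array([[1, 0, 0], [0, 1, 0], [0, 0, 1],
                  [1, 1, 0], [1, 0, 1], [0, 1, 1]], float)
bvecs /= np.linalg.norm(bvecs, axis=1, keepdims=True)
signals = np.exp(-1000.0 * np.einsum('ij,jk,ik->i', bvecs, D_true, bvecs))
D_fit = dti_fit(signals, 1.0, bvecs, 1000.0)
```

DiffNet bypasses this fit entirely, regressing FA and MD directly from the (possibly underdetermined) signal subsets.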
Affiliation(s)
- Eric Aliotta
- Department of Radiation Oncology, University of Virginia, Charlottesville, VA, 22908, USA
- Hamidreza Nourzadeh
- Department of Radiation Oncology, University of Virginia, Charlottesville, VA, 22908, USA
- Jason Sanders
- Department of Radiation Oncology, University of Virginia, Charlottesville, VA, 22908, USA
- Donald Muller
- Department of Radiation Oncology, University of Virginia, Charlottesville, VA, 22908, USA
- Daniel B Ennis
- Department of Radiology, Stanford University, Stanford, CA, 94305, USA
124
Soffer S, Ben-Cohen A, Shimon O, Amitai MM, Greenspan H, Klang E. Convolutional Neural Networks for Radiologic Images: A Radiologist's Guide. Radiology 2019; 290:590-606. PMID: 30694159; DOI: 10.1148/radiol.2018180547.
Abstract
Deep learning has rapidly advanced in various fields within the past few years and has recently gained particular attention in the radiology community. This article provides an introduction to deep learning technology and presents the stages entailed in the design process of deep learning radiology research. In addition, the article details the results of a survey of the application of deep learning, specifically convolutional neural networks, to radiologic imaging, focused on five major organ systems: chest, breast, brain, musculoskeletal system, and abdomen and pelvis. The survey of the studies is followed by a discussion of current challenges and future trends and their potential implications for radiology. This article may be used as a guide for radiologists planning research in the field of radiologic image analysis using convolutional neural networks.
Affiliation(s)
- Shelly Soffer, Avi Ben-Cohen, Orit Shimon, Michal Marianne Amitai, Hayit Greenspan, Eyal Klang
- From the Department of Diagnostic Imaging, Sheba Medical Center, Emek HaEla St 1, Ramat Gan, Israel (S.S., M.M.A., E.K.); Faculty of Engineering, Department of Biomedical Engineering, Medical Image Processing Laboratory, Tel Aviv University, Tel Aviv, Israel (A.B., H.G.); and Sackler School of Medicine, Tel Aviv University, Tel Aviv, Israel (S.S., O.S.)
125
Ulas C, Das D, Thrippleton MJ, Valdés Hernández MDC, Armitage PA, Makin SD, Wardlaw JM, Menze BH. Convolutional Neural Networks for Direct Inference of Pharmacokinetic Parameters: Application to Stroke Dynamic Contrast-Enhanced MRI. Front Neurol 2019; 9:1147. PMID: 30671015; PMCID: PMC6331464; DOI: 10.3389/fneur.2018.01147.
Abstract
Background and Purpose: T1-weighted dynamic contrast-enhanced (DCE)-MRI is an imaging technique that provides a quantitative measure of pharmacokinetic (PK) parameters characterizing the microvasculature of tissues. For the present study, we propose a new machine learning (ML)-based approach to directly estimate the PK parameters from the acquired DCE-MRI image-time series that is both more robust and faster than conventional model fitting. Materials and Methods: We specifically utilize deep convolutional neural networks (CNNs) to learn the mapping between the image-time series and the corresponding PK parameters. DCE-MRI datasets acquired from 15 patients with clinically evident mild ischaemic stroke were used in the experiments. Training and testing were carried out based on leave-one-patient-out cross-validation. The parameter estimates obtained by the proposed CNN model were compared against two tracer kinetic models: (1) the Patlak model and (2) the extended Tofts model, whose parameters are estimated via voxelwise linear and nonlinear least-squares fitting, respectively. Results: The trained CNN model yields PK parameters that better discriminate different brain tissues, including stroke regions. The results also demonstrate that the model generalizes well to new cases even if a subject-specific arterial input function (AIF) is not available for the new data. Conclusion: An ML-based model can be used for direct inference of the PK parameters from DCE image series. This method may allow fast and robust parameter inference in population DCE studies. Parameter inference on a 3D volume-time series takes only a few seconds on a GPU machine, significantly faster than conventional nonlinear least-squares fitting.
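The Patlak model used as a baseline above reduces, per voxel, to a linear least-squares problem: C_t(t) = Ktrans·∫₀ᵗ C_p(τ)dτ + v_p·C_p(t). A minimal synthetic sketch (the toy arterial input function and parameter values below are illustrative assumptions, not values from the study):

```python
import numpy as np

# Toy arterial input function (AIF) sampled every 5 s over 5 minutes.
t = np.linspace(0.0, 300.0, 61)                      # seconds
C_p = 5.0 * (t / 60.0) * np.exp(-t / 80.0)           # assumed gamma-variate-like AIF

# Running integral of C_p via the trapezoidal rule (cumulative).
int_Cp = np.concatenate(([0.0], np.cumsum(0.5 * (C_p[1:] + C_p[:-1]) * np.diff(t))))

# Synthetic tissue curve from assumed "true" parameters.
Ktrans_true, vp_true = 0.003, 0.05                   # s^-1, unitless
C_t = Ktrans_true * int_Cp + vp_true * C_p

# Voxelwise Patlak fit: linear least squares on the 2-column design matrix.
A = np.column_stack([int_Cp, C_p])
Ktrans_est, vp_est = np.linalg.lstsq(A, C_t, rcond=None)[0]
print(Ktrans_est, vp_est)
```

On noiseless data the fit recovers the generating parameters exactly, which is why Patlak serves as the fast linear reference against which the CNN is compared.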
Affiliation(s)
- Cagdas Ulas, Department of Computer Science, Technische Universität München, Munich, Germany
- Dhritiman Das, Department of Computer Science, Technische Universität München, Munich, Germany; GE Global Research, Munich, Germany
- Michael J Thrippleton, Department of Neuroimaging Sciences, Centre for Clinical Brain Sciences, University of Edinburgh, Edinburgh, United Kingdom
- Maria Del C Valdés Hernández, Department of Neuroimaging Sciences, Centre for Clinical Brain Sciences, University of Edinburgh, Edinburgh, United Kingdom
- Paul A Armitage, Department of Infection, Immunity and Cardiovascular Disease, University of Sheffield, Sheffield, United Kingdom
- Stephen D Makin, Department of Neuroimaging Sciences, Centre for Clinical Brain Sciences, University of Edinburgh, Edinburgh, United Kingdom
- Joanna M Wardlaw, Department of Neuroimaging Sciences, Centre for Clinical Brain Sciences, University of Edinburgh, Edinburgh, United Kingdom
- Bjoern H Menze, Department of Computer Science, Technische Universität München, Munich, Germany; Institute of Advanced Study, Technische Universität München, Munich, Germany
126
127
Sahiner B, Pezeshk A, Hadjiiski LM, Wang X, Drukker K, Cha KH, Summers RM, Giger ML. Deep learning in medical imaging and radiation therapy. Med Phys 2019; 46:e1-e36. PMID: 30367497; PMCID: PMC9560030; DOI: 10.1002/mp.13264.
Abstract
The goals of this review paper on deep learning (DL) in medical imaging and radiation therapy are to (a) summarize what has been achieved to date; (b) identify common and unique challenges, and strategies that researchers have taken to address these challenges; and (c) identify some of the promising avenues for the future, both in terms of applications and technical innovations. We introduce the general principles of DL and convolutional neural networks, survey five major areas of application of DL in medical imaging and radiation therapy, identify common themes, discuss methods for dataset expansion, and conclude by summarizing lessons learned, remaining challenges, and future directions.
Affiliation(s)
- Berkman Sahiner, DIDSR/OSEL/CDRH, U.S. Food and Drug Administration, Silver Spring, MD 20993, USA
- Aria Pezeshk, DIDSR/OSEL/CDRH, U.S. Food and Drug Administration, Silver Spring, MD 20993, USA
- Xiaosong Wang, Imaging Biomarkers and Computer-aided Diagnosis Lab, Radiology and Imaging Sciences, NIH Clinical Center, Bethesda, MD 20892-1182, USA
- Karen Drukker, Department of Radiology, University of Chicago, Chicago, IL 60637, USA
- Kenny H. Cha, DIDSR/OSEL/CDRH, U.S. Food and Drug Administration, Silver Spring, MD 20993, USA
- Ronald M. Summers, Imaging Biomarkers and Computer-aided Diagnosis Lab, Radiology and Imaging Sciences, NIH Clinical Center, Bethesda, MD 20892-1182, USA
128
129
Affiliation(s)
- Doohee Lee, Laboratory for Imaging Science and Technology, Department of Electrical and Computer Engineering, Institute of Engineering Research, Seoul National University, Seoul, Korea
- Jingu Lee, Laboratory for Imaging Science and Technology, Department of Electrical and Computer Engineering, Institute of Engineering Research, Seoul National University, Seoul, Korea
- Jingyu Ko, Laboratory for Imaging Science and Technology, Department of Electrical and Computer Engineering, Institute of Engineering Research, Seoul National University, Seoul, Korea
- Jaeyeon Yoon, Laboratory for Imaging Science and Technology, Department of Electrical and Computer Engineering, Institute of Engineering Research, Seoul National University, Seoul, Korea
- Kanghyun Ryu, Department of Electrical and Electronic Engineering, Yonsei University, Seoul, Korea
- Yoonho Nam, Department of Radiology, Seoul St. Mary's Hospital, College of Medicine, The Catholic University of Korea, Seoul, Korea
130
Tilborghs S, Dresselaers T, Claus P, Claessen G, Bogaert J, Maes F, Suetens P. Robust motion correction for cardiac T1 and ECV mapping using a T1 relaxation model approach. Med Image Anal 2018; 52:212-227. PMID: 30597459; DOI: 10.1016/j.media.2018.12.004.
Abstract
T1 and ECV mapping are quantitative methods for myocardial tissue characterization using cardiac MRI, and are highly relevant for the diagnosis of diffuse myocardial diseases. Since the maps are calculated pixel-by-pixel from a set of MRI images with different T1-weighting, it is critical to ensure exact spatial correspondence between these images. However, in practice, different sources of motion, e.g. cardiac, respiratory, or patient motion, hamper accurate T1 and ECV calculation such that retrospective motion correction is required. We propose a new robust non-rigid registration framework combining a data-driven initialization with a model-based registration approach, which uses a model for T1 relaxation to avoid direct registration of images with highly varying contrast. The registration between native T1 and enhanced T1 to obtain a motion-free ECV map is also calculated using information from T1 model-fitting. The method was validated on three datasets recorded with two substantially different acquisition protocols (MOLLI: dataset 1 (n = 15) and dataset 2 (n = 29); STONE: dataset 3 (n = 210)), one under breath-hold and one free-breathing. The average Dice coefficient increased from 72.6 ± 12.1% to 82.3 ± 7.4% (P < 0.05) and the mean boundary error decreased from 2.91 ± 1.51 mm to 1.62 ± 0.80 mm (P < 0.05) for motion correction in a single T1-weighted image sequence (3 datasets), while the average Dice coefficient increased from 63.4 ± 22.5% to 79.2 ± 8.5% (P < 0.05) and the mean boundary error decreased from 3.26 ± 2.64 mm to 1.77 ± 0.86 mm (P < 0.05) between native and enhanced sequences (datasets 1 and 2). Overall, the native T1 SD error decreased from 67.32 ± 32.57 ms to 58.11 ± 21.59 ms (P < 0.05), the enhanced SD error from 30.15 ± 25 ms to 22.74 ± 8.94 ms (P < 0.05), and the ECV SD error from 10.08 ± 9.59% to 5.42 ± 3.21% (P < 0.05) (datasets 1 and 2).
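The T1 relaxation model that model-based registration relies on is typically the three-parameter recovery S(TI) = A − B·exp(−TI/T1*), with the Look-Locker correction T1 = T1*·(B/A − 1) for MOLLI-style acquisitions. A hedged sketch on synthetic values (inversion times and true parameters below are illustrative assumptions, not the paper's data); since the model is linear in (A, B) for fixed T1*, it can be fitted by scanning T1* and solving a linear least-squares problem at each candidate:

```python
import numpy as np

# Assumed inversion times (ms) and synthetic ground-truth parameters.
TI = np.array([100., 180., 260., 1000., 1800., 2600., 3400., 4200.])
A_true, B_true, T1_star_true = 1000.0, 1800.0, 800.0
signal = A_true - B_true * np.exp(-TI / T1_star_true)

# Variable projection: 1 ms grid over T1*, linear LSQ for (A, B) at each step.
best = (np.inf, None)
for T1_star in np.arange(100.0, 2000.0, 1.0):
    X = np.column_stack([np.ones_like(TI), -np.exp(-TI / T1_star)])
    coef = np.linalg.lstsq(X, signal, rcond=None)[0]
    r = signal - X @ coef
    sse = float(r @ r)
    if sse < best[0]:
        best = (sse, (coef[0], coef[1], T1_star))

A_fit, B_fit, T1_star_fit = best[1]
T1 = T1_star_fit * (B_fit / A_fit - 1.0)     # Look-Locker corrected T1 (ms)
print(round(T1_star_fit), round(T1, 1))      # → 800 640.0
```

Fitting the relaxation model at each iteration gives synthetic, contrast-matched images that can be registered to the acquired T1-weighted frames, which is the core idea behind model-based motion correction.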
Affiliation(s)
- Sofie Tilborghs, Department of Electrical Engineering, ESAT/PSI, KU Leuven, Leuven, Belgium; Medical Imaging Research Center, UZ Leuven, Herestraat 49 - 7003, Leuven, 3000, Belgium
- Tom Dresselaers, Department of Imaging and Pathology, Radiology, KU Leuven, Leuven, Belgium; Medical Imaging Research Center, UZ Leuven, Herestraat 49 - 7003, Leuven, 3000, Belgium
- Piet Claus, Department of Cardiovascular Sciences, KU Leuven, Leuven, Belgium; Medical Imaging Research Center, UZ Leuven, Herestraat 49 - 7003, Leuven, 3000, Belgium
- Guido Claessen, Department of Cardiovascular Sciences, KU Leuven, Leuven, Belgium
- Jan Bogaert, Department of Imaging and Pathology, Radiology, KU Leuven, Leuven, Belgium; Medical Imaging Research Center, UZ Leuven, Herestraat 49 - 7003, Leuven, 3000, Belgium
- Frederik Maes, Department of Electrical Engineering, ESAT/PSI, KU Leuven, Leuven, Belgium; Medical Imaging Research Center, UZ Leuven, Herestraat 49 - 7003, Leuven, 3000, Belgium
- Paul Suetens, Department of Electrical Engineering, ESAT/PSI, KU Leuven, Leuven, Belgium; Medical Imaging Research Center, UZ Leuven, Herestraat 49 - 7003, Leuven, 3000, Belgium
131
Lundervold AS, Lundervold A. An overview of deep learning in medical imaging focusing on MRI. Z Med Phys 2018; 29:102-127. PMID: 30553609; DOI: 10.1016/j.zemedi.2018.11.002.
Abstract
What has happened in machine learning lately, and what does it mean for the future of medical image analysis? Machine learning has witnessed a tremendous amount of attention over the last few years. The current boom started around 2009, when so-called deep artificial neural networks began outperforming other established models on a number of important benchmarks. Deep neural networks are now the state-of-the-art machine learning models across a variety of areas, from image analysis to natural language processing, and are widely deployed in academia and industry. These developments have a huge potential for medical imaging technology, medical data analysis, medical diagnostics, and healthcare in general, a potential that is slowly being realized. We provide a short overview of recent advances and some associated challenges in machine learning applied to medical image processing and image analysis. As this has become a very broad and fast-expanding field, we will not survey the entire landscape of applications, but put particular focus on deep learning in MRI. Our aim is threefold: (i) give a brief introduction to deep learning with pointers to core references; (ii) indicate how deep learning has been applied to the entire MRI processing chain, from acquisition to image retrieval, from segmentation to disease prediction; (iii) provide a starting point for people interested in experimenting and perhaps contributing to the field of deep learning for medical imaging by pointing out good educational resources, state-of-the-art open-source code, and interesting sources of data and problems related to medical imaging.
Affiliation(s)
- Alexander Selvikvåg Lundervold, Mohn Medical Imaging and Visualization Centre (MMIV), Haukeland University Hospital, Norway; Department of Computing, Mathematics and Physics, Western Norway University of Applied Sciences, Norway
- Arvid Lundervold, Mohn Medical Imaging and Visualization Centre (MMIV), Haukeland University Hospital, Norway; Neuroinformatics and Image Analysis Laboratory, Department of Biomedicine, University of Bergen, Norway; Department of Health and Functioning, Western Norway University of Applied Sciences, Norway
132
Improvement of image quality at CT and MRI using deep learning. Jpn J Radiol 2018; 37:73-80. PMID: 30498876; DOI: 10.1007/s11604-018-0796-2.
Abstract
Deep learning has been developed by computer scientists. Here, we discuss techniques for improving the image quality of diagnostic computed tomography and magnetic resonance imaging with the aid of deep learning. We categorize the techniques for improving image quality as "noise and artifact reduction", "super-resolution", and "image acquisition and reconstruction". For each category, we present and outline the features of some studies.
133
Jones DK, Alexander DC, Bowtell R, Cercignani M, Dell'Acqua F, McHugh DJ, Miller KL, Palombo M, Parker GJM, Rudrapatna US, Tax CMW. Microstructural imaging of the human brain with a 'super-scanner': 10 key advantages of ultra-strong gradients for diffusion MRI. Neuroimage 2018; 182:8-38. PMID: 29793061; DOI: 10.1016/j.neuroimage.2018.05.047.
Abstract
The key component of a microstructural diffusion MRI 'super-scanner' is a dedicated high-strength gradient system that enables stronger diffusion weightings per unit time compared to conventional gradient designs. This can, in turn, drastically shorten the time needed for diffusion encoding, increase the signal-to-noise ratio, and facilitate measurements at shorter diffusion times. This review, written from the perspective of the UK National Facility for In Vivo MR Imaging of Human Tissue Microstructure, an initiative to establish a shared 300 mT/m-gradient facility amongst the microstructural imaging community, describes ten advantages of ultra-strong gradients for microstructural imaging. Specifically, we will discuss how the increase of the accessible measurement space compared to lower-gradient systems (in terms of Δ, b-value, and TE) can accelerate developments in the areas of 1) axon diameter distribution mapping; 2) microstructural parameter estimation; 3) mapping micro- vs. macroscopic anisotropy features with gradient waveforms beyond a single pair of pulsed gradients; 4) multi-contrast experiments, e.g. diffusion-relaxometry; 5) tractography and high-resolution imaging in vivo and 6) post mortem; 7) diffusion-weighted spectroscopy of metabolites other than water; 8) tumour characterisation; 9) functional diffusion MRI; and 10) quality enhancement of images acquired on lower-gradient systems. Finally, we discuss practical barriers in the use of ultra-strong gradients, and provide an outlook on the next generation of 'super-scanners'.
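The gain from ultra-strong gradients follows directly from the Stejskal-Tanner relation for a pulsed-gradient pair, b = (γ·G·δ)²·(Δ − δ/3): at fixed pulse timings, the achievable b-value grows with G². A small illustrative computation (the pulse timings below are assumed values for illustration, not from the review):

```python
# Stejskal-Tanner b-value for a single pulsed-gradient pair.
GAMMA = 2.675e8                      # proton gyromagnetic ratio, rad s^-1 T^-1

def b_value(G, delta, Delta):
    """b in s/mm^2; G in T/m, pulse duration delta and separation Delta in s."""
    b_si = (GAMMA * G * delta) ** 2 * (Delta - delta / 3.0)   # s/m^2
    return b_si * 1e-6                                        # convert to s/mm^2

delta, Delta = 0.010, 0.030          # assumed: 10 ms pulses, 30 ms separation
b_ultra = b_value(0.300, delta, Delta)   # 300 mT/m 'super-scanner' gradient
b_conv  = b_value(0.080, delta, Delta)   # 80 mT/m conventional gradient
print(round(b_ultra), round(b_conv), round(b_ultra / b_conv, 2))
```

At identical timings, 300 mT/m reaches (300/80)² ≈ 14× the b-value of an 80 mT/m system; equivalently, the same b-value can be reached with much shorter δ and Δ, hence shorter TE and higher SNR.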
Affiliation(s)
- D K Jones, Cardiff University Brain Research Imaging Centre (CUBRIC), School of Psychology, Cardiff University, Maindy Road, Cardiff, CF24 4HQ, UK; School of Psychology, Faculty of Health Sciences, Australian Catholic University, Melbourne, Victoria, 3065, Australia
- D C Alexander, Centre for Medical Image Computing (CMIC), Department of Computer Science, UCL (University College London), Gower Street, London, UK; Clinical Imaging Research Centre, National University of Singapore, Singapore
- R Bowtell, Sir Peter Mansfield Magnetic Resonance Centre, School of Physics and Astronomy, University of Nottingham, University Park, Nottingham, UK
- M Cercignani, Department of Psychiatry, Brighton and Sussex Medical School, Brighton, UK
- F Dell'Acqua, Natbrainlab, Department of Neuroimaging, King's College London, London, UK
- D J McHugh, Division of Informatics, Imaging and Data Sciences, The University of Manchester, Manchester, UK; CRUK and EPSRC Cancer Imaging Centre in Cambridge and Manchester, Cambridge and Manchester, UK
- K L Miller, Oxford Centre for Functional MRI of the Brain, University of Oxford, Oxford, UK
- M Palombo, Centre for Medical Image Computing (CMIC), Department of Computer Science, UCL (University College London), Gower Street, London, UK
- G J M Parker, Division of Informatics, Imaging and Data Sciences, The University of Manchester, Manchester, UK; CRUK and EPSRC Cancer Imaging Centre in Cambridge and Manchester, Cambridge and Manchester, UK; Bioxydyn Ltd., Manchester, UK
- U S Rudrapatna, Cardiff University Brain Research Imaging Centre (CUBRIC), School of Psychology, Cardiff University, Maindy Road, Cardiff, CF24 4HQ, UK
- C M W Tax, Cardiff University Brain Research Imaging Centre (CUBRIC), School of Psychology, Cardiff University, Maindy Road, Cardiff, CF24 4HQ, UK
134
Gibbons EK, Hodgson KK, Chaudhari AS, Richards LG, Majersik JJ, Adluru G, DiBella EVR. Simultaneous NODDI and GFA parameter map generation from subsampled q-space imaging using deep learning. Magn Reson Med 2018; 81:2399-2411. PMID: 30426558; DOI: 10.1002/mrm.27568.
Abstract
PURPOSE To develop a robust multidimensional deep-learning based method to simultaneously generate accurate neurite orientation dispersion and density imaging (NODDI) and generalized fractional anisotropy (GFA) parameter maps from undersampled q-space datasets for use in stroke imaging. METHODS Traditional diffusion spectrum imaging (DSI) capable of producing accurate NODDI and GFA parameter maps requires hundreds of q-space samples, which renders the scan time clinically untenable. A convolutional neural network (CNN) was trained to generate NODDI and GFA parameter maps simultaneously from 10× undersampled q-space data. A total of 48 DSI scans from 15 stroke patients and 14 normal subjects were acquired for training, validating, and testing this method. The proposed network was compared to previously proposed voxelwise machine learning based approaches for q-space imaging. Network-generated images were used to predict stroke functional outcome measures. RESULTS The proposed network achieves significant performance advantages compared to previously proposed machine learning approaches, showing significant improvements across image quality metrics. Generating these parameter maps using CNNs also comes with the computational benefit of only needing to generate and train a single network instead of multiple networks for each parameter type. Post-stroke outcome prediction metrics do not appreciably change when using images generated from this proposed technique. Over three test participants, the predicted stroke functional outcome scores were within 1-6% of the clinical evaluations. CONCLUSIONS NODDI and GFA parameter estimates obtained simultaneously with a deep learning network from highly undersampled q-space data were improved compared to other state-of-the-art methods, providing a 10-fold reduction in scan time relative to conventional methods.
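GFA, one of the two targets the network regresses, has a closed form on a sampled orientation distribution function (ODF): GFA = std(Ψ)/rms(Ψ) in Tuch's definition. A minimal sketch (the toy ODFs below are assumptions purely for illustration):

```python
import numpy as np

# Generalized fractional anisotropy of a sampled ODF:
# GFA = sqrt( n * sum((psi - mean)^2) / ((n - 1) * sum(psi^2)) ) = std/rms.
def gfa(odf):
    odf = np.asarray(odf, dtype=float)
    n = odf.size
    num = n * np.sum((odf - odf.mean()) ** 2)
    den = (n - 1) * np.sum(odf ** 2)
    return np.sqrt(num / den)

iso = np.ones(100)                       # flat (isotropic) ODF -> GFA = 0
peak = np.zeros(100); peak[0] = 1.0      # single sharp peak -> GFA = 1
print(gfa(iso), gfa(peak))
```

The expensive part is not this formula but reconstructing the ODF from hundreds of q-space samples, which is what the CNN sidesteps by mapping undersampled signals directly to the parameter maps.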
Affiliation(s)
- Eric K Gibbons, Department of Radiology and Imaging Sciences, University of Utah, Salt Lake City, Utah
- Kyler K Hodgson, Department of Bioengineering, University of Utah, Salt Lake City, Utah
- Lorie G Richards, Department of Occupational and Recreational Therapies, University of Utah, Salt Lake City, Utah
- Ganesh Adluru, Department of Radiology and Imaging Sciences, University of Utah, Salt Lake City, Utah
- Edward V R DiBella, Department of Radiology and Imaging Sciences, University of Utah, Salt Lake City, Utah
135
Cenek M, Hu M, York G, Dahl S. Survey of Image Processing Techniques for Brain Pathology Diagnosis: Challenges and Opportunities. Front Robot AI 2018; 5:120. PMID: 33500999; PMCID: PMC7805910; DOI: 10.3389/frobt.2018.00120.
Abstract
In recent years, a number of new products introduced to the global market combine intelligent robotics, artificial intelligence and smart interfaces to provide powerful tools to support professional decision making. However, while brain disease diagnosis from brain scan images is supported by imaging robotics, the data analysis to form a medical diagnosis is performed solely by highly trained medical professionals. Recent advances in medical imaging techniques, artificial intelligence, machine learning and computer vision present new opportunities to build intelligent decision support tools to aid the diagnostic process, increase disease detection accuracy, reduce error, automate the monitoring of patients' recovery, and discover new knowledge about the disease cause and its treatment. This article introduces the topic of medical diagnosis of brain diseases from MRI-based images. We describe existing, multi-modal imaging techniques of the brain's soft tissue and describe in detail how the resulting images are analyzed by a radiologist to form a diagnosis. Several comparisons between the best results in natural-scene classification and in medical image analysis illustrate the challenges of applying existing image processing techniques to the medical image analysis domain. The survey of medical image processing methods also identified several knowledge gaps, the need for automation of image processing analysis, and the identification of the brain structures in the medical images that differentiate healthy tissue from a pathology. This survey is grounded in the cases of brain tumor analysis and traumatic brain injury diagnosis, as these two case studies illustrate the vastly different approaches needed to define, extract, and synthesize meaningful information from multiple MRI image sets for a diagnosis. Finally, the article summarizes artificial intelligence frameworks that are built as multi-stage, hybrid, hierarchical information processing workflows and the benefits of applying these models for medical diagnosis to build intelligent physician's aids with knowledge transparency, expert knowledge embedding, and increased analytical quality.
Affiliation(s)
- Martin Cenek, Department of Computer Science, University of Portland, Portland, OR, United States
- Masa Hu, Department of Computer Science, University of Portland, Portland, OR, United States
- Gerald York, TBI Imaging and Research, Alaska Radiology Associates, Anchorage, AK, United States
- Spencer Dahl, Columbia College, Columbia University, New York, NY, United States
136
Pesapane F, Codari M, Sardanelli F. Artificial intelligence in medical imaging: threat or opportunity? Radiologists again at the forefront of innovation in medicine. Eur Radiol Exp 2018; 2:35. PMID: 30353365; PMCID: PMC6199205; DOI: 10.1186/s41747-018-0061-6.
Abstract
One of the most promising areas of health innovation is the application of artificial intelligence (AI), primarily in medical imaging. This article provides basic definitions of terms such as "machine/deep learning" and analyses the integration of AI into radiology. Publications on AI have drastically increased from about 100-150 per year in 2007-2008 to 700-800 per year in 2016-2017. Magnetic resonance imaging and computed tomography collectively account for more than 50% of current articles. Neuroradiology appears in about one-third of the papers, followed by musculoskeletal, cardiovascular, breast, urogenital, lung/thorax, and abdomen, each representing 6-9% of articles. With an irreversible increase in the amount of data and the possibility to use AI to identify findings either detectable or not by the human eye, radiology is now moving from a subjective perceptual skill to a more objective science. Radiologists, who were at the forefront of the digital era in medicine, can guide the introduction of AI into healthcare. Yet, they will not be replaced, because radiology includes communication of diagnosis, consideration of the patient's values and preferences, medical judgment, quality assurance, education, policy-making, and interventional procedures. The higher efficiency provided by AI will allow radiologists to perform more value-added tasks, becoming more visible to patients and playing a vital role in multidisciplinary clinical teams.
Affiliation(s)
- Filippo Pesapane, Postgraduate School in Radiodiagnostics, Università degli Studi di Milano, Via Festa del Perdono 7, 20122, Milan, Italy
- Marina Codari, Unit of Radiology, IRCCS Policlinico San Donato, Via Morandi 30, 20097 San Donato Milanese, Milan, Italy
- Francesco Sardanelli, Unit of Radiology, IRCCS Policlinico San Donato, Via Morandi 30, 20097 San Donato Milanese, Milan, Italy; Department of Biomedical Sciences for Health, Università degli Studi di Milano, Via Morandi 30, 20097 San Donato Milanese, Milan, Italy
137
Zaharchuk G, Gong E, Wintermark M, Rubin D, Langlotz CP. Deep Learning in Neuroradiology. AJNR Am J Neuroradiol 2018; 39:1776-1784. PMID: 29419402; PMCID: PMC7410723; DOI: 10.3174/ajnr.a5543.
Abstract
Deep learning is a form of machine learning using a convolutional neural network architecture that shows tremendous promise for imaging applications. It is increasingly being adapted from its original demonstration in computer vision applications to medical imaging. Because of the high volume and wealth of multimodal imaging information acquired in typical studies, neuroradiology is poised to be an early adopter of deep learning. Compelling deep learning research applications have been demonstrated, and their use is likely to grow rapidly. This review article describes the reasons behind this rapid growth, outlines the basic methods used to train and test deep learning models, and presents a brief overview of current and potential clinical applications, with an emphasis on how they are likely to change future neuroradiology practice. Facility with these methods among neuroimaging researchers and clinicians will be important to channel and harness the vast potential of this new method.
Affiliation(s)
- G Zaharchuk, Departments of Radiology (G.Z., M.W., D.R., C.P.L.), Stanford University and Stanford University Medical Center, Stanford, California
- E Gong, Electrical Engineering (E.G.), Stanford University and Stanford University Medical Center, Stanford, California
- M Wintermark, Departments of Radiology (G.Z., M.W., D.R., C.P.L.), Stanford University and Stanford University Medical Center, Stanford, California
- D Rubin, Departments of Radiology (G.Z., M.W., D.R., C.P.L.), Stanford University and Stanford University Medical Center, Stanford, California
- C P Langlotz, Departments of Radiology (G.Z., M.W., D.R., C.P.L.), Stanford University and Stanford University Medical Center, Stanford, California
138
Huff TJ, Ludwig PE, Zuniga JM. The potential for machine learning algorithms to improve and reduce the cost of 3-dimensional printing for surgical planning. Expert Rev Med Devices 2018; 15:349-356. PMID: 29723481; DOI: 10.1080/17434440.2018.1473033.
Abstract
INTRODUCTION 3D-printed anatomical models play an important role in medical and research settings. The recent successes of 3D anatomical models in healthcare have led many institutions to adopt the technology. However, there remain several issues that must be addressed before it can become more widespread. Chief among these are the cost and time of manufacturing. Machine learning (ML) could be utilized to solve these issues by streamlining the 3D modeling process through rapid medical image segmentation and improved patient selection and image acquisition. The current challenges, potential solutions, and future directions for ML and 3D anatomical modeling in healthcare are discussed. AREAS COVERED This review covers research articles in the field of machine learning as related to 3D anatomical modeling. Topics discussed include automated image segmentation, cost reduction, and related time constraints. EXPERT COMMENTARY ML-based segmentation of medical images could potentially improve the process of 3D anatomical modeling. However, until more research is done to validate these technologies in clinical practice, their impact on patient outcomes will remain unknown. We have the necessary computational tools to tackle the problems discussed. The difficulty now lies in our ability to collect sufficient data.
Affiliation(s)
- Trevor J Huff, Creighton University School of Medicine, Omaha, USA
- Jorge M Zuniga, Department of Biomechanics, University of Nebraska at Omaha, Omaha, USA; Facultad de Ciencias de la Salud, Universidad Autónoma de Chile, Chile
139
Artificial intelligence as a medical device in radiology: ethical and regulatory issues in Europe and the United States. Insights Imaging 2018; 9:745-753. PMID: 30112675; PMCID: PMC6206380; DOI: 10.1007/s13244-018-0645-y.
Abstract
Worldwide interest in artificial intelligence (AI) applications is growing rapidly. In medicine, devices based on machine/deep learning have proliferated, especially for image analysis, presaging significant new challenges for the utility of AI in healthcare. This inevitably raises numerous legal and ethical questions. In this paper we analyse the state of AI regulation in the context of medical device development, and strategies to make AI applications safe and useful in the future. We analyse the legal framework regulating medical devices and data protection in Europe and in the United States, assessing developments that are currently taking place. The European Union (EU) is reforming these fields with new legislation (General Data Protection Regulation [GDPR], Cybersecurity Directive, Medical Devices Regulation, In Vitro Diagnostic Medical Device Regulation). This reform is gradual, but it has now made its first impact, with the GDPR and the Cybersecurity Directive having taken effect in May 2018. As regards the United States (U.S.), the regulatory scene is predominantly controlled by the Food and Drug Administration. This paper considers issues of accountability, both legal and ethical. The processes of medical device decision-making are largely unpredictable; therefore, holding the creators accountable for them clearly raises concerns. There is a lot that can be done to regulate AI applications. If this is done properly and in a timely manner, the potential of AI-based technology, in radiology as well as in other fields, will be invaluable.
Teaching Points
• AI applications are medical devices supporting detection/diagnosis, work-flow, cost-effectiveness.
• Regulations for safety, privacy protection, and ethical use of sensitive information are needed.
• EU and U.S. have different approaches for approving and regulating new medical devices.
• EU laws consider cyberattacks, incidents (notification and minimisation), and service continuity.
• U.S. laws ask for opt-in data processing and use as well as for clear consumer consent.
|
140
|
Kim J, Hong J, Park H. Prospects of deep learning for medical imaging. PRECISION AND FUTURE MEDICINE 2018; 2:37-52. [DOI: 10.23838/pfm.2018.00030] [Citation(s) in RCA: 27] [Impact Index Per Article: 3.9] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/10/2018] [Accepted: 04/14/2018] [Indexed: 08/29/2023] Open
|
141
|
Choy G, Khalilzadeh O, Michalski M, Do S, Samir AE, Pianykh OS, Geis JR, Pandharipande PV, Brink JA, Dreyer KJ. Current Applications and Future Impact of Machine Learning in Radiology. Radiology 2018; 288:318-328. [PMID: 29944078 DOI: 10.1148/radiol.2018171820] [Citation(s) in RCA: 469] [Impact Index Per Article: 67.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/14/2022]
Abstract
Recent advances and future perspectives of machine learning techniques offer promising applications in medical imaging. Machine learning has the potential to improve different steps of the radiology workflow including order scheduling and triage, clinical decision support systems, detection and interpretation of findings, postprocessing and dose estimation, examination quality control, and radiology reporting. In this article, the authors review examples of current applications of machine learning and artificial intelligence techniques in diagnostic radiology. In addition, the future impact and natural extension of these techniques in radiology practice are discussed.
Affiliation(s)
- Garry Choy, Omid Khalilzadeh, Mark Michalski, Synho Do, Anthony E Samir, Oleg S Pianykh, Pari V Pandharipande, James A Brink, Keith J Dreyer: Department of Radiology, Massachusetts General Hospital, Harvard Medical School, 55 Fruit St, Boston, Mass 02114
- J Raymond Geis: Department of Radiology, University of Colorado School of Medicine, Aurora, Colo
|
142
|
Hammernik K, Klatzer T, Kobler E, Recht MP, Sodickson DK, Pock T, Knoll F. Learning a variational network for reconstruction of accelerated MRI data. Magn Reson Med 2018; 79:3055-3071. [PMID: 29115689 PMCID: PMC5902683 DOI: 10.1002/mrm.26977] [Citation(s) in RCA: 784] [Impact Index Per Article: 112.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/30/2017] [Revised: 09/19/2017] [Accepted: 09/27/2017] [Indexed: 01/14/2023]
Abstract
PURPOSE To allow fast and high-quality reconstruction of clinical accelerated multi-coil MR data by learning a variational network that combines the mathematical structure of variational models with deep learning. THEORY AND METHODS Generalized compressed sensing reconstruction formulated as a variational model is embedded in an unrolled gradient descent scheme. All parameters of this formulation, including the prior model defined by filter kernels and activation functions as well as the data term weights, are learned during an offline training procedure. The learned model can then be applied online to previously unseen data. RESULTS The variational network approach is evaluated on a clinical knee imaging protocol for different acceleration factors and sampling patterns using retrospectively and prospectively undersampled data. The variational network reconstructions outperform standard reconstruction algorithms, verified by quantitative error measures and a clinical reader study for regular sampling and acceleration factor 4. CONCLUSION Variational network reconstructions preserve the natural appearance of MR images as well as pathologies that were not included in the training data set. Due to its high computational performance, that is, reconstruction time of 193 ms on a single graphics card, and the omission of parameter tuning once the network is trained, this new approach to image reconstruction can easily be integrated into clinical workflow. Magn Reson Med 79:3055-3071, 2018. © 2017 International Society for Magnetic Resonance in Medicine.
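As a rough illustration of the unrolled gradient descent scheme described above, the following NumPy sketch alternates a data-consistency gradient step with a placeholder prior gradient. In the actual variational network the step sizes, filter kernels, and activation functions are learned from training data; here they are fixed, illustrative values, and the "prior" is a simple l2 shrinkage stand-in.

```python
import numpy as np

def unrolled_recon(y, mask, num_iters=8, step=0.5, lam=0.05):
    """Sketch of an unrolled gradient-descent MRI reconstruction.

    y    : undersampled k-space (2D complex array, already masked)
    mask : binary sampling mask of the same shape
    The learned components of the variational network (step sizes,
    filters, activations) are replaced by fixed placeholders here.
    """
    x = np.fft.ifft2(y * mask)  # zero-filled initial guess
    for _ in range(num_iters):
        # data-consistency gradient: A^H (A x - y), with A = mask * FFT
        residual = mask * (np.fft.fft2(x) - y)
        grad_data = np.fft.ifft2(residual)
        # stand-in for the learned prior: simple l2 shrinkage toward zero
        grad_prior = lam * x
        x = x - step * (grad_data + grad_prior)
    return x
```

After offline training of the real network, the same unrolled loop is applied online to unseen data, which is why the 193 ms reconstruction time quoted above is possible.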
Affiliation(s)
- Kerstin Hammernik, Teresa Klatzer, Erich Kobler: Institute of Computer Graphics and Vision, Graz University of Technology, Graz, Austria
- Michael P Recht, Daniel K Sodickson, Florian Knoll: Center for Biomedical Imaging, Department of Radiology, NYU School of Medicine, New York, NY, United States; Center for Advanced Imaging Innovation and Research (CAIR), NYU School of Medicine, New York, NY, United States
- Thomas Pock: Institute of Computer Graphics and Vision, Graz University of Technology, Graz, Austria; Center for Vision, Automation & Control, AIT Austrian Institute of Technology GmbH, Vienna, Austria
|
143
|
Savalia S, Emamian V. Cardiac Arrhythmia Classification by Multi-Layer Perceptron and Convolution Neural Networks. Bioengineering (Basel) 2018; 5:E35. [PMID: 29734666 PMCID: PMC6027502 DOI: 10.3390/bioengineering5020035] [Citation(s) in RCA: 43] [Impact Index Per Article: 6.1] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/28/2018] [Revised: 04/18/2018] [Accepted: 04/28/2018] [Indexed: 12/02/2022] Open
Abstract
The electrocardiogram (ECG) plays an essential role in medicine, as it records the heart's electrical activity over time and is used to detect numerous cardiovascular diseases. If a recorded ECG signal shows a certain irregularity in its predefined features, this is called arrhythmia; types include tachycardia, bradycardia, supraventricular arrhythmias, and ventricular arrhythmias. This motivated research on distinguishing between several arrhythmias using deep neural network algorithms such as the multi-layer perceptron (MLP) and convolutional neural network (CNN). The TensorFlow library, developed by Google for deep learning and machine learning, is used in Python to implement the algorithms proposed here. The ECG databases available at PhysioBank.com and kaggle.com were used for training, testing, and validation of the MLP and CNN algorithms. The proposed approach consists of an MLP with four hidden layers (with weights and biases) and a four-layer convolutional neural network, which map ECG samples to the different arrhythmia classes. The accuracy of the algorithm surpasses the performance of current algorithms developed by other cardiologists in both sensitivity and precision.
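The following is a minimal NumPy sketch of the kind of MLP classifier described above, reduced to one hidden layer (the paper uses four) and trained here on synthetic two-class waveforms rather than PhysioBank recordings. All layer sizes, the learning rate, and the data generation are illustrative assumptions, not the authors' TensorFlow configuration.

```python
import numpy as np

rng = np.random.default_rng(0)

def mlp_init(n_in, n_hidden, n_out):
    # Small random weights; one hidden layer for brevity (paper: four).
    return {
        "W1": rng.standard_normal((n_in, n_hidden)) * 0.1, "b1": np.zeros(n_hidden),
        "W2": rng.standard_normal((n_hidden, n_out)) * 0.1, "b2": np.zeros(n_out),
    }

def mlp_forward(p, X):
    h = np.maximum(0.0, X @ p["W1"] + p["b1"])               # ReLU hidden layer
    logits = h @ p["W2"] + p["b2"]
    e = np.exp(logits - logits.max(axis=1, keepdims=True))   # stable softmax
    return h, e / e.sum(axis=1, keepdims=True)

def mlp_train_step(p, X, y, lr=0.1):
    """One full-batch gradient step; returns the cross-entropy loss."""
    h, probs = mlp_forward(p, X)
    n = len(X)
    d_logits = probs.copy()
    d_logits[np.arange(n), y] -= 1.0
    d_logits /= n
    d_h = (d_logits @ p["W2"].T) * (h > 0)   # backprop through ReLU
    p["W2"] -= lr * h.T @ d_logits
    p["b2"] -= lr * d_logits.sum(axis=0)
    p["W1"] -= lr * X.T @ d_h
    p["b1"] -= lr * d_h.sum(axis=0)
    return float(-np.log(probs[np.arange(n), y] + 1e-12).mean())
```

In the paper this mapping from ECG samples to arrhythmia classes is trained on real labelled beats; the sketch only shows the forward/backward mechanics.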
Affiliation(s)
- Shalin Savalia: Department of Electrical Engineering, St. Mary's University, 1 Camino Santa Maria, San Antonio, TX 78228, USA
- Vahid Emamian: School of Science, Engineering and Technology, St. Mary's University, San Antonio, TX 78228, USA
|
144
|
Cai C, Wang C, Zeng Y, Cai S, Liang D, Wu Y, Chen Z, Ding X, Zhong J. Single-shot T2 mapping using overlapping-echo detachment planar imaging and a deep convolutional neural network. Magn Reson Med 2018; 80:2202-2214. [DOI: 10.1002/mrm.27205] [Citation(s) in RCA: 32] [Impact Index Per Article: 4.6] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/13/2018] [Revised: 02/28/2018] [Accepted: 03/11/2018] [Indexed: 12/28/2022]
Affiliation(s)
- Congbo Cai: Department of Electronic Science, Fujian Provincial Key Laboratory of Plasma and Magnetic Resonance, Xiamen University, Xiamen, China; Department of Communication Engineering, Xiamen University, Xiamen, China
- Chao Wang: Department of Communication Engineering, Xiamen University, Xiamen, China
- Yiqing Zeng: Department of Communication Engineering, Xiamen University, Xiamen, China
- Shuhui Cai: Department of Electronic Science, Fujian Provincial Key Laboratory of Plasma and Magnetic Resonance, Xiamen University, Xiamen, China
- Dong Liang: Paul C. Lauterbur Research Center for Biomedical Imaging, Shenzhen Institutes of Advanced Technology, CAS, Shenzhen, China
- Yawen Wu: Department of Communication Engineering, Xiamen University, Xiamen, China
- Zhong Chen: Department of Electronic Science, Fujian Provincial Key Laboratory of Plasma and Magnetic Resonance, Xiamen University, Xiamen, China
- Xinghao Ding: Department of Communication Engineering, Xiamen University, Xiamen, China
- Jianhui Zhong: Department of Imaging Sciences, University of Rochester, Rochester, New York; The Center for Brain Imaging Science and Technology and Collaborative Innovation Center for Diagnosis and Treatment of Infectious Diseases, Zhejiang University, Hangzhou, China
|
145
|
Varadarajan D, Haldar JP. Towards optimal linear estimation of orientation distribution functions with arbitrarily sampled diffusion MRI data. Proceedings. IEEE International Symposium on Biomedical Imaging 2018; 2018:743-746. [PMID: 30956753 PMCID: PMC6448790 DOI: 10.1109/isbi.2018.8363680] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.1] [Reference Citation Analysis] [Abstract] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 11/08/2022]
Abstract
The estimation of orientation distribution functions (ODFs) from diffusion MRI data is an important step in diffusion tractography, but existing estimation methods often depend on signal modeling assumptions that are violated by real data, lack theoretical characterization, and/or are only applicable to a small range of q-space sampling patterns. As a result, existing ODF estimation methods may be suboptimal. In this work, we propose a novel ODF estimation approach that learns a linear ODF estimator from training data. The training set contains ideal data samples paired with corresponding ideal ODFs, and the learning procedure reduces to a simple linear least-squares problem. This approach can accommodate arbitrary q-space sampling schemes, can be characterized theoretically, and is theoretically demonstrated to generalize far beyond the training set. The proposed approach is evaluated with simulated and in vivo diffusion data, where it is demonstrated to outperform common alternatives.
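Because the learning procedure reduces to a linear least-squares problem, the estimator can be sketched in a few lines of NumPy. The ridge regularization term and the matrix shapes below are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def learn_linear_odf_estimator(S_train, O_train, reg=1e-6):
    """Learn a linear map H such that H @ s approximates ODF samples.

    S_train : (n_qsamples, n_examples) ideal diffusion signals
    O_train : (n_odf_points, n_examples) corresponding ideal ODFs
    Returns H of shape (n_odf_points, n_qsamples), the regularized
    least-squares solution H = O S^T (S S^T + reg*I)^{-1}.
    """
    G = S_train @ S_train.T
    G += reg * np.eye(G.shape[0])  # small ridge term for stability
    return O_train @ S_train.T @ np.linalg.inv(G)
```

Once learned, applying the estimator to a new q-space measurement vector is a single matrix-vector product, which is what makes the approach compatible with arbitrary sampling schemes.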
Affiliation(s)
- Divya Varadarajan, Justin P Haldar: Signal and Image Processing Institute, University of Southern California, Los Angeles, CA 90089
|
146
|
Gong E, Pauly JM, Wintermark M, Zaharchuk G. Deep learning enables reduced gadolinium dose for contrast-enhanced brain MRI. J Magn Reson Imaging 2018; 48:330-340. [PMID: 29437269 DOI: 10.1002/jmri.25970] [Citation(s) in RCA: 211] [Impact Index Per Article: 30.1] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/13/2017] [Accepted: 01/25/2018] [Indexed: 11/08/2022] Open
Abstract
BACKGROUND There are concerns over gadolinium deposition from gadolinium-based contrast agent (GBCA) administration. PURPOSE To reduce gadolinium dose in contrast-enhanced brain MRI using a deep learning method. STUDY TYPE Retrospective, crossover. POPULATION Sixty patients receiving clinically indicated contrast-enhanced brain MRI. SEQUENCE 3D T1-weighted inversion-recovery prepped fast-spoiled-gradient-echo (IR-FSPGR) imaging was acquired at both 1.5T and 3T. In 60 brain MRI exams, the IR-FSPGR sequence was obtained under three conditions: precontrast, and postcontrast with 10% low-dose (0.01 mmol/kg) and 100% full-dose (0.1 mmol/kg) gadobenate dimeglumine. We trained a deep learning model using the first 10 cases (with mixed indications) to approximate full-dose images from the precontrast and low-dose images. Synthesized full-dose images were created using the trained model in two test sets: 20 patients with mixed indications and 30 patients with glioma. ASSESSMENT For both test sets, low-dose, true full-dose, and synthesized full-dose postcontrast image sets were compared quantitatively using peak signal-to-noise ratio (PSNR) and the structural similarity index (SSIM). For the test set comprised of 20 patients with mixed indications, two neuroradiologists scored blindly and independently for the three postcontrast image sets, evaluating image quality, motion-artifact suppression, and contrast enhancement compared with precontrast images. STATISTICAL ANALYSIS Results were assessed using paired t-tests and noninferiority tests. RESULTS The proposed deep learning method yielded significant (n = 50, P < 0.001) improvements over the low-dose images (>5 dB PSNR gains and >11.0% SSIM). Ratings on image quality (n = 20, P = 0.003) and contrast enhancement (n = 20, P < 0.001) were significantly increased. Compared to true full-dose images, the synthesized full-dose images had a slight but not significant reduction in image quality (n = 20, P = 0.083) and contrast enhancement (n = 20, P = 0.068). Slightly better (n = 20, P = 0.039) motion-artifact suppression was noted in the synthesized images. The noninferiority test rejects the inferiority of the synthesized to true full-dose images for image quality (95% CI: -14% to 9%), artifact suppression (95% CI: -5% to 20%), and contrast enhancement (95% CI: -13% to 6%). DATA CONCLUSION With the proposed deep learning method, gadolinium dose can be reduced 10-fold while preserving contrast information and avoiding significant image quality degradation. LEVEL OF EVIDENCE 3 Technical Efficacy: Stage 5 J. MAGN. RESON. IMAGING 2018;48:330-340.
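The PSNR figure of merit used in this comparison can be computed directly from an image pair. This is a generic sketch of the standard definition, not the authors' evaluation code; the peak value is taken from the reference image.

```python
import numpy as np

def psnr(reference, test):
    """Peak signal-to-noise ratio in dB between a reference image and
    a test image (e.g., true vs. synthesized full-dose)."""
    mse = np.mean((reference.astype(float) - test.astype(float)) ** 2)
    peak = np.max(np.abs(reference))
    return 10.0 * np.log10(peak ** 2 / mse)
```

A ">5 dB PSNR gain", as reported above, means the mean squared error of the synthesized images is less than a third of that of the low-dose images.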
Affiliation(s)
- Enhao Gong: Department of Electrical Engineering, Stanford University, Stanford, California, USA; Department of Radiology, Stanford University, Stanford, California, USA
- John M Pauly: Department of Electrical Engineering, Stanford University, Stanford, California, USA
- Max Wintermark, Greg Zaharchuk: Department of Radiology, Stanford University, Stanford, California, USA
|
147
|
Ye F. Particle swarm optimization-based automatic parameter selection for deep neural networks and its applications in large-scale and high-dimensional data. PLoS One 2017; 12:e0188746. [PMID: 29236718 PMCID: PMC5728507 DOI: 10.1371/journal.pone.0188746] [Citation(s) in RCA: 19] [Impact Index Per Article: 2.4] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/02/2017] [Accepted: 10/02/2017] [Indexed: 01/02/2023] Open
Abstract
In this paper, we propose a new automatic hyperparameter selection approach for determining the optimal network configuration (network structure and hyperparameters) for deep neural networks using particle swarm optimization (PSO) in combination with a steepest gradient descent algorithm. In the proposed approach, network configurations are coded as a set of real-number m-dimensional vectors forming the individuals of the PSO algorithm in the search procedure. During the search procedure, the PSO algorithm is employed to search for optimal network configurations via particles moving in a finite search space, and the steepest gradient descent algorithm is used to train the DNN classifier with a few training epochs (to find a local optimal solution) during the population evaluation of PSO. After the optimization scheme, the steepest gradient descent algorithm is run with more epochs on the final solutions (pbest and gbest) of the PSO algorithm to train a final ensemble model and individual DNN classifiers, respectively. The local search ability of the steepest gradient descent algorithm and the global search capabilities of the PSO algorithm are exploited to determine an optimal solution that is close to the global optimum. We conducted several experiments on hand-written character and biological activity prediction datasets to show that the DNN classifiers trained with the network configurations expressed by the final PSO solutions, used to construct an ensemble model and individual classifiers, outperform a random search approach in terms of generalization performance. Therefore, the proposed approach can be regarded as an alternative tool for automatic network structure and parameter selection for deep neural networks.
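The global-best PSO loop described above can be sketched as follows. In the paper each particle encodes a DNN configuration and the objective is a short training run; here the objective is an arbitrary black-box function, and the inertia and acceleration coefficients (w, c1, c2) are common illustrative defaults, not the paper's settings.

```python
import numpy as np

rng = np.random.default_rng(1)

def pso_minimize(f, dim, n_particles=20, iters=100,
                 w=0.7, c1=1.5, c2=1.5, bounds=(-5.0, 5.0)):
    """Minimal global-best particle swarm optimization sketch."""
    lo, hi = bounds
    x = rng.uniform(lo, hi, (n_particles, dim))   # particle positions
    v = np.zeros((n_particles, dim))              # particle velocities
    pbest = x.copy()                              # personal bests
    pbest_val = np.array([f(p) for p in x])
    g = pbest[np.argmin(pbest_val)].copy()        # global best
    for _ in range(iters):
        r1, r2 = rng.random((2, n_particles, dim))
        # inertia + cognitive pull toward pbest + social pull toward gbest
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
        x = np.clip(x + v, lo, hi)
        vals = np.array([f(p) for p in x])
        improved = vals < pbest_val
        pbest[improved] = x[improved]
        pbest_val[improved] = vals[improved]
        g = pbest[np.argmin(pbest_val)].copy()
    return g, float(pbest_val.min())
```

In the hybrid scheme above, each objective evaluation would itself be a few epochs of gradient descent on the candidate network, and the returned gbest/pbest configurations are then trained to convergence.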
Affiliation(s)
- Fei Ye: School of Information Science and Technology, Southwest Jiaotong University, Chengdu, China
|
148
|
Litjens G, Kooi T, Bejnordi BE, Setio AAA, Ciompi F, Ghafoorian M, van der Laak JAWM, van Ginneken B, Sánchez CI. A survey on deep learning in medical image analysis. Med Image Anal 2017; 42:60-88. [PMID: 28778026 DOI: 10.1016/j.media.2017.07.005] [Citation(s) in RCA: 4750] [Impact Index Per Article: 593.8] [Reference Citation Analysis] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/27/2017] [Revised: 07/24/2017] [Accepted: 07/25/2017] [Indexed: 02/07/2023]
Affiliation(s)
- Geert Litjens, Thijs Kooi, Francesco Ciompi, Mohsen Ghafoorian, Bram van Ginneken, Clara I Sánchez: Diagnostic Image Analysis Group, Radboud University Medical Center, Nijmegen, The Netherlands
|
149
|
Lakhani P, Prater AB, Hutson RK, Andriole KP, Dreyer KJ, Morey J, Prevedello LM, Clark TJ, Geis JR, Itri JN, Hawkins CM. Machine Learning in Radiology: Applications Beyond Image Interpretation. J Am Coll Radiol 2017; 15:350-359. [PMID: 29158061 DOI: 10.1016/j.jacr.2017.09.044] [Citation(s) in RCA: 139] [Impact Index Per Article: 17.4] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/18/2017] [Revised: 09/21/2017] [Accepted: 09/30/2017] [Indexed: 12/18/2022]
Abstract
Much attention has been given to machine learning and its perceived impact in radiology, particularly in light of recent success with image classification in international competitions. However, machine learning is likely to impact radiology outside of image interpretation long before a fully functional "machine radiologist" is implemented in practice. Here, we describe an overview of machine learning, its application to radiology and other domains, and many cases of use that do not involve image interpretation. We hope that better understanding of these potential applications will help radiology practices prepare for the future and realize performance improvement and efficiency gains.
Affiliation(s)
- Paras Lakhani: Department of Radiology, Thomas Jefferson University Hospital, Sidney Kimmel Jefferson Medical College, Philadelphia, Pennsylvania
- Adam B Prater: Department of Radiology and Imaging Sciences, Emory University School of Medicine, Atlanta, Georgia
- R Kent Hutson: Radiology Alliance, Colorado Springs, Colorado; Medical Center Radiologists, Virginia Beach, Virginia
- Kathy P Andriole: Department of Radiology, Brigham and Women's Hospital, Harvard Medical School, Boston, Massachusetts
- Keith J Dreyer: Department of Radiology, Massachusetts General Hospital, Harvard Medical School, Boston, Massachusetts
- Jose Morey: I.B.M. Watson Research, Yorktown Heights, New York; Department of Radiology, University of Virginia, Charlottesville, Virginia; Medical Center Radiologists, Virginia Beach, Virginia
- Toshi J Clark: University of Colorado Medical Center, Denver, Colorado
- Jason N Itri: Department of Radiology, University of Virginia, Charlottesville, Virginia
- C Matthew Hawkins: Department of Radiology and Imaging Sciences, Emory University School of Medicine, Atlanta, Georgia
|
150
|
Kwon K, Kim D, Park H. A parallel MR imaging method using multilayer perceptron. Med Phys 2017; 44:6209-6224. [DOI: 10.1002/mp.12600] [Citation(s) in RCA: 89] [Impact Index Per Article: 11.1] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/01/2017] [Revised: 09/17/2017] [Accepted: 09/18/2017] [Indexed: 11/08/2022] Open
Affiliation(s)
- Kinam Kwon, HyunWook Park: Department of Electrical Engineering, Korea Advanced Institute of Science and Technology (KAIST), Daejeon, South Korea
- Dongchan Kim: College of Medicine, Gachon University, Incheon, South Korea
|