1. Wang S, Wang L, Cao Y, Deng Z, Ye C, Wang R, Zhu Y, Wei H. Self-supervised arbitrary-scale super-angular resolution diffusion MRI reconstruction. Med Phys 2025. [PMID: 39976309] [DOI: 10.1002/mp.17691]
Abstract
BACKGROUND Diffusion magnetic resonance imaging (dMRI) is currently the only noninvasive imaging technique for investigating the microstructure of in vivo tissues. To fully explore complex tissue microstructure at the sub-voxel scale, diffusion-weighted (DW) images along many diffusion gradient directions are usually acquired, which is undoubtedly time consuming and inhibits clinical application. How to estimate tissue microstructure from DW images acquired with only a few diffusion directions remains a challenge. PURPOSE To address this challenge, we propose a self-supervised arbitrary-scale super-angular resolution diffusion MRI reconstruction network (SARDI-nn), which can generate DW images along any direction from a few acquisitions, overcoming the limit that the number of diffusion directions places on exploring tissue microstructure. METHODS SARDI-nn is mainly composed of a diffusion direction-specific DW image feature extraction (DWFE) module and a physics-driven implicit representation and reconstruction (IRR) module. During training, dual downsampling operations are implemented: the first downsampling produces the low-angular resolution (LAR) DW images, and the second constructs the input and learning target of SARDI-nn. The input LAR DW images pass through the DWFE module (composed of several residual blocks) to extract feature representations of the DW images along the input directions; these features, together with the difference between any query diffusion direction and the input directions, are then fed into the IRR module to derive the implicit representation and the DW image along the query direction. Finally, based on the principles of dMRI, an adaptive weighting method is used to refine the DW image quality. During testing, given any diffusion directions, the corresponding DW images along these directions can be inferred directly; accordingly, SARDI-nn realizes arbitrary-scale angular super-resolution. To test the effectiveness of the proposed method, we compare it with several existing methods in terms of peak signal-to-noise ratio (PSNR), structural similarity index measure (SSIM), and root mean square error (RMSE) of the DW images and of microstructure metrics derived from diffusion kurtosis imaging (DKI) and neurite orientation dispersion and density imaging (NODDI) models at different upsampling scales on Human Connectome Project (HCP) and several in-house datasets. RESULTS The comparison results demonstrate that our method achieves almost the best performance at all scales, with the SSIM of reconstructed DW images improved by 10.04% at an upscale of 3 and by 5.9% at an upscale of 15. Regarding the microstructures derived from the DKI and NODDI models, when the upscale is not larger than 6, our method outperforms the best supervised learning method. In addition, test results on external datasets show the good generalizability of our method. CONCLUSIONS SARDI-nn is currently the only method that can reconstruct high-angular resolution DW images at any upscale, allowing both the input diffusion direction number and the upscale to vary; it can therefore be easily extended to unseen test datasets without retraining the model. SARDI-nn provides a promising means for exploring tissue microstructure from DW images along few diffusion gradient directions.
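As a rough illustration of the implicit angular representation idea summarized in this abstract, the sketch below shows how per-direction DW features and the angular difference to a query direction could be decoded into a DW intensity along that direction. It is a minimal, hypothetical example, not the authors' SARDI-nn implementation; the module name, layer sizes, cosine-based difference encoding, and proximity weighting are all assumptions.

```python
# Minimal sketch of an implicit angular super-resolution block (hypothetical,
# not the published SARDI-nn code). Per-voxel features from N_in input
# directions are decoded together with the angular difference to a query
# direction, then pooled by angular proximity into one DW intensity.
import torch
import torch.nn as nn

class ImplicitAngularDecoder(nn.Module):
    def __init__(self, feat_dim=32, hidden=64):
        super().__init__()
        self.decoder = nn.Sequential(
            nn.Linear(feat_dim + 1, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 1),
        )

    def forward(self, feats, in_dirs, query_dir):
        # feats:     (V, N_in, feat_dim)  per-voxel, per-input-direction features
        # in_dirs:   (N_in, 3)            unit gradient directions of the inputs
        # query_dir: (3,)                 unit gradient direction to synthesize
        # Angular difference encoded as |cos angle| (antipodal symmetry of dMRI).
        cos = (in_dirs @ query_dir).abs()                              # (N_in,)
        diff = cos.unsqueeze(-1).unsqueeze(0).expand(feats.shape[0], -1, -1)
        x = torch.cat([feats, diff], dim=-1)                           # (V, N_in, feat_dim+1)
        per_dir = self.decoder(x).squeeze(-1)                          # (V, N_in)
        w = torch.softmax(cos, dim=0)                                  # weight by proximity
        return per_dir @ w                                             # (V,) predicted DW signal

# Toy usage: 10 voxels, 6 input directions, one arbitrary query direction.
feats = torch.randn(10, 6, 32)
in_dirs = nn.functional.normalize(torch.randn(6, 3), dim=1)
q = nn.functional.normalize(torch.randn(3), dim=0)
print(ImplicitAngularDecoder()(feats, in_dirs, q).shape)  # torch.Size([10])
```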
Affiliation(s)
- Shuangxing Wang, Lihui Wang, Ying Cao, Zeyu Deng, Chen Ye: Key Laboratory of Advanced Medical Imaging and Intelligent Computing of Guizhou Province, Engineering Research Center of Text Computing & Cognitive Intelligence, Ministry of Education, State Key Laboratory of Public Big Data, College of Computer Science and Technology, Guizhou University, Guiyang, China
- Rongpin Wang: Key Laboratory of Advanced Medical Imaging and Intelligent Computing of Guizhou Province, Department of Radiology, Guizhou Provincial People's Hospital, Guiyang, China
- Yuemin Zhu: University Lyon, INSA Lyon, CNRS, Inserm, IRP Metislab CREATIS UMR5220, U1206, Lyon, France
- Hongjiang Wei: Institute for Medical Imaging Technology, School of Biomedical Engineering, Shanghai Jiao Tong University, Shanghai, China
2. Yu H, Ai L, Yao R, Li J. A hybrid network for fiber orientation distribution reconstruction employing multi-scale information. Med Phys 2025; 52:1019-1036. [PMID: 39565936] [DOI: 10.1002/mp.17505]
Abstract
BACKGROUND Accurate fiber orientation distribution (FOD) is crucial for resolving complex neural fiber structures. However, existing reconstruction methods often fail to integrate both global and local FOD information, as well as the directional information of fixels, which limits reconstruction accuracy. Additionally, these methods overlook the spatial positional relationships between voxels, resulting in extracted features that lack continuity. In regions with signal distortion, many methods also exhibit issues with reconstruction artifacts. PURPOSE This study addresses these challenges by introducing a new neural network called Fusion-Net. METHODS Fusion-Net comprises both the FOD reconstruction network and the peak direction estimation network. The FOD reconstruction network efficiently fuses the global and local features of the FOD, providing these features with spatial positional information through a competitive coordinate attention mechanism and a progressive updating mechanism, thus ensuring feature continuity. The peak direction estimation network redefines the task of estimating fixel peak directions as a multi-class classification problem. It uses a direction-aware loss function to supply directional information to the FOD reconstruction network. Additionally, we introduce a larger input scale for Fusion-Net to compensate for local signal distortion by incorporating more global information. RESULTS Experimental results demonstrate that the rich FOD features contribute to promising performance in Fusion-Net. The network effectively utilizes these features to enhance reconstruction accuracy while incorporating more global information, effectively mitigating the issue of local signal distortion. CONCLUSIONS This study demonstrates the feasibility of Fusion-Net for reconstructing FOD, providing reliable references for clinical applications.
Affiliation(s)
- Hanyang Yu, Lingmei Ai, Ruoxia Yao, Jiahao Li: School of Computer Science, Shaanxi Normal University, Xi'an, China
3. Karimi D, Warfield SK. Diffusion MRI with Machine Learning. Imaging Neuroscience (Cambridge, Mass.) 2024; 2:10.1162/imag_a_00353. [PMID: 40206511] [PMCID: PMC11981007] [DOI: 10.1162/imag_a_00353]
Abstract
Diffusion-weighted magnetic resonance imaging (dMRI) of the brain offers unique capabilities including noninvasive probing of tissue microstructure and structural connectivity. It is widely used for clinical assessment of disease and injury, and for neuroscience research. Analyzing the dMRI data to extract useful information for medical and scientific purposes can be challenging. The dMRI measurements may suffer from strong noise and artifacts, and may exhibit high inter-session and inter-scanner variability, as well as inter-subject heterogeneity in brain structure. Moreover, the relationship between measurements and the phenomena of interest can be highly complex. Recent years have witnessed increasing use of machine learning methods for dMRI analysis. This manuscript aims to assess these efforts, with a focus on methods that have addressed data preprocessing and harmonization, microstructure mapping, tractography, and white matter tract analysis. We study the main findings, strengths, and weaknesses of the existing methods and suggest topics for future research. We find that machine learning may be exceptionally suited to tackle some of the difficult tasks in dMRI analysis. However, for this to happen, several shortcomings of existing methods and critical unresolved issues need to be addressed. There is a pressing need to improve evaluation practices, to increase the availability of rich training datasets and validation benchmarks, and to address concerns about model generalizability, reliability, and explainability.
Affiliation(s)
- Davood Karimi, Simon K. Warfield: Harvard Medical School and Boston Children's Hospital, Boston, Massachusetts, USA
4. Bartlett JJ, Davey CE, Johnston LA, Duan J. Recovering high-quality fiber orientation distributions from a reduced number of diffusion-weighted images using a model-driven deep learning architecture. Magn Reson Med 2024; 92:2193-2206. [PMID: 38852179] [DOI: 10.1002/mrm.30187]
Abstract
PURPOSE The aim of this study was to develop a model-based deep learning architecture to accurately reconstruct fiber orientation distributions (FODs) from a reduced number of diffusion-weighted images (DWIs), facilitating accurate analysis with reduced acquisition times. METHODS Our proposed architecture, Spherical Deconvolution Network (SDNet), performed FOD reconstruction by mapping 30 DWIs to fully sampled FODs, which had been fit to 288 DWIs. SDNet included DWI-consistency blocks within the network architecture and a fixel-classification penalty within the loss function. SDNet was trained on a subset of the Human Connectome Project, and its performance was compared with that of FOD-Net and multi-shell multi-tissue constrained spherical deconvolution. RESULTS SDNet achieved the strongest results with respect to angular correlation coefficient and sum of squared errors. When the impact of the fixel-classification penalty was increased, we observed an improvement in performance metrics reliant on segmenting the FODs into the correct number of fixels. CONCLUSION Inclusion of DWI-consistency blocks improved reconstruction performance, and the fixel-classification penalty term offered increased control over the angular separation of fixels in the reconstructed FODs.
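To make the loss design described above concrete, the following is a hypothetical sketch of a combined FOD-regression and fixel-classification objective. The function name, the MSE fidelity term, the class definition (0, 1, 2, 3+ fixels), and the penalty weight are assumptions for illustration, not the authors' SDNet code.

```python
# Hypothetical sketch of an FOD-regression loss with a fixel-classification
# penalty (not the published SDNet loss). `pred_sh`/`target_sh` are spherical-
# harmonic FOD coefficients; `fixel_logits`/`fixel_count` classify the number
# of fixels per voxel (0, 1, 2, 3+).
import torch
import torch.nn.functional as F

def fod_with_fixel_penalty(pred_sh, target_sh, fixel_logits, fixel_count, penalty=0.1):
    regression = F.mse_loss(pred_sh, target_sh)                   # FOD fidelity term
    classification = F.cross_entropy(fixel_logits, fixel_count)   # fixel-count penalty
    return regression + penalty * classification

# Toy usage: 8 voxels, 45 SH coefficients (lmax = 8), 4 fixel classes.
pred_sh = torch.randn(8, 45, requires_grad=True)
target_sh = torch.randn(8, 45)
fixel_logits = torch.randn(8, 4, requires_grad=True)
fixel_count = torch.randint(0, 4, (8,))
print(fod_with_fixel_penalty(pred_sh, target_sh, fixel_logits, fixel_count))
```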
Affiliation(s)
- Joseph J Bartlett: Department of Biomedical Engineering, Melbourne Brain Centre Imaging Unit, Graeme Clark Institute, The University of Melbourne, Parkville, Victoria, Australia; School of Computer Science, University of Birmingham, Birmingham, UK
- Catherine E Davey, Leigh A Johnston: Department of Biomedical Engineering, Melbourne Brain Centre Imaging Unit, Graeme Clark Institute, The University of Melbourne, Parkville, Victoria, Australia
- Jinming Duan: School of Computer Science, University of Birmingham, Birmingham, UK; The Alan Turing Institute, London, UK
5. Aguayo-González JF, Ehrlich-Lopez H, Concha L, Rivera M. Light-weight neural network for intra-voxel structure analysis. Front Neuroinform 2024; 18:1277050. [PMID: 39315001] [PMCID: PMC11417038] [DOI: 10.3389/fninf.2024.1277050]
Abstract
We present a novel neural network-based method for analyzing intra-voxel structures, addressing critical challenges in diffusion-weighted MRI analysis for brain connectivity and development studies. The network architecture, called the Local Neighborhood Neural Network, is designed to use the spatial correlations of neighboring voxels for enhanced inference while reducing parameter overhead. Our model exploits these relationships to improve the analysis of complex structures and noisy data environments. We adopt a self-supervised approach to address the lack of ground truth data, generating signals of voxel neighborhoods to populate the training set. This eliminates the need for manual annotations and facilitates training under realistic conditions. Comparative analyses show that our method outperforms the constrained spherical deconvolution (CSD) method in quantitative and qualitative validations. Using phantom images that mimic in vivo data, our approach improves angular error, volume fraction estimation accuracy, and success rate. Furthermore, a qualitative comparison on real data shows better spatial consistency of the proposed method across regions of real brain images. This approach demonstrates enhanced intra-voxel structure analysis capabilities and holds promise for broader application in various imaging scenarios.
Affiliation(s)
- Luis Concha: Department of Behavioral and Cognitive Neurobiology, Institute of Neurobiology, National Autonomous University of Mexico, Queretaro, Mexico
- Mariano Rivera: Centro de Investigacion en Matematicas, Guanajuato, Mexico
6. Kebiri H, Gholipour A, Lin R, Vasung L, Calixto C, Krsnik Ž, Karimi D, Bach Cuadra M. Deep learning microstructure estimation of developing brains from diffusion MRI: A newborn and fetal study. Med Image Anal 2024; 95:103186. [PMID: 38701657] [DOI: 10.1016/j.media.2024.103186]
Abstract
Diffusion-weighted magnetic resonance imaging (dMRI) is widely used to assess the brain white matter. Fiber orientation distribution functions (FODs) are a common way of representing the orientation and density of white matter fibers. However, with standard FOD computation methods, accurate estimation requires a large number of measurements that usually cannot be acquired for newborns and fetuses. We propose to overcome this limitation by using a deep learning method to map as few as six diffusion-weighted measurements to the target FOD. To train the model, we use the FODs computed using multi-shell high angular resolution measurements as target. Extensive quantitative evaluations show that the new deep learning method, using significantly fewer measurements, achieves results comparable or superior to standard methods such as Constrained Spherical Deconvolution and to two state-of-the-art deep learning methods. For voxels with one and two fibers, our method shows agreement rates in terms of the number of fibers of 77.5% and 22.2%, respectively, which are 3% and 5.4% higher than those of the other deep learning methods, and angular errors of 10° and 20°, which are 6° and 5° lower than those of the other deep learning methods. To determine baselines for assessing the performance of our method, we compute agreement metrics using densely sampled newborn data. Moreover, we demonstrate the generalizability of the new deep learning method across scanners, acquisition protocols, and anatomy on two clinical external datasets of newborns and fetuses. We validate fetal FODs, successfully estimated for the first time with deep learning, using post-mortem histological data. Our results show the advantage of deep learning in computing the fiber orientation density for the developing brain from in-vivo dMRI measurements that are often very limited due to constrained acquisition times. Our findings also highlight the intrinsic limitations of dMRI for probing the developing brain microstructure.
Affiliation(s)
- Hamza Kebiri: CIBM Center for Biomedical Imaging, Switzerland; Department of Radiology, Lausanne University Hospital (CHUV) and University of Lausanne (UNIL), Lausanne, Switzerland; Computational Radiology Laboratory, Department of Radiology, Boston Children's Hospital and Harvard Medical School, Boston, MA, USA
- Ali Gholipour: Computational Radiology Laboratory, Department of Radiology, Boston Children's Hospital and Harvard Medical School, Boston, MA, USA
- Rizhong Lin: Department of Radiology, Lausanne University Hospital (CHUV) and University of Lausanne (UNIL), Lausanne, Switzerland; Signal Processing Laboratory 5 (LTS5), École Polytechnique Fédérale de Lausanne (EPFL), Lausanne, Switzerland
- Lana Vasung: Department of Pediatrics, Boston Children's Hospital, and Harvard Medical School, Boston, MA, USA
- Camilo Calixto: Computational Radiology Laboratory, Department of Radiology, Boston Children's Hospital and Harvard Medical School, Boston, MA, USA
- Željka Krsnik: Croatian Institute for Brain Research, School of Medicine, University of Zagreb, Zagreb, Croatia
- Davood Karimi: Computational Radiology Laboratory, Department of Radiology, Boston Children's Hospital and Harvard Medical School, Boston, MA, USA
- Meritxell Bach Cuadra: CIBM Center for Biomedical Imaging, Switzerland; Department of Radiology, Lausanne University Hospital (CHUV) and University of Lausanne (UNIL), Lausanne, Switzerland
7. Li Z, Li Z, Bilgic B, Lee H, Ying K, Huang SY, Liao H, Tian Q. DIMOND: DIffusion Model OptimizatioN with Deep Learning. Advanced Science (Weinheim, Baden-Wurttemberg, Germany) 2024; 11:e2307965. [PMID: 38634608] [PMCID: PMC11200022] [DOI: 10.1002/advs.202307965]
Abstract
Diffusion magnetic resonance imaging is an important tool for mapping tissue microstructure and structural connectivity non-invasively in the in vivo human brain. Numerous diffusion signal models are proposed to quantify microstructural properties. Nonetheless, accurate estimation of model parameters is computationally expensive and impeded by image noise. Supervised deep learning-based estimation approaches exhibit efficiency and superior performance but require additional training data and may not be generalizable. A new DIffusion Model OptimizatioN framework using physics-informed and self-supervised Deep learning entitled "DIMOND" is proposed to address this problem. DIMOND employs a neural network to map input image data to model parameters and optimizes the network by minimizing the difference between the input acquired data and synthetic data generated via the diffusion model parametrized by network outputs. DIMOND produces accurate diffusion tensor imaging results and is generalizable across subjects and datasets. Moreover, DIMOND outperforms conventional methods for fitting sophisticated microstructural models, including the kurtosis and NODDI models. Importantly, DIMOND reduces NODDI model fitting time from hours to minutes, or to seconds by leveraging transfer learning. In summary, the self-supervised manner, high efficacy, and efficiency of DIMOND increase the practical feasibility and adoption of microstructure and connectivity mapping in clinical and neuroscientific applications.
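The self-supervised, physics-informed optimization described above can be illustrated with a minimal sketch: a small network maps each voxel's acquired DWIs to diffusion-tensor parameters, and training minimizes the mismatch between the acquired signals and signals synthesized by the tensor model S(b, g) = S0 exp(-b g^T D g). Everything below (network size, parameterization, units, training loop) is an assumption for illustration and is not the published DIMOND code.

```python
# Hypothetical self-supervised tensor-fitting sketch: no labels are used; the
# loss compares acquired signals with signals re-synthesized from the network's
# predicted tensor parameters via the diffusion-tensor signal equation.
import torch
import torch.nn as nn

n_dirs = 30
bvals = torch.full((n_dirs,), 1000.0)                        # s/mm^2
bvecs = nn.functional.normalize(torch.randn(n_dirs, 3), dim=1)

net = nn.Sequential(nn.Linear(n_dirs, 64), nn.ReLU(), nn.Linear(64, 7))  # 6 tensor elems + log S0

def synthesize(params):
    # params: (V, 7) -> Dxx, Dyy, Dzz, Dxy, Dxz, Dyz (in 1e-3 mm^2/s), log S0
    d, logs0 = params[:, :6] * 1e-3, params[:, 6:]
    g = bvecs
    quad = (d[:, 0:1] * g[:, 0] ** 2 + d[:, 1:2] * g[:, 1] ** 2 + d[:, 2:3] * g[:, 2] ** 2
            + 2 * d[:, 3:4] * g[:, 0] * g[:, 1]
            + 2 * d[:, 4:5] * g[:, 0] * g[:, 2]
            + 2 * d[:, 5:6] * g[:, 1] * g[:, 2])             # (V, n_dirs) = g^T D g
    return torch.exp(logs0 - bvals * quad)                   # (V, n_dirs)

signals = torch.rand(256, n_dirs)                            # stand-in for acquired DWIs
opt = torch.optim.Adam(net.parameters(), lr=1e-3)
for _ in range(100):                                         # self-supervised training loop
    opt.zero_grad()
    loss = nn.functional.mse_loss(synthesize(net(signals)), signals)
    loss.backward()
    opt.step()
```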
Affiliation(s)
- Zihan Li: School of Biomedical Engineering, Tsinghua University, Beijing 100084, P. R. China
- Ziyu Li: Wellcome Centre for Integrative Neuroimaging, FMRIB, Nuffield Department of Clinical Neurosciences, University of Oxford, Oxford OX3 9DU, UK
- Berkin Bilgic, Hong-Hsi Lee: Athinoula A. Martinos Center for Biomedical Imaging, Massachusetts General Hospital, Charlestown, MA 02129, USA; Harvard Medical School, Boston, MA 02129, USA
- Kui Ying: Department of Engineering Physics, Tsinghua University, Beijing 100084, P. R. China
- Susie Y. Huang: Athinoula A. Martinos Center for Biomedical Imaging, Massachusetts General Hospital, Charlestown, MA 02129, USA; Harvard Medical School, Boston, MA 02129, USA
- Hongen Liao, Qiyuan Tian: School of Biomedical Engineering, Tsinghua University, Beijing 100084, P. R. China
8. Ewert C, Kügler D, Stirnberg R, Koch A, Yendiki A, Reuter M. Geometric deep learning for diffusion MRI signal reconstruction with continuous samplings (DISCUS). Imaging Neuroscience (Cambridge, Mass.) 2024; 2:1-18. [PMID: 39575177] [PMCID: PMC11576935] [DOI: 10.1162/imag_a_00121]
Abstract
Diffusion-weighted magnetic resonance imaging (dMRI) permits a detailed in-vivo analysis of neuroanatomical microstructure, invaluable for clinical and population studies. However, many measurements with different diffusion-encoding directions and possibly b-values are necessary to infer the underlying tissue microstructure within different imaging voxels accurately. Two challenges particularly limit the utility of dMRI: long acquisition times limit feasible scans to only a few directional measurements, and the heterogeneity of acquisition schemes across studies makes it difficult to combine datasets. Previous learning-based methods only accept dMRI data adhering to the specific acquisition scheme used for training, leaving an unaddressed need for methods that accept and predict signals for arbitrary diffusion encodings. Addressing these challenges, we describe the first geometric deep learning method for continuous dMRI signal reconstruction with arbitrary diffusion sampling schemes for both the input and output. Our method combines the reconstruction accuracy and robustness of previous learning-based methods with the flexibility of model-based methods, for example, spherical harmonics or SHORE. We demonstrate that our method outperforms model-based methods and performs on par with discrete learning-based methods on single-shell, multi-shell, and grid-based diffusion MRI datasets. Relevant for dMRI-derived analyses, we show that our reconstruction translates to higher-quality estimates of frequently used microstructure models compared to other reconstruction methods, enabling high-quality analyses even from very short dMRI acquisitions.
Affiliation(s)
- Christian Ewert, David Kügler, Alexandra Koch: German Center for Neurodegenerative Diseases (DZNE), Bonn, Germany
- Anastasia Yendiki: A. A. Martinos Center for Biomedical Imaging, Massachusetts General Hospital, Boston, MA, United States; Department of Radiology, Harvard Medical School, Boston, MA, United States
- Martin Reuter: German Center for Neurodegenerative Diseases (DZNE), Bonn, Germany; A. A. Martinos Center for Biomedical Imaging, Massachusetts General Hospital, Boston, MA, United States; Department of Radiology, Harvard Medical School, Boston, MA, United States
9. Udayakumar P, Subhashini R. Connectome-based schizophrenia prediction using structural connectivity - Deep Graph Neural Network (sc-DGNN). Journal of X-Ray Science and Technology 2024; 32:1041-1059. [PMID: 38820060] [DOI: 10.3233/xst-230426]
Abstract
BACKGROUND Understanding the complex organization of the human brain's structural and functional connectivity (the connectome) is essential for gaining insights into cognitive processes and disorders. OBJECTIVE To improve the prediction accuracy for brain disorders, the current study investigates dysconnected subnetworks and graph structures associated with schizophrenia. METHOD The proposed structural connectivity-deep graph neural network (sc-DGNN) model is compared with machine learning (ML) and deep learning (DL) models. This work focuses on diffusion magnetic resonance imaging (dMRI) data from eighty-eight subjects, three classical ML models, and five DL models. RESULT The sc-DGNN model effectively predicts dysconnectivity associated with schizophrenia and exhibits superior performance compared to traditional ML and DL (GNN) methods in terms of accuracy, sensitivity, specificity, precision, F1-score, and area under the receiver operating characteristic curve (AUC). CONCLUSION In the schizophrenia classification task using structural connectivity matrices, linear discriminant analysis (LDA) achieved a 72% accuracy rate among the ML models, while sc-DGNN achieved a 93% accuracy rate among the DL models in distinguishing patients with schizophrenia from healthy controls.
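As a generic illustration of graph-based classification over structural connectivity matrices of the kind described above, the sketch below builds a tiny two-layer graph-convolutional classifier. The node-feature choice, normalization, pooling, and all sizes are assumptions and do not reproduce the authors' sc-DGNN.

```python
# Hypothetical minimal graph classifier for a structural connectome: nodes are
# brain regions, the weighted adjacency is the connectivity matrix, and two
# graph-convolution layers feed a mean-pooled linear classifier (2 classes).
import torch
import torch.nn as nn

class TinyGraphClassifier(nn.Module):
    def __init__(self, n_nodes=90, hidden=32):
        super().__init__()
        self.w1 = nn.Linear(n_nodes, hidden)   # node features = connectivity profile
        self.w2 = nn.Linear(hidden, hidden)
        self.cls = nn.Linear(hidden, 2)

    def forward(self, adj):
        # adj: (n_nodes, n_nodes) weighted structural connectivity matrix
        a = adj + torch.eye(adj.shape[0])                             # add self-loops
        deg_inv_sqrt = a.sum(dim=1).clamp(min=1e-8).pow(-0.5)
        a_norm = deg_inv_sqrt[:, None] * a * deg_inv_sqrt[None, :]    # D^-1/2 A D^-1/2
        h = torch.relu(a_norm @ self.w1(adj))                         # propagate + transform
        h = torch.relu(a_norm @ self.w2(h))
        return self.cls(h.mean(dim=0))                                # graph-level logits

# Toy usage with a symmetric random connectome.
adj = torch.rand(90, 90)
adj = (adj + adj.T) / 2
print(TinyGraphClassifier()(adj))  # tensor of 2 class logits
```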
Affiliation(s)
- P Udayakumar, R Subhashini: School of Computer Science Engineering and Information Systems, Vellore Institute of Technology, Vellore, India
10. Yao T, Rheault F, Cai LY, Nath V, Asad Z, Newlin N, Cui C, Deng R, Ramadass K, Shafer A, Resnick S, Schilling K, Landman BA, Huo Y. Robust fiber orientation distribution function estimation using deep constrained spherical deconvolution for diffusion-weighted magnetic resonance imaging. J Med Imaging (Bellingham) 2024; 11:014005. [PMID: 38188934] [PMCID: PMC10768686] [DOI: 10.1117/1.jmi.11.1.014005]
Abstract
Purpose Diffusion-weighted magnetic resonance imaging (DW-MRI) is a critical imaging method for capturing and modeling tissue microarchitecture at a millimeter scale. A common practice to model the measured DW-MRI signal is via the fiber orientation distribution function (fODF). This function is the essential first step for the downstream tractography and connectivity analyses. With recent advances in data sharing, large-scale multisite DW-MRI datasets are being made available for multisite studies. However, measurement variabilities (e.g., inter- and intrasite variability, hardware performance, and sequence design) are inevitable during the acquisition of DW-MRI. Most existing model-based methods [e.g., constrained spherical deconvolution (CSD)] and learning-based methods (e.g., deep learning) do not explicitly consider such variabilities in fODF modeling, which consequently leads to inferior performance on multisite and/or longitudinal diffusion studies. Approach In this paper, we propose a data-driven deep CSD method to explicitly constrain the scan-rescan variabilities for a more reproducible and robust estimation of brain microstructure from repeated DW-MRI scans. Specifically, the proposed method introduces a three-dimensional volumetric scanner-invariant regularization scheme during the fODF estimation. We study the Human Connectome Project (HCP) young adults test-retest group as well as the MASiVar dataset (with inter- and intrasite scan/rescan data). The Baltimore Longitudinal Study of Aging dataset is employed for external validation. Results From the experimental results, the proposed data-driven framework outperforms the existing benchmarks in repeated fODF estimation. By introducing the contrastive loss with scan/rescan data, the proposed method achieved higher consistency while maintaining higher angular correlation coefficients with the CSD modeling. The proposed method was also assessed on downstream connectivity analysis and shows improved performance in distinguishing subjects with different biomarkers. Conclusion We propose a deep CSD method to explicitly reduce the scan-rescan variabilities, so as to model a more reproducible and robust brain microstructure from repeated DW-MRI scans. The plug-and-play design of the proposed approach is potentially applicable to a wider range of data harmonization problems in neuroimaging.
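The scan-rescan training idea described above can be sketched as a loss that combines fidelity to CSD-derived target fODFs with a consistency term between the fODFs estimated from a scan and its rescan. This is a simplified, hypothetical sketch (a plain MSE consistency term rather than the contrastive loss used by the authors); the names and weights are assumptions.

```python
# Hypothetical scan-rescan consistency objective: the same network estimates
# fODFs from a scan and its rescan, and differences between the two estimates
# are penalized on top of the usual fidelity to target fODFs.
import torch
import torch.nn.functional as F

def scan_rescan_loss(fod_scan, fod_rescan, target_scan, target_rescan, lam=0.5):
    fidelity = F.mse_loss(fod_scan, target_scan) + F.mse_loss(fod_rescan, target_rescan)
    consistency = F.mse_loss(fod_scan, fod_rescan)   # scanner/session-invariance term
    return fidelity + lam * consistency

# Toy usage: 16 voxels, 45 SH coefficients per fODF.
f1 = torch.randn(16, 45, requires_grad=True)
f2 = torch.randn(16, 45, requires_grad=True)
t1, t2 = torch.randn(16, 45), torch.randn(16, 45)
print(scan_rescan_loss(f1, f2, t1, t2))
```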
Affiliation(s)
- Tianyuan Yao: Vanderbilt University, Department of Computer Science, Nashville, Tennessee, United States
- Francois Rheault: Université de Sherbrooke, Department of Computer Science, Sherbrooke, Québec, Canada
- Leon Y. Cai: Vanderbilt University, Department of Biomedical Engineering, Nashville, Tennessee, United States
- Vishwesh Nath: NVIDIA Corporation, Bethesda, Maryland, United States
- Zuhayr Asad, Nancy Newlin, Can Cui, Ruining Deng, Karthik Ramadass: Vanderbilt University, Department of Computer Science, Nashville, Tennessee, United States
- Andrea Shafer, Susan Resnick: National Institute on Aging, Laboratory of Behavioral Neuroscience, Baltimore, Maryland, United States
- Kurt Schilling: Vanderbilt University, Department of Biomedical Engineering, Nashville, Tennessee, United States
- Bennett A. Landman: Vanderbilt University, Department of Computer Science, Nashville, Tennessee, United States; Vanderbilt University, Department of Biomedical Engineering, Nashville, Tennessee, United States; Vanderbilt University, Department of Electrical and Computer Engineering, Nashville, Tennessee, United States
- Yuankai Huo: Vanderbilt University, Department of Computer Science, Nashville, Tennessee, United States; Vanderbilt University, Department of Electrical and Computer Engineering, Nashville, Tennessee, United States
11. Chen Z, Peng C, Li Y, Zeng Q, Feng Y. Super-resolved q-space learning of diffusion MRI. Med Phys 2023; 50:7700-7713. [PMID: 37219814] [DOI: 10.1002/mp.16478]
Abstract
BACKGROUND Diffusion magnetic resonance imaging (dMRI) provides a powerful tool to non-invasively investigate neural structures in the living human brain. Nevertheless, its reconstruction performance on neural structures relies on the number of diffusion gradients in the q-space. High-angular (HA) dMRI requires a long scan time, limiting its use in clinical practice, whereas directly reducing the number of diffusion gradients would lead to the underestimation of neural structures. PURPOSE We propose a deep compressive sensing-based q-space learning (DCS-qL) approach to estimate HA dMRI from low-angular dMRI. METHODS In DCS-qL, we design the deep network architecture by unfolding the proximal gradient descent procedure that addresses the compressive sensing problem. In addition, we exploit a lifting scheme to design a network structure with reversible transform properties. For implementation, we apply self-supervised regression to enhance the signal-to-noise ratio of the diffusion data. Then, we utilize a semantic information-guided patch-based mapping strategy for feature extraction, which introduces multiple network branches to handle patches with different tissue labels. RESULTS Experimental results show that the proposed approach yields promising performance on the tasks of HA dMRI image reconstruction and the estimation of neurite orientation dispersion and density imaging microstructural indices, fiber orientation distributions, and fiber bundles. CONCLUSIONS The proposed method achieves more accurate neural structures than competing approaches.
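To illustrate what unfolding a proximal gradient procedure means in this context, the sketch below unrolls a few ISTA-style iterations with learned step sizes and thresholds. The sampling operator, initialization, and soft-thresholding proximal step are generic assumptions, not the authors' DCS-qL architecture (which additionally uses a lifting scheme and semantic patch branches).

```python
# Hypothetical unrolled proximal-gradient (ISTA-style) network: A maps a dense
# angular representation x to low-angular measurements y; each iteration takes
# a gradient step on ||Ax - y||^2 followed by learned soft thresholding.
import torch
import torch.nn as nn

class UnrolledISTA(nn.Module):
    def __init__(self, A, n_iters=8):
        super().__init__()
        self.A = A                                                 # (m, n) sampling matrix
        self.step = nn.Parameter(torch.full((n_iters,), 0.1))      # learned step sizes
        self.thresh = nn.Parameter(torch.full((n_iters,), 0.01))   # learned thresholds

    def forward(self, y):
        x = y @ self.A                                             # initial estimate A^T y
        for eta, theta in zip(self.step, self.thresh):
            grad = (x @ self.A.t() - y) @ self.A                   # A^T (A x - y)
            z = x - eta * grad                                     # gradient step
            x = torch.sign(z) * torch.clamp(z.abs() - theta, min=0.0)  # soft threshold
        return x

# Toy usage: 20 measured directions upsampled to a 90-direction representation.
m, n = 20, 90
A = torch.randn(m, n) / n ** 0.5
print(UnrolledISTA(A)(torch.randn(4, m)).shape)  # torch.Size([4, 90])
```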
Affiliation(s)
- Zan Chen, Chenxu Peng, Yongqiang Li, Qingrun Zeng, Yuanjing Feng: College of Information Engineering, Zhejiang University of Technology, Hangzhou, China
12. Kebiri H, Gholipour A, Vasung L, Krsnik Ž, Karimi D, Cuadra MB. Deep learning microstructure estimation of developing brains from diffusion MRI: a newborn and fetal study. bioRxiv: The Preprint Server for Biology 2023:2023.07.01.547351. [PMID: 37425859] [PMCID: PMC10327173] [DOI: 10.1101/2023.07.01.547351]
Abstract
Diffusion-weighted magnetic resonance imaging (dMRI) is widely used to assess the brain white matter. Fiber orientation distribution functions (FODs) are a common way of representing the orientation and density of white matter fibers. However, with standard FOD computation methods, accurate estimation of FODs requires a large number of measurements that usually cannot be acquired for newborns and fetuses. We propose to overcome this limitation by using a deep learning method to map as few as six diffusion-weighted measurements to the target FOD. To train the model, we use the FODs computed using multi-shell high angular resolution measurements as target. Extensive quantitative evaluations show that the new deep learning method, using significantly fewer measurements, achieves comparable or superior results to standard methods such as Constrained Spherical Deconvolution. We demonstrate the generalizability of the new deep learning method across scanners, acquisition protocols, and anatomy on two clinical datasets of newborns and fetuses. Additionally, we compute agreement metrics within the HARDI newborn dataset, and validate fetal FODs with post-mortem histological data. The results of this study show the advantage of deep learning in inferring the microstructure of the developing brain from in-vivo dMRI measurements that are often very limited due to subject motion and limited acquisition times, but also highlight the intrinsic limitations of dMRI in the analysis of the developing brain microstructure. These findings, therefore, advocate for the need for improved methods that are tailored to studying the early development of the human brain.
Affiliation(s)
- Hamza Kebiri: CIBM Center for Biomedical Imaging, Switzerland; Department of Radiology, Lausanne University Hospital (CHUV) and University of Lausanne (UNIL), Lausanne, Switzerland; Computational Radiology Laboratory, Department of Radiology, Boston Children's Hospital and Harvard Medical School, Boston, Massachusetts, USA
- Ali Gholipour: Computational Radiology Laboratory, Department of Radiology, Boston Children's Hospital and Harvard Medical School, Boston, Massachusetts, USA
- Lana Vasung: Department of Pediatrics, Boston Children's Hospital, and Harvard Medical School, Boston, Massachusetts, USA
- Željka Krsnik: Croatian Institute for Brain Research, School of Medicine, University of Zagreb, Zagreb, Croatia
- Davood Karimi: Computational Radiology Laboratory, Department of Radiology, Boston Children's Hospital and Harvard Medical School, Boston, Massachusetts, USA
- Meritxell Bach Cuadra: CIBM Center for Biomedical Imaging, Switzerland; Department of Radiology, Lausanne University Hospital (CHUV) and University of Lausanne (UNIL), Lausanne, Switzerland
13. Ghazi N, Aarabi MH, Soltanian-Zadeh H. Deep Learning Methods for Identification of White Matter Fiber Tracts: Review of State-of-the-Art and Future Prospective. Neuroinformatics 2023; 21:517-548. [PMID: 37328715] [DOI: 10.1007/s12021-023-09636-4]
Abstract
Quantitative analysis of white matter fiber tracts from diffusion Magnetic Resonance Imaging (dMRI) data is of great significance in health and disease. For example, analysis of fiber tracts related to anatomically meaningful fiber bundles is highly demanded in pre-surgical and treatment planning, and the surgery outcome depends on accurate segmentation of the desired tracts. Currently, this process is mainly done through time-consuming manual identification performed by neuro-anatomical experts. However, there is a broad interest in automating the pipeline such that it is fast, accurate, and easy to apply in clinical settings and also eliminates intra-reader variability. Following the advancements in medical image analysis using deep learning techniques, there has been a growing interest in using these techniques for the task of tract identification as well. Recent reports on this application show that deep learning-based tract identification approaches outperform existing state-of-the-art methods. This paper presents a review of current tract identification approaches based on deep neural networks. First, we review the recent deep learning methods for tract identification. Next, we compare them with respect to their performance, training process, and network properties. Finally, we end with a critical discussion of open challenges and possible directions for future work.
Affiliation(s)
- Nayereh Ghazi: Control and Intelligent Processing Center of Excellence (CIPCE), School of Electrical and Computer Engineering, College of Engineering, University of Tehran, Tehran, 14399, Iran
- Mohammad Hadi Aarabi: Department of Neuroscience, University of Padova, Padova, Italy; Padova Neuroscience Center (PNC), University of Padova, Padova, Italy
- Hamid Soltanian-Zadeh: Control and Intelligent Processing Center of Excellence (CIPCE), School of Electrical and Computer Engineering, College of Engineering, University of Tehran, Tehran, 14399, Iran; Medical Image Analysis Laboratory, Departments of Radiology and Research Administration, Henry Ford Health System, Detroit, MI, 48202, USA
14. Faiyaz A, Doyley MM, Schifitto G, Uddin MN. Artificial intelligence for diffusion MRI-based tissue microstructure estimation in the human brain: an overview. Front Neurol 2023; 14:1168833. [PMID: 37153663] [PMCID: PMC10160660] [DOI: 10.3389/fneur.2023.1168833]
Abstract
Artificial intelligence (AI) has made significant advances in the field of diffusion magnetic resonance imaging (dMRI) and other neuroimaging modalities. These techniques have been applied to various areas such as image reconstruction, denoising, detecting and removing artifacts, segmentation, tissue microstructure modeling, brain connectivity analysis, and diagnosis support. State-of-the-art AI algorithms have the potential to leverage optimization techniques in dMRI to advance sensitivity and inference through biophysical models. While the use of AI in brain microstructures has the potential to revolutionize the way we study the brain and understand brain disorders, we need to be aware of the pitfalls and emerging best practices that can further advance this field. Additionally, since dMRI scans rely on sampling of the q-space geometry, there is room for creative data engineering that maximizes prior inference. Utilization of the inherent geometry has been shown to improve general inference quality and might be more reliable in identifying pathological differences. We acknowledge and classify AI-based approaches for dMRI using these unifying characteristics. This article also highlights and reviews general practices and pitfalls involving tissue microstructure estimation through data-driven techniques and provides directions for building on them.
Affiliation(s)
- Abrar Faiyaz: Department of Electrical and Computer Engineering, University of Rochester, Rochester, NY, United States
- Marvin M. Doyley: Department of Electrical and Computer Engineering, University of Rochester, Rochester, NY, United States; Department of Imaging Sciences, University of Rochester, Rochester, NY, United States; Department of Biomedical Engineering, University of Rochester, Rochester, NY, United States
- Giovanni Schifitto: Department of Electrical and Computer Engineering, University of Rochester, Rochester, NY, United States; Department of Imaging Sciences, University of Rochester, Rochester, NY, United States; Department of Neurology, University of Rochester, Rochester, NY, United States
- Md Nasir Uddin: Department of Biomedical Engineering, University of Rochester, Rochester, NY, United States; Department of Neurology, University of Rochester, Rochester, NY, United States
15. Liang Z, Arefin TM, Lee CH, Zhang J. Using mesoscopic tract-tracing data to guide the estimation of fiber orientation distributions in the mouse brain from diffusion MRI. Neuroimage 2023; 270:119999. [PMID: 36871795] [PMCID: PMC10052941] [DOI: 10.1016/j.neuroimage.2023.119999]
Abstract
Diffusion MRI (dMRI) tractography is the only tool for non-invasive mapping of macroscopic structural connectivity over the entire brain. Although it has been successfully used to reconstruct large white matter tracts in the human and animal brains, the sensitivity and specificity of dMRI tractography remain limited. In particular, the fiber orientation distributions (FODs) estimated from dMRI signals, key to tractography, may deviate from histologically measured fiber orientations in crossing fibers and gray matter regions. In this study, we demonstrated that a deep learning network, trained using mesoscopic tract-tracing data from the Allen Mouse Brain Connectivity Atlas, was able to improve the estimation of FODs from mouse brain dMRI data. Tractography results based on the network-generated FODs showed improved specificity while maintaining sensitivity comparable to results based on FODs estimated using a conventional spherical deconvolution method. Our result is a proof-of-concept of how mesoscale tract-tracing data can guide dMRI tractography and enhance our ability to characterize brain connectivity.
Affiliation(s)
- Zifei Liang, Tanzil Mahmud Arefin, Choong H Lee, Jiangyang Zhang: Department of Radiology, Bernard and Irene Schwartz Center for Biomedical Imaging, New York University School of Medicine, 660 First Ave, New York, NY 10016, USA
16. Sabidussi ER, Klein S, Jeurissen B, Poot DHJ. dtiRIM: A generalisable deep learning method for diffusion tensor imaging. Neuroimage 2023; 269:119900. [PMID: 36702213] [DOI: 10.1016/j.neuroimage.2023.119900]
Abstract
Diffusion weighted MRI is an indispensable tool for routine patient screening and diagnostics of pathology. Recently, several deep learning methods have been proposed to quantify diffusion parameters, but poor generalisation to new data prevents broader use of these methods, as they require retraining of the neural network for each new scan protocol. In this work, we present the dtiRIM, a new deep learning method for Diffusion Tensor Imaging (DTI) based on the Recurrent Inference Machines. Thanks to its ability to learn how to solve inverse problems and to use the diffusion tensor model to promote data consistency, the dtiRIM can generalise to variations in the acquisition settings. This enables a single trained network to produce high quality tensor estimates for a variety of cases. We performed extensive validation of our method using simulation and in vivo data, and compared it to the Iterated Weighted Linear Least Squares (IWLLS), the approach of the state-of-the-art MRTrix3 software, and to an implementation of the Maximum Likelihood Estimator (MLE). Our results show that dtiRIM predictions present low dependency on tissue properties, anatomy and scanning parameters, with results comparable to or better than both IWLLS and MLE. Further, we demonstrate that a single dtiRIM model can be used for a diversity of data sets without significant loss in quality, representing, to our knowledge, the first generalisable deep learning based solver for DTI.
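For readers unfamiliar with recurrent inference machines, the sketch below shows the generic RIM update loop: at each step a recurrent cell receives the current parameter estimate and the gradient of a data-consistency term under the forward model, and it proposes an incremental update. It is a hypothetical toy (linear forward model, GRU cell, fixed step count), not the published dtiRIM.

```python
# Hypothetical recurrent-inference-machine loop: the GRU cell sees the current
# estimate and the data-consistency gradient under the forward model, and the
# output layer proposes an additive update at each of a fixed number of steps.
import torch
import torch.nn as nn

class TinyRIM(nn.Module):
    def __init__(self, n_params, hidden=64):
        super().__init__()
        self.gru = nn.GRUCell(2 * n_params, hidden)
        self.out = nn.Linear(hidden, n_params)

    def forward(self, y, forward_model, n_steps=6):
        # y: (batch, m) measured signals; forward_model: params -> predicted signals
        params = torch.zeros(y.shape[0], self.out.out_features, requires_grad=True)
        h = torch.zeros(y.shape[0], self.gru.hidden_size)
        for _ in range(n_steps):
            pred = forward_model(params)
            grad = torch.autograd.grad(((pred - y) ** 2).sum(), params, create_graph=True)[0]
            h = self.gru(torch.cat([params, grad], dim=1), h)
            params = params + self.out(h)                          # incremental update
        return params

# Toy usage with a linear forward model standing in for the DTI signal equation.
A = torch.randn(30, 7)
rim = TinyRIM(n_params=7)
print(rim(torch.randn(5, 30), lambda p: p @ A.t()).shape)          # torch.Size([5, 7])
```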
Affiliation(s)
- E R Sabidussi, S Klein: Erasmus MC University Medical Center, Department of Radiology and Nuclear Medicine, Rotterdam, the Netherlands
- B Jeurissen: imec-Vision Lab, Department of Physics, University of Antwerp, Antwerp, Belgium; Lab for Equilibrium Investigations and Aerospace, Department of Physics, University of Antwerp, Antwerp, Belgium
- D H J Poot: Erasmus MC University Medical Center, Department of Radiology and Nuclear Medicine, Rotterdam, the Netherlands
17. Jha RR, Kumar BVR, Pathak SK, Schneider W, Bhavsar A, Nigam A. Undersampled single-shell to MSMT fODF reconstruction using CNN-based ODE solver. Computer Methods and Programs in Biomedicine 2023; 230:107339. [PMID: 36682110] [DOI: 10.1016/j.cmpb.2023.107339]
Abstract
BACKGROUND AND OBJECTIVE Diffusion MRI (dMRI) has been considered one of the most popular non-invasive techniques for studying the white matter (WM) of the human brain. dMRI is used to delineate the brain's microstructure by approximating the fiber tracts of the WM region. The obtained fiber tracts can be utilized to assess disorders such as multiple sclerosis, ADHD, seizures, intellectual disability, and others. Newer techniques such as high angular resolution diffusion imaging (HARDI) have been developed that provide precise fiber directions and overcome the limitations of traditional DTI. Unlike single-shell acquisitions, multi-shell HARDI provides tissue fractions for white matter, gray matter, and cerebrospinal fluid, resulting in a multi-shell multi-tissue fiber orientation distribution function (MSMT fODF). This MSMT fODF yields more precise fiber directions than a single-shell fODF, which helps to obtain correct fiber tracts. In addition, various multi-compartment diffusion models, including CHARMED and NODDI, have been developed to describe brain tissue microstructural information. These models require multi-shell data to obtain more specific tissue microstructural information. However, a major concern with multi-shell acquisition is that it requires a longer scanning time, restricting its use in clinical applications. In addition, most existing dMRI scanners with low gradient strengths commonly acquire a single b-value (shell) up to b = 1000 s/mm2 for signal-to-noise ratio (SNR) reasons and to limit severe imaging artifacts. METHODS To address this issue, we propose a CNN-based ordinary differential equation solver for the reconstruction of MSMT fODFs from under-sampled and fully sampled single-shell (b = 1000 s/mm2) dMRI. The proposed architecture consists of CNN-based Adams-Bashforth and Runge-Kutta modules along with two loss functions, including L1 and total variation. RESULTS We show quantitative results and visualizations of fODFs, fiber tracts, and structural connectivity for several brain regions on the publicly available HCP dataset. In addition, the obtained angular correlation coefficients for white matter and the full brain are high, showing the proposed network's utility. Finally, we also demonstrate the effect of noise by adjusting the SNR from 5 to 50 and observe that the network is robust. CONCLUSION We conclude that our model can accurately predict MSMT fODFs from under-sampled or fully sampled single-shell dMRI volumes.
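The "CNN-based ODE solver" idea can be illustrated generically by treating a feature map as the state of an ODE whose derivative is given by a small CNN and integrating it with a classical Runge-Kutta scheme, as sketched below. The network, step size, and number of steps are assumptions; this is not the authors' architecture.

```python
# Hypothetical neural-ODE-style block: a small CNN defines dx/dt for the feature
# map x(t), and a classical RK4 step integrates it through the block.
import torch
import torch.nn as nn

class ConvODEFunc(nn.Module):
    def __init__(self, channels=16):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(channels, channels, 3, padding=1), nn.ReLU(),
            nn.Conv2d(channels, channels, 3, padding=1),
        )

    def forward(self, x):
        return self.net(x)            # dx/dt as a function of the current features

def rk4_step(f, x, h=0.5):
    k1 = f(x)
    k2 = f(x + 0.5 * h * k1)
    k3 = f(x + 0.5 * h * k2)
    k4 = f(x + h * k3)
    return x + (h / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)

func = ConvODEFunc()
x = torch.randn(1, 16, 32, 32)        # (batch, channels, H, W) feature map
for _ in range(4):                    # four integration steps through the block
    x = rk4_step(func, x)
print(x.shape)                        # torch.Size([1, 16, 32, 32])
```

An Adams-Bashforth module could be sketched analogously by reusing derivatives from previous steps instead of recomputing intermediate stages.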
Affiliation(s)
- Ranjeet Ranjan Jha: MANAS Lab, School of Computing and Electrical Engineering (SCEE), Indian Institute of Technology (IIT) Mandi, India
- B V Rathish Kumar: Department of Mathematics and Statistics, Indian Institute of Technology Kanpur, India
- Sudhir K Pathak, Walter Schneider: Learning Research and Development Center, University of Pittsburgh, USA
- Arnav Bhavsar, Aditya Nigam: MANAS Lab, School of Computing and Electrical Engineering (SCEE), Indian Institute of Technology (IIT) Mandi, India
18. Jha RR, Kumar BR, Pathak SK, Bhavsar A, Nigam A. TrGANet: Transforming 3T to 7T dMRI using Trapezoidal Rule and Graph based Attention Modules. Med Image Anal 2023; 87:102806. [PMID: 37030056] [DOI: 10.1016/j.media.2023.102806]
Abstract
Diffusion MRI (dMRI) is a non-invasive tool for assessing the white matter region of the brain by approximating fiber streamlines and structural connectivity and estimating microstructure. This modality can yield useful information for diagnosing several mental diseases as well as for surgical planning. The high angular resolution diffusion imaging (HARDI) technique is helpful in obtaining more robust fiber tracts by providing a good approximation of regions where fibers cross. Moreover, HARDI is more sensitive to tissue changes and can accurately represent anatomical details in the human brain at higher magnetic field strengths. In other words, magnetic field strength affects image quality, and a higher field strength provides better tissue contrast and spatial resolution. However, a higher-field scanner (such as 7T) is costly and unaffordable for most hospitals. Hence, in this work, we propose a novel CNN architecture for the transformation of 3T to 7T dMRI. Additionally, we also reconstruct the multi-shell multi-tissue fiber orientation distribution function (MSMT fODF) at 7T from single-shell 3T data. The proposed architecture consists of a CNN-based ODE solver utilizing the trapezoidal rule and a graph-based attention layer, along with L1 and total variation losses. Finally, the model has been validated on the HCP dataset quantitatively and qualitatively.
19. Shao M, Xing F, Carass A, Liang X, Zhuo J, Stone M, Woo J, Prince JL. Analysis of Tongue Muscle Strain During Speech From Multimodal Magnetic Resonance Imaging. Journal of Speech, Language, and Hearing Research: JSLHR 2023; 66:513-526. [PMID: 36716389] [PMCID: PMC10023187] [DOI: 10.1044/2022_jslhr-22-00329]
Abstract
PURPOSE Muscle groups within the tongue in healthy and diseased populations show different behaviors during speech. Visualizing and quantifying strain patterns of these muscle groups during tongue motion can provide insights into tongue motor control and adaptive behaviors of a patient. METHOD We present a pipeline to estimate the strain along the muscle fiber directions in the deforming tongue during speech production. A deep convolutional network estimates the crossing muscle fiber directions in the tongue using diffusion-weighted magnetic resonance imaging (MRI) data acquired at rest. A phase-based registration algorithm is used to estimate motion of the tongue muscles from tagged MRI acquired during speech. After transforming both muscle fiber directions and motion fields into a common atlas space, strain tensors are computed and projected onto the muscle fiber directions, forming so-called strains in the line of actions (SLAs) throughout the tongue. SLAs are then averaged over individual muscles that have been manually labeled in the atlas space using high-resolution T2-weighted MRI. Data were acquired, and this pipeline was run on a cohort of eight healthy controls and two glossectomy patients. RESULTS The crossing muscle fibers reconstructed by the deep network show orthogonal patterns. The strain analysis results demonstrate consistency of muscle behaviors among some healthy controls during speech production. The patients show irregular muscle patterns, and their tongue muscles tend to show more extension than the healthy controls. CONCLUSIONS The study showed visual evidence of correlation between two muscle groups during speech production. Patients tend to have different strain patterns compared to the controls. Analysis of variations in muscle strains can potentially help develop treatment strategies in oral diseases. SUPPLEMENTAL MATERIAL https://doi.org/10.23641/asha.21957011.
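The projection step described above (strain in the line of action) can be written compactly: given the deformation gradient F of the motion field and a unit fiber direction f, the Green-Lagrange strain tensor is E = 0.5 (F^T F - I) and the strain along the fiber is f^T E f. The sketch below is a generic worked example of this formula, not the authors' pipeline; the deformation and fiber direction are made up.

```python
# Generic worked example of projecting a strain tensor onto a fiber direction.
import numpy as np

def strain_in_line_of_action(F, fiber_dir):
    E = 0.5 * (F.T @ F - np.eye(3))            # Green-Lagrange strain tensor
    f = fiber_dir / np.linalg.norm(fiber_dir)  # unit fiber direction
    return f @ E @ f                           # scalar strain along the fiber

# Toy usage: 10% stretch along x, fiber aligned with x -> positive strain ~0.105.
F = np.diag([1.1, 0.95, 1.0])
print(strain_in_line_of_action(F, np.array([1.0, 0.0, 0.0])))
```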
Affiliation(s)
- Muhan Shao: Department of Electrical and Computer Engineering, Johns Hopkins University, Baltimore, MD
- Fangxu Xing: Gordon Center for Medical Imaging, Department of Radiology, Massachusetts General Hospital, Harvard Medical School, Boston
- Aaron Carass: Department of Electrical and Computer Engineering, Johns Hopkins University, Baltimore, MD
- Xiao Liang: Department of Diagnostic Radiology and Nuclear Medicine, University of Maryland School of Medicine, Baltimore
- Jiachen Zhuo: Department of Diagnostic Radiology and Nuclear Medicine, University of Maryland School of Medicine, Baltimore
- Maureen Stone: Department of Neural and Pain Sciences and Department of Orthodontics, University of Maryland School of Dentistry, Baltimore
- Jonghye Woo: Gordon Center for Medical Imaging, Department of Radiology, Massachusetts General Hospital, Harvard Medical School, Boston
- Jerry L. Prince: Department of Electrical and Computer Engineering, Johns Hopkins University, Baltimore, MD
|
20
|
Liu S, Liu Y, Xu X, Chen R, Liang D, Jin Q, Liu H, Chen G, Zhu Y. Accelerated cardiac diffusion tensor imaging using deep neural network. Phys Med Biol 2023; 68. [PMID: 36595239 DOI: 10.1088/1361-6560/acaa86] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/23/2022] [Accepted: 12/09/2022] [Indexed: 12/14/2022]
Abstract
Cardiac diffusion tensor imaging (DTI) is a noninvasive method for measuring the microstructure of the myocardium. However, its long scan time significantly hinders its wide application. In this study, we developed a deep learning framework to obtain high-quality DTI parameter maps from six diffusion-weighted images (DWIs) by combining deep-learning-based image generation and tensor fitting, and named the new framework FG-Net. In contrast to frameworks explored in previous deep-learning-based fast DTI studies, FG-Net generates inter-directional DWIs from six input DWIs to supplement the lost information and improve estimation accuracy for DTI parameters. FG-Net was evaluated using two datasets of ex vivo human hearts. The results showed that FG-Net can generate fractional anisotropy, mean diffusivity maps, and helix angle maps from only six raw DWIs, with a quantification error of less than 5%. FG-Net outperformed conventional tensor fitting and black-box network fitting in both qualitative and quantitative metrics. We also demonstrated that the proposed FG-Net can achieve highly accurate fractional anisotropy and helix angle maps in DWIs with different b-values.
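For context, the tensor-fitting stage that such a pipeline builds on can be written as a standard log-linear least-squares fit; the sketch below (not FG-Net itself) fits a tensor from a b = 0 signal and a handful of DWIs and derives FA and MD, with synthetic inputs assumed.

```python
# Minimal numpy sketch of the conventional log-linear diffusion tensor fit.
import numpy as np

def fit_tensor(signals, s0, bvecs, bval):
    """Least-squares tensor fit: log(S/S0) = -b g^T D g for each unit direction g."""
    y = np.log(signals / s0)                      # (N,)
    g = np.asarray(bvecs, dtype=float)            # (N, 3) unit gradient directions
    # Design matrix for the 6 unique tensor elements [Dxx, Dyy, Dzz, Dxy, Dxz, Dyz]
    A = -bval * np.column_stack([
        g[:, 0]**2, g[:, 1]**2, g[:, 2]**2,
        2 * g[:, 0] * g[:, 1], 2 * g[:, 0] * g[:, 2], 2 * g[:, 1] * g[:, 2],
    ])
    d, *_ = np.linalg.lstsq(A, y, rcond=None)
    D = np.array([[d[0], d[3], d[4]],
                  [d[3], d[1], d[5]],
                  [d[4], d[5], d[2]]])
    return D

def fa_md(D):
    ev = np.linalg.eigvalsh(D)
    md = ev.mean()
    fa = np.sqrt(1.5 * np.sum((ev - md) ** 2) / np.sum(ev ** 2))
    return fa, md

# Usage with six measured directions at, e.g., b = 1000 s/mm^2 would be:
# D = fit_tensor(signals, s0, bvecs, bval=1000.0); fa, md = fa_md(D)
```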
Affiliation(s)
- Shaonan Liu: Paul C. Lauterbur Research Centre for Biomedical Imaging, Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen, Guangdong, People's Republic of China; Department of Computer Science, Inner Mongolia University, Hohhot, People's Republic of China
- Yuanyuan Liu: Paul C. Lauterbur Research Centre for Biomedical Imaging, Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen, Guangdong, People's Republic of China
- Xi Xu: Paul C. Lauterbur Research Centre for Biomedical Imaging, Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen, Guangdong, People's Republic of China
- Rui Chen: Department of Radiology, Guangdong Provincial People's Hospital, Guangdong Academy of Medical Sciences, Guangzhou, People's Republic of China
- Dong Liang: Paul C. Lauterbur Research Centre for Biomedical Imaging, Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen, Guangdong, People's Republic of China
- Qiyu Jin: Department of Mathematical Science, Inner Mongolia University, Hohhot, People's Republic of China
- Hui Liu: Department of Radiology, Guangdong Provincial People's Hospital, Guangdong Academy of Medical Sciences, Guangzhou, People's Republic of China
- Guoqing Chen: Department of Mathematical Science, Inner Mongolia University, Hohhot, People's Republic of China
- Yanjie Zhu: Paul C. Lauterbur Research Centre for Biomedical Imaging, Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen, Guangdong, People's Republic of China; National Center for Applied Mathematics Shenzhen, Shenzhen, Guangdong, People's Republic of China
|
21
|
Karimi D, Gholipour A. Diffusion tensor estimation with transformer neural networks. Artif Intell Med 2022; 130:102330. [PMID: 35809969 PMCID: PMC9675900 DOI: 10.1016/j.artmed.2022.102330] [Citation(s) in RCA: 8] [Impact Index Per Article: 2.7] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/05/2021] [Revised: 03/23/2022] [Accepted: 05/29/2022] [Indexed: 11/02/2022]
Abstract
Diffusion tensor imaging (DTI) is a widely used method for studying brain white matter development and degeneration. However, standard DTI estimation methods depend on a large number of high-quality measurements. This would require long scan times and can be particularly difficult to achieve with certain patient populations such as neonates. Here, we propose a method that can accurately estimate the diffusion tensor from only six diffusion-weighted measurements. Our method achieves this by learning to exploit the relationships between the diffusion signals and tensors in neighboring voxels. Our model is based on transformer networks, which represent the state of the art in modeling the relationship between signals in a sequence. In particular, our model consists of two such networks. The first network estimates the diffusion tensor based on the diffusion signals in a neighborhood of voxels. The second network provides more accurate tensor estimations by learning the relationships between the diffusion signals as well as the tensors estimated by the first network in neighboring voxels. Our experiments with three datasets show that our proposed method achieves highly accurate estimations of the diffusion tensor and is significantly superior to three competing methods. Estimations produced by our method with six diffusion-weighted measurements are comparable with those of standard estimation methods with 30-88 diffusion-weighted measurements. Hence, our method promises shorter scan times and more reliable assessment of brain white matter, particularly in non-cooperative patients such as neonates and infants.
Affiliation(s)
- Davood Karimi: Department of Radiology at Boston Children's Hospital, and Harvard Medical School, Boston, MA, USA
- Ali Gholipour: Department of Radiology at Boston Children's Hospital, and Harvard Medical School, Boston, MA, USA
|
22
|
Filipiak P, Shepherd T, Lin YC, Placantonakis DG, Boada FE, Baete SH. Performance of orientation distribution function-fingerprinting with a biophysical multicompartment diffusion model. Magn Reson Med 2022; 88:418-435. [PMID: 35225365 PMCID: PMC9142101 DOI: 10.1002/mrm.29208] [Citation(s) in RCA: 4] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/06/2021] [Revised: 01/31/2022] [Accepted: 02/07/2022] [Indexed: 11/11/2022]
Abstract
PURPOSE Orientation Distribution Function (ODF) peak finding methods typically fail to reconstruct fibers crossing at shallow angles below 40°, leading to errors in tractography. ODF-Fingerprinting (ODF-FP) with a biophysical multicompartment diffusion model allows for breaking this barrier. METHODS A randomized mechanism to generate a multidimensional ODF dictionary that covers biologically plausible ranges of intra- and extra-axonal diffusivities and volume fractions is introduced. This enables ODF-FP to address the high variability of brain tissue. The performance of the proposed approach is evaluated on both numerical simulations and a reconstruction of major fascicles from high- and low-resolution in vivo diffusion images. RESULTS ODF-FP with the suggested modifications correctly identifies fibers crossing at angles as shallow as 10° in the simulated data. In vivo, our approach reaches 56% true positives in determining fiber directions, resulting in visibly more accurate reconstruction of the pyramidal tracts, arcuate fasciculus, and optic radiations than the state-of-the-art techniques. Moreover, the estimated diffusivities and volume fractions in the corpus callosum conform with the values reported in the literature. CONCLUSION The modified ODF-FP outperforms commonly used fiber reconstruction methods at shallow crossing angles, which improves deterministic tractography outcomes for major fascicles. In addition, the proposed approach allows for linearization of the microstructure parameter fitting problem.
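The core fingerprinting step can be illustrated as a nearest-dictionary-entry search; in the sketch below the dictionary is random stand-in data rather than the biophysically simulated ODFs used in the paper, and the parameter layout is an assumption for illustration.

```python
# Minimal sketch of dictionary matching ("fingerprinting"): the measured ODF is
# compared against a precomputed dictionary of simulated ODFs, and the best match
# indexes the underlying model parameters.
import numpy as np

rng = np.random.default_rng(0)
n_entries, n_odf_samples = 5000, 362                 # dictionary size, ODF sphere samples
dictionary = rng.random((n_entries, n_odf_samples))  # stand-in for simulated ODFs (rows)
params = rng.random((n_entries, 4))                  # e.g. diffusivities / volume fractions

def match_fingerprint(measured_odf, dictionary, params):
    """Return the index and parameters of the entry with maximal normalized inner product."""
    d = dictionary / np.linalg.norm(dictionary, axis=1, keepdims=True)
    m = measured_odf / np.linalg.norm(measured_odf)
    best = int(np.argmax(d @ m))
    return best, params[best]

measured = rng.random(n_odf_samples)
idx, est_params = match_fingerprint(measured, dictionary, params)
```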
Affiliation(s)
- Patryk Filipiak: Center for Advanced Imaging Innovation and Research (CAIR), Department of Radiology, NYU Langone Health, New York, NY, USA
- Timothy Shepherd: Center for Advanced Imaging Innovation and Research (CAIR), Department of Radiology, NYU Langone Health, New York, NY, USA
- Ying-Chia Lin: Center for Advanced Imaging Innovation and Research (CAIR), Department of Radiology, NYU Langone Health, New York, NY, USA
- Dimitris G. Placantonakis: Department of Neurosurgery, Perlmutter Cancer Center, Neuroscience Institute, Kimmel Center for Stem Cell Biology, NYU Langone Health, New York, NY, USA
- Fernando E. Boada: Center for Advanced Imaging Innovation and Research (CAIR), Department of Radiology, NYU Langone Health, New York, NY, USA; Radiological Sciences Laboratory and Molecular Imaging Program at Stanford, Department of Radiology, Stanford University, Stanford, CA
- Steven H. Baete: Center for Advanced Imaging Innovation and Research (CAIR), Department of Radiology, NYU Langone Health, New York, NY, USA
|
23
|
Zeng R, Lv J, Wang H, Zhou L, Barnett M, Calamante F, Wang C. FOD-Net: A deep learning method for fiber orientation distribution angular super resolution. Med Image Anal 2022; 79:102431. [PMID: 35397471 DOI: 10.1016/j.media.2022.102431] [Citation(s) in RCA: 8] [Impact Index Per Article: 2.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/22/2021] [Revised: 03/16/2022] [Accepted: 03/21/2022] [Indexed: 10/18/2022]
Abstract
Mapping the human connectome using fiber-tracking permits the study of brain connectivity and yields new insights into neuroscience. However, reliable connectome reconstruction using diffusion magnetic resonance imaging (dMRI) data acquired with widely available clinical protocols remains challenging, limiting the clinical applications of connectomics/tractography. Here we develop the fiber orientation distribution (FOD) network (FOD-Net), a deep-learning-based framework for FOD angular super-resolution. Our method enhances the angular resolution of FOD images computed from common clinical-quality dMRI data to obtain FODs with quality comparable to those produced from advanced research scanners. Super-resolved FOD images enable superior tractography and structural connectome reconstruction from clinical protocols. The method was trained and tested with high-quality data from the Human Connectome Project (HCP) and further validated with a local clinical 3.0T scanner as well as with another publicly available multicenter, multiscanner dataset. Using this method, we improve the angular resolution of FOD images acquired with typical single-shell low-angular-resolution dMRI data (e.g., 32 directions, b = 1000 s/mm2) to approximate the quality of FODs derived from time-consuming, multi-shell high-angular-resolution dMRI research protocols. We also demonstrate tractography improvement, removing spurious connections and bridging missing connections. We further demonstrate that connectomes reconstructed from super-resolved FODs achieve results comparable to those obtained with more advanced dMRI acquisition protocols, on both HCP and clinical 3.0T data. The deep-learning advances used in FOD-Net facilitate the generation of high-quality tractography/connectome analysis from existing clinical MRI environments. Our code is freely available at https://github.com/ruizengalways/FOD-Net.
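A minimal way to pose FOD angular super-resolution is as a patch-wise mapping from low-order to high-order spherical-harmonic coefficients; the network below is only a schematic stand-in for FOD-Net, with illustrative layer sizes (15 and 45 are the even-order SH coefficient counts for lmax = 4 and lmax = 8).

```python
# Minimal sketch of FOD angular super-resolution as a patch-wise SH-coefficient mapping.
import torch
import torch.nn as nn

class FODSuperResNet(nn.Module):
    def __init__(self, in_coeffs=15, out_coeffs=45, width=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv3d(in_coeffs, width, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv3d(width, width, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv3d(width, out_coeffs, kernel_size=3, padding=1),
        )

    def forward(self, low_order_sh):        # (batch, 15, D, H, W)
        return self.net(low_order_sh)       # (batch, 45, D, H, W)

model = FODSuperResNet()
patch = torch.rand(1, 15, 9, 9, 9)          # low-angular-resolution FOD patch
high_order_fod = model(patch)               # predicted high-angular-resolution FOD coefficients
```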
Affiliation(s)
- Rui Zeng: School of Biomedical Engineering, The University of Sydney, Sydney 2050, Australia; Brain and Mind Centre, The University of Sydney, Sydney 2050, Australia
- Jinglei Lv: School of Biomedical Engineering, The University of Sydney, Sydney 2050, Australia; Brain and Mind Centre, The University of Sydney, Sydney 2050, Australia
- He Wang: Institute of Science and Technology for Brain-inspired Intelligence, Fudan University, Shanghai, China; Human Phenome Institute, Fudan University, Shanghai, China
- Luping Zhou: School of Computer Science, The University of Sydney, Sydney 2050, Australia
- Michael Barnett: Brain and Mind Centre, The University of Sydney, Sydney 2050, Australia; Sydney Neuroimaging Analysis Centre, Sydney 2050, Australia
- Fernando Calamante: School of Biomedical Engineering, The University of Sydney, Sydney 2050, Australia; Brain and Mind Centre, The University of Sydney, Sydney 2050, Australia; Sydney Imaging, The University of Sydney, Sydney 2050, Australia
- Chenyu Wang: Brain and Mind Centre, The University of Sydney, Sydney 2050, Australia; Sydney Neuroimaging Analysis Centre, Sydney 2050, Australia
|
24
|
Jha RR, Pathak SK, Nath V, Schneider W, Kumar BVR, Bhavsar A, Nigam A. VRfRNet: Volumetric ROI fODF reconstruction network for estimation of multi-tissue constrained spherical deconvolution with only single shell dMRI. Magn Reson Imaging 2022; 90:1-16. [PMID: 35341904 DOI: 10.1016/j.mri.2022.03.004] [Citation(s) in RCA: 8] [Impact Index Per Article: 2.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/18/2021] [Revised: 02/19/2022] [Accepted: 03/19/2022] [Indexed: 10/18/2022]
Abstract
Diffusion MRI (dMRI) is one of the most popular techniques for studying brain structure, mainly the white matter region. Among several sampling methods in dMRI, the high angular resolution diffusion imaging (HARDI) technique has attracted researchers due to its more accurate fiber orientation estimation. However, current single-shell HARDI makes the intravoxel structure challenging to estimate accurately. While multi-shell acquisition can address this problem, it requires a longer scanning time, restricting its use in clinical applications. In addition, most existing dMRI scanners with low gradient strengths often acquire only a single shell up to b = 1000 s/mm2 because of signal-to-noise ratio issues and severe image artefacts. Hence, we propose a novel generative adversarial network, VRfRNet, for the reconstruction of the multi-shell multi-tissue fiber orientation distribution function from single-shell HARDI volumes. This transformation is learned in the spherical harmonics (SH) space, as the raw input HARDI volume is transformed to SH coefficients to reduce the dependence on specific gradient directions. The proposed VRfRNet consists of several modules, such as a multi-context feature enrichment module, feature-level attention, and softmax-level attention. In addition, three loss functions have been used to optimize network learning, including L1, adversarial, and total variation losses. The network is trained and tested on the publicly available HCP dataset using standard qualitative and quantitative performance metrics.
Affiliation(s)
- Ranjeet Ranjan Jha: MANAS Lab, School of Computing and Electrical Engineering (SCEE), Indian Institute of Technology (IIT) Mandi, India
- Sudhir K Pathak: Learning Research and Development Center, University of Pittsburgh, USA
- Vishwesh Nath: Vanderbilt Institute for Surgery and Engineering, Nashville, Tennessee, USA
- Walter Schneider: Learning Research and Development Center, University of Pittsburgh, USA
- B V Rathish Kumar: Department of Mathematics and Statistics, Indian Institute of Technology Kanpur, India
- Arnav Bhavsar: MANAS Lab, School of Computing and Electrical Engineering (SCEE), Indian Institute of Technology (IIT) Mandi, India
- Aditya Nigam: MANAS Lab, School of Computing and Electrical Engineering (SCEE), Indian Institute of Technology (IIT) Mandi, India
|
25
|
Tian Q, Li Z, Fan Q, Polimeni JR, Bilgic B, Salat DH, Huang SY. SDnDTI: Self-supervised deep learning-based denoising for diffusion tensor MRI. Neuroimage 2022; 253:119033. [PMID: 35240299 DOI: 10.1016/j.neuroimage.2022.119033] [Citation(s) in RCA: 27] [Impact Index Per Article: 9.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/09/2021] [Revised: 02/15/2022] [Accepted: 02/21/2022] [Indexed: 12/12/2022] Open
Abstract
Diffusion tensor magnetic resonance imaging (DTI) is a widely adopted neuroimaging method for the in vivo mapping of brain tissue microstructure and white matter tracts. Nonetheless, the noise in the diffusion-weighted images (DWIs) decreases the accuracy and precision of DTI-derived microstructural parameters and leads to prolonged acquisition time for achieving improved signal-to-noise ratio (SNR). Deep learning-based image denoising using convolutional neural networks (CNNs) has superior performance but often requires additional high-SNR data for supervising the training of CNNs, which reduces the feasibility of supervised learning-based denoising in practice. In this work, we develop a self-supervised deep learning-based method entitled "SDnDTI" for denoising DTI data, which does not require additional high-SNR data for training. Specifically, SDnDTI divides multi-directional DTI data into many subsets of six DWI volumes and transforms the DWIs from each subset to the same set of diffusion-encoding directions through the diffusion tensor model, generating multiple repetitions of DWIs with identical image contrasts but different noise observations. SDnDTI removes noise by first denoising each repetition of DWIs using a deep 3-dimensional CNN with the higher-SNR average of all repetitions as the training target, following the same approach as conventional supervised learning-based denoising methods, and then averaging the CNN-denoised images to achieve even higher SNR. The denoising efficacy of SDnDTI is demonstrated in terms of the similarity of output images and resultant DTI metrics to the ground truth generated using substantially more DWI volumes on two datasets with different spatial resolutions, b-values and numbers of input DWI volumes provided by the Human Connectome Project (HCP) and the Lifespan HCP in Aging. The SDnDTI results preserve image sharpness and textural details and substantially improve upon those from the raw data. The results of SDnDTI are comparable to those from supervised learning-based denoising and outperform those from state-of-the-art conventional denoising algorithms including BM4D, AONLM and MPPCA. By leveraging domain knowledge of diffusion MRI physics, SDnDTI makes it easier to use CNN-based denoising methods in practice and has the potential to benefit a wider range of research and clinical applications that require accelerated DTI acquisition and high-quality DTI data for mapping of tissue microstructure, fiber tracts and structural connectivity in the living human brain.
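The tensor-model resampling idea at the heart of this approach can be sketched as follows: once a tensor and b = 0 signal have been fitted from a subset of six DWIs, DWIs can be synthesized along any common set of diffusion-encoding directions with the monoexponential tensor model; all values below are synthetic.

```python
# Minimal sketch of synthesizing DWIs along target directions from a fitted tensor,
# using S(g) = S0 * exp(-b g^T D g).
import numpy as np

def synthesize_dwis(s0, D, target_bvecs, bval):
    """s0: scalar b=0 signal; D: 3x3 tensor; target_bvecs: (N, 3) unit directions."""
    g = np.asarray(target_bvecs, dtype=float)
    exponents = -bval * np.einsum('ni,ij,nj->n', g, D, g)  # g_n^T D g_n for each direction
    return s0 * np.exp(exponents)

# Example: a prolate tensor resampled along three target directions.
D = np.diag([1.7e-3, 0.4e-3, 0.4e-3])                 # mm^2/s, fiber roughly along x
targets = np.array([[1, 0, 0], [0, 1, 0], [0, 0, 1]], dtype=float)
dwis = synthesize_dwis(s0=1000.0, D=D, target_bvecs=targets, bval=1000.0)
```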
Affiliation(s)
- Qiyuan Tian: Athinoula A. Martinos Center for Biomedical Imaging, Massachusetts General Hospital, 149 13th Street, Charlestown, MA 02129, United States; Department of Radiology, Harvard Medical School, Boston, MA, United States
- Ziyu Li: Department of Biomedical Engineering, Tsinghua University, Beijing, PR China
- Qiuyun Fan: Athinoula A. Martinos Center for Biomedical Imaging, Massachusetts General Hospital, 149 13th Street, Charlestown, MA 02129, United States; Department of Radiology, Harvard Medical School, Boston, MA, United States
- Jonathan R Polimeni: Athinoula A. Martinos Center for Biomedical Imaging, Massachusetts General Hospital, 149 13th Street, Charlestown, MA 02129, United States; Department of Radiology, Harvard Medical School, Boston, MA, United States; Harvard-MIT Division of Health Sciences and Technology, Massachusetts Institute of Technology, Cambridge, MA, United States
- Berkin Bilgic: Athinoula A. Martinos Center for Biomedical Imaging, Massachusetts General Hospital, 149 13th Street, Charlestown, MA 02129, United States; Department of Radiology, Harvard Medical School, Boston, MA, United States; Harvard-MIT Division of Health Sciences and Technology, Massachusetts Institute of Technology, Cambridge, MA, United States
- David H Salat: Athinoula A. Martinos Center for Biomedical Imaging, Massachusetts General Hospital, 149 13th Street, Charlestown, MA 02129, United States; Department of Radiology, Harvard Medical School, Boston, MA, United States
- Susie Y Huang: Athinoula A. Martinos Center for Biomedical Imaging, Massachusetts General Hospital, 149 13th Street, Charlestown, MA 02129, United States; Department of Radiology, Harvard Medical School, Boston, MA, United States; Harvard-MIT Division of Health Sciences and Technology, Massachusetts Institute of Technology, Cambridge, MA, United States
|
26
|
Karimi D, Jaimes C, Machado-Rivas F, Vasung L, Khan S, Warfield SK, Gholipour A. Deep learning-based parameter estimation in fetal diffusion-weighted MRI. Neuroimage 2021; 243:118482. [PMID: 34455242 PMCID: PMC8573718 DOI: 10.1016/j.neuroimage.2021.118482] [Citation(s) in RCA: 23] [Impact Index Per Article: 5.8] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/10/2021] [Revised: 08/03/2021] [Accepted: 08/17/2021] [Indexed: 11/24/2022] Open
Abstract
Diffusion-weighted magnetic resonance imaging (DW-MRI) of the fetal brain is challenged by frequent fetal motion and a signal-to-noise ratio that is much lower than in non-fetal imaging. As a result, accurate and robust parameter estimation in fetal DW-MRI remains an open problem. Recently, deep learning techniques have been successfully used for DW-MRI parameter estimation in non-fetal subjects. However, none of those prior works has addressed the fetal brain because obtaining reliable fetal training data is challenging. To address this problem, in this work we propose a novel methodology that utilizes fetal scans as well as scans from prematurely-born infants. High-quality newborn scans are used to estimate accurate maps of the parameter of interest. These parameter maps are then used to generate DW-MRI data that match the measurement scheme and noise distribution characteristic of fetal data. In order to demonstrate the effectiveness and reliability of the proposed data generation pipeline, we used the generated data to train a convolutional neural network (CNN) to estimate color fractional anisotropy (CFA). We evaluated the trained CNN on independent sets of fetal data in terms of reconstruction accuracy, precision, and expert assessment of reconstruction quality. Results showed significantly lower reconstruction error (n = 100, p < 0.001) and higher reconstruction precision (n = 20, p < 0.001) for the proposed machine learning pipeline compared with standard estimation methods. Expert assessments on 20 fetal test scans showed significantly better overall reconstruction quality (p < 0.001) and more accurate reconstruction of 11 regions of interest (p < 0.001) with the proposed method.
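The color fractional anisotropy target used here combines FA with the absolute components of the principal eigenvector; a minimal sketch with a synthetic tensor is shown below.

```python
# Minimal sketch of color fractional anisotropy (CFA): FA modulated by the absolute
# components of the principal eigenvector, giving an RGB triplet per voxel.
import numpy as np

def color_fa(D):
    """D: 3x3 diffusion tensor -> (3,) RGB color-FA vector in [0, 1]."""
    eigvals, eigvecs = np.linalg.eigh(D)
    v1 = eigvecs[:, -1]                          # principal eigenvector (largest eigenvalue)
    md = eigvals.mean()
    fa = np.sqrt(1.5 * np.sum((eigvals - md) ** 2) / np.sum(eigvals ** 2))
    return np.clip(fa * np.abs(v1), 0.0, 1.0)

D = np.diag([1.7e-3, 0.3e-3, 0.3e-3])            # synthetic tensor, fiber roughly along x
print(color_fa(D))                               # dominated by the first (red) channel
```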
Affiliation(s)
- Davood Karimi: Computational Radiology Laboratory (CRL), Department of Radiology, Boston Children's Hospital, and Harvard Medical School, USA
- Camilo Jaimes: Computational Radiology Laboratory (CRL), Department of Radiology, Boston Children's Hospital, and Harvard Medical School, USA
- Fedel Machado-Rivas: Computational Radiology Laboratory (CRL), Department of Radiology, Boston Children's Hospital, and Harvard Medical School, USA
- Lana Vasung: Department of Pediatrics at Boston Children's Hospital, and Harvard Medical School, Boston, Massachusetts, USA
- Shadab Khan: Computational Radiology Laboratory (CRL), Department of Radiology, Boston Children's Hospital, and Harvard Medical School, USA
- Simon K Warfield: Computational Radiology Laboratory (CRL), Department of Radiology, Boston Children's Hospital, and Harvard Medical School, USA
- Ali Gholipour: Computational Radiology Laboratory (CRL), Department of Radiology, Boston Children's Hospital, and Harvard Medical School, USA
|
27
|
Karimi D, Vasung L, Jaimes C, Machado-Rivas F, Warfield SK, Gholipour A. Learning to estimate the fiber orientation distribution function from diffusion-weighted MRI. Neuroimage 2021; 239:118316. [PMID: 34182101 PMCID: PMC8385546 DOI: 10.1016/j.neuroimage.2021.118316] [Citation(s) in RCA: 6] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/16/2020] [Revised: 05/20/2021] [Accepted: 06/25/2021] [Indexed: 02/06/2023] Open
Abstract
Estimation of white matter fiber orientation distribution function (fODF) is the essential first step for reliable brain tractography and connectivity analysis. Most of the existing fODF estimation methods rely on sub-optimal physical models of the diffusion signal or mathematical simplifications, which can impact the estimation accuracy. In this paper, we propose a data-driven method that avoids some of these pitfalls. Our proposed method is based on a multilayer perceptron that learns to map the diffusion-weighted measurements, interpolated onto a fixed spherical grid in the q space, to the target fODF. Importantly, we also propose methods for synthesizing reliable simulated training data. We show that the model can be effectively trained with simulated or real training data. Our phantom experiments show that the proposed method results in more accurate fODF estimation and tractography than several competing methods including the multi-tensor model, Bayesian estimation, spherical deconvolution, and two other machine learning techniques. On real data, we compare our method with other techniques in terms of accuracy of estimating the ground-truth fODF. The results show that our method is more accurate than other methods, and that it performs better than the competing methods when applied to under-sampled diffusion measurements. We also compare our method with the Sparse Fascicle Model in terms of expert ratings of the accuracy of reconstruction of several commissural, projection, association, and cerebellar tracts. The results show that the tracts reconstructed with the proposed method are rated significantly higher by three independent experts. Our study demonstrates the potential of data-driven methods for improving the accuracy and robustness of fODF estimation.
Affiliation(s)
- Davood Karimi: Computational Radiology Laboratory (CRL), Department of Radiology, Boston Children's Hospital, and Harvard Medical School, USA
- Lana Vasung: Department of Pediatrics, Boston Children's Hospital, and Harvard Medical School, Boston, MA, USA
- Camilo Jaimes: Computational Radiology Laboratory (CRL), Department of Radiology, Boston Children's Hospital, and Harvard Medical School, USA
- Fedel Machado-Rivas: Computational Radiology Laboratory (CRL), Department of Radiology, Boston Children's Hospital, and Harvard Medical School, USA
- Simon K Warfield: Computational Radiology Laboratory (CRL), Department of Radiology, Boston Children's Hospital, and Harvard Medical School, USA
- Ali Gholipour: Computational Radiology Laboratory (CRL), Department of Radiology, Boston Children's Hospital, and Harvard Medical School, USA
|
28
|
Ren M, Kim H, Dey N, Gerig G. Q-space Conditioned Translation Networks for Directional Synthesis of Diffusion Weighted Images from Multi-modal Structural MRI. MEDICAL IMAGE COMPUTING AND COMPUTER-ASSISTED INTERVENTION : MICCAI ... INTERNATIONAL CONFERENCE ON MEDICAL IMAGE COMPUTING AND COMPUTER-ASSISTED INTERVENTION 2021; 12907:530-540. [PMID: 36383495 PMCID: PMC9662206 DOI: 10.1007/978-3-030-87234-2_50] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 05/25/2023]
Abstract
Current deep learning approaches for diffusion MRI modeling circumvent the need for densely-sampled diffusion-weighted images (DWIs) by directly predicting microstructural indices from sparsely-sampled DWIs. However, they implicitly make unrealistic assumptions of static q-space sampling during training and reconstruction. Further, such approaches can restrict downstream use of variably sampled DWIs for tasks such as the estimation of microstructural indices or tractography. We propose a generative adversarial translation framework for high-quality DWI synthesis with arbitrary q-space sampling given commonly acquired structural images (e.g., B0, T1, T2). Our translation network linearly modulates its internal representations conditioned on continuous q-space information, thus removing the need for fixed sampling schemes. Moreover, this approach enables downstream estimation of high-quality microstructural maps from arbitrarily subsampled DWIs, which may be particularly important in cases with sparsely sampled DWIs. Across several recent methodologies, the proposed approach yields improved DWI synthesis accuracy and fidelity with enhanced downstream utility, as quantified by the accuracy of scalar microstructure indices estimated from the synthesized images. Code is available at https://github.com/mengweiren/q-space-conditioned-dwi-synthesis.
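The continuous q-space conditioning described above can be illustrated with a feature-wise linear modulation layer whose per-channel scale and shift are predicted from the gradient direction and b-value; shapes, layer sizes, and the raw (unnormalized) b-value input below are illustrative assumptions.

```python
# Minimal sketch of conditioning convolutional feature maps on q-space information
# (gradient direction + b-value) via feature-wise linear modulation.
import torch
import torch.nn as nn

class QSpaceFiLM(nn.Module):
    """Predicts per-channel scale/shift from a q-space vector and applies them."""
    def __init__(self, channels=32, q_dim=4):        # q = (gx, gy, gz, b)
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(q_dim, 64), nn.ReLU(inplace=True),
            nn.Linear(64, 2 * channels),
        )

    def forward(self, features, q):
        # features: (batch, C, D, H, W); q: (batch, 4)
        gamma, beta = self.mlp(q).chunk(2, dim=1)
        gamma = gamma[:, :, None, None, None]
        beta = beta[:, :, None, None, None]
        return gamma * features + beta

film = QSpaceFiLM(channels=32)
feats = torch.rand(2, 32, 8, 8, 8)
q = torch.tensor([[0.0, 0.0, 1.0, 1000.0], [1.0, 0.0, 0.0, 2000.0]])
modulated = film(feats, q)
```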
Affiliation(s)
- Mengwei Ren: Department of Computer Science and Engineering, New York University, New York, NY, USA
- Heejong Kim: Department of Computer Science and Engineering, New York University, New York, NY, USA
- Neel Dey: Department of Computer Science and Engineering, New York University, New York, NY, USA
- Guido Gerig: Department of Computer Science and Engineering, New York University, New York, NY, USA
|
29
|
Karimi D, Vasung L, Jaimes C, Machado-Rivas F, Khan S, Warfield SK, Gholipour A. A machine learning-based method for estimating the number and orientations of major fascicles in diffusion-weighted magnetic resonance imaging. Med Image Anal 2021; 72:102129. [PMID: 34182203 PMCID: PMC8320341 DOI: 10.1016/j.media.2021.102129] [Citation(s) in RCA: 7] [Impact Index Per Article: 1.8] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/01/2020] [Revised: 05/26/2021] [Accepted: 05/28/2021] [Indexed: 12/29/2022]
Abstract
Accurate modeling of diffusion-weighted magnetic resonance imaging measurements is necessary for accurate brain connectivity analysis. Existing methods for estimating the number and orientations of fascicles in an imaging voxel either depend on non-convex optimization techniques that are sensitive to initialization and measurement noise, or are prone to predicting spurious fascicles. In this paper, we propose a machine learning-based technique that can accurately estimate the number and orientations of fascicles in a voxel. Our method can be trained with either simulated or real diffusion-weighted imaging data. Our method estimates the angle to the closest fascicle for each direction in a set of discrete directions uniformly spread on the unit sphere. This information is then processed to extract the number and orientations of fascicles in a voxel. On realistic simulated phantom data with known ground truth, our method predicts the number and orientations of crossing fascicles more accurately than several classical and machine learning methods. It also leads to more accurate tractography. On real data, our method is better than or compares favorably with other methods in terms of robustness to measurement down-sampling and also in terms of expert quality assessment of tractography results.
Affiliation(s)
- Davood Karimi: Department of Radiology, Boston Children's Hospital, Harvard Medical School, Boston, MA, USA
- Lana Vasung: Department of Pediatrics at Boston Children's Hospital, and Harvard Medical School, Boston, Massachusetts, USA
- Camilo Jaimes: Department of Radiology, Boston Children's Hospital, Harvard Medical School, Boston, MA, USA
- Fedel Machado-Rivas: Department of Radiology, Boston Children's Hospital, Harvard Medical School, Boston, MA, USA
- Shadab Khan: Department of Radiology, Boston Children's Hospital, Harvard Medical School, Boston, MA, USA
- Simon K Warfield: Department of Radiology, Boston Children's Hospital, Harvard Medical School, Boston, MA, USA
- Ali Gholipour: Department of Radiology, Boston Children's Hospital, Harvard Medical School, Boston, MA, USA
|
30
|
Li H, Liang Z, Zhang C, Liu R, Li J, Zhang W, Liang D, Shen B, Zhang X, Ge Y, Zhang J, Ying L. SuperDTI: Ultrafast DTI and fiber tractography with deep learning. Magn Reson Med 2021; 86:3334-3347. [PMID: 34309073 DOI: 10.1002/mrm.28937] [Citation(s) in RCA: 21] [Impact Index Per Article: 5.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/18/2021] [Revised: 06/04/2021] [Accepted: 07/04/2021] [Indexed: 12/16/2022]
Abstract
PURPOSE To develop a deep learning-based reconstruction framework for ultrafast and robust diffusion tensor imaging and fiber tractography. METHODS SuperDTI was developed to learn the nonlinear relationship between DWIs and the corresponding diffusion tensor parameter maps. It bypasses the tensor fitting procedure, which is highly susceptible to noise and motion in DWIs. The network was trained and tested using datasets from the Human Connectome Project and patients with ischemic stroke. Results from SuperDTI were compared against widely used methods for tensor parameter estimation and fiber tracking. RESULTS Using training and testing data acquired with the same protocol and scanner, SuperDTI was shown to generate fractional anisotropy and mean diffusivity maps, as well as fiber tractography, from as few as six raw DWIs, with a quantification error of less than 5% in all white-matter and gray-matter regions of interest. It was robust to noise and motion in the testing data. Furthermore, the network trained using healthy volunteer data showed no apparent reduction in lesion detectability when directly applied to stroke patient data. CONCLUSIONS Our results demonstrate the feasibility of superfast DTI and fiber tractography using deep learning with as few as six DWIs directly, bypassing tensor fitting. Such a significant reduction in scan time may allow the inclusion of DTI in the clinical routine for many potential applications.
Affiliation(s)
- Hongyu Li: Electrical Engineering, University at Buffalo, State University of New York, Buffalo, New York, USA
- Zifei Liang: Center for Biomedical Imaging, Radiology, New York University School of Medicine, New York, USA
- Chaoyi Zhang: Electrical Engineering, University at Buffalo, State University of New York, Buffalo, New York, USA
- Ruiying Liu: Electrical Engineering, University at Buffalo, State University of New York, Buffalo, New York, USA
- Jing Li: Radiology, Peking Union Medical College Hospital, Peking Union Medical College and Chinese Academy of Medical Sciences, Beijing, China
- Weihong Zhang: Radiology, Peking Union Medical College Hospital, Peking Union Medical College and Chinese Academy of Medical Sciences, Beijing, China
- Dong Liang: Paul C. Lauterbur Research Center for Biomedical Imaging, Medical AI Research Center, SIAT, CAS, Shenzhen, China
- Bowen Shen: Computer Science, Virginia Tech, Blacksburg, Virginia, USA
- Xiaoliang Zhang: Electrical Engineering, University at Buffalo, State University of New York, Buffalo, New York, USA
- Yulin Ge: Center for Biomedical Imaging, Radiology, New York University School of Medicine, New York, USA
- Jiangyang Zhang: Center for Biomedical Imaging, Radiology, New York University School of Medicine, New York, USA
- Leslie Ying: Electrical Engineering, University at Buffalo, State University of New York, Buffalo, New York, USA; Biomedical Engineering, University at Buffalo, State University of New York, Buffalo, New York, USA
|
31
|
Lucena O, Vos SB, Vakharia V, Duncan J, Ashkan K, Sparks R, Ourselin S. Enhancing the estimation of fiber orientation distributions using convolutional neural networks. Comput Biol Med 2021; 135:104643. [PMID: 34280774 DOI: 10.1016/j.compbiomed.2021.104643] [Citation(s) in RCA: 11] [Impact Index Per Article: 2.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/15/2021] [Revised: 07/07/2021] [Accepted: 07/07/2021] [Indexed: 11/17/2022]
Abstract
Local fiber orientation distributions (FODs) can be computed from diffusion magnetic resonance imaging (dMRI). The accuracy and ability of FODs to resolve complex fiber configurations benefits from acquisition protocols that sample a high number of gradient directions, a high maximum b-value, and multiple b-values. However, acquisition time and scanners that follow these standards are limited in clinical settings, often resulting in dMRI acquired at a single shell (single b-value). In this work, we learn improved FODs from clinically acquired dMRI. We evaluate patch-based 3D convolutional neural networks (CNNs) on their ability to regress multi-shell FODs from single-shell FODs, using constrained spherical deconvolution (CSD). We evaluate U-Net and High-Resolution Network (HighResNet) 3D CNN architectures on data from the Human Connectome Project and an in-house dataset. We evaluate how well each CNN can resolve FODs 1) when training and testing on datasets with the same dMRI acquisition protocol; 2) when testing on a dataset with a different dMRI acquisition protocol than used to train the CNN; and 3) when testing on a dataset with a fewer number of gradient directions than used to train the CNN. This work is a step towards more accurate FOD estimation in time- and resource-limited clinical environments.
Affiliation(s)
- Oeslle Lucena: School of Biomedical Engineering and Imaging Sciences, King's College London, UK
- Sjoerd B Vos: Centre for Medical Image Computing, Department of Computer Sciences, University College London, London, UK; Neuroradiological Academic Unit, University College London Queen Square Institute of Neurology, University College London, London, UK
- Vejay Vakharia: Department of Clinical and Experimental Epilepsy, University College London, UK
- John Duncan: Department of Clinical and Experimental Epilepsy, University College London, UK; National Hospital for Neurology and Neurosurgery, Queen Square, UK
- Rachel Sparks: School of Biomedical Engineering and Imaging Sciences, King's College London, UK
- Sebastien Ourselin: School of Biomedical Engineering and Imaging Sciences, King's College London, UK
|
32
|
Elaldi A, Dey N, Kim H, Gerig G. Equivariant Spherical Deconvolution: Learning Sparse Orientation Distribution Functions from Spherical Data. INFORMATION PROCESSING IN MEDICAL IMAGING : PROCEEDINGS OF THE ... CONFERENCE 2021; 12729:267-278. [PMID: 37576905 PMCID: PMC10422024 DOI: 10.1007/978-3-030-78191-0_21] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 08/15/2023]
Abstract
We present a rotation-equivariant self-supervised learning framework for the sparse deconvolution of non-negative scalar fields on the unit sphere. Spherical signals with multiple peaks naturally arise in diffusion MRI (dMRI), where each voxel consists of one or more signal sources corresponding to anisotropic tissue structure such as white matter. Due to spatial and spectral partial voluming, clinically-feasible dMRI struggles to resolve crossing-fiber white matter configurations, leading to extensive development in spherical deconvolution methodology to recover underlying fiber directions. However, these methods are typically linear and struggle with small crossing angles and partial volume fraction estimation. In this work, we improve on current methodologies by nonlinearly estimating fiber structures via self-supervised spherical convolutional networks with guaranteed equivariance to spherical rotation. We perform validation via extensive single- and multi-shell synthetic benchmarks demonstrating competitive performance against common baselines. We further show improved downstream performance on fiber tractography measures on the Tractometer benchmark dataset. Finally, we show downstream improvements in terms of tractography and partial volume estimation on a multi-shell dataset of human subjects.
Affiliation(s)
- Axel Elaldi: Department of Computer Science and Engineering, New York University, New York, USA
- Neel Dey: Department of Computer Science and Engineering, New York University, New York, USA
- Heejong Kim: Department of Computer Science and Engineering, New York University, New York, USA
- Guido Gerig: Department of Computer Science and Engineering, New York University, New York, USA
|
33
|
Gong T, Tong Q, Li Z, He H, Zhang H, Zhong J. Deep learning-based method for reducing residual motion effects in diffusion parameter estimation. Magn Reson Med 2020; 85:2278-2293. [PMID: 33058279 DOI: 10.1002/mrm.28544] [Citation(s) in RCA: 9] [Impact Index Per Article: 1.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/12/2020] [Revised: 09/14/2020] [Accepted: 09/15/2020] [Indexed: 11/08/2022]
Abstract
PURPOSE Conventional motion-correction techniques for diffusion MRI can introduce motion-level-dependent bias in derived metrics. To address this challenge, a deep learning-based technique was developed to minimize such residual motion effects. METHODS The data-rejection approach was adopted in which motion-corrupted data are discarded before model-fitting. A deep learning-based parameter estimation algorithm, using a hierarchical convolutional neural network (H-CNN), was combined with motion assessment and corrupted volume rejection. The method was designed to overcome the limitations of existing methods of this kind that produce parameter estimations whose quality depends strongly on a proportion of the data discarded. Evaluation experiments were conducted for the estimation of diffusion kurtosis and diffusion-tensor-derived measures at both the individual and group levels. The performance was compared with the robust approach of iteratively reweighted linear least squares (IRLLS) after motion correction with and without outlier replacement. RESULTS Compared with IRLLS, the H-CNN-based technique is minimally sensitive to motion effects. It was tested at severe motion levels when 70% to 90% of the data are rejected and when random motion is present. The technique had a stable performance independent of the numbers and schemes of data rejection. A further test on a data set from children with attention-deficit hyperactivity disorder shows the technique can potentially ameliorate spurious group-level difference caused by head motion. CONCLUSION This method shows great potential for reducing residual motion effects in motion-corrupted diffusion-weighted-imaging data, bringing benefits that include reduced bias in derived metrics in individual scans and reduced motion-level-dependent bias in population studies employing diffusion MRI.
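For context, the IRLLS baseline belongs to the family of iteratively reweighted least-squares estimators, in which residuals from each fit are used to downweight outlying measurements before refitting; the sketch below uses a generic design matrix and a Cauchy-type weight as one illustrative choice, not the exact IRLLS weighting scheme.

```python
# Minimal sketch of iteratively reweighted least squares with robust outlier downweighting.
import numpy as np

def irls(A, y, n_iter=10, eps=1e-8):
    """Solve A x ~= y while downweighting measurements with large residuals."""
    w = np.ones(len(y))
    x = None
    for _ in range(n_iter):
        sw = np.sqrt(w)
        x, *_ = np.linalg.lstsq(A * sw[:, None], sw * y, rcond=None)
        r = y - A @ x
        scale = 1.4826 * np.median(np.abs(r - np.median(r))) + eps  # robust MAD scale
        w = 1.0 / (1.0 + (r / scale) ** 2)                          # Cauchy-type weights
    return x, w

# Example: a noisy linear model with a few gross outliers injected.
rng = np.random.default_rng(1)
A = rng.normal(size=(40, 3))
x_true = np.array([1.0, -2.0, 0.5])
y = A @ x_true + 0.01 * rng.normal(size=40)
y[::17] += 5.0                                   # inject outliers
x_est, weights = irls(A, y)                      # outliers receive small weights
```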
Affiliation(s)
- Ting Gong: Center for Brain Imaging Science and Technology, Key Laboratory for Biomedical Engineering of Ministry of Education, College of Biomedical Engineering and Instrumental Science, Zhejiang University, Hangzhou, China; Department of Computer Science & Centre for Medical Image Computing, University College London, London, UK
- Qiqi Tong: Center for Brain Imaging Science and Technology, Key Laboratory for Biomedical Engineering of Ministry of Education, College of Biomedical Engineering and Instrumental Science, Zhejiang University, Hangzhou, China
- Zhiwei Li: Department of Instrument Science & Technology, Zhejiang University, Hangzhou, China
- Hongjian He: Center for Brain Imaging Science and Technology, Key Laboratory for Biomedical Engineering of Ministry of Education, College of Biomedical Engineering and Instrumental Science, Zhejiang University, Hangzhou, China
- Hui Zhang: Department of Computer Science & Centre for Medical Image Computing, University College London, London, UK
- Jianhui Zhong: Center for Brain Imaging Science and Technology, Key Laboratory for Biomedical Engineering of Ministry of Education, College of Biomedical Engineering and Instrumental Science, Zhejiang University, Hangzhou, China; Department of Imaging Sciences, University of Rochester, Rochester, NY, USA
|
34
|
Tian Q, Bilgic B, Fan Q, Liao C, Ngamsombat C, Hu Y, Witzel T, Setsompop K, Polimeni JR, Huang SY. DeepDTI: High-fidelity six-direction diffusion tensor imaging using deep learning. Neuroimage 2020; 219:117017. [PMID: 32504817 PMCID: PMC7646449 DOI: 10.1016/j.neuroimage.2020.117017] [Citation(s) in RCA: 53] [Impact Index Per Article: 10.6] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/19/2020] [Revised: 05/15/2020] [Accepted: 06/02/2020] [Indexed: 12/14/2022] Open
Abstract
Diffusion tensor magnetic resonance imaging (DTI) is unsurpassed in its ability to map tissue microstructure and structural connectivity in the living human brain. Nonetheless, the angular sampling requirement for DTI leads to long scan times and poses a critical barrier to performing high-quality DTI in routine clinical practice and large-scale research studies. In this work we present a new processing framework for DTI entitled DeepDTI that minimizes the data requirement of DTI to six diffusion-weighted images (DWIs) required by conventional voxel-wise fitting methods for deriving the six unique unknowns in a diffusion tensor using data-driven supervised deep learning. DeepDTI maps the input non-diffusion-weighted (b = 0) image and six DWI volumes sampled along optimized diffusion-encoding directions, along with T1-weighted and T2-weighted image volumes, to the residuals between the input and high-quality output b = 0 image and DWI volumes using a 10-layer three-dimensional convolutional neural network (CNN). The inputs and outputs of DeepDTI are uniquely formulated, which not only enables residual learning to boost CNN performance but also enables tensor fitting of resultant high-quality DWIs to generate orientational DTI metrics for tractography. The very deep CNN used by DeepDTI leverages the redundancy in local and non-local spatial information and across diffusion-encoding directions and image contrasts in the data. The performance of DeepDTI was systematically quantified in terms of the quality of the output images, DTI metrics, DTI-based tractography and tract-specific analysis results. We demonstrate rotationally-invariant and robust estimation of DTI metrics from DeepDTI that are comparable to those obtained with two b = 0 images and 21 DWIs for the primary eigenvector derived from DTI and two b = 0 images and 26-30 DWIs for various scalar metrics derived from DTI, achieving 3.3-4.6 × acceleration, and twice as good as those of a state-of-the-art denoising algorithm at the group level. The twenty major white-matter tracts can be accurately identified from the tractography of DeepDTI results. The mean distance between the core of the major white-matter tracts identified from DeepDTI results and those from the ground-truth results using 18 b = 0 images and 90 DWIs measures around 1-1.5 mm. DeepDTI leverages domain knowledge of diffusion MRI physics and power of deep learning to render DTI, DTI-based tractography, major white-matter tracts identification and tract-specific analysis more feasible for a wider range of neuroscientific and clinical studies.
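The residual-learning formulation can be sketched as a CNN that predicts the difference between the noisy input volumes and their high-quality targets; the channel counts follow the description above (one b = 0 image, six DWIs, T1-weighted and T2-weighted inputs; seven outputs), while the depth, width, and channel ordering are illustrative assumptions.

```python
# Minimal sketch of residual learning for DWI restoration: the network predicts the
# residual (target - input) and the corrected volumes are input + residual.
import torch
import torch.nn as nn

class ResidualDenoiser(nn.Module):
    def __init__(self, in_ch=9, out_ch=7, width=64, depth=10):
        super().__init__()
        layers = [nn.Conv3d(in_ch, width, 3, padding=1), nn.ReLU(inplace=True)]
        for _ in range(depth - 2):
            layers += [nn.Conv3d(width, width, 3, padding=1), nn.ReLU(inplace=True)]
        layers += [nn.Conv3d(width, out_ch, 3, padding=1)]
        self.net = nn.Sequential(*layers)

    def forward(self, x):
        residual = self.net(x)                  # predicted residual for the b=0 + 6 DWI channels
        return x[:, :residual.shape[1]] + residual

model = ResidualDenoiser()
stack = torch.rand(1, 9, 16, 16, 16)            # b=0, 6 DWIs, T1w, T2w (ordering assumed)
restored = model(stack)                         # high-quality b=0 + 6 DWI estimates
```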
Affiliation(s)
- Qiyuan Tian: Athinoula A. Martinos Center for Biomedical Imaging, Department of Radiology, Massachusetts General Hospital, Charlestown, MA, United States; Harvard Medical School, Boston, MA, United States
- Berkin Bilgic: Athinoula A. Martinos Center for Biomedical Imaging, Department of Radiology, Massachusetts General Hospital, Charlestown, MA, United States; Harvard Medical School, Boston, MA, United States; Harvard-MIT Division of Health Sciences and Technology, Massachusetts Institute of Technology, Cambridge, MA, United States
- Qiuyun Fan: Athinoula A. Martinos Center for Biomedical Imaging, Department of Radiology, Massachusetts General Hospital, Charlestown, MA, United States; Harvard Medical School, Boston, MA, United States
- Congyu Liao: Athinoula A. Martinos Center for Biomedical Imaging, Department of Radiology, Massachusetts General Hospital, Charlestown, MA, United States; Harvard Medical School, Boston, MA, United States
- Chanon Ngamsombat: Athinoula A. Martinos Center for Biomedical Imaging, Department of Radiology, Massachusetts General Hospital, Charlestown, MA, United States; Department of Radiology, Faculty of Medicine, Siriraj Hospital, Mahidol University, Thailand
- Yuxin Hu: Department of Electrical Engineering, Stanford University, Stanford, CA, United States
- Thomas Witzel: Athinoula A. Martinos Center for Biomedical Imaging, Department of Radiology, Massachusetts General Hospital, Charlestown, MA, United States; Harvard Medical School, Boston, MA, United States
- Kawin Setsompop: Athinoula A. Martinos Center for Biomedical Imaging, Department of Radiology, Massachusetts General Hospital, Charlestown, MA, United States; Harvard Medical School, Boston, MA, United States; Harvard-MIT Division of Health Sciences and Technology, Massachusetts Institute of Technology, Cambridge, MA, United States
- Jonathan R Polimeni: Athinoula A. Martinos Center for Biomedical Imaging, Department of Radiology, Massachusetts General Hospital, Charlestown, MA, United States; Harvard Medical School, Boston, MA, United States; Harvard-MIT Division of Health Sciences and Technology, Massachusetts Institute of Technology, Cambridge, MA, United States
- Susie Y Huang: Athinoula A. Martinos Center for Biomedical Imaging, Department of Radiology, Massachusetts General Hospital, Charlestown, MA, United States; Harvard Medical School, Boston, MA, United States; Harvard-MIT Division of Health Sciences and Technology, Massachusetts Institute of Technology, Cambridge, MA, United States
|
35
|
Sagawa H, Fushimi Y, Nakajima S, Fujimoto K, Miyake KK, Numamoto H, Koizumi K, Nambu M, Kataoka H, Nakamoto Y, Saga T. Deep Learning-based Noise Reduction for Fast Volume Diffusion Tensor Imaging: Assessing the Noise Reduction Effect and Reliability of Diffusion Metrics. Magn Reson Med Sci 2020; 20:450-456. [PMID: 32963184 PMCID: PMC8922344 DOI: 10.2463/mrms.tn.2020-0061] [Citation(s) in RCA: 10] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/27/2022] Open
Abstract
To assess the feasibility of a denoising approach with deep learning-based reconstruction (dDLR) for fast volume simultaneous multi-slice diffusion tensor imaging of the brain, noise reduction effects and the reliability of diffusion metrics were evaluated with 20 patients. Image noise was significantly decreased with dDLR. Although fractional anisotropy (FA) of deep gray matter was overestimated when the number of image acquisitions was one (NAQ1), FA in NAQ1 with dDLR became closer to that in NAQ5.
Affiliation(s)
- Hajime Sagawa: Division of Clinical Radiology Service, Kyoto University Hospital
- Yasutaka Fushimi: Department of Diagnostic Imaging and Nuclear Medicine, Graduate School of Medicine, Kyoto University
- Satoshi Nakajima: Department of Diagnostic Imaging and Nuclear Medicine, Graduate School of Medicine, Kyoto University
- Koji Fujimoto: Department of Diagnostic Imaging and Nuclear Medicine, Graduate School of Medicine, Kyoto University
- Kanae Kawai Miyake: Department of Advanced Medical Imaging Research, Graduate School of Medicine, Kyoto University
- Hitomi Numamoto: Department of Advanced Medical Imaging Research, Graduate School of Medicine, Kyoto University
- Koji Koizumi: Division of Clinical Radiology Service, Kyoto University Hospital
- Hiroharu Kataoka: Department of Neurosurgery, Graduate School of Medicine, Kyoto University
- Yuji Nakamoto: Department of Diagnostic Imaging and Nuclear Medicine, Graduate School of Medicine, Kyoto University
- Tsuneo Saga: Department of Advanced Medical Imaging Research, Graduate School of Medicine, Kyoto University
|
36
|
Chen L, Xia C, Sun H. Recent advances of deep learning in psychiatric disorders. PRECISION CLINICAL MEDICINE 2020; 3:202-213. [PMID: 35694413 PMCID: PMC8982596 DOI: 10.1093/pcmedi/pbaa029] [Citation(s) in RCA: 10] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/03/2020] [Revised: 08/24/2020] [Accepted: 08/25/2020] [Indexed: 02/05/2023] Open
Abstract
Deep learning (DL) is a recently proposed subset of machine learning methods that has gained extensive attention in the academic world, breaking benchmark records in areas such as visual recognition and natural language processing. Different from conventional machine learning algorithms, DL is able to learn useful representations and features directly from raw data through hierarchical nonlinear transformations. Because of its ability to detect abstract and complex patterns, DL has been used in neuroimaging studies of psychiatric disorders, which are characterized by subtle and diffuse alterations. Here, we provide a brief review of recent advances and associated challenges in neuroimaging studies of DL applied to psychiatric disorders. The results of these studies indicate that DL could be a powerful tool in assisting the diagnosis of psychiatric diseases. We conclude our review by clarifying the main promises and challenges of DL application in psychiatric disorders, and possible directions for future research.
Affiliation(s)
- Lu Chen: West China Medical Publishers, West China Hospital of Sichuan University, Chengdu 610041, China
- Chunchao Xia: Department of Radiology, West China Hospital of Sichuan University, Chengdu 610041, China
- Huaiqiang Sun: Department of Radiology, West China Hospital of Sichuan University, Chengdu 610041, China
|
37
|
Tong Q, Gong T, He H, Wang Z, Yu W, Zhang J, Zhai L, Cui H, Meng X, Tax CWM, Zhong J. A deep learning-based method for improving reliability of multicenter diffusion kurtosis imaging with varied acquisition protocols. Magn Reson Imaging 2020; 73:31-44. [PMID: 32822818 DOI: 10.1016/j.mri.2020.08.001] [Citation(s) in RCA: 11] [Impact Index Per Article: 2.2] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/25/2020] [Revised: 07/13/2020] [Accepted: 08/14/2020] [Indexed: 01/02/2023]
Abstract
Multicenter magnetic resonance imaging is gaining popularity in large-sample projects. Since varying hardware and software across centers cause unavoidable data heterogeneity, its impact on the reliability of study outcomes has also drawn much attention recently. One fundamental issue is how to derive model parameters reliably from image data of varying quality. This issue is even more challenging for advanced diffusion methods such as diffusion kurtosis imaging (DKI). Recently, deep learning-based methods have demonstrated their potential for robust and efficient computation of diffusion-derived measures. Inspired by these approaches, the current study designed a framework based on a three-dimensional hierarchical convolutional neural network to jointly reconstruct and harmonize DKI measures from multicenter acquisitions, reformulating them to the level of state-of-the-art hardware using data from traveling subjects. The results from the harmonized data acquired with different protocols show that: 1) the inter-scanner variation of DKI measures within white matter was reduced by 51.5% in mean kurtosis, 65.9% in axial kurtosis, 53.7% in radial kurtosis, and 61.5% in kurtosis fractional anisotropy; 2) the data reliability of each single scanner was enhanced and brought to the level of the reference scanner; and 3) the harmonization network was able to reconstruct reliable DKI values from data of high variability. Overall, the results demonstrate the feasibility of the proposed deep learning-based method for DKI harmonization and help to simplify the protocol setup procedure for multicenter scanners with different hardware and software configurations.
Affiliation(s)
- Qiqi Tong: Center for Brain Imaging Science and Technology, Key Laboratory for Biomedical Engineering of Ministry of Education, College of Biomedical Engineering and Instrumental Science, Zhejiang University, Hangzhou, Zhejiang, China; Research Center for Healthcare Data Science, Zhejiang Lab, Hangzhou, Zhejiang, China
- Ting Gong: Center for Brain Imaging Science and Technology, Key Laboratory for Biomedical Engineering of Ministry of Education, College of Biomedical Engineering and Instrumental Science, Zhejiang University, Hangzhou, Zhejiang, China
- Hongjian He: Center for Brain Imaging Science and Technology, Key Laboratory for Biomedical Engineering of Ministry of Education, College of Biomedical Engineering and Instrumental Science, Zhejiang University, Hangzhou, Zhejiang, China
- Zheng Wang: Institute of Neuroscience, CAS Center for Excellence in Brain Science and Intelligence Technology, State Key Laboratory of Neuroscience, CAS Key Laboratory of Primate Neurobiology, Chinese Academy of Sciences, Shanghai, China
- Wenwen Yu: Institute of Neuroscience, CAS Center for Excellence in Brain Science and Intelligence Technology, State Key Laboratory of Neuroscience, CAS Key Laboratory of Primate Neurobiology, Chinese Academy of Sciences, Shanghai, China
- Jianjun Zhang: Department of Radiology, Zhejiang Hospital, Hangzhou, Zhejiang, China
- Lihao Zhai: Department of Radiology, Zhejiang Hospital, Hangzhou, Zhejiang, China
- Hongsheng Cui: Department of Radiology, The Third Affiliated Hospital of Qiqihar Medical University, Qiqihar, Heilongjiang, China
- Xin Meng: Department of Radiology, The Third Affiliated Hospital of Qiqihar Medical University, Qiqihar, Heilongjiang, China
- Chantal W M Tax: Cardiff University Brain Research Imaging Centre (CUBRIC), School of Physics and Astronomy, Cardiff University, Cardiff, United Kingdom
- Jianhui Zhong: Center for Brain Imaging Science and Technology, Key Laboratory for Biomedical Engineering of Ministry of Education, College of Biomedical Engineering and Instrumental Science, Zhejiang University, Hangzhou, Zhejiang, China; Department of Imaging Sciences, University of Rochester, Rochester, NY, USA
38
Moeller S, Pisharady Kumar P, Andersson J, Akcakaya M, Harel N, Ma RE, Wu X, Yacoub E, Lenglet C, Ugurbil K. Diffusion Imaging in the Post HCP Era. J Magn Reson Imaging 2020; 54:36-57. [PMID: 32562456 DOI: 10.1002/jmri.27247] [Citation(s) in RCA: 15] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/24/2020] [Revised: 05/11/2020] [Accepted: 05/13/2020] [Indexed: 02/06/2023] Open
Abstract
Diffusion imaging is a critical component in the pursuit of a better understanding of the human brain, and recent technical advances promise to improve the quality of the data that can be obtained. In this review, different approaches are compared in the context of the Human Connectome Project. Significant new gains are anticipated from the use of high-performance head gradients, and these gains can be particularly large when such gradients are employed together with ultrahigh magnetic fields. Transmit array designs are critical for realizing high accelerations in diffusion-weighted (d)MRI acquisitions while maintaining large field-of-view (FOV) coverage, and several techniques for optimal signal encoding are now available. Reconstruction and processing pipelines that precisely disentangle the acquired neuroanatomical information are established and provide the foundation for applying deep learning to the advancement of dMRI in complex tissues. Level of Evidence: 3. Technical Efficacy: Stage 3.
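The anticipated gains from high-performance gradients can be made concrete with the standard Stejskal-Tanner relation, b = (γ G δ)² (Δ − δ/3): at a fixed target b-value, a larger gradient amplitude G permits a shorter pulse duration δ and separation Δ, hence a shorter echo time and less T2-related signal loss. The sketch below illustrates this with example values only; the specific gradient strengths and timing assumptions are not figures from the cited review.

```python
# Illustrative use of the Stejskal-Tanner relation b = (gamma*G*delta)^2 * (Delta - delta/3)
# to show why stronger gradients shorten the diffusion encoding (and hence the TE).
# All parameter values are examples only.
from scipy.optimize import brentq

GAMMA = 2.675e8  # proton gyromagnetic ratio, rad s^-1 T^-1

def b_value(G, delta, Delta):
    """b-value in s/mm^2 for gradient amplitude G (T/m) and timings delta, Delta (s)."""
    return (GAMMA * G * delta) ** 2 * (Delta - delta / 3) * 1e-6

def delta_for_b(G, b_target=1000.0, gap=0.010):
    """Pulse duration delta (s) reaching b_target, assuming Delta = delta + gap."""
    return brentq(lambda d: b_value(G, d, d + gap) - b_target, 1e-4, 0.1)

for G_mT in (40, 80, 300):  # whole-body gradients vs. a high-performance head gradient
    d = delta_for_b(G_mT * 1e-3)
    print(f"G = {G_mT:3d} mT/m -> delta ~= {d*1e3:.1f} ms for b = 1000 s/mm^2")
```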
Affiliation(s)
- Steen Moeller: Center for Magnetic Resonance Research, Department of Radiology, University of Minnesota, Minneapolis, Minnesota, USA
- Pramod Pisharady Kumar: Center for Magnetic Resonance Research, Department of Radiology, University of Minnesota, Minneapolis, Minnesota, USA
- Jesper Andersson: Wellcome Centre for Integrative Neuroimaging, FMRIB, Nuffield Department of Clinical Neurosciences, University of Oxford, Oxford, UK
- Mehmet Akcakaya: Center for Magnetic Resonance Research, Department of Radiology, University of Minnesota, Minneapolis, Minnesota, USA; Electrical and Computer Engineering, University of Minnesota, Minneapolis, Minnesota, USA
- Noam Harel: Center for Magnetic Resonance Research, Department of Radiology, University of Minnesota, Minneapolis, Minnesota, USA
- Ruoyun Emily Ma: Center for Magnetic Resonance Research, Department of Radiology, University of Minnesota, Minneapolis, Minnesota, USA
- Xiaoping Wu: Center for Magnetic Resonance Research, Department of Radiology, University of Minnesota, Minneapolis, Minnesota, USA
- Essa Yacoub: Center for Magnetic Resonance Research, Department of Radiology, University of Minnesota, Minneapolis, Minnesota, USA
- Christophe Lenglet: Center for Magnetic Resonance Research, Department of Radiology, University of Minnesota, Minneapolis, Minnesota, USA
- Kamil Ugurbil: Center for Magnetic Resonance Research, Department of Radiology, University of Minnesota, Minneapolis, Minnesota, USA
39
Gong T, Tong Q, He H, Sun Y, Zhong J, Zhang H. MTE-NODDI: Multi-TE NODDI for disentangling non-T2-weighted signal fractions from compartment-specific T2 relaxation times. Neuroimage 2020; 217:116906. [PMID: 32387626 DOI: 10.1016/j.neuroimage.2020.116906] [Citation(s) in RCA: 36] [Impact Index Per Article: 7.2] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/23/2020] [Revised: 05/01/2020] [Accepted: 05/03/2020] [Indexed: 12/28/2022] Open
Abstract
Neurite orientation dispersion and density imaging (NODDI) has become a popular diffusion MRI technique for investigating microstructural alterations during brain development, maturation, and aging in health and disease. However, the NODDI model of diffusion does not explicitly account for compartment-specific T2 relaxation, and its model parameters are usually estimated from data acquired with a single echo time (TE). Thus, NODDI-derived measures, such as the intra-neurite signal fraction (also known as the neurite density index), can be T2-weighted and TE-dependent. This may confound the interpretation of studies because one cannot disentangle differences in diffusion from differences in T2 relaxation. To address this challenge, we propose a multi-TE NODDI (MTE-NODDI) technique, inspired by recent studies exploiting the synergy between diffusion and T2 relaxation. MTE-NODDI gives robust estimates of the non-T2-weighted signal fractions and compartment-specific T2 values, as demonstrated by both simulation and in vivo experiments. Results showed that the estimated non-T2-weighted intra-neurite fraction and compartment-specific T2 values in white matter were consistent with previous studies. The T2-weighted intra-neurite fractions from the original NODDI were overestimated relative to their non-T2-weighted estimates, and the overestimation increases with TE, consistent with the reported intra-neurite T2 being larger than the extra-neurite T2. Finally, the inclusion of the free-water compartment reduces the estimation error in intra-neurite T2 in the presence of cerebrospinal fluid contamination. With the ability to disentangle non-T2-weighted signal fractions from compartment-specific T2 relaxation, MTE-NODDI could help improve the interpretability of future neuroimaging studies, especially those of brain development, maturation, and aging.
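The TE-dependence described above can be illustrated with a simplified two-compartment version of the signal-fraction model (the published MTE-NODDI formulation also includes a free-water compartment and its own estimation procedure): the apparent intra-neurite fraction at echo time TE is f_in(TE) = f0·exp(−TE/T2_in) / [f0·exp(−TE/T2_in) + (1 − f0)·exp(−TE/T2_ex)], where f0 is the non-T2-weighted fraction. The sketch below fits f0 and the compartment T2 values from synthetic fractions measured at several TEs; all numbers are illustrative assumptions.

```python
# Simplified two-compartment illustration of the TE-dependence exploited by
# MTE-NODDI: apparent (T2-weighted) intra-neurite fractions at several TEs are
# fitted for the non-T2-weighted fraction f0 and compartment-specific T2s.
# Synthetic values for illustration only; not the published implementation.
import numpy as np
from scipy.optimize import curve_fit

def apparent_fin(te, f0, t2_in, t2_ex):
    """T2-weighted intra-neurite fraction at echo time te (ms)."""
    w_in = f0 * np.exp(-te / t2_in)
    w_ex = (1.0 - f0) * np.exp(-te / t2_ex)
    return w_in / (w_in + w_ex)

te = np.array([60.0, 80.0, 100.0, 120.0])          # echo times in ms
truth = dict(f0=0.55, t2_in=90.0, t2_ex=60.0)      # synthetic ground truth
fin_meas = apparent_fin(te, **truth) + 0.005 * np.random.randn(te.size)

popt, _ = curve_fit(apparent_fin, te, fin_meas, p0=(0.5, 80.0, 70.0),
                    bounds=([0.0, 10.0, 10.0], [1.0, 300.0, 300.0]))
print("non-T2-weighted f_in, T2_in, T2_ex:", popt)
# Because T2_in > T2_ex here, the apparent fraction exceeds f0 and grows with
# TE, mirroring the overestimation effect described in the abstract.
```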
Affiliation(s)
- Ting Gong: Center for Brain Imaging Science and Technology, College of Biomedical Engineering and Instrumental Science, Zhejiang University, Hangzhou, China; Department of Computer Science & Centre for Medical Image Computing, University College London, UK
- Qiqi Tong: Center for Brain Imaging Science and Technology, College of Biomedical Engineering and Instrumental Science, Zhejiang University, Hangzhou, China
- Hongjian He: Center for Brain Imaging Science and Technology, College of Biomedical Engineering and Instrumental Science, Zhejiang University, Hangzhou, China
- Yi Sun: MR Collaboration, Siemens Healthcare, Shanghai, China
- Jianhui Zhong: Center for Brain Imaging Science and Technology, College of Biomedical Engineering and Instrumental Science, Zhejiang University, Hangzhou, China; Department of Imaging Sciences, University of Rochester, Rochester, NY, United States
- Hui Zhang: Department of Computer Science & Centre for Medical Image Computing, University College London, UK