1.
Hu D, Li H, Liu H, Oguz I. Domain generalization for retinal vessel segmentation via Hessian-based vector field. Med Image Anal 2024;95:103164. PMID: 38615431. DOI: 10.1016/j.media.2024.103164.
Abstract
Blessed by vast amounts of data, learning-based methods have achieved remarkable performance in countless tasks in computer vision and medical image analysis. Although these deep models can simulate highly nonlinear mapping functions, they are not robust to domain shift in the input data. This is a significant concern that impedes the large-scale deployment of deep models in medical imaging, since medical images have inherent variation in data distribution due to the lack of imaging standardization. Therefore, researchers have explored many domain generalization (DG) methods to alleviate this problem. In this work, we introduce a Hessian-based vector field that effectively models the tubular shape of vessels, an invariant feature for data across various distributions. The vector field serves as a good embedding feature to take advantage of the self-attention mechanism in a vision transformer. We design parallel transformer blocks that stress local features at different scales. Furthermore, we present a novel data augmentation method that introduces perturbations in image style while the vessel structure remains unchanged. In experiments conducted on public datasets of different modalities, we show that our model achieves superior generalizability compared with existing algorithms. Our code and trained model are publicly available at https://github.com/MedICL-VU/Vector-Field-Transformer.
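The Hessian-based tubular feature this abstract refers to can be illustrated with a classical Frangi-style vesselness computed from Hessian eigenvalues. This is a generic 2D sketch of the idea only, not the authors' vector-field embedding; the function name and the parameter values (sigma, beta, c) are illustrative:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def vesselness_2d(image, sigma=2.0, beta=0.5, c=15.0):
    """Frangi-style vesselness from Hessian eigenvalues (2D sketch)."""
    # Second-order Gaussian derivatives give the scale-space Hessian.
    hxx = gaussian_filter(image, sigma, order=(0, 2))
    hyy = gaussian_filter(image, sigma, order=(2, 0))
    hxy = gaussian_filter(image, sigma, order=(1, 1))

    # Closed-form eigenvalues of the symmetric 2x2 Hessian, ordered |l1| <= |l2|.
    root = np.sqrt((hxx - hyy) ** 2 + 4.0 * hxy ** 2)
    l1 = 0.5 * (hxx + hyy + root)
    l2 = 0.5 * (hxx + hyy - root)
    swap = np.abs(l1) > np.abs(l2)
    l1, l2 = np.where(swap, l2, l1), np.where(swap, l1, l2)

    # A tube has one small and one large eigenvalue magnitude.
    rb = np.abs(l1) / (np.abs(l2) + 1e-10)   # deviation from a blob
    s2 = l1 ** 2 + l2 ** 2                   # overall second-order structure
    v = np.exp(-rb ** 2 / (2.0 * beta ** 2)) * (1.0 - np.exp(-s2 / (2.0 * c ** 2)))
    v[l2 > 0] = 0.0  # keep only bright tubes on a dark background
    return v
```

The key property for domain generalization is that this response depends on local second-order geometry rather than absolute intensity, which is what makes it comparatively stable across acquisition styles.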
Affiliation(s)
- Dewei Hu
- Department of Electrical and Computer Engineering, Vanderbilt University, Nashville, TN 37235, USA
- Hao Li
- Department of Electrical and Computer Engineering, Vanderbilt University, Nashville, TN 37235, USA
- Han Liu
- Department of Computer Science, Vanderbilt University, Nashville, TN 37235, USA
- Ipek Oguz
- Department of Electrical and Computer Engineering, Vanderbilt University, Nashville, TN 37235, USA; Department of Computer Science, Vanderbilt University, Nashville, TN 37235, USA.
2.
Lin J, Li Z, Zeng Y, Liu X, Li L, Jahanshad N, Ge X, Zhang D, Lu M, Liu M. Harmonizing three-dimensional MRI using pseudo-warping field guided GAN. Neuroimage 2024;295:120635. PMID: 38729542. DOI: 10.1016/j.neuroimage.2024.120635.
Abstract
In pursuit of cultivating automated models for magnetic resonance imaging (MRI) to aid in diagnostics, an escalating demand for extensive, multisite, and heterogeneous brain imaging datasets has emerged. Directly pooling such heterogeneous data, however, can bias subsequent analyses. Researchers have endeavored to address this issue by harmonizing the MRIs. However, most existing image-based harmonization methods for MRI are tailored to 2D slices, which may introduce inter-slice variations when the slices are combined into a 3D volume. In this study, we aim to resolve inconsistencies between slices by introducing a pseudo-warping field. This field is created randomly and utilized to transform a slice into an artificially warped subsequent slice. The objective of this pseudo-warping field is to ensure that generators can consistently harmonize adjacent slices to another domain, without being affected by the varying content present in different slices. Furthermore, we construct unsupervised spatial and recycle losses to enhance spatial accuracy and slice-wise consistency across the 3D images. The results demonstrate that our model effectively mitigates inter-slice variations and successfully preserves the anatomical details of the images during the harmonization process. Compared to generative harmonization models that employ 3D operators, our model exhibits greater computational efficiency and flexibility.
Affiliation(s)
- Jiaying Lin
- Department of Biomedical Engineering, Sun Yat-sen University, Shenzhen 518107, China.
- Zhuoshuo Li
- Department of Biomedical Engineering, Sun Yat-sen University, Shenzhen 518107, China.
- Youbing Zeng
- Department of Biomedical Engineering, Sun Yat-sen University, Shenzhen 518107, China.
- Xiaobo Liu
- Department of Biomedical Engineering, Sun Yat-sen University, Shenzhen 518107, China.
- Liang Li
- Genuine Digital Technology Co., Ltd., Xi'an, China.
- Neda Jahanshad
- Mark and Mary Stevens Neuroimaging and Informatics Institute, Keck School of Medicine, University of Southern California, Los Angeles, CA, USA.
- Xinting Ge
- School of Information Science and Engineering, Shandong Normal University, Shandong 250358, China.
- Dan Zhang
- School of Cyber Science and Engineering, Ningbo University of Technology, Zhejiang 315211, China.
- Minhua Lu
- Department of Biomedical Engineering, Guangdong Key Laboratory for Biomedical Measurements and Ultrasound Imaging, Shenzhen University, Shenzhen 518060, China.
- Mengting Liu
- Department of Biomedical Engineering, Sun Yat-sen University, Shenzhen 518107, China.
3.
Chattopadhyay T, Joshy NA, Ozarkar SS, Buwa KS, Feng Y, Laltoo E, Thomopoulos SI, Villalon-Reina JE, Joshi H, Venkatasubramanian G, John JP, Thompson PM. Brain Age Analysis and Dementia Classification using Convolutional Neural Networks trained on Diffusion MRI: Tests in Indian and North American Cohorts. bioRxiv [Preprint] 2024:2024.02.04.578829. PMID: 38370641. PMCID: PMC10871286. DOI: 10.1101/2024.02.04.578829.
Abstract
Deep learning models based on convolutional neural networks (CNNs) have been used to classify Alzheimer's disease or infer dementia severity from T1-weighted brain MRI scans. Here, we examine the value of adding diffusion-weighted MRI (dMRI) as an input to these models. Much research in this area focuses on specific datasets such as the Alzheimer's Disease Neuroimaging Initiative (ADNI), which assesses people of North American, largely European ancestry, so we examine how models trained on ADNI generalize to a new population dataset from India (the NIMHANS cohort). We first benchmark our models on 'brain age' prediction (the task of predicting a person's chronological age from an MRI scan) and then proceed to AD classification. We also evaluate the benefit of using a 3D CycleGAN approach to harmonize the imaging datasets before training the CNN models. Our experiments show that classification performance improves after harmonization in most cases, and that using dMRI as input yields better performance.
4.
Carass A, Greenman D, Dewey BE, Calabresi PA, Prince JL, Pham DL. Image harmonization improves consistency of intra-rater delineations of MS lesions in heterogeneous MRI. Neuroimage Rep 2024;4:100195. PMID: 38370461. PMCID: PMC10871705. DOI: 10.1016/j.ynirp.2024.100195.
Abstract
Clinical magnetic resonance images (MRIs) lack a standard intensity scale due to differences in scanner hardware and the pulse sequences used to acquire the images. When MRIs are used for quantification, as in the evaluation of white matter lesions (WMLs) in multiple sclerosis, this lack of intensity standardization becomes a critical problem affecting both the staging and tracking of the disease and its treatment. This paper presents a study of harmonization on WML segmentation consistency, which is evaluated using an object detection classification scheme that incorporates manual delineations from both the original and harmonized MRIs. A cohort of ten people scanned on two different imaging platforms was studied. An expert rater, blinded to the image source, manually delineated WMLs on images from both scanners before and after harmonization. It was found that there is closer agreement in both global and per-lesion WML volume and spatial distribution after harmonization, demonstrating the importance of image harmonization prior to the creation of manual delineations. These results could lead to better truth models in both the development and evaluation of automated lesion segmentation algorithms.
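The delineation agreement studied here is typically summarized with volume differences and spatial overlap between the two sets of masks. A minimal Dice overlap helper illustrates the latter (a generic sketch, not the paper's object-detection classification scheme):

```python
import numpy as np

def dice(a, b):
    """Dice overlap between two binary masks (1.0 = perfect agreement)."""
    a, b = np.asarray(a, dtype=bool), np.asarray(b, dtype=bool)
    denom = a.sum() + b.sum()
    # Two empty masks agree perfectly by convention.
    return 2.0 * np.logical_and(a, b).sum() / denom if denom else 1.0
```

Applied per lesion, such an overlap score captures the spatial component of intra-rater consistency that global volume comparisons miss.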
Affiliation(s)
- Aaron Carass
- Department of Electrical and Computer Engineering, Johns Hopkins University, Baltimore, MD 21218, USA
- Danielle Greenman
- Center for Neuroscience and Regenerative Medicine, The Henry M. Jackson Foundation for the Advancement of Military Medicine, Bethesda, MD 20817, USA
- Blake E. Dewey
- Department of Neurology, Johns Hopkins School of Medicine, Baltimore, MD 21287, USA
- Peter A. Calabresi
- Department of Neurology, Johns Hopkins School of Medicine, Baltimore, MD 21287, USA
- Jerry L. Prince
- Department of Electrical and Computer Engineering, Johns Hopkins University, Baltimore, MD 21218, USA
- Dzung L. Pham
- Department of Radiology, Uniformed Services University of the Health Sciences, Bethesda, MD 20814, USA
5.
Hognon C, Conze PH, Bourbonne V, Gallinato O, Colin T, Jaouen V, Visvikis D. Contrastive image adaptation for acquisition shift reduction in medical imaging. Artif Intell Med 2024;148:102747. PMID: 38325919. DOI: 10.1016/j.artmed.2023.102747.
Abstract
The domain shift, or acquisition shift in medical imaging, is responsible for potentially harmful differences between development and deployment conditions of medical image analysis techniques. There is a growing need in the community for advanced methods that could mitigate this issue better than conventional approaches. In this paper, we consider configurations in which we can expose a learning-based pixel-level adaptor to a large variability of unlabeled images during its training, i.e., sufficient to span the acquisition shift expected during the training or testing of a downstream task model. We leverage the ability of convolutional architectures to efficiently learn domain-agnostic features and train a many-to-one unsupervised mapping between a source collection of heterogeneous images from multiple unknown domains subjected to the acquisition shift and a homogeneous subset of this source set of lower cardinality, potentially consisting of a single image. To this end, we propose a new cycle-free image-to-image architecture based on a combination of three loss functions: a contrastive PatchNCE loss, an adversarial loss, and an edge-preserving loss, allowing for rich domain adaptation to the target image even under strong domain imbalance and low data regimes. Experiments support the interest of the proposed contrastive image adaptation approach for the regularization of downstream deep supervised segmentation and cross-modality synthesis models.
Affiliation(s)
- Clément Hognon
- UMR U1101 Inserm LaTIM, IMT Atlantique, Université de Bretagne Occidentale, France; SOPHiA Genetics, Pessac, France
- Pierre-Henri Conze
- UMR U1101 Inserm LaTIM, IMT Atlantique, Université de Bretagne Occidentale, France
- Vincent Bourbonne
- UMR U1101 Inserm LaTIM, IMT Atlantique, Université de Bretagne Occidentale, France
- Vincent Jaouen
- UMR U1101 Inserm LaTIM, IMT Atlantique, Université de Bretagne Occidentale, France
- Dimitris Visvikis
- UMR U1101 Inserm LaTIM, IMT Atlantique, Université de Bretagne Occidentale, France
6.
An L, Zhang C, Wulan N, Zhang S, Chen P, Ji F, Ng KK, Chen C, Zhou JH, Thomas Yeo BT. DeepResBat: deep residual batch harmonization accounting for covariate distribution differences. bioRxiv [Preprint] 2024:2024.01.18.574145. PMID: 38293022. PMCID: PMC10827218. DOI: 10.1101/2024.01.18.574145.
Abstract
Pooling MRI data from multiple datasets requires harmonization to reduce undesired inter-site variabilities, while preserving effects of biological variables (or covariates). The popular harmonization approach ComBat uses a mixed effect regression framework that explicitly accounts for covariate distribution differences across datasets. There is also significant interest in developing harmonization approaches based on deep neural networks (DNNs), such as conditional variational autoencoder (cVAE). However, current DNN approaches do not explicitly account for covariate distribution differences across datasets. Here, we provide mathematical results, suggesting that not accounting for covariates can lead to suboptimal harmonization outcomes. We propose two DNN-based harmonization approaches that explicitly account for covariate distribution differences across datasets: covariate VAE (coVAE) and DeepResBat. The coVAE approach is a natural extension of cVAE by concatenating covariates and site information with site- and covariate-invariant latent representations. DeepResBat adopts a residual framework inspired by ComBat. DeepResBat first removes the effects of covariates with nonlinear regression trees, followed by eliminating site differences with cVAE. Finally, covariate effects are added back to the harmonized residuals. Using three datasets from three different continents with a total of 2787 participants and 10085 anatomical T1 scans, we find that DeepResBat and coVAE outperformed ComBat, CovBat and cVAE in terms of removing dataset differences, while enhancing biological effects of interest. However, coVAE hallucinates spurious associations between anatomical MRI and covariates even when no association exists. Therefore, future studies proposing DNN-based harmonization approaches should be aware of this false positive pitfall. Overall, our results suggest that DeepResBat is an effective deep learning alternative to ComBat.
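The residual idea that DeepResBat borrows from ComBat (fit covariate effects, standardize per-site residuals, add covariate effects back) can be sketched in a few lines. This is a deliberately simplified illustration with a pooled least-squares covariate fit and per-site location/scale standardization, with no empirical Bayes shrinkage and no deep components; the function name is hypothetical:

```python
import numpy as np

def combat_like(features, sites, covars):
    """Simplified ComBat-style harmonization (location/scale only).

    features: (n_subjects, n_features) array of e.g. regional volumes.
    sites:    (n_subjects,) integer site labels.
    covars:   (n_subjects, n_covars) biological covariates to preserve.
    """
    X = np.column_stack([np.ones(len(features)), covars])
    # 1) Fit covariate effects pooled across sites by least squares.
    beta, *_ = np.linalg.lstsq(X, features, rcond=None)
    resid = features - X @ beta
    # 2) Remove per-site location and scale from the residuals.
    harmonized_resid = np.empty_like(resid)
    for s in np.unique(sites):
        m = sites == s
        mu, sd = resid[m].mean(axis=0), resid[m].std(axis=0) + 1e-8
        harmonized_resid[m] = (resid[m] - mu) / sd
    # Rescale to the pooled residual variance so units are preserved.
    harmonized_resid *= resid.std(axis=0)
    # 3) Add covariate effects back to the harmonized residuals.
    return X @ beta + harmonized_resid
```

Step 1 is where the abstract's argument bites: if covariate distributions differ across sites and covariates are omitted from the model, the per-site standardization in step 2 removes biological signal along with the site effect.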
Affiliation(s)
- Lijun An
- Centre for Sleep and Cognition (CSC) & Centre for Translational Magnetic Resonance Research (TMR), Yong Loo Lin School of Medicine, National University of Singapore, Singapore
- Department of Electrical and Computer Engineering, National University of Singapore, Singapore
- N.1 Institute for Health & Institute for Digital Medicine (WisDM), National University of Singapore, Singapore
- Chen Zhang
- Centre for Sleep and Cognition (CSC) & Centre for Translational Magnetic Resonance Research (TMR), Yong Loo Lin School of Medicine, National University of Singapore, Singapore
- Department of Electrical and Computer Engineering, National University of Singapore, Singapore
- N.1 Institute for Health & Institute for Digital Medicine (WisDM), National University of Singapore, Singapore
- Naren Wulan
- Centre for Sleep and Cognition (CSC) & Centre for Translational Magnetic Resonance Research (TMR), Yong Loo Lin School of Medicine, National University of Singapore, Singapore
- Department of Electrical and Computer Engineering, National University of Singapore, Singapore
- N.1 Institute for Health & Institute for Digital Medicine (WisDM), National University of Singapore, Singapore
- Shaoshi Zhang
- Centre for Sleep and Cognition (CSC) & Centre for Translational Magnetic Resonance Research (TMR), Yong Loo Lin School of Medicine, National University of Singapore, Singapore
- Department of Electrical and Computer Engineering, National University of Singapore, Singapore
- N.1 Institute for Health & Institute for Digital Medicine (WisDM), National University of Singapore, Singapore
- Pansheng Chen
- Centre for Sleep and Cognition (CSC) & Centre for Translational Magnetic Resonance Research (TMR), Yong Loo Lin School of Medicine, National University of Singapore, Singapore
- Department of Electrical and Computer Engineering, National University of Singapore, Singapore
- N.1 Institute for Health & Institute for Digital Medicine (WisDM), National University of Singapore, Singapore
- Fang Ji
- Centre for Sleep and Cognition (CSC) & Centre for Translational Magnetic Resonance Research (TMR), Yong Loo Lin School of Medicine, National University of Singapore, Singapore
- Kwun Kei Ng
- Centre for Sleep and Cognition (CSC) & Centre for Translational Magnetic Resonance Research (TMR), Yong Loo Lin School of Medicine, National University of Singapore, Singapore
- Christopher Chen
- Department of Pharmacology, Yong Loo Lin School of Medicine, National University of Singapore, Singapore
- Juan Helen Zhou
- Centre for Sleep and Cognition (CSC) & Centre for Translational Magnetic Resonance Research (TMR), Yong Loo Lin School of Medicine, National University of Singapore, Singapore
- Department of Electrical and Computer Engineering, National University of Singapore, Singapore
- NUS Graduate School for Integrative Sciences and Engineering, National University of Singapore, Singapore
- B T Thomas Yeo
- Centre for Sleep and Cognition (CSC) & Centre for Translational Magnetic Resonance Research (TMR), Yong Loo Lin School of Medicine, National University of Singapore, Singapore
- Department of Electrical and Computer Engineering, National University of Singapore, Singapore
- N.1 Institute for Health & Institute for Digital Medicine (WisDM), National University of Singapore, Singapore
- NUS Graduate School for Integrative Sciences and Engineering, National University of Singapore, Singapore
- Martinos Center for Biomedical Imaging, Massachusetts General Hospital, Charlestown, MA, USA
7.
Liu S, Yap PT. Learning multi-site harmonization of magnetic resonance images without traveling human phantoms. Commun Eng 2024;3:6. PMID: 38420332. PMCID: PMC10898625. DOI: 10.1038/s44172-023-00140-w.
Abstract
Harmonization improves magnetic resonance imaging (MRI) data consistency and is central to effective integration of diverse imaging data acquired across multiple sites. Recent deep learning techniques for harmonization are predominantly supervised in nature and hence require imaging data of the same human subjects to be acquired at multiple sites. Data collection as such requires the human subjects to travel across sites and is hence challenging, costly, and impractical, more so when a sufficient sample size is needed for reliable network training. Here we show how harmonization can be achieved with a deep neural network that does not rely on traveling human phantom data. Our method disentangles site-specific appearance information and site-invariant anatomical information from images acquired at multiple sites and then employs the disentangled information to generate the image of each subject for any target site. We demonstrate with more than 6,000 multi-site T1- and T2-weighted images that our method is remarkably effective in generating images with realistic site-specific appearances without altering anatomical details. Our method allows retrospective harmonization of data in a wide range of existing modern large-scale imaging studies, conducted via different scanners and protocols, without additional data collection.
Affiliation(s)
- Siyuan Liu
- Department of Radiology and Biomedical Research Imaging Center (BRIC), University of North Carolina at Chapel Hill, Chapel Hill, NC, USA
- Pew-Thian Yap
- Department of Radiology and Biomedical Research Imaging Center (BRIC), University of North Carolina at Chapel Hill, Chapel Hill, NC, USA
8.
Yao T, Rheault F, Cai LY, Nath V, Asad Z, Newlin N, Cui C, Deng R, Ramadass K, Shafer A, Resnick S, Schilling K, Landman BA, Huo Y. Robust fiber orientation distribution function estimation using deep constrained spherical deconvolution for diffusion-weighted magnetic resonance imaging. J Med Imaging (Bellingham) 2024;11:014005. PMID: 38188934. PMCID: PMC10768686. DOI: 10.1117/1.jmi.11.1.014005.
Abstract
Purpose Diffusion-weighted magnetic resonance imaging (DW-MRI) is a critical imaging method for capturing and modeling tissue microarchitecture at a millimeter scale. A common practice to model the measured DW-MRI signal is via the fiber orientation distribution function (fODF). This function is the essential first step for downstream tractography and connectivity analyses. With recent advances in data sharing, large-scale multisite DW-MRI datasets are being made available for multisite studies. However, measurement variabilities (e.g., inter- and intrasite variability, hardware performance, and sequence design) are inevitable during the acquisition of DW-MRI. Most existing model-based methods [e.g., constrained spherical deconvolution (CSD)] and learning-based methods (e.g., deep learning) do not explicitly consider such variabilities in fODF modeling, which consequently leads to inferior performance on multisite and/or longitudinal diffusion studies. Approach In this paper, we propose a data-driven deep CSD method to explicitly constrain the scan-rescan variabilities for a more reproducible and robust estimation of brain microstructure from repeated DW-MRI scans. Specifically, the proposed method introduces a three-dimensional volumetric scanner-invariant regularization scheme during the fODF estimation. We study the Human Connectome Project (HCP) young adults test-retest group as well as the MASiVar dataset (with inter- and intrasite scan/rescan data). The Baltimore Longitudinal Study of Aging dataset is employed for external validation. Results The proposed data-driven framework outperforms the existing benchmarks in repeated fODF estimation. By introducing the contrastive loss with scan/rescan data, the proposed method achieved higher consistency while maintaining higher angular correlation coefficients with the CSD modeling. The proposed method is also assessed on downstream connectivity analysis and shows increased performance in distinguishing subjects with different biomarkers. Conclusion We propose a deep CSD method to explicitly reduce scan-rescan variabilities and thereby model a more reproducible and robust brain microstructure from repeated DW-MRI scans. The plug-and-play design of the proposed approach is potentially applicable to a wider range of data harmonization problems in neuroimaging.
Affiliation(s)
- Tianyuan Yao
- Vanderbilt University, Department of Computer Science, Nashville, Tennessee, United States
- Francois Rheault
- Université de Sherbrooke, Department of Computer Science, Sherbrooke, Québec, Canada
- Leon Y. Cai
- Vanderbilt University, Department of Biomedical Engineering, Nashville, Tennessee, United States
- Vishwesh Nath
- NVIDIA Corporation, Bethesda, Maryland, United States
- Zuhayr Asad
- Vanderbilt University, Department of Computer Science, Nashville, Tennessee, United States
- Nancy Newlin
- Vanderbilt University, Department of Computer Science, Nashville, Tennessee, United States
- Can Cui
- Vanderbilt University, Department of Computer Science, Nashville, Tennessee, United States
- Ruining Deng
- Vanderbilt University, Department of Computer Science, Nashville, Tennessee, United States
- Karthik Ramadass
- Vanderbilt University, Department of Computer Science, Nashville, Tennessee, United States
- Andrea Shafer
- National Institute on Aging, Laboratory of Behavioral Neuroscience, Baltimore, Maryland, United States
- Susan Resnick
- National Institute on Aging, Laboratory of Behavioral Neuroscience, Baltimore, Maryland, United States
- Kurt Schilling
- Vanderbilt University, Department of Biomedical Engineering, Nashville, Tennessee, United States
- Bennett A. Landman
- Vanderbilt University, Department of Computer Science, Nashville, Tennessee, United States
- Vanderbilt University, Department of Biomedical Engineering, Nashville, Tennessee, United States
- Vanderbilt University, Department of Electrical and Computer Engineering, Nashville, Tennessee, United States
- Yuankai Huo
- Vanderbilt University, Department of Computer Science, Nashville, Tennessee, United States
- Vanderbilt University, Department of Electrical and Computer Engineering, Nashville, Tennessee, United States
9.
Roca V, Kuchcinski G, Pruvo JP, Manouvriez D, Leclerc X, Lopes R. A three-dimensional deep learning model for inter-site harmonization of structural MR images of the brain: Extensive validation with a multicenter dataset. Heliyon 2023;9:e22647. PMID: 38107313. PMCID: PMC10724680. DOI: 10.1016/j.heliyon.2023.e22647.
Abstract
In multicenter MRI studies, pooling the imaging data can introduce site-related variabilities and can therefore bias the subsequent analyses. To harmonize the intensity distributions of brain MR images in a multicenter dataset, unsupervised deep learning methods can be employed. Here, we developed a model based on cycle-consistent adversarial networks for the harmonization of T1-weighted brain MR images. In contrast to previous works, it was designed to process three-dimensional whole-brain images in a stable manner while optimizing computation resources. Using six different MRI datasets for healthy adults (n=1525 in total) with different acquisition parameters, we tested the model in (i) three pairwise harmonizations with site effects of various sizes, (ii) an overall harmonization of the six datasets with different age distributions, and (iii) a traveling-subject dataset. Our results for intensity distributions, brain volumes, image quality metrics and radiomic features indicated that the MRI characteristics at the various sites had been effectively homogenized. Next, brain age prediction experiments and the observed correlation between gray-matter volume and age showed that, thanks to an appropriate training strategy and despite biological differences between the dataset populations, the model reinforced biological patterns. Furthermore, radiologic analyses of the harmonized images attested to the conservation of the radiologic information in the original images. The robustness of the harmonization model (as judged with various datasets and metrics) demonstrates its potential for application in retrospective multicenter studies.
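Deep intensity-harmonization models such as this one are often compared against classical, training-free baselines. One such baseline is quantile (histogram) matching of intensities to a reference image; a numpy sketch of that baseline, not the paper's CycleGAN method (the function name and quantile count are illustrative):

```python
import numpy as np

def match_histogram(source, reference, n_quantiles=256):
    """Map source intensities so their empirical distribution matches
    the reference image's, via piecewise-linear quantile (CDF) matching."""
    qs = np.linspace(0.0, 1.0, n_quantiles)
    src_q = np.quantile(source, qs)      # source quantile values
    ref_q = np.quantile(reference, qs)   # reference quantile values
    # Each source intensity is sent to the reference intensity that
    # occupies the same quantile rank.
    return np.interp(source.ravel(), src_q, ref_q).reshape(source.shape)
```

Unlike the learned model above, this transfer function ignores anatomy entirely, which is precisely the limitation that motivates anatomy-preserving deep harmonization.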
Affiliation(s)
- Vincent Roca
- Univ. Lille, CNRS, Inserm, CHU Lille, Institut Pasteur de Lille, US 41 - UAR 2014 - PLBS, F-59000 Lille, France
- Grégory Kuchcinski
- Univ. Lille, CNRS, Inserm, CHU Lille, Institut Pasteur de Lille, US 41 - UAR 2014 - PLBS, F-59000 Lille, France
- Univ. Lille, Inserm, CHU Lille, U1172 - LilNCog - Lille Neurosciences & Cognition, F-59000 Lille, France
- CHU Lille, Department of Neuroradiology, F-59000 Lille, France
- Jean-Pierre Pruvo
- Univ. Lille, CNRS, Inserm, CHU Lille, Institut Pasteur de Lille, US 41 - UAR 2014 - PLBS, F-59000 Lille, France
- Univ. Lille, Inserm, CHU Lille, U1172 - LilNCog - Lille Neurosciences & Cognition, F-59000 Lille, France
- CHU Lille, Department of Neuroradiology, F-59000 Lille, France
- Dorian Manouvriez
- Univ. Lille, CNRS, Inserm, CHU Lille, Institut Pasteur de Lille, US 41 - UAR 2014 - PLBS, F-59000 Lille, France
- Xavier Leclerc
- Univ. Lille, CNRS, Inserm, CHU Lille, Institut Pasteur de Lille, US 41 - UAR 2014 - PLBS, F-59000 Lille, France
- Univ. Lille, Inserm, CHU Lille, U1172 - LilNCog - Lille Neurosciences & Cognition, F-59000 Lille, France
- CHU Lille, Department of Neuroradiology, F-59000 Lille, France
- Renaud Lopes
- Univ. Lille, CNRS, Inserm, CHU Lille, Institut Pasteur de Lille, US 41 - UAR 2014 - PLBS, F-59000 Lille, France
- Univ. Lille, Inserm, CHU Lille, U1172 - LilNCog - Lille Neurosciences & Cognition, F-59000 Lille, France
10.
Zuo L, Liu Y, Xue Y, Dewey BE, Remedios SW, Hays SP, Bilgel M, Mowry EM, Newsome SD, Calabresi PA, Resnick SM, Prince JL, Carass A. HACA3: A unified approach for multi-site MR image harmonization. Comput Med Imaging Graph 2023;109:102285. PMID: 37657151. PMCID: PMC10592042. DOI: 10.1016/j.compmedimag.2023.102285.
Abstract
The lack of standardization and consistency of acquisition is a prominent issue in magnetic resonance (MR) imaging. This often causes undesired contrast variations in the acquired images due to differences in hardware and acquisition parameters. In recent years, image synthesis-based MR harmonization with disentanglement has been proposed to compensate for the undesired contrast variations. The general idea is to disentangle anatomy and contrast information from MR images to achieve cross-site harmonization. Despite the success of existing methods, we argue that major improvements can be made from three aspects. First, most existing methods are built upon the assumption that multi-contrast MR images of the same subject share the same anatomy. This assumption is questionable, since different MR contrasts are specialized to highlight different anatomical features. Second, these methods often require a fixed set of MR contrasts for training (e.g., both T1-weighted and T2-weighted images), limiting their applicability. Lastly, existing methods are generally sensitive to imaging artifacts. In this paper, we present Harmonization with Attention-based Contrast, Anatomy, and Artifact Awareness (HACA3), a novel approach to address these three issues. HACA3 incorporates an anatomy fusion module that accounts for the inherent anatomical differences between MR contrasts. Furthermore, HACA3 can be trained and applied to any combination of MR contrasts and is robust to imaging artifacts. HACA3 is developed and evaluated on diverse MR datasets acquired from 21 sites with varying field strengths, scanner platforms, and acquisition protocols. Experiments show that HACA3 achieves state-of-the-art harmonization performance under multiple image quality metrics. We also demonstrate the versatility and potential clinical impact of HACA3 on downstream tasks including white matter lesion segmentation for people with multiple sclerosis and longitudinal volumetric analyses for normal aging subjects. Code is available at https://github.com/lianruizuo/haca3.
Affiliation(s)
- Lianrui Zuo
- Department of Electrical and Computer Engineering, Johns Hopkins University, Baltimore, MD 21218, USA; Laboratory of Behavioral Neuroscience, National Institute on Aging, National Institutes of Health, Baltimore, MD 21224, USA.
- Yihao Liu
- Department of Electrical and Computer Engineering, Johns Hopkins University, Baltimore, MD 21218, USA
- Yuan Xue
- Department of Electrical and Computer Engineering, Johns Hopkins University, Baltimore, MD 21218, USA
- Blake E Dewey
- Department of Neurology, Johns Hopkins School of Medicine, Baltimore, MD 21287, USA
- Samuel W Remedios
- Department of Computer Science, Johns Hopkins University, Baltimore, MD 21218, USA; Radiology and Imaging Sciences, Clinical Center, National Institutes of Health, Bethesda, MD 20892, USA
- Savannah P Hays
- Department of Electrical and Computer Engineering, Johns Hopkins University, Baltimore, MD 21218, USA
- Murat Bilgel
- Laboratory of Behavioral Neuroscience, National Institute on Aging, National Institutes of Health, Baltimore, MD 21224, USA
- Ellen M Mowry
- Department of Neurology, Johns Hopkins School of Medicine, Baltimore, MD 21287, USA
- Scott D Newsome
- Department of Neurology, Johns Hopkins School of Medicine, Baltimore, MD 21287, USA
- Peter A Calabresi
- Department of Neurology, Johns Hopkins School of Medicine, Baltimore, MD 21287, USA
- Susan M Resnick
- Laboratory of Behavioral Neuroscience, National Institute on Aging, National Institutes of Health, Baltimore, MD 21224, USA
- Jerry L Prince
- Department of Electrical and Computer Engineering, Johns Hopkins University, Baltimore, MD 21218, USA
- Aaron Carass
- Department of Electrical and Computer Engineering, Johns Hopkins University, Baltimore, MD 21218, USA
11
Liu M, Zhu AH, Maiti P, Thomopoulos SI, Gadewar S, Chai Y, Kim H, Jahanshad N. Style transfer generative adversarial networks to harmonize multisite MRI to a single reference image to avoid overcorrection. Hum Brain Mapp 2023; 44:4875-4892. [PMID: 37471702 PMCID: PMC10472922 DOI: 10.1002/hbm.26422] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/04/2023] [Revised: 05/30/2023] [Accepted: 06/25/2023] [Indexed: 07/22/2023] Open
Abstract
Recent work within neuroimaging consortia has aimed to identify reproducible, and often subtle, brain signatures of psychiatric or neurological conditions. To allow for high-powered brain imaging analyses, it is often necessary to pool MR images that were acquired with different protocols across multiple scanners. Current retrospective harmonization techniques have shown promise in removing site-related image variation. However, most statistical approaches may over-correct for technical, scanning-related variation because they cannot distinguish between confounded image-acquisition-based variability and site-related population variability. Such statistical methods often require that datasets contain subjects or patient groups with similar clinical or demographic information to isolate the acquisition-based variability. To overcome this limitation, we consider site-related magnetic resonance (MR) imaging harmonization as a style transfer problem rather than a domain transfer problem. Using a fully unsupervised deep-learning framework based on a generative adversarial network (GAN), we show that MR images can be harmonized by inserting the style information encoded from a single reference image, without knowing their site/scanner labels a priori. We trained our model using data from five large-scale multisite datasets with varied demographics. Results demonstrated that our style-encoding model can harmonize MR images and match intensity profiles without relying on traveling subjects. This model also avoids the need to control for clinical, diagnostic, or demographic information. We highlight the effectiveness of our method for clinical research by comparing extracted cortical and subcortical features, brain-age estimates, and case-control effect sizes before and after harmonization. We showed that our harmonization removed the site-related variances while preserving the anatomical information and clinically meaningful patterns.
We further demonstrated that with a diverse training set, our method successfully harmonized MR images collected from unseen scanners and protocols, suggesting a promising tool for ongoing collaborative studies. Source code is released in USC-IGC/style_transfer_harmonization (github.com).
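The style-encoding step described above can be illustrated with the feature-statistics transfer used in AdaIN-style networks: re-normalize source values so their mean and standard deviation match those of a reference. The sketch below is a toy 1-D version on raw intensities (all function names are illustrative; the actual model applies this idea to learned GAN features, not raw voxels):

```python
import math

def style_stats(values):
    """Mean and (population) standard deviation of a list of intensities."""
    n = len(values)
    mean = sum(values) / n
    var = sum((v - mean) ** 2 for v in values) / n
    return mean, math.sqrt(var)

def transfer_style(content, reference):
    """Shift/scale `content` so its mean and std match `reference` --
    the core operation behind AdaIN-style style transfer."""
    c_mean, c_std = style_stats(content)
    r_mean, r_std = style_stats(reference)
    scale = r_std / c_std if c_std > 0 else 0.0
    return [(v - c_mean) * scale + r_mean for v in content]

source = [10.0, 12.0, 14.0, 16.0]         # intensities from one scanner
reference = [100.0, 110.0, 120.0, 130.0]  # "style" of a reference scan
harmonized = transfer_style(source, reference)
```

After the transfer, the harmonized values carry the reference's intensity statistics while the source's rank ordering, the stand-in for anatomy here, is untouched.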
Affiliation(s)
- Mengting Liu
- School of Biomedical Engineering, Sun Yat-sen University, Shenzhen, China
- USC Mark and Mary Stevens Neuroimaging and Informatics Institute, Keck School of Medicine of USC, University of Southern California, Los Angeles, California, USA
- Alyssa H. Zhu
- USC Mark and Mary Stevens Neuroimaging and Informatics Institute, Keck School of Medicine of USC, University of Southern California, Los Angeles, California, USA
- Piyush Maiti
- USC Mark and Mary Stevens Neuroimaging and Informatics Institute, Keck School of Medicine of USC, University of Southern California, Los Angeles, California, USA
- Sophia I. Thomopoulos
- USC Mark and Mary Stevens Neuroimaging and Informatics Institute, Keck School of Medicine of USC, University of Southern California, Los Angeles, California, USA
- Shruti Gadewar
- USC Mark and Mary Stevens Neuroimaging and Informatics Institute, Keck School of Medicine of USC, University of Southern California, Los Angeles, California, USA
- Yaqiong Chai
- USC Mark and Mary Stevens Neuroimaging and Informatics Institute, Keck School of Medicine of USC, University of Southern California, Los Angeles, California, USA
- Hosung Kim
- USC Mark and Mary Stevens Neuroimaging and Informatics Institute, Keck School of Medicine of USC, University of Southern California, Los Angeles, California, USA
- Neda Jahanshad
- USC Mark and Mary Stevens Neuroimaging and Informatics Institute, Keck School of Medicine of USC, University of Southern California, Los Angeles, California, USA
12
Torbati ME, Minhas DS, Laymon CM, Maillard P, Wilson JD, Chen CL, Crainiceanu CM, DeCarli CS, Hwang SJ, Tudorascu DL. MISPEL: A supervised deep learning harmonization method for multi-scanner neuroimaging data. Med Image Anal 2023; 89:102926. [PMID: 37595405 PMCID: PMC10529705 DOI: 10.1016/j.media.2023.102926] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/27/2022] [Revised: 06/06/2023] [Accepted: 08/03/2023] [Indexed: 08/20/2023]
Abstract
Large-scale data obtained by aggregating already collected multi-site neuroimaging datasets have brought benefits such as higher statistical power, reliability, and robustness to the studies. Despite these promises from growth in sample size, substantial technical variability stemming from differences in scanner specifications exists in the aggregated data and could inadvertently bias any downstream analyses performed on it. Such a challenge calls for data normalization and/or harmonization frameworks, in addition to comprehensive criteria to estimate the scanner-related variability and evaluate the harmonization frameworks. In this study, we propose MISPEL (Multi-scanner Image harmonization via Structure Preserving Embedding Learning), a supervised multi-scanner harmonization method that is naturally extendable to more than two scanners. We also designed a set of criteria to investigate the scanner-related technical variability and evaluate the harmonization techniques. As an essential requirement of our criteria, we introduced a multi-scanner matched dataset of 3T T1 images across four scanners, which, to the best of our knowledge, is one of the few datasets of this kind. We also conducted our evaluations using two popular segmentation frameworks: FSL and the segmentation in statistical parametric mapping (SPM). Lastly, we compared MISPEL to popular methods of normalization and harmonization, namely White Stripe, RAVEL, and CALAMITI. MISPEL outperformed these methods and is promising for many other neuroimaging modalities.
Affiliation(s)
- Davneet S Minhas
- Department of Radiology, University of Pittsburgh School of Medicine, Pittsburgh, PA 15213, USA
- Charles M Laymon
- Department of Radiology, University of Pittsburgh School of Medicine, Pittsburgh, PA 15213, USA; Department of Bioengineering, University of Pittsburgh, Pittsburgh, PA 15213, USA
- Pauline Maillard
- Department of Neurology, University of California Davis, Davis, CA 95816, USA
- James D Wilson
- Department of Psychiatry, University of Pittsburgh School of Medicine, Pittsburgh, PA 15213, USA
- Chang-Le Chen
- Department of Bioengineering, University of Pittsburgh, Pittsburgh, PA 15213, USA
- Ciprian M Crainiceanu
- Department of Biostatistics, Johns Hopkins Bloomberg School of Public Health, Baltimore, MD 21205, USA
- Charles S DeCarli
- Department of Neurology, University of California Davis, Davis, CA 95816, USA
- Seong Jae Hwang
- Department of Artificial Intelligence, Yonsei University, Seoul, South Korea
- Dana L Tudorascu
- Department of Psychiatry, University of Pittsburgh, Pittsburgh, PA 15213, USA; Department of Psychiatry, University of Pittsburgh School of Medicine, Pittsburgh, PA 15213, USA; Department of Biostatistics, University of Pittsburgh, Pittsburgh, PA 15213, USA.
13
Reynolds M, Chaudhary T, Eshaghzadeh Torbati M, Tudorascu DL, Batmanghelich K. ComBat Harmonization: Empirical Bayes versus fully Bayes approaches. Neuroimage Clin 2023; 39:103472. [PMID: 37506457 PMCID: PMC10412957 DOI: 10.1016/j.nicl.2023.103472] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/14/2023] [Revised: 07/05/2023] [Accepted: 07/06/2023] [Indexed: 07/30/2023]
Abstract
Studying small effects or subtle neuroanatomical variation requires large sample sizes. As a result, combining neuroimaging data from multiple datasets is necessary. Variation in acquisition protocols, magnetic field strength, scanner build, and many other non-biological factors can introduce undesirable bias into studies. Hence, harmonization is required to remove the bias-inducing factors from the data. ComBat is one of the most common methods applied to features from structural images. ComBat models the data using a hierarchical Bayesian model and uses the empirical Bayes approach to infer the distribution of the unknown factors. The empirical Bayes harmonization method is computationally efficient and provides valid point estimates. However, it tends to underestimate uncertainty. This paper investigates a new approach, fully Bayesian ComBat, where Monte Carlo sampling is used for statistical inference. When comparing fully Bayesian and empirical Bayesian ComBat, we found that empirical Bayesian ComBat more effectively removed scanner strength information and was much more computationally efficient. Conversely, fully Bayesian ComBat better preserved biological disease- and age-related information while performing more accurate harmonization on traveling subjects. The fully Bayesian approach generates a rich posterior distribution, which is useful for generating simulated imaging features to improve classifier performance in a limited data setting. We show the generative capacity of our model for augmenting and improving the detection of patients with Alzheimer's disease. Posterior distributions for harmonized imaging measures can also be used for brain-wide uncertainty comparison and more principled downstream statistical analysis. Code for our new fully Bayesian ComBat extension is available at https://github.com/batmanlab/BayesComBat.
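For intuition, the ComBat adjustment of a single feature can be sketched as a per-site location/scale correction. The sketch below is a deliberate simplification: it uses plain moment estimates for the site effects, with no covariate model and none of the empirical-Bayes (or fully Bayes) shrinkage that the paper compares:

```python
import math

def combat_like_adjust(values_by_site):
    """Toy location/scale harmonization of one imaging feature.
    Each site's additive (location) and multiplicative (scale) effect
    is estimated by simple moments, then removed."""
    pooled = [v for vals in values_by_site.values() for v in vals]
    grand_mean = sum(pooled) / len(pooled)
    adjusted = {}
    for site, vals in values_by_site.items():
        site_mean = sum(vals) / len(vals)
        site_sd = math.sqrt(sum((v - site_mean) ** 2 for v in vals) / len(vals))
        # remove the site's location and scale effect, re-center at the grand mean
        adjusted[site] = [(v - site_mean) / site_sd + grand_mean for v in vals]
    return adjusted

# toy cortical thickness values (mm) for one feature measured at two sites
data = {"siteA": [2.9, 3.0, 3.1], "siteB": [3.4, 3.6, 3.8]}
out = combat_like_adjust(data)  # both sites now share the same mean
```

Real ComBat additionally regresses out biological covariates before standardization and shrinks the per-site estimates toward a shared prior; that inference step is exactly where the empirical-versus-fully-Bayes choice studied above enters.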
Affiliation(s)
- Maxwell Reynolds
- Department of Biomedical Informatics, University of Pittsburgh School of Medicine, 5607 Baum Blvd. Suite 500, Pittsburgh, PA 15206, USA.
- Tigmanshu Chaudhary
- Department of Biomedical Informatics, University of Pittsburgh School of Medicine, 5607 Baum Blvd. Suite 500, Pittsburgh, PA 15206, USA.
- Mahbaneh Eshaghzadeh Torbati
- Intelligent System Program, University of Pittsburgh School of Computing and Information, 210 South Bouquet Street, Pittsburgh, PA 15260, USA.
- Dana L Tudorascu
- Department of Psychiatry, University of Pittsburgh School of Medicine, 3811 O'Hara Street, Pittsburgh, PA 15213, USA; Department of Biostatistics, University of Pittsburgh, 130 De Soto Street, Pittsburgh, PA 15213, USA.
- Kayhan Batmanghelich
- Department of Biomedical Informatics, University of Pittsburgh School of Medicine, 5607 Baum Blvd. Suite 500, Pittsburgh, PA 15206, USA.
14
Hu F, Chen AA, Horng H, Bashyam V, Davatzikos C, Alexander-Bloch A, Li M, Shou H, Satterthwaite TD, Yu M, Shinohara RT. Image harmonization: A review of statistical and deep learning methods for removing batch effects and evaluation metrics for effective harmonization. Neuroimage 2023; 274:120125. [PMID: 37084926 PMCID: PMC10257347 DOI: 10.1016/j.neuroimage.2023.120125] [Citation(s) in RCA: 13] [Impact Index Per Article: 13.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/14/2023] [Revised: 04/12/2023] [Accepted: 04/19/2023] [Indexed: 04/23/2023] Open
Abstract
Magnetic resonance imaging and computed tomography from multiple batches (e.g. sites, scanners, datasets, etc.) are increasingly used alongside complex downstream analyses to obtain new insights into the human brain. However, significant confounding due to batch-related technical variation, called batch effects, is present in these data; direct application of downstream analyses to the data may lead to biased results. Image harmonization methods seek to remove these batch effects and enable increased generalizability and reproducibility of downstream results. In this review, we describe and categorize current approaches in statistical and deep learning harmonization methods. We also describe current evaluation metrics used to assess harmonization methods and provide a standardized framework to evaluate newly proposed methods for effective harmonization and preservation of biological information. Finally, we provide recommendations to end-users to advocate for more effective use of current methods and to methodologists to direct future efforts and accelerate development of the field.
Affiliation(s)
- Fengling Hu
- Penn Statistics in Imaging and Visualization Endeavor (PennSIVE), Department of Biostatistics, Epidemiology, and Informatics, Perelman School of Medicine, University of Pennsylvania, 423 Guardian Dr, Philadelphia, PA 19104, United States.
- Andrew A Chen
- Penn Statistics in Imaging and Visualization Endeavor (PennSIVE), Department of Biostatistics, Epidemiology, and Informatics, Perelman School of Medicine, University of Pennsylvania, 423 Guardian Dr, Philadelphia, PA 19104, United States
- Hannah Horng
- Penn Statistics in Imaging and Visualization Endeavor (PennSIVE), Department of Biostatistics, Epidemiology, and Informatics, Perelman School of Medicine, University of Pennsylvania, 423 Guardian Dr, Philadelphia, PA 19104, United States
- Vishnu Bashyam
- Center for Biomedical Image Computing and Analytics (CBICA), Perelman School of Medicine, United States
- Christos Davatzikos
- Center for Biomedical Image Computing and Analytics (CBICA), Perelman School of Medicine, United States
- Aaron Alexander-Bloch
- Department of Psychiatry, Perelman School of Medicine, University of Pennsylvania, United States; Penn-CHOP Lifespan Brain Institute, United States; Department of Child and Adolescent Psychiatry and Behavioral Science, Children's Hospital of Philadelphia, United States
- Mingyao Li
- Statistical Center for Single-Cell and Spatial Genomics, Perelman School of Medicine, University of Pennsylvania, United States
- Haochang Shou
- Penn Statistics in Imaging and Visualization Endeavor (PennSIVE), Department of Biostatistics, Epidemiology, and Informatics, Perelman School of Medicine, University of Pennsylvania, 423 Guardian Dr, Philadelphia, PA 19104, United States; Center for Biomedical Image Computing and Analytics (CBICA), Perelman School of Medicine, United States
- Theodore D Satterthwaite
- Department of Psychiatry, Perelman School of Medicine, University of Pennsylvania, United States; Penn-CHOP Lifespan Brain Institute, United States; The Penn Lifespan Informatics and Neuroimaging Center, Department of Psychiatry, Perelman School of Medicine, University of Pennsylvania, United States
- Meichen Yu
- Indiana Alzheimer's Disease Research Center, Indiana University School of Medicine, United States
- Russell T Shinohara
- Penn Statistics in Imaging and Visualization Endeavor (PennSIVE), Department of Biostatistics, Epidemiology, and Informatics, Perelman School of Medicine, University of Pennsylvania, 423 Guardian Dr, Philadelphia, PA 19104, United States; Center for Biomedical Image Computing and Analytics (CBICA), Perelman School of Medicine, United States
15
Sanati S, Rouhani M, Hodtani GA. Information-theoretic analysis of Hierarchical Temporal Memory-Spatial Pooler algorithm with a new upper bound for the standard information bottleneck method. Front Comput Neurosci 2023; 17:1140782. [PMID: 37351534 PMCID: PMC10282945 DOI: 10.3389/fncom.2023.1140782] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/09/2023] [Accepted: 05/17/2023] [Indexed: 06/24/2023] Open
Abstract
Hierarchical Temporal Memory (HTM) is an unsupervised machine learning algorithm that models several fundamental neocortical computational principles. The Spatial Pooler (SP) is one of the main components of HTM; it continuously encodes streams of binary input from various layers and regions into sparse distributed representations. In this paper, the goal is to evaluate the sparsification in the SP algorithm from the perspective of information theory, using the information bottleneck (IB), the Cramer-Rao lower bound, and the Fisher information matrix. This paper makes two main contributions. First, we introduce a new upper bound for the standard information bottleneck relation, which we refer to as the modified-IB. This measure is used to evaluate the performance of the SP algorithm at different sparsity levels and under various amounts of noise. The MNIST, Fashion-MNIST, and NYC-Taxi datasets were fed to the SP algorithm separately. The SP algorithm with learning was found to be resistant to noise: adding up to 40% noise to the input resulted in no discernible change in the output. Using the probabilistic mapping method and a Hidden Markov Model, the sparse SP output representation was reconstructed in the input space. With the modified-IB relation, we numerically show that a lower noise level and a higher sparsity level in the SP algorithm lead to a more effective reconstruction, with 2% sparsity producing the best results. Our second contribution is to prove mathematically that more sparsity leads to better performance of the SP algorithm. The data distribution was assumed to be Cauchy, and the Cramer-Rao lower bound was analyzed to estimate the SP output at different sparsity levels.
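The sparsification being evaluated can be reduced, for illustration, to a k-winners-take-all step over column overlap scores: at 2% sparsity (the level found optimal above), only the top 2% of columns become active. A hypothetical sketch follows; the real SP additionally learns synapse permanences and applies boosting:

```python
def spatial_pooler_sparsify(overlaps, sparsity=0.02):
    """k-winners-take-all: keep only the top `sparsity` fraction of
    columns active, yielding a sparse binary representation (SDR)."""
    n = len(overlaps)
    k = max(1, int(round(n * sparsity)))
    # indices of the k columns with the largest overlap scores
    winners = set(sorted(range(n), key=lambda i: overlaps[i], reverse=True)[:k])
    return [1 if i in winners else 0 for i in range(n)]

scores = [i % 7 for i in range(100)]  # toy overlap scores for 100 columns
sdr = spatial_pooler_sparsify(scores, sparsity=0.02)  # exactly 2 active bits
```

Because Python's sort is stable, ties among equal overlap scores are broken by column index, so the result is deterministic.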
Affiliation(s)
- Shiva Sanati
- Department of Computer Engineering, Ferdowsi University of Mashhad, Mashhad, Iran
- Modjtaba Rouhani
- Department of Computer Engineering, Ferdowsi University of Mashhad, Mashhad, Iran
- Ghosheh Abed Hodtani
- Department of Electrical Engineering, Ferdowsi University of Mashhad, Mashhad, Iran
16
Hu F, Lucas A, Chen AA, Coleman K, Horng H, Ng RW, Tustison NJ, Davis KA, Shou H, Li M, Shinohara RT. DeepComBat: A Statistically Motivated, Hyperparameter-Robust, Deep Learning Approach to Harmonization of Neuroimaging Data. BIORXIV : THE PREPRINT SERVER FOR BIOLOGY 2023:2023.04.24.537396. [PMID: 37163042 PMCID: PMC10168207 DOI: 10.1101/2023.04.24.537396] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 05/11/2023]
Abstract
Neuroimaging data from multiple batches (i.e. acquisition sites, scanner manufacturers, datasets, etc.) are increasingly necessary to gain new insights into the human brain. However, multi-batch data, as well as extracted radiomic features, exhibit pronounced technical artifacts across batches. These batch effects introduce confounding into the data and can obscure biological effects of interest, decreasing the generalizability and reproducibility of findings. This is especially true when multi-batch data are used alongside complex downstream analysis models, such as machine learning methods. Image harmonization methods seeking to remove these batch effects are important for mitigating these issues; however, significant multivariate batch effects remain in the data following harmonization by current state-of-the-art statistical and deep learning methods. We present DeepComBat, a deep learning harmonization method based on a conditional variational autoencoder architecture and the ComBat harmonization model. DeepComBat learns and removes subject-level batch effects by accounting for the multivariate relationships between features. Additionally, DeepComBat relaxes a number of strong assumptions commonly made by previous deep learning harmonization methods and is empirically robust across a wide range of hyperparameter choices. We apply this method to neuroimaging data from a large cognitive-aging cohort and find that DeepComBat outperforms existing methods, as assessed by a battery of machine learning methods, in removing scanner effects from cortical thickness measurements while preserving biological heterogeneity. Additionally, DeepComBat provides a new perspective for statistically motivated deep learning harmonization methods.
Affiliation(s)
- Fengling Hu
- Penn Statistics in Imaging and Visualization Endeavor (PennSIVE), Department of Biostatistics, Epidemiology, and Informatics, Perelman School of Medicine, University of Pennsylvania
- Alfredo Lucas
- Center for Neuroengineering and Therapeutics, Department of Engineering, University of Pennsylvania
- Andrew A. Chen
- Penn Statistics in Imaging and Visualization Endeavor (PennSIVE), Department of Biostatistics, Epidemiology, and Informatics, Perelman School of Medicine, University of Pennsylvania
- Kyle Coleman
- Statistical Center for Single-Cell and Spatial Genomics, Perelman School of Medicine, University of Pennsylvania
- Hannah Horng
- Penn Statistics in Imaging and Visualization Endeavor (PennSIVE), Department of Biostatistics, Epidemiology, and Informatics, Perelman School of Medicine, University of Pennsylvania
- Kathryn A. Davis
- Center for Neuroengineering and Therapeutics, Department of Engineering, University of Pennsylvania
- Department of Neurology, Perelman School of Medicine, University of Pennsylvania
- Haochang Shou
- Penn Statistics in Imaging and Visualization Endeavor (PennSIVE), Department of Biostatistics, Epidemiology, and Informatics, Perelman School of Medicine, University of Pennsylvania
- Center for Biomedical Image Computing and Analytics (CBICA), Perelman School of Medicine
- Mingyao Li
- Statistical Center for Single-Cell and Spatial Genomics, Perelman School of Medicine, University of Pennsylvania
- Russell T. Shinohara
- Penn Statistics in Imaging and Visualization Endeavor (PennSIVE), Department of Biostatistics, Epidemiology, and Informatics, Perelman School of Medicine, University of Pennsylvania
- Center for Biomedical Image Computing and Analytics (CBICA), Perelman School of Medicine
17
Gebre RK, Senjem ML, Raghavan S, Schwarz CG, Gunter JL, Hofrenning EI, Reid RI, Kantarci K, Graff-Radford J, Knopman DS, Petersen RC, Jack CR, Vemuri P. Cross-scanner harmonization methods for structural MRI may need further work: A comparison study. Neuroimage 2023; 269:119912. [PMID: 36731814 PMCID: PMC10170652 DOI: 10.1016/j.neuroimage.2023.119912] [Citation(s) in RCA: 5] [Impact Index Per Article: 5.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/10/2022] [Revised: 01/26/2023] [Accepted: 01/28/2023] [Indexed: 02/01/2023] Open
Abstract
The clinical usefulness of MRI biomarkers for aging and dementia studies relies on precise brain morphological measurements; however, scanner and/or protocol variations may introduce noise or bias. One approach to address this is post-acquisition scan harmonization. In this work, we evaluate deep learning (neural style transfer (NST), CycleGAN, and conditional GAN (CGAN)), histogram matching (HM), and statistical (ComBat and LongComBat) methods. Participants who had been scanned on both GE and Siemens scanners were used: cross-sectional participants, referred to as Crossover (n = 113), and participants scanned longitudinally on both scanners (n = 454). The goal was to match GE MPRAGE (T1-weighted) scans to Siemens improved-resolution MPRAGE scans. Harmonization was performed on raw native and preprocessed (resampled, affine-transformed to template space) scans. Cortical thicknesses were measured using FreeSurfer (v.7.1.1). Distributions were checked using Kolmogorov-Smirnov tests. Intra-class correlation (ICC) was used to assess the degree of agreement in the Crossover datasets, and annualized percent change in cortical thickness was calculated to evaluate the Longitudinal datasets. Prior to harmonization, the least agreement was found at the frontal pole (ICC = 0.72) for the raw native scans, and at the caudal anterior cingulate (0.76) and frontal pole (0.54) for the preprocessed scans. Harmonization with NST, CycleGAN, and HM improved the ICCs of the preprocessed scans at the caudal anterior cingulate (>0.81) and frontal pole (>0.67). In the Longitudinal raw native scans, over- and under-estimation of cortical thickness was observed due to the change of scanner. ComBat matched the cortical thickness distributions throughout but was not able to increase the ICCs or remove the effects of scanner changeover in the Longitudinal datasets. CycleGAN and NST performed slightly better at addressing the cortical thickness variations between scanner changes. However, none of the methods succeeded in harmonizing the Longitudinal dataset, and CGAN was the worst performer for both datasets. In conclusion, the performance of the methods was overall similar and region dependent. Future research is needed to improve the existing approaches, since none of them outperformed the others across all ROIs. The findings of this study establish a framework for future research into the scan harmonization problem.
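Of the compared methods, histogram matching (HM) is the easiest to make concrete: map the source scan's intensities onto the reference distribution by rank. This toy sketch assumes equal-length intensity lists; practical implementations instead interpolate between the two cumulative distribution functions:

```python
def histogram_match(source, reference):
    """Replace the i-th smallest source intensity with the i-th smallest
    reference intensity (rank-based quantile mapping)."""
    assert len(source) == len(reference)
    order = sorted(range(len(source)), key=lambda i: source[i])
    ref_sorted = sorted(reference)
    out = [0.0] * len(source)
    for rank, idx in enumerate(order):
        out[idx] = ref_sorted[rank]
    return out

ge_scan = [5.0, 1.0, 3.0]          # toy "GE" intensities
siemens_ref = [40.0, 20.0, 30.0]   # toy "Siemens" reference intensities
matched = histogram_match(ge_scan, siemens_ref)  # -> [40.0, 20.0, 30.0]
```

The matched output has exactly the reference intensity distribution, while the spatial (here, positional) ordering of the source is preserved.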
Affiliation(s)
- Robel K Gebre
- Department of Radiology, Mayo Clinic, Rochester, MN 55905, USA.
- Matthew L Senjem
- Department of Radiology, Mayo Clinic, Rochester, MN 55905, USA; Department of Information Technology, Mayo Clinic, Rochester, MN 55905, USA
- Robert I Reid
- Department of Information Technology, Mayo Clinic, Rochester, MN 55905, USA
- Kejal Kantarci
- Department of Radiology, Mayo Clinic, Rochester, MN 55905, USA
- David S Knopman
- Department of Neurology, Mayo Clinic, Rochester, MN 55905, USA
- Clifford R Jack
- Department of Radiology, Mayo Clinic, Rochester, MN 55905, USA
18
Wang R, Bashyam V, Yang Z, Yu F, Tassopoulou V, Chintapalli SS, Skampardoni I, Sreepada LP, Sahoo D, Nikita K, Abdulkadir A, Wen J, Davatzikos C. Applications of generative adversarial networks in neuroimaging and clinical neuroscience. Neuroimage 2023; 269:119898. [PMID: 36702211 PMCID: PMC9992336 DOI: 10.1016/j.neuroimage.2023.119898] [Citation(s) in RCA: 5] [Impact Index Per Article: 5.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/15/2022] [Revised: 12/16/2022] [Accepted: 01/21/2023] [Indexed: 01/25/2023] Open
Abstract
Generative adversarial networks (GANs) are a powerful class of deep learning models that have been successfully utilized in numerous fields. They belong to the broader family of generative methods, which learn to generate realistic data with a probabilistic model by learning distributions from real samples. In the clinical context, GANs have shown enhanced capabilities in capturing spatially complex, nonlinear, and potentially subtle disease effects compared to traditional generative methods. This review critically appraises the existing literature on the applications of GANs in imaging studies of various neurological conditions, including Alzheimer's disease, brain tumors, brain aging, and multiple sclerosis. We provide an intuitive explanation of various GAN methods for each application and further discuss the main challenges, open questions, and promising future directions of leveraging GANs in neuroimaging. We aim to bridge the gap between advanced deep learning methods and neurology research by highlighting how GANs can be leveraged to support clinical decision making and contribute to a better understanding of the structural and functional patterns of brain diseases.
Affiliation(s)
- Rongguang Wang
- Center for AI and Data Science for Integrated Diagnostics, University of Pennsylvania, Philadelphia, USA; Center for Biomedical Image Computing and Analytics, University of Pennsylvania, Philadelphia, USA.
- Vishnu Bashyam
- Center for AI and Data Science for Integrated Diagnostics, University of Pennsylvania, Philadelphia, USA; Center for Biomedical Image Computing and Analytics, University of Pennsylvania, Philadelphia, USA
- Zhijian Yang
- Center for AI and Data Science for Integrated Diagnostics, University of Pennsylvania, Philadelphia, USA; Center for Biomedical Image Computing and Analytics, University of Pennsylvania, Philadelphia, USA
- Fanyang Yu
- Center for AI and Data Science for Integrated Diagnostics, University of Pennsylvania, Philadelphia, USA; Center for Biomedical Image Computing and Analytics, University of Pennsylvania, Philadelphia, USA
- Vasiliki Tassopoulou
- Center for AI and Data Science for Integrated Diagnostics, University of Pennsylvania, Philadelphia, USA; Center for Biomedical Image Computing and Analytics, University of Pennsylvania, Philadelphia, USA
- Sai Spandana Chintapalli
- Center for AI and Data Science for Integrated Diagnostics, University of Pennsylvania, Philadelphia, USA; Center for Biomedical Image Computing and Analytics, University of Pennsylvania, Philadelphia, USA
- Ioanna Skampardoni
- Center for AI and Data Science for Integrated Diagnostics, University of Pennsylvania, Philadelphia, USA; Center for Biomedical Image Computing and Analytics, University of Pennsylvania, Philadelphia, USA; School of Electrical and Computer Engineering, National Technical University of Athens, Athens, Greece
- Lasya P Sreepada
- Center for AI and Data Science for Integrated Diagnostics, University of Pennsylvania, Philadelphia, USA; Center for Biomedical Image Computing and Analytics, University of Pennsylvania, Philadelphia, USA
- Dushyant Sahoo
- Center for AI and Data Science for Integrated Diagnostics, University of Pennsylvania, Philadelphia, USA; Center for Biomedical Image Computing and Analytics, University of Pennsylvania, Philadelphia, USA
- Konstantina Nikita
- School of Electrical and Computer Engineering, National Technical University of Athens, Athens, Greece
- Ahmed Abdulkadir
- Center for AI and Data Science for Integrated Diagnostics, University of Pennsylvania, Philadelphia, USA; Center for Biomedical Image Computing and Analytics, University of Pennsylvania, Philadelphia, USA; Department of Clinical Neurosciences, Lausanne University Hospital and University of Lausanne, Lausanne, Switzerland
- Junhao Wen
- Center for AI and Data Science for Integrated Diagnostics, University of Pennsylvania, Philadelphia, USA; Center for Biomedical Image Computing and Analytics, University of Pennsylvania, Philadelphia, USA
- Christos Davatzikos
- Center for AI and Data Science for Integrated Diagnostics, University of Pennsylvania, Philadelphia, USA; Center for Biomedical Image Computing and Analytics, University of Pennsylvania, Philadelphia, USA; Department of Radiology, Perelman School of Medicine, University of Pennsylvania, Philadelphia, USA.
19
Duan P, Xue Y, Han S, Zuo L, Carass A, Bernhard C, Hays S, Calabresi PA, Resnick SM, Duncan JS, Prince JL. Rapid brain meninges surface reconstruction with layer topology guarantee. Proc IEEE Int Symp Biomed Imaging 2023; 2023:10.1109/isbi53787.2023.10230668. [PMID: 37990735 PMCID: PMC10660710 DOI: 10.1109/isbi53787.2023.10230668] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 11/23/2023]
Abstract
The meninges, located between the skull and brain, are composed of three membrane layers: the pia, the arachnoid, and the dura. Reconstruction of these layers can aid in studying volume differences between patients with neurodegenerative diseases and normal aging subjects. In this work, we use convolutional neural networks (CNNs) to reconstruct surfaces representing meningeal layer boundaries from magnetic resonance (MR) images. We first use the CNNs to predict the signed distance functions (SDFs) representing these surfaces while preserving their anatomical ordering. The marching cubes algorithm is then used to generate continuous surface representations; both the subarachnoid space (SAS) and the intracranial volume (ICV) are computed from these surfaces. The proposed method is compared to a state-of-the-art deformable model-based reconstruction method, and we show that our method reconstructs smoother and more accurate surfaces in less computation time. Finally, we conduct volumetric analyses on both subjects with multiple sclerosis (MS) and healthy controls. For healthy and MS subjects, ICV is significantly correlated with sex (p < 0.01) and SAS volume with age (p ≤ 0.03).
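As a rough illustration of the pipeline this abstract describes (predict SDFs, take their zero level set as the surface, then compute enclosed volumes), the toy sketch below substitutes an analytic sphere SDF for the CNN predictions and estimates the enclosed volume directly from the SDF; the grid size, voxel spacing, and radius are illustrative assumptions, not the authors' setup.

```python
import numpy as np

# Toy stand-in for a predicted signed distance function (SDF):
# negative inside the surface, positive outside. Here: a sphere of
# radius 25 mm on a 64^3 grid with 1 mm isotropic voxels.
n, spacing = 64, 1.0
ax = (np.arange(n) - n / 2) * spacing
x, y, z = np.meshgrid(ax, ax, ax, indexing="ij")
radius = 25.0
sdf = np.sqrt(x**2 + y**2 + z**2) - radius

# The surface is the zero level set of the SDF (what marching cubes
# would triangulate); the enclosed volume can be estimated directly
# by counting voxels with sdf < 0.
voxel_volume = spacing**3
volume_est = np.count_nonzero(sdf < 0) * voxel_volume
volume_true = 4.0 / 3.0 * np.pi * radius**3

print(f"estimated volume: {volume_est:.0f} mm^3")
print(f"analytic volume:  {volume_true:.0f} mm^3")
```

For nested meningeal layers, the same voxel-counting trick applied to each layer's SDF would yield the between-surface volumes (e.g. SAS) by subtraction.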
Affiliation(s)
- Peiyu Duan
- Department of Biomedical Engineering, Johns Hopkins School of Medicine, USA
- Department of Biomedical Engineering, Yale University, USA
- Yuan Xue
- Department of Electrical and Computer Engineering, Johns Hopkins University, USA
- Shuo Han
- Department of Biomedical Engineering, Johns Hopkins School of Medicine, USA
- Lianrui Zuo
- Department of Electrical and Computer Engineering, Johns Hopkins University, USA
- Laboratory of Behavioral Neuroscience, National Institute on Aging, National Institutes of Health, USA
- Aaron Carass
- Department of Electrical and Computer Engineering, Johns Hopkins University, USA
- Caitlyn Bernhard
- Department of Electrical and Computer Engineering, Johns Hopkins University, USA
- Savannah Hays
- Department of Electrical and Computer Engineering, Johns Hopkins University, USA
- Susan M Resnick
- Laboratory of Behavioral Neuroscience, National Institute on Aging, National Institutes of Health, USA
- James S Duncan
- Department of Biomedical Engineering, Yale University, USA
- Jerry L Prince
- Department of Biomedical Engineering, Johns Hopkins School of Medicine, USA
- Department of Electrical and Computer Engineering, Johns Hopkins University, USA
20
Zhou Z, Li H, Srinivasan D, Abdulkadir A, Nasrallah IM, Wen J, Doshi J, Erus G, Mamourian E, Bryan NR, Wolk DA, Beason-Held L, Resnick SM, Satterthwaite TD, Davatzikos C, Shou H, Fan Y. Multiscale functional connectivity patterns of the aging brain learned from harmonized rsfMRI data of the multi-cohort iSTAGING study. Neuroimage 2023; 269:119911. [PMID: 36731813 PMCID: PMC9992322 DOI: 10.1016/j.neuroimage.2023.119911] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/14/2022] [Revised: 01/06/2023] [Accepted: 01/28/2023] [Indexed: 02/03/2023] Open
Abstract
To learn multiscale functional connectivity patterns of the aging brain, we built a brain age prediction model from functional connectivity measures at seven scales on a large fMRI dataset consisting of resting-state fMRI scans of 4186 individuals with a wide age range (22 to 97 years, average 63) from five cohorts. We computed multiscale functional connectivity measures of individual subjects using a personalized functional network computational method, harmonized the functional connectivity measures of subjects from multiple datasets in order to build a functional brain age model, and finally evaluated how the functional brain age gap correlated with cognitive measures of individual subjects. Our study revealed that functional connectivity measures at multiple scales were more informative for brain age prediction than those at any single scale, that data harmonization significantly improved brain age prediction performance, and that harmonization in the tangent space of the functional connectivity measures worked better than harmonization in their original space. Moreover, brain age gap scores of individual subjects derived from the brain age prediction model were significantly correlated with clinical and cognitive measures. Overall, these results demonstrate that multiscale functional connectivity patterns learned from a large-scale multi-site rsfMRI dataset are informative for characterizing the aging brain, and that the derived brain age gap is associated with cognitive and clinical measures.
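The "tangent space" this abstract refers to is the tangent space of the manifold of symmetric positive-definite (SPD) connectivity matrices: a common projection maps each connectivity matrix C to log(R^(-1/2) C R^(-1/2)) at a reference point R, after which Euclidean operations such as harmonization can be applied. A minimal numpy sketch; the random toy matrix and identity reference are illustrative assumptions, not the study's data or exact procedure.

```python
import numpy as np

def spd_logm(mat):
    """Matrix logarithm of a symmetric positive-definite matrix via eigendecomposition."""
    vals, vecs = np.linalg.eigh(mat)
    return (vecs * np.log(vals)) @ vecs.T

def spd_inv_sqrt(mat):
    """Inverse matrix square root of a symmetric positive-definite matrix."""
    vals, vecs = np.linalg.eigh(mat)
    return (vecs * (1.0 / np.sqrt(vals))) @ vecs.T

def to_tangent(conn, ref):
    """Project an SPD connectivity matrix onto the tangent space at `ref`."""
    w = spd_inv_sqrt(ref)
    return spd_logm(w @ conn @ w)

rng = np.random.default_rng(0)
a = rng.normal(size=(30, 4))
conn = a.T @ a / 30 + 0.1 * np.eye(4)   # toy SPD "connectivity" matrix
ref = np.eye(4)                          # illustrative reference point

t = to_tangent(conn, ref)                # symmetric tangent-space vector
```

In practice the reference R would be a group mean of the connectivity matrices, and site-effect removal would be applied to the vectorized tangent matrices.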
Affiliation(s)
- Zhen Zhou
- Center for Biomedical Image Computing and Analytics, Department of Radiology, University of Pennsylvania, Philadelphia, PA, 19104, USA; Department of Radiology, University of Pennsylvania, Philadelphia, PA, 19104, USA.
- Hongming Li
- Center for Biomedical Image Computing and Analytics, Department of Radiology, University of Pennsylvania, Philadelphia, PA, 19104, USA; Department of Radiology, University of Pennsylvania, Philadelphia, PA, 19104, USA
- Dhivya Srinivasan
- Center for Biomedical Image Computing and Analytics, Department of Radiology, University of Pennsylvania, Philadelphia, PA, 19104, USA; Department of Radiology, University of Pennsylvania, Philadelphia, PA, 19104, USA
- Ahmed Abdulkadir
- Center for Biomedical Image Computing and Analytics, Department of Radiology, University of Pennsylvania, Philadelphia, PA, 19104, USA
- Ilya M Nasrallah
- Center for Biomedical Image Computing and Analytics, Department of Radiology, University of Pennsylvania, Philadelphia, PA, 19104, USA; Department of Radiology, University of Pennsylvania, Philadelphia, PA, 19104, USA
- Junhao Wen
- Center for Biomedical Image Computing and Analytics, Department of Radiology, University of Pennsylvania, Philadelphia, PA, 19104, USA; Department of Radiology, University of Pennsylvania, Philadelphia, PA, 19104, USA
- Jimit Doshi
- Center for Biomedical Image Computing and Analytics, Department of Radiology, University of Pennsylvania, Philadelphia, PA, 19104, USA; Department of Radiology, University of Pennsylvania, Philadelphia, PA, 19104, USA
- Guray Erus
- Center for Biomedical Image Computing and Analytics, Department of Radiology, University of Pennsylvania, Philadelphia, PA, 19104, USA; Department of Radiology, University of Pennsylvania, Philadelphia, PA, 19104, USA
- Elizabeth Mamourian
- Center for Biomedical Image Computing and Analytics, Department of Radiology, University of Pennsylvania, Philadelphia, PA, 19104, USA; Department of Radiology, University of Pennsylvania, Philadelphia, PA, 19104, USA
- Nick R Bryan
- Department of Radiology, University of Pennsylvania, Philadelphia, PA, 19104, USA; Department of Diagnostic Medicine, University of Texas at Austin, Austin, TX, 78705, USA
- David A Wolk
- Center for Biomedical Image Computing and Analytics, Department of Radiology, University of Pennsylvania, Philadelphia, PA, 19104, USA; Department of Neurology and Penn Memory Center, University of Pennsylvania, Philadelphia, PA, 19104, USA
- Lori Beason-Held
- Laboratory of Behavioral Neuroscience, National Institute on Aging, Baltimore, MD, 20892, USA
- Susan M Resnick
- Laboratory of Behavioral Neuroscience, National Institute on Aging, Baltimore, MD, 20892, USA
- Theodore D Satterthwaite
- Center for Biomedical Image Computing and Analytics, Department of Radiology, University of Pennsylvania, Philadelphia, PA, 19104, USA; Penn Statistic in Imaging and Visualization Center, Department of Biostatistics, Epidemiology, and Informatics, Perelman School of Medicine, University of Pennsylvania, Philadelphia, PA, 19104, USA; Department of Psychiatry, Lifespan Informatics and Neuroimaging Center, University of Pennsylvania, Philadelphia, PA, 19104, USA; Department of Psychiatry, Brain Behavior Laboratory and Penn-CHOP Lifespan Brain Institute, University of Pennsylvania, Perelman School of Medicine, Philadelphia, PA, 19104, USA
- Christos Davatzikos
- Center for Biomedical Image Computing and Analytics, Department of Radiology, University of Pennsylvania, Philadelphia, PA, 19104, USA; Department of Radiology, University of Pennsylvania, Philadelphia, PA, 19104, USA
- Haochang Shou
- Center for Biomedical Image Computing and Analytics, Department of Radiology, University of Pennsylvania, Philadelphia, PA, 19104, USA; Penn Statistic in Imaging and Visualization Center, Department of Biostatistics, Epidemiology, and Informatics, Perelman School of Medicine, University of Pennsylvania, Philadelphia, PA, 19104, USA
- Yong Fan
- Center for Biomedical Image Computing and Analytics, Department of Radiology, University of Pennsylvania, Philadelphia, PA, 19104, USA; Department of Radiology, University of Pennsylvania, Philadelphia, PA, 19104, USA
21
Wen G, Shim V, Holdsworth SJ, Fernandez J, Qiao M, Kasabov N, Wang A. Machine Learning for Brain MRI Data Harmonisation: A Systematic Review. Bioengineering (Basel) 2023; 10:bioengineering10040397. [PMID: 37106584 PMCID: PMC10135601 DOI: 10.3390/bioengineering10040397] [Citation(s) in RCA: 3] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/20/2023] [Revised: 03/16/2023] [Accepted: 03/21/2023] [Indexed: 04/29/2023] Open
Abstract
BACKGROUND Magnetic Resonance Imaging (MRI) data collected from multiple centres can be heterogeneous due to factors such as the scanner used and the site location. To reduce this heterogeneity, the data need to be harmonised. In recent years, machine learning (ML) has been used to solve different types of problems related to MRI data, showing great promise. OBJECTIVE This study explores how well various ML algorithms perform in harmonising MRI data, both implicitly and explicitly, by summarising the findings in relevant peer-reviewed articles. Furthermore, it provides guidelines for the use of current methods and identifies potential future research directions. METHOD This review covers articles identified through the PubMed, Web of Science, and IEEE databases, published through June 2022. Data from the studies were analysed following the criteria of the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA). Quality assessment questions were derived to assess the quality of the included publications. RESULTS A total of 41 articles published between 2015 and 2022 were identified and analysed. The MRI data in these articles were harmonised either implicitly (n = 21) or explicitly (n = 20). Three MRI modalities were identified: structural MRI (n = 28), diffusion MRI (n = 7) and functional MRI (n = 6). CONCLUSION Various ML techniques have been employed to harmonise different types of MRI data. There is currently a lack of consistent evaluation methods and metrics across studies, an issue that should be addressed in future work. Harmonisation of MRI data using ML shows promise for improving performance on downstream ML tasks, but caution should be exercised when using ML-harmonised data for direct interpretation.
Affiliation(s)
- Grace Wen
- Auckland Bioengineering Institute, University of Auckland, Auckland 1142, New Zealand
- Vickie Shim
- Auckland Bioengineering Institute, University of Auckland, Auckland 1142, New Zealand
- Centre for Brain Research, University of Auckland, Auckland 1142, New Zealand
- Samantha Jane Holdsworth
- Centre for Brain Research, University of Auckland, Auckland 1142, New Zealand
- Mātai Medical Research Institute, Tairāwhiti-Gisborne 4010, New Zealand
- Department of Anatomy & Medical Imaging, Faculty of Medical and Health Sciences, University of Auckland, Auckland 1142, New Zealand
- Justin Fernandez
- Auckland Bioengineering Institute, University of Auckland, Auckland 1142, New Zealand
- Miao Qiao
- Department of Computer Science, University of Auckland, Auckland 1142, New Zealand
- Nikola Kasabov
- Auckland Bioengineering Institute, University of Auckland, Auckland 1142, New Zealand
- Knowledge Engineering and Discovery Research Institute, Auckland University of Technology, Auckland 1010, New Zealand
- Intelligent Systems Research Centre, Ulster University, Londonderry BT52 1SA, UK
- Institute for Information and Communication Technologies, Bulgarian Academy of Sciences, 1113 Sofia, Bulgaria
- Alan Wang
- Auckland Bioengineering Institute, University of Auckland, Auckland 1142, New Zealand
- Centre for Brain Research, University of Auckland, Auckland 1142, New Zealand
- Department of Anatomy & Medical Imaging, Faculty of Medical and Health Sciences, University of Auckland, Auckland 1142, New Zealand
22
Liu Y, Carass A, Zuo L, He Y, Han S, Gregori L, Murray S, Mishra R, Lei J, Calabresi PA, Saidha S, Prince JL. Disentangled representation learning for OCTA vessel segmentation with limited training data. IEEE Trans Med Imaging 2022; 41:3686-3698. [PMID: 35862335 PMCID: PMC9910788 DOI: 10.1109/tmi.2022.3193029] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Indexed: 06/15/2023]
Abstract
Optical coherence tomography angiography (OCTA) is an imaging modality that can be used for analyzing retinal vasculature. Quantitative assessment of en face OCTA images requires accurate segmentation of the capillaries. Using deep learning approaches for this task faces two major challenges. First, acquiring sufficient manual delineations for training can take hundreds of hours. Second, OCTA images suffer from numerous contrast-related artifacts that are currently inherent to the modality and vary dramatically across scanners. We propose to solve both problems by learning a disentanglement of an anatomy component and a local contrast component from paired OCTA scans. With the contrast removed from the anatomy component, a deep learning model that takes the anatomy component as input can learn to segment vessels with a limited portion of the training images being manually labeled. Our method demonstrates state-of-the-art performance for OCTA vessel segmentation.
23
Bayer JMM, Thompson PM, Ching CRK, Liu M, Chen A, Panzenhagen AC, Jahanshad N, Marquand A, Schmaal L, Sämann PG. Site effects how-to and when: An overview of retrospective techniques to accommodate site effects in multi-site neuroimaging analyses. Front Neurol 2022; 13:923988. [PMID: 36388214 PMCID: PMC9661923 DOI: 10.3389/fneur.2022.923988] [Citation(s) in RCA: 20] [Impact Index Per Article: 10.0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/20/2022] [Accepted: 08/12/2022] [Indexed: 09/12/2023] Open
Abstract
Site differences, or systematic differences in feature distributions across multiple data-acquisition sites, are a known source of heterogeneity that may adversely affect large-scale meta- and mega-analyses of independently collected neuroimaging data. They influence nearly all multi-site imaging modalities and biomarkers, and methods to compensate for them can improve reliability and generalizability in the analysis of genetics, omics, and clinical data. The origins of statistical site effects are complex and involve both technical differences (scanner vendor, head coil, acquisition parameters, imaging processing) and differences in sample characteristics (inclusion/exclusion criteria, sample size, ancestry) between sites. In an age of expanding international consortium research, there is a growing need to disentangle technical site effects from sample characteristics of interest. Numerous statistical and machine learning methods have been developed to control for, model, or attenuate site effects - yet to date, no comprehensive review has discussed the benefits and drawbacks of each for different use cases. Here, we provide an overview of the different existing statistical and machine learning methods developed to remove unwanted site effects from independently collected neuroimaging samples. We focus on linear mixed effect models, the ComBat technique and its variants, adjustments based on image quality metrics, normative modeling, and deep learning approaches such as generative adversarial networks. For each method, we outline the statistical foundation and summarize strengths and weaknesses, including their assumptions and conditions of use. We provide information on software availability and comment on the ease of use and the applicability of these methods to different types of data. We discuss validation and comparative reports, mention caveats and provide guidance on when to use each method, depending on context and specific research questions.
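Of the methods this review surveys, ComBat and its variants model site effects as per-feature additive (location) and multiplicative (scale) terms. The sketch below is a bare location-scale adjustment without ComBat's empirical-Bayes shrinkage or preservation of biological covariates, so it illustrates the idea rather than reproducing a faithful ComBat implementation; the simulated data and site labels are assumptions.

```python
import numpy as np

def adjust_sites(features, site_ids):
    """Remove per-site mean/variance (location-scale) effects, feature-wise.

    features: (n_subjects, n_features) array; site_ids: (n_subjects,) labels.
    Simplified ComBat-style adjustment: no empirical-Bayes shrinkage and
    no protection of covariates of interest.
    """
    out = features.astype(float).copy()
    grand_mean = features.mean(axis=0)
    grand_std = features.std(axis=0)
    for site in np.unique(site_ids):
        idx = site_ids == site
        site_mean = features[idx].mean(axis=0)
        site_std = features[idx].std(axis=0)
        # Standardize within site, then rescale to the pooled distribution.
        out[idx] = (features[idx] - site_mean) / site_std * grand_std + grand_mean
    return out

rng = np.random.default_rng(1)
sites = np.repeat([0, 1], 50)
data = rng.normal(size=(100, 3))
data[sites == 1] = data[sites == 1] * 2.0 + 5.0   # simulated additive + multiplicative site effect

harmonized = adjust_sites(data, sites)
```

After adjustment, both sites share the pooled mean and variance per feature; full ComBat additionally shrinks the per-site estimates across features and regresses out covariates before standardizing.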
Affiliation(s)
- Johanna M. M. Bayer
- Centre for Youth Mental Health, University of Melbourne, Melbourne, VIC, Australia
- Orygen, Parkville, VIC, Australia
- Paul M. Thompson
- Imaging Genetics Center, Mark and Mary Stevens Neuroimaging and Informatics Institute, Keck School of Medicine, University of Southern California, Marina del Rey, CA, United States
- Christopher R. K. Ching
- Imaging Genetics Center, Mark and Mary Stevens Neuroimaging and Informatics Institute, Keck School of Medicine, University of Southern California, Marina del Rey, CA, United States
- Mengting Liu
- School of Biomedical Engineering, Sun Yat-sen University, Shenzhen, China
- Andrew Chen
- Department of Biostatistics, Epidemiology, and Informatics, Penn Statistics in Imaging and Visualization Center, University of Pennsylvania, Philadelphia, PA, United States
- Center for Biomedical Image Computing and Analytics, University of Pennsylvania, Philadelphia, PA, United States
- Alana C. Panzenhagen
- Programa de Pós-graduação em Ciências Biológicas: Bioquímica, Universidade Federal do Rio Grande do Sul, Porto Alegre, Brazil
- Department of Translational Psychiatry, Max Planck Institute of Psychiatry, Munich, Germany
- Neda Jahanshad
- Laboratory of Brain eScience, Mark and Mary Stevens Neuroimaging and Informatics Institute, Keck School of Medicine of USC, University of Southern California, Marina del Rey, CA, United States
- Andre Marquand
- Department of Cognitive Neuroscience, Donders Institute for Brain, Cognition and Behavior, Radboudumc, Nijmegen, Netherlands
- Lianne Schmaal
- Centre for Youth Mental Health, University of Melbourne, Melbourne, VIC, Australia
- Orygen, Parkville, VIC, Australia
24
You S, Reyes M. Influence of contrast and texture based image modifications on the performance and attention shift of U-Net models for brain tissue segmentation. Front Neuroimaging 2022; 1:1012639. [PMID: 37555149 PMCID: PMC10406260 DOI: 10.3389/fnimg.2022.1012639] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Figures] [Subscribe] [Scholar Register] [Received: 08/05/2022] [Accepted: 10/12/2022] [Indexed: 08/10/2023]
Abstract
Contrast and texture modifications applied during training or test time have recently shown promising results for enhancing the generalization performance of deep learning segmentation methods in medical image analysis. However, the underlying reasons for this phenomenon have not been investigated. In this study, we examined it in a controlled experimental setting, with datasets from the Human Connectome Project and a large set of simulated MR protocols, in order to mitigate data confounders and investigate possible explanations for why model performance changes when different levels of contrast- and texture-based modifications are applied. Our experiments confirm previous findings regarding the improved performance of models subjected to contrast and texture modifications at training and/or testing time, but further show the interplay when these operations are combined, as well as the regimes of model improvement and worsening across scanning parameters. Furthermore, our findings demonstrate a spatial attention shift in trained models, occurring at different levels of model performance and varying in relation to the type of applied image modification.
Affiliation(s)
- Suhang You
- Medical Image Analysis Group, ARTORG, Graduate School for Cellular and Biomedical Sciences, University of Bern, Bern, Switzerland
25
An L, Chen J, Chen P, Zhang C, He T, Chen C, Zhou JH, Yeo BTT. Goal-specific brain MRI harmonization. Neuroimage 2022; 263:119570. [PMID: 35987490 DOI: 10.1016/j.neuroimage.2022.119570] [Citation(s) in RCA: 5] [Impact Index Per Article: 2.5] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/02/2022] [Revised: 08/05/2022] [Accepted: 08/15/2022] [Indexed: 11/19/2022] Open
Abstract
There is significant interest in pooling magnetic resonance image (MRI) data from multiple datasets to enable mega-analysis. Harmonization is typically performed to reduce heterogeneity when pooling MRI data across datasets. Most MRI harmonization algorithms do not explicitly consider downstream application performance during harmonization. However, the choice of downstream application might influence what might be considered as study-specific confounds. Therefore, ignoring downstream applications during harmonization might potentially limit downstream performance. Here we propose a goal-specific harmonization framework that utilizes downstream application performance to regularize the harmonization procedure. Our framework can be integrated with a wide variety of harmonization models based on deep neural networks, such as the recently proposed conditional variational autoencoder (cVAE) harmonization model. Three datasets from three different continents with a total of 2787 participants and 10,085 anatomical T1 scans were used for evaluation. We found that cVAE removed more dataset differences than the widely used ComBat model, but at the expense of removing desirable biological information as measured by downstream prediction of mini mental state examination (MMSE) scores and clinical diagnoses. On the other hand, our goal-specific cVAE (gcVAE) was able to remove as much dataset differences as cVAE, while improving downstream cross-sectional prediction of MMSE scores and clinical diagnoses.
Affiliation(s)
- Lijun An
- Centre for Sleep and Cognition (CSC) & Centre for Translational Magnetic Resonance Research (TMR), Yong Loo Lin School of Medicine, National University of Singapore, Singapore; Department of Electrical and Computer Engineering, National University of Singapore, Singapore; N.1 Institute for Health and Institute for Digital Medicine (WisDM), National University of Singapore, Singapore
- Jianzhong Chen
- Centre for Sleep and Cognition (CSC) & Centre for Translational Magnetic Resonance Research (TMR), Yong Loo Lin School of Medicine, National University of Singapore, Singapore; Department of Electrical and Computer Engineering, National University of Singapore, Singapore; N.1 Institute for Health and Institute for Digital Medicine (WisDM), National University of Singapore, Singapore
- Pansheng Chen
- Centre for Sleep and Cognition (CSC) & Centre for Translational Magnetic Resonance Research (TMR), Yong Loo Lin School of Medicine, National University of Singapore, Singapore; Department of Electrical and Computer Engineering, National University of Singapore, Singapore; N.1 Institute for Health and Institute for Digital Medicine (WisDM), National University of Singapore, Singapore
- Chen Zhang
- Centre for Sleep and Cognition (CSC) & Centre for Translational Magnetic Resonance Research (TMR), Yong Loo Lin School of Medicine, National University of Singapore, Singapore; Department of Electrical and Computer Engineering, National University of Singapore, Singapore; N.1 Institute for Health and Institute for Digital Medicine (WisDM), National University of Singapore, Singapore
- Tong He
- Centre for Sleep and Cognition (CSC) & Centre for Translational Magnetic Resonance Research (TMR), Yong Loo Lin School of Medicine, National University of Singapore, Singapore; Department of Electrical and Computer Engineering, National University of Singapore, Singapore; N.1 Institute for Health and Institute for Digital Medicine (WisDM), National University of Singapore, Singapore
- Christopher Chen
- Department of Pharmacology, Yong Loo Lin School of Medicine, National University of Singapore, Singapore
- Juan Helen Zhou
- Centre for Sleep and Cognition (CSC) & Centre for Translational Magnetic Resonance Research (TMR), Yong Loo Lin School of Medicine, National University of Singapore, Singapore; Department of Electrical and Computer Engineering, National University of Singapore, Singapore; NUS Graduate School for Integrative Sciences and Engineering, National University of Singapore, Singapore
- B T Thomas Yeo
- Centre for Sleep and Cognition (CSC) & Centre for Translational Magnetic Resonance Research (TMR), Yong Loo Lin School of Medicine, National University of Singapore, Singapore; Department of Electrical and Computer Engineering, National University of Singapore, Singapore; N.1 Institute for Health and Institute for Digital Medicine (WisDM), National University of Singapore, Singapore; NUS Graduate School for Integrative Sciences and Engineering, National University of Singapore, Singapore; Martinos Center for Biomedical Imaging, Massachusetts General Hospital, Charlestown, MA, USA
26
Liu X, Sanchez P, Thermos S, O'Neil AQ, Tsaftaris SA. Learning disentangled representations in the imaging domain. Med Image Anal 2022; 80:102516. [PMID: 35751992 DOI: 10.1016/j.media.2022.102516] [Citation(s) in RCA: 11] [Impact Index Per Article: 5.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/06/2021] [Revised: 04/05/2022] [Accepted: 06/10/2022] [Indexed: 12/12/2022]
Abstract
Disentangled representation learning has been proposed as an approach to learning general representations even in the absence of, or with limited, supervision. A good general representation can be fine-tuned for new target tasks using modest amounts of data, or used directly in unseen domains achieving remarkable performance in the corresponding task. This alleviation of the data and annotation requirements offers tantalising prospects for applications in computer vision and healthcare. In this tutorial paper, we motivate the need for disentangled representations, revisit key concepts, and describe practical building blocks and criteria for learning such representations. We survey applications in medical imaging emphasising choices made in exemplar key works, and then discuss links to computer vision applications. We conclude by presenting limitations, challenges, and opportunities.
Affiliation(s)
- Xiao Liu
- School of Engineering, The University of Edinburgh, Edinburgh EH9 3FG, UK.
- Pedro Sanchez
- School of Engineering, The University of Edinburgh, Edinburgh EH9 3FG, UK
- Spyridon Thermos
- School of Engineering, The University of Edinburgh, Edinburgh EH9 3FG, UK
- Alison Q O'Neil
- School of Engineering, The University of Edinburgh, Edinburgh EH9 3FG, UK; Canon Medical Research Europe, Edinburgh EH6 5NP, UK
- Sotirios A Tsaftaris
- School of Engineering, The University of Edinburgh, Edinburgh EH9 3FG, UK; The Alan Turing Institute, London NW1 2DB, UK
27
Han R, Jones CK, Lee J, Zhang X, Wu P, Vagdargi P, Uneri A, Helm PA, Luciano M, Anderson WS, Siewerdsen JH. Joint synthesis and registration network for deformable MR-CBCT image registration for neurosurgical guidance. Phys Med Biol 2022; 67:10.1088/1361-6560/ac72ef. [PMID: 35609586 PMCID: PMC9801422 DOI: 10.1088/1361-6560/ac72ef] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/19/2022] [Accepted: 05/24/2022] [Indexed: 01/03/2023]
Abstract
OBJECTIVE The accuracy of navigation in minimally invasive neurosurgery is often challenged by deep brain deformations (up to 10 mm due to egress of cerebrospinal fluid during the neuroendoscopic approach). We propose a deep learning-based deformable registration method to address such deformations between preoperative MR and intraoperative CBCT. APPROACH The registration method uses a joint image synthesis and registration network (denoted JSR) to simultaneously synthesize MR and CBCT images to the CT domain and perform CT-domain registration using a multi-resolution pyramid. JSR was first trained using a simulated dataset (simulated CBCT and simulated deformations) and then refined on real clinical images via transfer learning. The performance of the multi-resolution JSR was compared to a single-resolution architecture as well as a series of alternative registration methods (symmetric normalization (SyN), VoxelMorph, and image synthesis-based registration methods). MAIN RESULTS JSR achieved a median Dice coefficient (DSC) of 0.69 in deep brain structures and a median target registration error (TRE) of 1.94 mm in the simulation dataset, improving on the single-resolution architecture (median DSC = 0.68 and median TRE = 2.14 mm). Additionally, JSR achieved superior registration compared to alternative methods, e.g. SyN (median DSC = 0.54, median TRE = 2.77 mm) and VoxelMorph (median DSC = 0.52, median TRE = 2.66 mm), with a registration runtime of less than 3 s. Similarly, in the clinical dataset, JSR achieved median DSC = 0.72 and median TRE = 2.05 mm. SIGNIFICANCE The multi-resolution JSR network resolved deep brain deformations between MR and CBCT images with performance superior to other state-of-the-art methods. The accuracy and runtime support translation of the method to further clinical studies in high-precision neurosurgery.
Collapse
Affiliation(s)
- R Han
- Department of Biomedical Engineering, Johns Hopkins University, Baltimore, MD, United States of America
| | - C K Jones
- The Malone Center for Engineering in Healthcare, Johns Hopkins University, Baltimore, MD, United States of America
| | - J Lee
- Department of Radiation Oncology and Molecular Radiation Sciences, Johns Hopkins University, Baltimore, MD, United States of America
| | - X Zhang
- Department of Biomedical Engineering, Johns Hopkins University, Baltimore, MD, United States of America
| | - P Wu
- Department of Biomedical Engineering, Johns Hopkins University, Baltimore, MD, United States of America
| | - P Vagdargi
- Department of Computer Science, Johns Hopkins University, Baltimore, MD, United States of America
| | - A Uneri
- Department of Biomedical Engineering, Johns Hopkins University, Baltimore, MD, United States of America
| | - P A Helm
- Medtronic Inc., Littleton, MA, United States of America
| | - M Luciano
- Department of Neurosurgery, Johns Hopkins Hospital, Baltimore, MD, United States of America
| | - W S Anderson
- Department of Neurosurgery, Johns Hopkins Hospital, Baltimore, MD, United States of America
| | - J H Siewerdsen
- Department of Biomedical Engineering, Johns Hopkins University, Baltimore, MD, United States of America; The Malone Center for Engineering in Healthcare, Johns Hopkins University, Baltimore, MD, United States of America; Department of Computer Science, Johns Hopkins University, Baltimore, MD, United States of America; Department of Neurosurgery, Johns Hopkins Hospital, Baltimore, MD, United States of America
| |
Collapse
|
28
|
Tian D, Zeng Z, Sun X, Tong Q, Li H, He H, Gao JH, He Y, Xia M. A deep learning-based multisite neuroimage harmonization framework established with a traveling-subject dataset. Neuroimage 2022; 257:119297. [PMID: 35568346 DOI: 10.1016/j.neuroimage.2022.119297] [Citation(s) in RCA: 16] [Impact Index Per Article: 8.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/18/2021] [Revised: 03/31/2022] [Accepted: 05/09/2022] [Indexed: 12/12/2022] Open
Abstract
The accumulation of multisite, large-sample MRI datasets collected during large brain research projects in the last decade has provided critical resources for understanding the neurobiological mechanisms underlying cognitive functions and brain disorders. However, the significant site effects observed in imaging data and their derived structural and functional features have prevented the derivation of consistent findings across multiple studies. The development of harmonization methods that can effectively eliminate complex site effects while maintaining biological characteristics in neuroimaging data has become a vital and urgent requirement for multisite imaging studies. Here, we propose a deep learning-based framework to harmonize imaging data obtained from pairs of sites, in which site factors and brain features can be disentangled and encoded. We trained the proposed framework with a publicly available traveling-subject dataset from the Strategic Research Program for Brain Sciences (SRPBS) and harmonized the gray matter volume maps derived from eight source sites to a target site. The proposed framework significantly eliminated intersite differences in gray matter volumes. The embedded encoders successfully captured both the abstract textures of site factors and the concrete brain features. Moreover, the proposed framework exhibited outstanding performance relative to conventional statistical harmonization methods in terms of site effect removal, data distribution homogenization, and intrasubject similarity improvement. Finally, the proposed harmonization network provides flexible expandability: new sites can be linked to the target site via an indirect schema without retraining the whole model. Together, the proposed method offers a powerful and interpretable deep learning-based harmonization framework for multisite neuroimaging data that can enhance reliability and reproducibility in multisite studies of brain development and brain disorders.
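For context on what "removing site effects" means in the simplest case, a classical baseline maps one site's feature distribution onto a target site's mean and standard deviation. This is a hypothetical illustration of that linear baseline, not the authors' deep disentangling framework (which replaces the linear map with learned encoders); the values are invented:

```python
import statistics

def harmonize_to_target(site_values, target_values):
    """Shift-and-scale one site's feature values to match the target
    site's mean and standard deviation (z-score harmonization baseline)."""
    mu_s, sd_s = statistics.mean(site_values), statistics.stdev(site_values)
    mu_t, sd_t = statistics.mean(target_values), statistics.stdev(target_values)
    return [(v - mu_s) / sd_s * sd_t + mu_t for v in site_values]

# Hypothetical gray-matter-volume-like features from a source and a target site
source = [0.52, 0.55, 0.49, 0.58]
target = [0.61, 0.64, 0.59, 0.66]
harmonized = harmonize_to_target(source, target)
print(round(statistics.mean(harmonized), 3))  # matches the target site's mean
```

The limitation motivating learned approaches is that this baseline removes only first- and second-moment differences and cannot separate site factors from genuine biological variation.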
Collapse
Affiliation(s)
- Dezheng Tian
- State Key Laboratory of Cognitive Neuroscience and Learning, Beijing Normal University, Beijing 100875, China; Beijing Key Laboratory of Brain Imaging and Connectomics, Beijing Normal University, Beijing 100875, China; IDG/McGovern Institute for Brain Research, Beijing Normal University, Beijing 100875, China
| | - Zilong Zeng
- State Key Laboratory of Cognitive Neuroscience and Learning, Beijing Normal University, Beijing 100875, China; Beijing Key Laboratory of Brain Imaging and Connectomics, Beijing Normal University, Beijing 100875, China; IDG/McGovern Institute for Brain Research, Beijing Normal University, Beijing 100875, China
| | - Xiaoyi Sun
- State Key Laboratory of Cognitive Neuroscience and Learning, Beijing Normal University, Beijing 100875, China; Beijing Key Laboratory of Brain Imaging and Connectomics, Beijing Normal University, Beijing 100875, China; IDG/McGovern Institute for Brain Research, Beijing Normal University, Beijing 100875, China; School of Systems Science, Beijing Normal University, Beijing 100875, China
| | - Qiqi Tong
- Research Center for Healthcare Data Science, Zhejiang Lab, Hangzhou 311121, China
| | - Huanjie Li
- School of Biomedical Engineering, Dalian University of Technology, Dalian 116024, China
| | - Hongjian He
- Center for Brain Imaging Science and Technology, Key Laboratory for Biomedical Engineering of Ministry of Education, College of Biomedical Engineering and Instrument Science, Zhejiang University, Hangzhou 310027, China
| | - Jia-Hong Gao
- Center for MRI Research, Academy for Advanced Interdisciplinary Studies, Peking University, Beijing 100871, China; Beijing City Key Laboratory for Medical Physics and Engineering, Institute of Heavy Ion Physics, School of Physics, Peking University, Beijing 100871, China; IDG/McGovern Institute for Brain Research, Peking University, Beijing 100871, China
| | - Yong He
- State Key Laboratory of Cognitive Neuroscience and Learning, Beijing Normal University, Beijing 100875, China; Beijing Key Laboratory of Brain Imaging and Connectomics, Beijing Normal University, Beijing 100875, China; IDG/McGovern Institute for Brain Research, Beijing Normal University, Beijing 100875, China; Chinese Institute for Brain Research, Beijing 102206, China
| | - Mingrui Xia
- State Key Laboratory of Cognitive Neuroscience and Learning, Beijing Normal University, Beijing 100875, China; Beijing Key Laboratory of Brain Imaging and Connectomics, Beijing Normal University, Beijing 100875, China; IDG/McGovern Institute for Brain Research, Beijing Normal University, Beijing 100875, China.
| |
Collapse
|
29
|
Shao M, Zuo L, Carass A, Zhuo J, Gullapalli RP, Prince JL. Evaluating the impact of MR image harmonization on thalamus deep network segmentation. PROCEEDINGS OF SPIE--THE INTERNATIONAL SOCIETY FOR OPTICAL ENGINEERING 2022; 12032:120320H. [PMID: 35514535 PMCID: PMC9070007 DOI: 10.1117/12.2613159] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/14/2023]
Abstract
Medical image segmentation is one of the core tasks of medical image analysis. Automatic segmentation of brain magnetic resonance images (MRIs) can be used to visualize and track changes in the brain's anatomical structures that occur with normal aging or disease. Machine learning techniques are widely used in automatic structure segmentation. However, contrast variation between the training and testing data makes it difficult for segmentation algorithms to generate consistent results. To address this problem, an image-to-image translation technique called MR image harmonization can be used to match the contrast between different data sets. It is important for the harmonization to transform image intensity while maintaining the underlying anatomy. In this paper, we present a 3D U-Net algorithm to segment the thalamus from multiple MR image modalities and investigate the impact of harmonization on the segmentation algorithm. Manual delineations of thalamic nuclei are available on two data sets; however, we aim to analyze the thalamus in a third, large data set where ground truth labels are lacking. We trained two segmentation networks, one with unharmonized images and the other with harmonized images, on one data set with manual labels and compared their performance on the other data set with manual labels. The subjects in these two data sets were diagnosed with two different brain disorders, and the images were acquired with similar imaging protocols. The harmonization target is the large data set without manual labels, which also has a different imaging protocol. The networks trained on unharmonized and harmonized data showed no significant difference when evaluated on the other labeled data set, demonstrating that image harmonization maintains the anatomy and does not affect the segmentation task. When the two networks were evaluated on the harmonization target data set, the network trained on harmonized data showed a significant improvement over the network trained on unharmonized data. Therefore, the network trained on harmonized data provides the potential to process large amounts of data from other sites, even in the absence of site-specific training data.
Collapse
Affiliation(s)
- Muhan Shao
- Department of Electrical and Computer Engineering, The Johns Hopkins University, Baltimore, MD 21218, USA
| | - Lianrui Zuo
- Department of Electrical and Computer Engineering, The Johns Hopkins University, Baltimore, MD 21218, USA
- Laboratory of Behavioral Neuroscience, National Institute on Aging, National Institute of Health, Baltimore, MD 21224, USA
| | - Aaron Carass
- Department of Electrical and Computer Engineering, The Johns Hopkins University, Baltimore, MD 21218, USA
| | - Jiachen Zhuo
- Department of Diagnostic Radiology and Nuclear Medicine, University of Maryland School of Medicine, Baltimore, MD, USA
| | - Rao P. Gullapalli
- Department of Diagnostic Radiology and Nuclear Medicine, University of Maryland School of Medicine, Baltimore, MD, USA
| | - Jerry L. Prince
- Department of Electrical and Computer Engineering, The Johns Hopkins University, Baltimore, MD 21218, USA
| |
Collapse
|
30
|
Duan P, Han S, Zuo L, An Y, Liu Y, Alshareef A, Lee J, Carass A, Resnick SM, Prince JL. Cranial Meninges Reconstruction Based on Convolutional Networks and Deformable Models: Applications to Longitudinal Study of Normal Aging. PROCEEDINGS OF SPIE--THE INTERNATIONAL SOCIETY FOR OPTICAL ENGINEERING 2022; 12032:1203215. [PMID: 36325254 PMCID: PMC9623767 DOI: 10.1117/12.2613146] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/16/2023]
Abstract
The cranial meninges are membranes enveloping the brain. The space between these membranes contains mainly cerebrospinal fluid. It is of interest to study how the volume of this space changes with normal aging. In this work, we propose to combine convolutional neural networks (CNNs) with nested topology-preserving geometric deformable models (NTGDMs) to reconstruct meningeal surfaces from magnetic resonance (MR) images. We first use CNNs to predict implicit representations of these surfaces and then refine them with NTGDMs to achieve sub-voxel accuracy while maintaining spherical topology and the correct anatomical ordering. MR contrast harmonization is used to match the contrasts between training and testing images. We applied our algorithm to a subset of healthy subjects from the Baltimore Longitudinal Study of Aging for demonstration purposes and conducted a longitudinal statistical analysis of the intracranial volume (ICV) and subarachnoid space (SAS) volume. We found a statistically significant decrease in the ICV and an increase in the SAS volume with respect to normal aging.
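The longitudinal analysis described here amounts to estimating a volume trend against age. A minimal sketch of the ordinary-least-squares slope underlying such a trend test (the numbers are hypothetical, not BLSA data, and real analyses use mixed-effects models to account for repeated measures per subject):

```python
def ols_slope(ages, volumes):
    """Least-squares slope of volume vs. age (e.g. mL per year)."""
    n = len(ages)
    mean_x = sum(ages) / n
    mean_y = sum(volumes) / n
    sxx = sum((x - mean_x) ** 2 for x in ages)
    sxy = sum((x - mean_x) * (y - mean_y) for x, y in zip(ages, volumes))
    return sxy / sxx

# Hypothetical subarachnoid-space volumes (mL) increasing with age
ages = [60, 65, 70, 75, 80]
sas  = [250.0, 262.0, 275.0, 288.0, 300.0]
print(round(ols_slope(ages, sas), 2))  # 2.52 (mL/year): SAS volume grows with age
```

A negative slope for ICV and a positive slope for SAS, each significantly different from zero, would correspond to the findings reported in the abstract.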
Collapse
Affiliation(s)
- Peiyu Duan
- Department of Biomedical Engineering, The Johns Hopkins University, Baltimore, MD 21218
| | - Shuo Han
- Department of Biomedical Engineering, The Johns Hopkins University, Baltimore, MD 21218
| | - Lianrui Zuo
- Department of Electrical and Computer Engineering, The Johns Hopkins University, Baltimore, MD 21218
- Laboratory of Behavioral Neuroscience, National Institute on Aging, National Institutes of Health, Baltimore, MD 20892
| | - Yang An
- Laboratory of Behavioral Neuroscience, National Institute on Aging, National Institutes of Health, Baltimore, MD 20892
| | - Yihao Liu
- Department of Electrical and Computer Engineering, The Johns Hopkins University, Baltimore, MD 21218
| | - Ahmed Alshareef
- Department of Electrical and Computer Engineering, The Johns Hopkins University, Baltimore, MD 21218
| | - Junghoon Lee
- Department of Radiology, The Johns Hopkins School of Medicine, Baltimore, MD 21287
| | - Aaron Carass
- Department of Electrical and Computer Engineering, The Johns Hopkins University, Baltimore, MD 21218
| | - Susan M. Resnick
- Laboratory of Behavioral Neuroscience, National Institute on Aging, National Institutes of Health, Baltimore, MD 20892
| | - Jerry L. Prince
- Department of Biomedical Engineering, The Johns Hopkins University, Baltimore, MD 21218
- Department of Electrical and Computer Engineering, The Johns Hopkins University, Baltimore, MD 21218
| |
Collapse
|