1
Deng L, Tang Y, Zhang X, Chen J. Structure-adaptive canonical correlation analysis for microbiome multi-omics data. Front Genet 2024; 15:1489694. [PMID: 39655222] [PMCID: PMC11626081] [DOI: 10.3389/fgene.2024.1489694]
Abstract
Sparse canonical correlation analysis (sCCA) is a useful approach for integrating high-dimensional datasets by finding a subset of features that explains most of the correlation between the data. In microbiome studies, investigators are often interested in how the microbiome interacts with the host at different molecular levels such as the genome, methylome, transcriptome, metabolome, and proteome. sCCA provides a simple approach for exploiting the correlation structure among multiple omics datasets and finding a set of correlated omics features, which could contribute to understanding host-microbiome interaction. However, existing sCCA methods do not address compositionality, so their application to microbiome data is not optimal. This paper proposes a new sCCA framework for integrating microbiome data with other high-dimensional omics data that accounts for the compositional nature of microbiome sequencing data. It also allows prior structural information, such as the grouping structure among bacterial taxa, to be integrated by imposing a "soft" constraint on the coefficients through varying penalization strength. As a result, the method provides a significant improvement when the structure is informative while remaining robust to a misspecified structure. Through extensive simulation studies and real data analysis, we demonstrate the superiority of the proposed framework over state-of-the-art approaches.
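As a rough illustration of the kind of pipeline this abstract describes (not the authors' structure-adaptive sCCA itself), the sketch below applies a centered log-ratio (CLR) transform to synthetic microbiome counts to handle compositionality and then runs ordinary CCA against a second synthetic omics matrix; all data, dimensions, and parameter choices are placeholders.

```python
# Minimal sketch, assuming synthetic data: CLR-transform compositional counts,
# then run plain CCA against a second omics matrix.
import numpy as np
from sklearn.cross_decomposition import CCA

rng = np.random.default_rng(0)
counts = rng.poisson(5, size=(50, 30)) + 1        # 50 samples x 30 taxa; +1 pseudocount avoids log(0)
metabolites = rng.normal(size=(50, 40))           # 50 samples x 40 metabolite features

# Centered log-ratio (CLR) transform to address compositionality
log_counts = np.log(counts)
clr = log_counts - log_counts.mean(axis=1, keepdims=True)

cca = CCA(n_components=2)
U, V = cca.fit_transform(clr, metabolites)        # canonical variates for each modality
print(np.corrcoef(U[:, 0], V[:, 0])[0, 1])        # first canonical correlation
```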
Affiliation(s)
- Linsui Deng
- School of Data Science, The Chinese University of Hong Kong, Shenzhen, China
- Yanlin Tang
- School of Statistics, East China Normal University, Shanghai, China
- Xianyang Zhang
- Department of Statistics, Texas A&M University, College Station, TX, United States
- Jun Chen
- Department of Quantitative Health Sciences, Mayo Clinic, Rochester, MN, United States
2
Che Y, Du L, Tang G, Ling S. A Biometric Identification for Multi-Modal Biomedical Signals in Geriatric Care. Sensors (Basel) 2024; 24:6558. [PMID: 39460036] [PMCID: PMC11511392] [DOI: 10.3390/s24206558]
Abstract
With the acceleration of global population aging, the elderly have an increasing demand for home care and nursing institutions, and health prevention and management for the elderly have become increasingly important. In this context, we propose a biometric recognition method for multi-modal biomedical signals. This article focuses on three signals that can be picked up by wearable devices: ECG, PPG, and respiration (RESP). The RESP signal is added to existing two-modality identification to enable multi-modal identification. First, time-frequency-domain features are extracted from each signal. To represent deep features in a low-dimensional feature space and speed up authentication, principal component analysis (PCA) and linear discriminant analysis (LDA) are employed for dimensionality reduction; multiset canonical correlation analysis (MCCA) is used for feature fusion, and a support vector machine (SVM) is used for identification. The accuracy and performance of the system were evaluated on both public and self-collected datasets, reaching an accuracy of more than 99.5%. The experimental results show that this method significantly improves the accuracy of identity recognition. In the future, combined with the signal-monitoring function of wearable devices, it could quickly identify elderly individuals with abnormal conditions, provide safer and more efficient medical services for the elderly, and relieve pressure on medical resources.
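The fusion pipeline summarized above (dimensionality reduction, correlation-based fusion, SVM classification) can be sketched as follows. This is a toy reconstruction: plain two-view CCA stands in for MCCA, random features stand in for the ECG/PPG/RESP time-frequency features, and none of the specifics come from the paper.

```python
# Toy sketch of a reduce-fuse-classify pipeline under assumed synthetic data.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.cross_decomposition import CCA
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(1)
n = 200
y = rng.integers(0, 10, n)                        # 200 recordings, 10 subjects
ecg_feats = rng.normal(size=(n, 60)) + y[:, None] # toy features correlated with identity
ppg_feats = rng.normal(size=(n, 40)) + y[:, None]

# Per-modality dimensionality reduction (toy setup; in practice fit on training data only)
ecg_low = PCA(n_components=10).fit_transform(ecg_feats)
ppg_low = PCA(n_components=10).fit_transform(ppg_feats)

# Correlation-based fusion: concatenate canonical variates from both views
u, v = CCA(n_components=5).fit_transform(ecg_low, ppg_low)
fused = np.hstack([u, v])

X_tr, X_te, y_tr, y_te = train_test_split(fused, y, test_size=0.3, random_state=0)
clf = SVC(kernel="rbf").fit(X_tr, y_tr)
print(accuracy_score(y_te, clf.predict(X_te)))
```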
Affiliation(s)
- Yue Che
- School of Automation and Information Engineering, Sichuan University of Science and Engineering, Zigong 643000, China
- Artificial Intelligence Key Laboratory of Sichuan Province, Sichuan University of Science and Engineering, Zigong 643000, China
- Lingyan Du
- School of Automation and Information Engineering, Sichuan University of Science and Engineering, Zigong 643000, China
- Artificial Intelligence Key Laboratory of Sichuan Province, Sichuan University of Science and Engineering, Zigong 643000, China
- Guozhi Tang
- School of Automation and Information Engineering, Sichuan University of Science and Engineering, Zigong 643000, China
- Artificial Intelligence Key Laboratory of Sichuan Province, Sichuan University of Science and Engineering, Zigong 643000, China
- Shihai Ling
- School of Automation and Information Engineering, Sichuan University of Science and Engineering, Zigong 643000, China
- Artificial Intelligence Key Laboratory of Sichuan Province, Sichuan University of Science and Engineering, Zigong 643000, China
3
Lei B, Li Y, Fu W, Yang P, Chen S, Wang T, Xiao X, Niu T, Fu Y, Wang S, Han H, Qin J. Alzheimer's disease diagnosis from multi-modal data via feature inductive learning and dual multilevel graph neural network. Med Image Anal 2024; 97:103213. [PMID: 38850625] [DOI: 10.1016/j.media.2024.103213]
Abstract
Multi-modal data can provide complementary information about Alzheimer's disease (AD) and its development from different perspectives. Such information is closely related to the diagnosis, prevention, and treatment of AD, and hence it is necessary and critical to study AD through multi-modal data. Existing learning methods, however, usually ignore the influence of feature heterogeneity and directly fuse features in the final stages. Furthermore, most of these methods focus only on local fusion features or global fusion features, neglecting the complementarity of features at different levels and thus not sufficiently leveraging the information embedded in multi-modal data. To overcome these shortcomings, we propose a novel framework for AD diagnosis that fuses gene, imaging, protein, and clinical data. Our framework learns feature representations in the same feature space for different modalities through a feature induction learning (FIL) module, thereby alleviating the impact of feature heterogeneity. Furthermore, local and global salient multi-modal feature interaction information at different levels is extracted through a novel dual multilevel graph neural network (DMGNN). We extensively validate the proposed method on the Alzheimer's Disease Neuroimaging Initiative (ADNI) dataset, and the experimental results demonstrate that our method consistently outperforms other state-of-the-art multi-modal fusion methods. The code is publicly available at https://github.com/xiankantingqianxue/MIA-code.git.
Affiliation(s)
- Baiying Lei
- National-Regional Key Technology Engineering Lab. for Medical Ultrasound, Guangdong Key Lab. for Biomedical Measurements and Ultrasound Imaging, Marshall Lab. of Biomedical Engineering, School of Biomedical Engineering, Shenzhen University Medical School, Shenzhen University, Shenzhen, 518060, China
- Yafeng Li
- National-Regional Key Technology Engineering Lab. for Medical Ultrasound, Guangdong Key Lab. for Biomedical Measurements and Ultrasound Imaging, Marshall Lab. of Biomedical Engineering, School of Biomedical Engineering, Shenzhen University Medical School, Shenzhen University, Shenzhen, 518060, China
- Wanyi Fu
- Department of Electronic Engineering, Tsinghua University, Beijing Key Laboratory of Magnetic Resonance Imaging Devices and Technology, China
- Peng Yang
- National-Regional Key Technology Engineering Lab. for Medical Ultrasound, Guangdong Key Lab. for Biomedical Measurements and Ultrasound Imaging, Marshall Lab. of Biomedical Engineering, School of Biomedical Engineering, Shenzhen University Medical School, Shenzhen University, Shenzhen, 518060, China
- Shaobin Chen
- National-Regional Key Technology Engineering Lab. for Medical Ultrasound, Guangdong Key Lab. for Biomedical Measurements and Ultrasound Imaging, Marshall Lab. of Biomedical Engineering, School of Biomedical Engineering, Shenzhen University Medical School, Shenzhen University, Shenzhen, 518060, China
- Tianfu Wang
- National-Regional Key Technology Engineering Lab. for Medical Ultrasound, Guangdong Key Lab. for Biomedical Measurements and Ultrasound Imaging, Marshall Lab. of Biomedical Engineering, School of Biomedical Engineering, Shenzhen University Medical School, Shenzhen University, Shenzhen, 518060, China
- Xiaohua Xiao
- The First Affiliated Hospital of Shenzhen University, Shenzhen University Medical School, Shenzhen University, Shenzhen Second People's Hospital, Shenzhen, 530031, China
- Tianye Niu
- Shenzhen Bay Laboratory, Shenzhen, 518067, China
- Yu Fu
- Department of Neurology, Peking University Third Hospital, No. 49, North Garden Rd., Haidian District, Beijing, 100191, China
- Shuqiang Wang
- Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen, 518055, China
- Hongbin Han
- Institute of Medical Technology, Peking University Health Science Center, Department of Radiology, Peking University Third Hospital, Beijing Key Laboratory of Magnetic Resonance Imaging Devices and Technology, Beijing, 100191, China; Research and Developing Center of Medical Technology, The Second Hospital of Dalian Medical University, Dalian, 116027, China
- Jing Qin
- Center for Smart Health, School of Nursing, The Hong Kong Polytechnic University, Hong Kong, China
4
Li S, Zhang R. A novel interactive deep cascade spectral graph convolutional network with multi-relational graphs for disease prediction. Neural Netw 2024; 175:106285. [PMID: 38593556] [DOI: 10.1016/j.neunet.2024.106285]
Abstract
Graph neural networks (GNNs) have recently grown in popularity for disease prediction. Existing GNN-based methods primarily build the graph topology around a single modality and combine it with other modalities to obtain feature representations of the acquisitions. The complex relationships within each modality, however, may not be well captured by such modality-specific construction. Further, relatively shallow networks restrict the extraction of high-level features, limiting disease prediction performance. Accordingly, this paper develops a new interactive deep cascade spectral graph convolutional network with multi-relational graphs (IDCGN) for disease prediction tasks. Its crucial components are multiple relational graphs and dual cascade spectral graph convolution branches with interaction (DCSGBI). Specifically, the former designs a pairwise imaging-based edge generator and a pairwise non-imaging-based edge generator from different modalities by devising two learnable networks, which adaptively capture graph structures and provide various views of the same acquisition to aid in disease diagnosis. DCSGBI, in turn, is established to enrich high-level semantic information and low-level details of the disease data. It devises a cascade spectral graph convolution operator for each branch and incorporates an interaction strategy between branches, forming a deep model that captures complementary information from the different branches. In this manner, more favorable and sufficient features are learned for a reliable diagnosis. Experiments on several disease datasets show that IDCGN exceeds state-of-the-art models and achieves promising results.
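For readers unfamiliar with the basic building block named above, a minimal first-order spectral graph convolution step (GCN-style symmetric normalization) is sketched below; this is generic textbook propagation on a toy patient graph, not the IDCGN cascade or its interaction branches, and all arrays are made up.

```python
# Minimal sketch of one spectral graph convolution (GCN-style) propagation step.
import numpy as np

def spectral_gcn_layer(A, X, W):
    """One step: H = ReLU(D^{-1/2} (A + I) D^{-1/2} X W)."""
    A_hat = A + np.eye(A.shape[0])                  # add self-loops
    d_inv_sqrt = 1.0 / np.sqrt(A_hat.sum(axis=1))
    A_norm = A_hat * d_inv_sqrt[:, None] * d_inv_sqrt[None, :]
    return np.maximum(A_norm @ X @ W, 0.0)          # ReLU activation

rng = np.random.default_rng(2)
A = (rng.random((30, 30)) > 0.8).astype(float)      # toy patient-similarity graph
A = np.maximum(A, A.T)                              # symmetrize
X = rng.normal(size=(30, 16))                       # node (patient) features
W = rng.normal(size=(16, 8))                        # weights (random here, learnable in practice)
H = spectral_gcn_layer(A, X, W)
print(H.shape)                                      # (30, 8)
```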
Affiliation(s)
- Sihui Li
- Medical Big data Research Center, School of Mathematics, Northwest University, Xi'an 710127, Shaanxi, China.
- Rui Zhang
- Medical Big data Research Center, School of Mathematics, Northwest University, Xi'an 710127, Shaanxi, China.
5
Subramanian V, Syeda-Mahmood T, Do MN. Modelling-based joint embedding of histology and genomics using canonical correlation analysis for breast cancer survival prediction. Artif Intell Med 2024; 149:102787. [PMID: 38462287] [DOI: 10.1016/j.artmed.2024.102787]
Abstract
Traditional approaches to predicting breast cancer patients' survival outcomes were based on clinical subgroups, the PAM50 genes, or evaluation of the histological tissue. With the growth of multi-modality datasets capturing diverse information (such as genomics, histology, radiology, and clinical data) about the same cancer, this information can be integrated using advanced tools to improve survival prediction. These methods implicitly exploit the key observation that the different modalities originate from the same cancer source and jointly provide a complete picture of the cancer. In this work, we investigate the benefits of explicitly modelling multi-modality data as originating from the same cancer under a probabilistic framework. Specifically, we consider histology and genomics as two modalities originating from the same breast cancer under a probabilistic graphical model (PGM). We construct maximum likelihood estimates of the PGM parameters based on canonical correlation analysis (CCA) and then infer the underlying properties of the cancer patient, such as survival. Equivalently, we construct CCA-based joint embeddings of the two modalities and input them to a learnable predictor. Real-world properties of sparsity and graph structure are captured by penalized variants of CCA (pCCA), which are better suited to cancer applications. For generating richer multi-dimensional embeddings with pCCA, we introduce two novel embedding schemes that encourage orthogonality to generate more informative embeddings. The efficacy of our proposed prediction pipeline is first demonstrated via low prediction errors of the hidden variable and the generation of informative embeddings on simulated data. When applied to breast cancer histology and RNA-sequencing expression data from The Cancer Genome Atlas (TCGA), our model provides survival predictions with average concordance indices of up to 68.32% along with interpretability. We also illustrate how the pCCA embeddings can be used for survival analysis through Kaplan-Meier curves.
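The concordance index quoted above is a standard survival metric; a small self-contained sketch of how it is computed is given below, with made-up times, event indicators, and risk scores (this is not code from the paper).

```python
# Sketch of the concordance index (c-index) on assumed toy survival data.
import numpy as np

def concordance_index(times, events, risks):
    """Fraction of comparable pairs whose predicted risks are correctly ordered."""
    concordant, comparable = 0.0, 0
    n = len(times)
    for i in range(n):
        if not events[i]:
            continue                                # a pair is comparable only if the earlier time is an event
        for j in range(n):
            if times[i] < times[j]:
                comparable += 1
                if risks[i] > risks[j]:
                    concordant += 1
                elif risks[i] == risks[j]:
                    concordant += 0.5
    return concordant / comparable

times = np.array([5.0, 8.0, 3.0, 12.0, 7.0])
events = np.array([1, 0, 1, 1, 0])                  # 1 = event observed, 0 = censored
risks = np.array([0.9, 0.3, 1.2, 0.1, 0.4])         # higher risk should mean shorter survival
print(concordance_index(times, events, risks))
```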
Affiliation(s)
- Vaishnavi Subramanian
- Electrical and Computer Engineering, University of Illinois at Urbana-Champaign, Urbana, 61801, IL, USA.
- Minh N Do
- Electrical and Computer Engineering, University of Illinois at Urbana-Champaign, Urbana, 61801, IL, USA
6
Ling Q, Liu A, Li Y, Mi T, Chan P, Liu Y, Chen X. Homogeneous-Multiset-CCA-Based Brain Covariation and Contravariance Connectivity Network Modeling. IEEE Trans Neural Syst Rehabil Eng 2023; 31:3556-3565. [PMID: 37682656] [DOI: 10.1109/tnsre.2023.3310340]
Abstract
Brain connectivity networks based on functional magnetic resonance imaging (fMRI) have expanded our understanding of brain functions in both healthy and diseased states. However, most current studies construct connectivity networks using averaged regional time courses with the strong assumption that the activities of voxels contained in each brain region are similar, ignoring their possible variations. Additionally, pairwise correlation analysis is often adopted with more attention to positive relationships, while joint interactions at the network level as well as anti-correlations are less investigated. In this paper, to provide a new strategy for regional activity representation and brain connectivity modeling, a novel homogeneous multiset canonical correlation analysis (HMCCA) model is proposed, which enforces sign constraints on the weights of voxels to guarantee homogeneity within each brain region. It is capable of obtaining regional representative signals and constructing covariation and contravariance networks simultaneously, at both group and subject levels. Validations on two sessions of fMRI data verified its reproducibility and reliability when dealing with brain connectivity networks. Further experiments on subjects with and without Parkinson's disease (PD) revealed significant alterations in brain connectivity patterns, which were further associated with clinical scores and demonstrated superior prediction ability, indicating its potential in clinical practice.
7
Mandal A, Maji P. Multiview Regularized Discriminant Canonical Correlation Analysis: Sequential Extraction of Relevant Features From Multiblock Data. IEEE Trans Cybern 2023; 53:5497-5509. [PMID: 35417362] [DOI: 10.1109/tcyb.2022.3155875]
Abstract
One of the important issues in real-life high-dimensional data analysis is how to extract significant and relevant features from multiview data. Multiset canonical correlation analysis (MCCA) is a well-known statistical method for multiview data integration. It finds a linear subspace that maximizes the correlations among different views. However, existing methods for finding the multiset canonical variables are computationally very expensive, which restricts the application of MCCA in real-life big data analysis. The covariance matrix of each high-dimensional view may also suffer from the singularity problem due to the limited number of samples. Moreover, existing MCCA-based feature extraction algorithms are, in general, unsupervised in nature. In this regard, a new supervised feature extraction algorithm is proposed, which integrates multimodal, multidimensional data sets by solving the maximal correlation problem of MCCA. A new block matrix representation is introduced to reduce the computational complexity of computing the canonical variables of MCCA. The analytical formulation enables efficient computation of the multiset canonical variables under a supervised ridge regression optimization technique. It deals with the "curse of dimensionality" problem associated with high-dimensional data and facilitates the sequential generation of relevant features at significantly lower computational cost. The effectiveness of the proposed multiblock data integration algorithm, along with a comparison with other existing methods, is demonstrated on several benchmark and real-life cancer data sets.
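For context, the maximal correlation problem mentioned above is commonly written in the sum-of-correlations (SUMCOR) form of MCCA; the generic formulation below uses notation of our own choosing, and the paper's exact constraint set may differ:

$$
\max_{w_1,\dots,w_m}\ \sum_{i \neq j} w_i^{\top}\Sigma_{ij}\,w_j
\quad \text{subject to} \quad w_i^{\top}\Sigma_{ii}\,w_i = 1,\ i = 1,\dots,m,
$$

where $X_1,\dots,X_m$ are the $m$ views, $\Sigma_{ij}$ is the cross-covariance between views $i$ and $j$, and $w_i$ is the canonical weight vector for view $i$.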
8
Zarghami TS, Zeidman P, Razi A, Bahrami F, Hossein-Zadeh G. Dysconnection and cognition in schizophrenia: A spectral dynamic causal modeling study. Hum Brain Mapp 2023; 44:2873-2896. [PMID: 36852654] [PMCID: PMC10089110] [DOI: 10.1002/hbm.26251]
Abstract
Schizophrenia (SZ) is a severe mental disorder characterized by failure of functional integration (aka dysconnection) across the brain. Recent functional connectivity (FC) studies have adopted functional parcellations to define subnetworks of large-scale networks, and to characterize the (dys)connection between them, in normal and clinical populations. While FC examines statistical dependencies between observations, model-based effective connectivity (EC) can disclose the causal influences that underwrite the observed dependencies. In this study, we investigated resting state EC within seven large-scale networks, in 66 SZ and 74 healthy subjects from a public dataset. The results showed that a remarkable 33% of the effective connections (among subnetworks) of the cognitive control network had been pathologically modulated in SZ. Further dysconnection was identified within the visual, default mode and sensorimotor networks of SZ subjects, with 24%, 20%, and 11% aberrant couplings. Overall, the proportion of discriminative connections was remarkably larger in EC (24%) than FC (1%) analysis. Subsequently, to study the neural correlates of impaired cognition in SZ, we conducted a canonical correlation analysis between the EC parameters and the cognitive scores of the patients. As such, the self-inhibitions of supplementary motor area and paracentral lobule (in the sensorimotor network) and the excitatory connection from parahippocampal gyrus to inferior temporal gyrus (in the cognitive control network) were significantly correlated with the social cognition, reasoning/problem solving and working memory capabilities of the patients. Future research can investigate the potential of whole-brain EC as a biomarker for diagnosis of brain disorders and for neuroimaging-based cognitive assessment.
Affiliation(s)
- Tahereh S. Zarghami
- Bio-Electric Department, School of Electrical and Computer Engineering, College of Engineering, University of Tehran, Tehran, Iran
- Human Motor Control and Computational Neuroscience Laboratory, School of Electrical and Computer Engineering, College of Engineering, University of Tehran, Tehran, Iran
- Peter Zeidman
- The Wellcome Centre for Human Neuroimaging, University College London, London, UK
- Adeel Razi
- The Wellcome Centre for Human Neuroimaging, University College London, London, UK
- Turner Institute for Brain and Mental Health, Monash University, Clayton, Victoria, Australia
- Monash Biomedical Imaging, Monash University, Clayton, Victoria, Australia
- CIFAR Azrieli Global Scholars Program, CIFAR, Toronto, Canada
- Fariba Bahrami
- Bio-Electric Department, School of Electrical and Computer Engineering, College of Engineering, University of Tehran, Tehran, Iran
- Human Motor Control and Computational Neuroscience Laboratory, School of Electrical and Computer Engineering, College of Engineering, University of Tehran, Tehran, Iran
- Gholam-Ali Hossein-Zadeh
- Bio-Electric Department, School of Electrical and Computer Engineering, College of Engineering, University of Tehran, Tehran, Iran
9
Liu L, Chang J, Zhang P, Ma Q, Zhang H, Sun T, Qiao H. A joint multi-modal learning method for early-stage knee osteoarthritis disease classification. Heliyon 2023; 9:e15461. [PMID: 37123973] [PMCID: PMC10130858] [DOI: 10.1016/j.heliyon.2023.e15461]
Abstract
Osteoarthritis (OA) is a progressive and chronic disease. Identifying the early stages of OA is important for the treatment and care of patients. However, most state-of-the-art methods use only single-modal data to predict disease status and therefore ignore the complementary information in multi-modal data. In this study, we develop an integrated multi-modal learning method (MMLM) that uses an interpretable strategy to select and fuse clinical, imaging, and demographic features to classify the grade of early-stage knee OA. MMLM applies XGBoost and ResNet50 to extract two heterogeneous feature sets from the clinical data and imaging data, respectively, and then integrates these extracted features with demographic data. To avoid the negative effects of redundant features in a direct integration of multiple features, we propose an L1-norm-based optimization to regularize the inter-correlations among the multiple features. MMLM was assessed using the Osteoarthritis Initiative (OAI) data set with machine learning classifiers. Extensive experiments demonstrate that MMLM improves the performance of the classifiers. Furthermore, a visual analysis of the important features in the multimodal data verified the relations among the modalities when classifying the grade of knee OA.
10
Zhang G, Nie X, Liu B, Yuan H, Li J, Sun W, Huang S. A multimodal fusion method for Alzheimer's disease based on DCT convolutional sparse representation. Front Neurosci 2023; 16:1100812. [PMID: 36685238] [PMCID: PMC9853298] [DOI: 10.3389/fnins.2022.1100812]
Abstract
Introduction: The medical information contained in magnetic resonance imaging (MRI) and positron emission tomography (PET) has driven the development of intelligent diagnosis of Alzheimer's disease (AD) and multimodal medical imaging. To address the problems of severe energy loss, low contrast of fused images, and spatial inconsistency in traditional sparse-representation-based multimodal medical image fusion methods, a multimodal fusion algorithm for Alzheimer's disease based on discrete cosine transform (DCT) convolutional sparse representation is proposed.
Methods: The algorithm first performs a multi-scale DCT decomposition of the source medical images and uses the sub-images at different scales as training images. Different sparse coefficients are obtained by optimally solving the sub-dictionaries at different scales using the alternating direction method of multipliers (ADMM). The coefficients of the high-frequency and low-frequency sub-images are then fused using an improved L1-norm rule combined with an improved novel sum-modified spatial frequency (NMSF), and the final fused images are obtained by inverse DCT.
Results and discussion: Extensive experimental results show that the proposed method performs well in contrast enhancement and in retaining texture and contour information.
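As a toy illustration of DCT-domain fusion (a drastic simplification of the multi-scale convolutional sparse representation method summarized above), the sketch below fuses two random "image" arrays with a max-absolute-coefficient rule in the DCT domain; the arrays, sizes, and fusion rule are all placeholder assumptions, not the authors' algorithm.

```python
# Toy DCT-domain fusion of two assumed image arrays with a max-abs rule.
import numpy as np
from scipy.fft import dctn, idctn

rng = np.random.default_rng(3)
mri = rng.random((64, 64))                          # placeholder for an MRI slice
pet = rng.random((64, 64))                          # placeholder for a PET slice

C_mri = dctn(mri, norm="ortho")
C_pet = dctn(pet, norm="ortho")

# Keep, at each frequency, the coefficient with the larger magnitude
C_fused = np.where(np.abs(C_mri) >= np.abs(C_pet), C_mri, C_pet)
fused = idctn(C_fused, norm="ortho")
print(fused.shape)                                  # (64, 64)
```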
Affiliation(s)
- Guo Zhang
- School of Communication and Information Engineering, Chongqing University of Posts and Telecommunications, Chongqing, China
- School of Medical Information and Engineering, Southwest Medical University, Luzhou, China
- Xixi Nie
- Chongqing Key Laboratory of Image Cognition, College of Computer Science and Technology, Chongqing University of Posts and Telecommunications, Chongqing, China
- Bangtao Liu
- School of Medical Information and Engineering, Southwest Medical University, Luzhou, China
- Hong Yuan
- School of Medical Information and Engineering, Southwest Medical University, Luzhou, China
- Jin Li
- School of Medical Information and Engineering, Southwest Medical University, Luzhou, China
- Weiwei Sun
- School of Optoelectronic Engineering, Chongqing University of Posts and Telecommunications, Chongqing, China
- Shixin Huang
- School of Communication and Information Engineering, Chongqing University of Posts and Telecommunications, Chongqing, China
- Department of Scientific Research, The People’s Hospital of Yubei District of Chongqing City, Yubei, China
11
Huang W, Tan K, Zhang Z, Hu J, Dong S. A Review of Fusion Methods for Omics and Imaging Data. IEEE/ACM Trans Comput Biol Bioinform 2023; 20:74-93. [PMID: 35044920] [DOI: 10.1109/tcbb.2022.3143900]
Abstract
The development of omics data and biomedical images has greatly advanced the progress of precision medicine in diagnosis, treatment, and prognosis. The fusion of omics and imaging data, i.e., omics-imaging fusion, offers a new strategy for understanding complex diseases. However, due to a variety of issues such as the limited number of samples, high dimensionality of features, and heterogeneity of different data types, efficiently learning complementary or associated discriminative fusion information from omics and imaging data remains a challenge. Recently, numerous machine learning methods have been proposed to alleviate these problems. In this review, from the perspective of fusion levels and fusion methods, we first provide an overview of preprocessing and feature extraction methods for omics and imaging data, and comprehensively analyze and summarize the basic forms and variations of commonly used and newly emerging fusion methods, along with their advantages, disadvantages and the applicable scope. We then describe public datasets and compare experimental results of various fusion methods on the ADNI and TCGA datasets. Finally, we discuss future prospects and highlight remaining challenges in the field.
12
Zheng S, Zhu Z, Liu Z, Guo Z, Liu Y, Yang Y, Zhao Y. Multi-Modal Graph Learning for Disease Prediction. IEEE Trans Med Imaging 2022; 41:2207-2216. [PMID: 35286257] [DOI: 10.1109/tmi.2022.3159264]
Abstract
Benefiting from the powerful expressive capability of graphs, graph-based approaches have been popularly applied to handle multi-modal medical data and have achieved impressive performance in various biomedical applications. For disease prediction tasks, most existing graph-based methods tend to define the graph manually based on a specified modality (e.g., demographic information) and then integrate the other modalities to obtain the patient representation through Graph Representation Learning (GRL). However, constructing an appropriate graph in advance is not a simple matter for these methods, and the complex correlations between modalities are ignored. These factors inevitably limit the information available about the patient's condition for a reliable diagnosis. To this end, we propose an end-to-end Multi-modal Graph Learning framework (MMGL) for disease prediction with multi-modality. To effectively exploit the rich information across the modalities associated with the disease, modality-aware representation learning is proposed to aggregate the features of each modality by leveraging the correlation and complementarity between the modalities. Furthermore, instead of defining the graph manually, the latent graph structure is captured through an effective adaptive graph learning mechanism. It can be jointly optimized with the prediction model, thus revealing the intrinsic connections among samples. Our model is also applicable to the scenario of inductive learning for unseen data. An extensive group of experiments on two disease prediction tasks demonstrates that the proposed MMGL achieves more favorable performance. The code of MMGL is available at https://github.com/SsGood/MMGL.
13
Dashtestani H, Miguel HO, Condy EE, Zeytinoglu S, Millerhagen JB, Debnath R, Smith E, Adali T, Fox NA, Gandjbakhche AH. Structured sparse multiset canonical correlation analysis of simultaneous fNIRS and EEG provides new insights into the human action-observation network. Sci Rep 2022; 12:6878. [PMID: 35477980] [PMCID: PMC9046278] [DOI: 10.1038/s41598-022-10942-1]
Abstract
The action observation network (AON) is a network of brain regions involved in the execution and observation of a given action. The AON has been investigated in humans mostly using electroencephalography (EEG) and functional magnetic resonance imaging (fMRI), but the shared neural correlates of action observation and action execution are still unclear due to the lack of ecologically valid neuroimaging measures. In this study, we used concurrent EEG and functional near-infrared spectroscopy (fNIRS) to examine the AON during a live action-observation and action-execution paradigm. We developed structured sparse multiset canonical correlation analysis (ssmCCA) to perform EEG-fNIRS data fusion. Multiset CCA (mCCA) is a generalization of CCA to more than two sets of variables and is commonly used in medical multimodal data fusion. However, mCCA suffers from multi-collinearity, high dimensionality, unimodal feature selection, and loss of spatial information when interpreting the results. A limited number of participants (small sample size) is another problem, leading to overfitted models. Here, we added a graph-guided (structured) fused least absolute shrinkage and selection operator (LASSO) penalty to mCCA to conduct feature selection, incorporating structural information among the variables (i.e., brain regions). Benefiting from concurrent recordings of brain hemodynamic and electrophysiological responses, the proposed ssmCCA finds linear transforms of each modality such that the correlation between their projections is maximized. Our analysis of 21 right-handed participants indicated that the left inferior parietal region was active during both action execution and action observation. Our findings provide new insights into the neural correlates of the AON that are more fine-grained than the results from each individual EEG or fNIRS analysis, and validate the use of ssmCCA to fuse EEG and fNIRS datasets.
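The graph-guided fused LASSO penalty referred to above is, in its generic form, a combination of an L1 term and pairwise fusion terms over the edges of a prior graph; a standard way to write it (notation ours, not taken from the paper) is:

$$
\Omega(w) = \lambda_1 \lVert w \rVert_1 + \lambda_2 \sum_{(i,j)\in E} \lvert w_i - w_j \rvert,
$$

where $E$ is the edge set of the prior graph over variables (here, brain regions) and $\lambda_1,\lambda_2 \ge 0$ control sparsity and smoothness along the graph.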
Affiliation(s)
- Hadis Dashtestani
- Eunice Kennedy Shriver National Institute of Child Health and Human Development (NICHD), National Institutes of Health, Bethesda, MD, USA
- Helga O Miguel
- Eunice Kennedy Shriver National Institute of Child Health and Human Development (NICHD), National Institutes of Health, Bethesda, MD, USA
- Emma E Condy
- Eunice Kennedy Shriver National Institute of Child Health and Human Development (NICHD), National Institutes of Health, Bethesda, MD, USA
- Selin Zeytinoglu
- Department of Human Development and Quantitative Methodology, University of Maryland, College Park, MD, USA
- John B Millerhagen
- Eunice Kennedy Shriver National Institute of Child Health and Human Development (NICHD), National Institutes of Health, Bethesda, MD, USA
- Elizabeth Smith
- Behavioral Medicine and Clinical Psychology Department, Cincinnati Children's Hospital Medical Center, Cincinnati, OH, USA
- Tulay Adali
- Department of Computer Science and Electrical Engineering, University of Maryland Baltimore County, Baltimore, MD, USA
- Nathan A Fox
- Department of Human Development and Quantitative Methodology, University of Maryland, College Park, MD, USA
- Amir H Gandjbakhche
- Eunice Kennedy Shriver National Institute of Child Health and Human Development (NICHD), National Institutes of Health, Bethesda, MD, USA.
14
Farha NA, Al-Shargie F, Tariq U, Al-Nashash H. Brain Region-Based Vigilance Assessment Using Electroencephalography and Eye Tracking Data Fusion. IEEE Access 2022; 10:112199-112210. [DOI: 10.1109/access.2022.3216407]
Affiliation(s)
- Nadia Abu Farha
- Biomedical Engineering Graduate Program, American University of Sharjah, Sharjah, United Arab Emirates
- Fares Al-Shargie
- Biomedical Engineering Graduate Program, American University of Sharjah, Sharjah, United Arab Emirates
- Usman Tariq
- Biomedical Engineering Graduate Program, American University of Sharjah, Sharjah, United Arab Emirates
- Hasan Al-Nashash
- Biomedical Engineering Graduate Program, American University of Sharjah, Sharjah, United Arab Emirates
15
Mohammadi-Nejad AR, Hossein-Zadeh GA, Shahsavand Ananloo E, Soltanian-Zadeh H. The effect of groupness constraint on the sensitivity and specificity of canonical correlation analysis, a multi-modal anatomical and functional MRI study. Biomed Signal Process Control 2021. [DOI: 10.1016/j.bspc.2021.102698]
16
Silva RF, Plis SM. Multidataset Independent Subspace Analysis With Application to Multimodal Fusion. IEEE Trans Image Process 2020; 30:588-602. [PMID: 33031036] [PMCID: PMC7877797] [DOI: 10.1109/tip.2020.3028452]
Abstract
Unsupervised latent variable models, blind source separation (BSS) especially, enjoy a strong reputation for their interpretability. But they seldom combine the rich diversity of information available in multiple datasets, even though multiple datasets yield insightful joint solutions otherwise unavailable in isolation. We present a direct, principled approach to multidataset combination that takes advantage of multidimensional subspace structures. In turn, we extend BSS models to capture the underlying modes of shared and unique variability across and within datasets. Our approach leverages joint information from heterogeneous datasets in a flexible and synergistic fashion. We call this method multidataset independent subspace analysis (MISA). Methodological innovations exploiting the Kotz distribution for subspace modeling, in conjunction with a novel combinatorial optimization for evasion of local minima, enable MISA to produce a robust generalization of independent component analysis (ICA), independent vector analysis (IVA), and independent subspace analysis (ISA) in a single unified model. We highlight the utility of MISA for multimodal information fusion, including sample-poor regimes (N = 600) and low signal-to-noise ratios, promoting novel applications in both unimodal and multimodal brain imaging data.
Affiliation(s)
- Sergey M. Plis
- Tri-Institutional Center for Translational Research in Neuroimaging and Data Science (TReNDS), Georgia State University, Georgia Institute of Technology, and Emory University, Atlanta, GA
- The Mind Research Network, Albuquerque, NM, USA
17
Shao W, Wang T, Sun L, Dong T, Han Z, Huang Z, Zhang J, Zhang D, Huang K. Multi-task multi-modal learning for joint diagnosis and prognosis of human cancers. Med Image Anal 2020; 65:101795. [DOI: 10.1016/j.media.2020.101795]
18
Mosayebi R, Hossein-Zadeh GA. Correlated coupled matrix tensor factorization method for simultaneous EEG-fMRI data fusion. Biomed Signal Process Control 2020. [DOI: 10.1016/j.bspc.2020.102071]
19
Zhuang X, Yang Z, Cordes D. A technical review of canonical correlation analysis for neuroscience applications. Hum Brain Mapp 2020; 41:3807-3833. [PMID: 32592530] [PMCID: PMC7416047] [DOI: 10.1002/hbm.25090]
Abstract
Collecting comprehensive data sets from the same subject has become a standard in neuroscience research, and uncovering multivariate relationships among the collected data sets has gained significant attention in recent years. Canonical correlation analysis (CCA) is one of the powerful multivariate tools to jointly investigate relationships among multiple data sets; it can uncover disease or environmental effects in various modalities simultaneously and characterize changes during development, aging, and disease progression comprehensively. In the past 10 years, although an increasing number of studies have utilized CCA in multivariate analysis, simple conventional CCA dominates these applications. Multiple CCA-variant techniques have been proposed to improve model performance; however, their complicated multivariate formulations and less familiar capabilities have delayed wide adoption. Therefore, in this study, a comprehensive review of CCA and its variant techniques is provided. Detailed technical formulations with analytical and numerical solutions, current applications in neuroscience research, and the advantages and limitations of each CCA-related technique are discussed. Finally, a general guideline is provided on how to select the most appropriate CCA-related technique based on the properties of the available data sets and the particular targeted neuroscience questions.
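As a pointer to the formulation the review covers in detail, standard two-set CCA for data matrices $X$ and $Y$ seeks weight vectors $a$ and $b$ solving (generic notation, not quoted from the review):

$$
\max_{a,b}\ \rho = \frac{a^{\top}\Sigma_{XY}\,b}{\sqrt{a^{\top}\Sigma_{XX}\,a}\ \sqrt{b^{\top}\Sigma_{YY}\,b}},
$$

which reduces to the generalized eigenvalue problem $\Sigma_{XY}\Sigma_{YY}^{-1}\Sigma_{YX}\,a = \rho^{2}\,\Sigma_{XX}\,a$ (and symmetrically for $b$); the CCA variants surveyed in the review modify this objective with penalties, kernels, or additional data sets.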
Affiliation(s)
- Xiaowei Zhuang
- Cleveland Clinic Lou Ruvo Center for Brain Health, Las Vegas, Nevada, USA
- Zhengshi Yang
- Cleveland Clinic Lou Ruvo Center for Brain Health, Las Vegas, Nevada, USA
- Dietmar Cordes
- Cleveland Clinic Lou Ruvo Center for Brain Health, Las Vegas, Nevada, USA
- University of Colorado, Boulder, Colorado, USA
- Department of Brain Health, University of Nevada, Las Vegas, Nevada, USA
20
Feng J, Zhang SW, Chen L. Identification of Alzheimer's disease based on wavelet transformation energy feature of the structural MRI image and NN classifier. Artif Intell Med 2020; 108:101940. [DOI: 10.1016/j.artmed.2020.101940]
21
Zhou T, Liu M, Thung KH, Shen D. Latent Representation Learning for Alzheimer's Disease Diagnosis With Incomplete Multi-Modality Neuroimaging and Genetic Data. IEEE Trans Med Imaging 2019; 38:2411-2422. [PMID: 31021792] [PMCID: PMC8034601] [DOI: 10.1109/tmi.2019.2913158]
Abstract
The fusion of complementary information contained in multi-modality data [e.g., magnetic resonance imaging (MRI), positron emission tomography (PET), and genetic data] has advanced the progress of automated Alzheimer's disease (AD) diagnosis. However, multi-modality based AD diagnostic models are often hindered by the missing data, i.e., not all the subjects have complete multi-modality data. One simple solution used by many previous studies is to discard samples with missing modalities. However, this significantly reduces the number of training samples, thus leading to a sub-optimal classification model. Furthermore, when building the classification model, most existing methods simply concatenate features from different modalities into a single feature vector without considering their underlying associations. As features from different modalities are often closely related (e.g., MRI and PET features are extracted from the same brain region), utilizing their inter-modality associations may improve the robustness of the diagnostic model. To this end, we propose a novel latent representation learning method for multi-modality based AD diagnosis. Specifically, we use all the available samples (including samples with incomplete modality data) to learn a latent representation space. Within this space, we not only use samples with complete multi-modality data to learn a common latent representation, but also use samples with incomplete multi-modality data to learn independent modality-specific latent representations. We then project the latent representations to the label space for AD diagnosis. We perform experiments using 737 subjects from the Alzheimer's Disease Neuroimaging Initiative (ADNI) database, and the experimental results verify the effectiveness of our proposed method.
Affiliation(s)
- Tao Zhou
- Department of Radiology and Biomedical Research Imaging Center, University of North Carolina at Chapel Hill, Chapel Hill, NC 27599 USA
- Inception Institute of Artificial Intelligence, Abu Dhabi 51133, United Arab Emirates
- Mingxia Liu
- Department of Radiology and Biomedical Research Imaging Center, University of North Carolina at Chapel Hill, Chapel Hill, NC 27599 USA
- Kim-Han Thung
- Department of Radiology and Biomedical Research Imaging Center, University of North Carolina at Chapel Hill, Chapel Hill, NC 27599 USA
- Dinggang Shen
- Department of Radiology and Biomedical Research Imaging Center, University of North Carolina at Chapel Hill, Chapel Hill, NC 27599 USA
- Department of Brain and Cognitive Engineering, Korea University, Seoul 02841, Republic of Korea
22
Qadar MA, Aïssa-El-Bey A, Seghouane AK. Two dimensional CCA via penalized matrix decomposition for structure preserved fMRI data analysis. Digital Signal Processing 2019; 92:36-46. [DOI: 10.1016/j.dsp.2019.04.010]
23
Yang Z, Zhuang X, Bird C, Sreenivasan K, Mishra V, Banks S, Cordes D. Performing Sparse Regularization and Dimension Reduction Simultaneously in Multimodal Data Fusion. Front Neurosci 2019; 13:642. [PMID: 31333396] [PMCID: PMC6618346] [DOI: 10.3389/fnins.2019.00642]
Abstract
Collecting multiple modalities of neuroimaging data from the same subject is increasingly becoming the norm in clinical practice and research. Fusing multiple modalities to find related patterns is a challenge in neuroimaging analysis. Canonical correlation analysis (CCA) is commonly used as a symmetric data fusion technique to find related patterns among multiple modalities. In CCA-based data fusion, principal component analysis (PCA) is frequently applied as a preprocessing step to reduce the data dimension, followed by CCA on the dimension-reduced data. PCA, however, does not differentiate informative from non-informative voxels in the dimension reduction step. Sparse PCA (sPCA) extends traditional PCA by adding sparse regularization that assigns zero weights to non-informative voxels. In this study, sPCA is incorporated into CCA-based fusion analysis and applied to neuroimaging data. A cross-validation method is developed and validated to optimize the parameters in sPCA. Different simulations are carried out to evaluate the improvement gained by introducing the sparsity constraint into PCA. Four fusion methods, including sPCA+CCA, PCA+CCA, parallel ICA, and sparse CCA, were applied to structural and functional magnetic resonance imaging data of mild cognitive impairment subjects and normal controls. Our results indicate that sPCA can significantly reduce the impact of non-informative voxels and lead to improved statistical power in uncovering disease-related patterns in a fusion analysis.
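A minimal sketch of the sPCA+CCA idea (sparse dimension reduction per modality followed by CCA) is given below; scikit-learn's SparsePCA is used as a stand-in for the paper's implementation, the two "modalities" are synthetic matrices sharing an assumed latent factor, and all parameter values are arbitrary.

```python
# Minimal sketch, assuming synthetic data with a shared latent factor.
import numpy as np
from sklearn.decomposition import SparsePCA
from sklearn.cross_decomposition import CCA

rng = np.random.default_rng(4)
latent = rng.normal(size=(60, 5))                               # shared structure across modalities
smri = latent @ rng.normal(size=(5, 200)) + 0.5 * rng.normal(size=(60, 200))
fmri = latent @ rng.normal(size=(5, 300)) + 0.5 * rng.normal(size=(60, 300))

# Sparse PCA zeroes out uninformative features during dimension reduction
smri_low = SparsePCA(n_components=10, alpha=0.5, random_state=0).fit_transform(smri)
fmri_low = SparsePCA(n_components=10, alpha=0.5, random_state=0).fit_transform(fmri)

u, v = CCA(n_components=3).fit_transform(smri_low, fmri_low)
print(np.corrcoef(u[:, 0], v[:, 0])[0, 1])                      # first canonical correlation
```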
Affiliation(s)
- Zhengshi Yang
- Cleveland Clinic Lou Ruvo Center for Brain Health, Las Vegas, NV, United States
- Xiaowei Zhuang
- Cleveland Clinic Lou Ruvo Center for Brain Health, Las Vegas, NV, United States
- Christopher Bird
- Cleveland Clinic Lou Ruvo Center for Brain Health, Las Vegas, NV, United States
- Karthik Sreenivasan
- Cleveland Clinic Lou Ruvo Center for Brain Health, Las Vegas, NV, United States
- Virendra Mishra
- Cleveland Clinic Lou Ruvo Center for Brain Health, Las Vegas, NV, United States
- Sarah Banks
- Cleveland Clinic Lou Ruvo Center for Brain Health, Las Vegas, NV, United States
- Dietmar Cordes
- Cleveland Clinic Lou Ruvo Center for Brain Health, Las Vegas, NV, United States
- Departments of Psychology and Neuroscience, University of Colorado, Boulder, CO, United States
24
Abstract
OBJECTIVE: Canonical correlation analysis (CCA) is a data-driven method that has been successfully used in functional magnetic resonance imaging (fMRI) data analysis. Standard CCA extracts meaningful information from a pair of data sets by seeking pairs of linear combinations from two sets of variables with maximum pairwise correlation. So far, however, this method has been used without incorporating the prior information available for fMRI data. In this paper, we address this issue by proposing a new CCA method named pCCA (for projection CCA).
METHODS: The proposed method is obtained by projection onto a set of basis vectors that better characterize temporal information in the fMRI data set. A methodology is presented to describe the basis selection process using discrete cosine transform (DCT) basis functions. Employing the DCT guides the estimated canonical variates, yielding a more computationally efficient CCA procedure.
RESULTS: The performance gain of the proposed pCCA algorithm over standard and regularized CCA is illustrated on both simulated and real fMRI datasets from resting-state, block-paradigm task-related, and event-related experiments. The results show that the proposed pCCA successfully extracts latent components from task as well as resting-state datasets with increased specificity of the activated voxels.
CONCLUSION: In addition to offering a new CCA approach, when applied to fMRI data, the proposed algorithm adapts to variations in brain activity patterns and reveals the functionally connected brain regions.
SIGNIFICANCE: The proposed method can be seen as a regularized CCA method in which regularization is introduced via basis expansion, which has the advantage of enforcing smoothness on the canonical components.
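A rough sketch of the projection idea described above is given below: build DCT basis vectors over time, project each time course onto the leading (low-frequency) basis vectors, and run CCA on the projected data. The dimensions, signals, and number of retained basis vectors are assumptions for illustration, not the authors' exact pCCA algorithm.

```python
# Sketch of temporal DCT-basis projection followed by CCA, on assumed toy data.
import numpy as np
from scipy.fft import idct
from sklearn.cross_decomposition import CCA

rng = np.random.default_rng(5)
T, K = 200, 20                                      # time points, retained DCT basis vectors
X = rng.normal(size=(T, 15))                        # toy fMRI time courses (15 voxels)
Y = rng.normal(size=(T, 10))                        # second data set (e.g., another region)

# Columns of B are the first K orthonormal DCT-II basis functions over time
B = idct(np.eye(T), axis=0, norm="ortho")[:, :K]

# Project each time course onto the low-frequency DCT subspace
X_proj = B @ (B.T @ X)
Y_proj = B @ (B.T @ Y)

u, v = CCA(n_components=2).fit_transform(X_proj, Y_proj)
print(np.corrcoef(u[:, 0], v[:, 0])[0, 1])          # first canonical correlation
```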
25
Liu L, Chen S, Zhang F, Wu FX, Pan Y, Wang J. Deep convolutional neural network for automatically segmenting acute ischemic stroke lesion in multi-modality MRI. Neural Comput Appl 2019. [DOI: 10.1007/s00521-019-04096-x]
26
Mohammadi-Nejad AR, Mahmoudzadeh M, Hassanpour MS, Wallois F, Muzik O, Papadelis C, Hansen A, Soltanian-Zadeh H, Gelovani J, Nasiriavanaki M. Neonatal brain resting-state functional connectivity imaging modalities. Photoacoustics 2018; 10:1-19. [PMID: 29511627] [PMCID: PMC5832677] [DOI: 10.1016/j.pacs.2018.01.003]
Abstract
Infancy is the most critical period in human brain development. Studies demonstrate that subtle brain abnormalities during this stage of life may greatly affect the developmental processes of newborn infants. One of the rapidly developing methods for early characterization of abnormal brain development is functional connectivity of the brain at rest. While the majority of resting-state studies have been conducted using magnetic resonance imaging (MRI), there is clear evidence that resting-state functional connectivity (rs-FC) can also be evaluated using other imaging modalities. The aim of this review is to compare the advantages and limitations of the different modalities used to map infants' brain functional connectivity at rest. In addition, we introduce photoacoustic tomography, a novel functional neuroimaging modality, as a complementary modality for functional mapping of the infant brain.
Affiliation(s)
- Ali-Reza Mohammadi-Nejad
- CIPCE, School of Electrical and Computer Engineering, College of Engineering, University of Tehran, Tehran, Iran
- Departments of Radiology and Research Administration, Henry Ford Health System, Detroit, MI, USA
- Mahdi Mahmoudzadeh
- INSERM, U1105, Université de Picardie, CURS, F80036, Amiens, France
- INSERM U1105, Exploration Fonctionnelles du Système Nerveux Pédiatrique, South University Hospital, F80054, Amiens Cedex, France
- Fabrice Wallois
- INSERM, U1105, Université de Picardie, CURS, F80036, Amiens, France
- INSERM U1105, Exploration Fonctionnelles du Système Nerveux Pédiatrique, South University Hospital, F80054, Amiens Cedex, France
- Otto Muzik
- Department of Pediatrics, Wayne State University School of Medicine, Detroit, MI, USA
- Department of Radiology, Wayne State University School of Medicine, Detroit, MI, USA
- Christos Papadelis
- Boston Children’s Hospital, Department of Medicine, Harvard Medical School, Boston, MA, USA
- Anne Hansen
- Boston Children’s Hospital, Department of Medicine, Harvard Medical School, Boston, MA, USA
- Hamid Soltanian-Zadeh
- CIPCE, School of Electrical and Computer Engineering, College of Engineering, University of Tehran, Tehran, Iran
- Departments of Radiology and Research Administration, Henry Ford Health System, Detroit, MI, USA
- Department of Radiology, Wayne State University School of Medicine, Detroit, MI, USA
- Juri Gelovani
- Department of Biomedical Engineering, Wayne State University, Detroit, MI, USA
- Molecular Imaging Program, Barbara Ann Karmanos Cancer Institute, Wayne State University, Detroit, MI, USA
- Mohammadreza Nasiriavanaki
- Department of Biomedical Engineering, Wayne State University, Detroit, MI, USA
- Department of Neurology, Wayne State University School of Medicine, Detroit, MI, USA
- Molecular Imaging Program, Barbara Ann Karmanos Cancer Institute, Wayne State University, Detroit, MI, USA