1. Yuan X, Chen M, Ding P, Gan A, Gong A, Chu Z, Nan W, Fu Y, Cheng Y. Cross-Domain Identification of Multisite Major Depressive Disorder Using End-to-End Brain Dynamic Attention Network. IEEE Trans Neural Syst Rehabil Eng 2024; 32:33-42. [PMID: 38090844] [DOI: 10.1109/tnsre.2023.3341923]
Abstract
Establishing objective and quantitative imaging markers at the individual level can assist in the accurate diagnosis of Major Depressive Disorder (MDD). However, the clinical heterogeneity of MDD and the distribution shift across multisite data reduce identification accuracy. To address these issues, the Brain Dynamic Attention Network (BDANet) is proposed and applied to bimodal scans from 2055 participants of the Rest-meta-MDD consortium. The end-to-end BDANet contains two crucial components. The Dynamic BrainGraph Generator dynamically attends to and represents topological relationships between Regions of Interest, overcoming the limitations of static methods. The Ensemble Classifier is constructed to obfuscate domain sources and thereby achieve inter-domain alignment. Finally, BDANet dynamically generates sample-specific brain graphs guided by the downstream recognition task. The proposed BDANet achieved an accuracy of 81.6%. The regions with high attribution for classification were mainly located in the insula, cingulate cortex, and auditory cortex. The level of brain connectivity in the p24 region was negatively correlated ( [Formula: see text]) with the severity of MDD. Additionally, sex differences in connectivity strength were observed in specific brain regions and functional subnetworks ( [Formula: see text] or [Formula: see text]). These findings, based on a large multisite dataset, support the conclusion that BDANet can better address the clinical heterogeneity of MDD and the distribution shift of multisite data. They also illustrate the potential utility of BDANet for personalized identification, treatment, and intervention of MDD.
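The dynamic, sample-specific brain graph described in this entry can be illustrated with a minimal NumPy sketch: attention scores between ROI time series define a per-subject adjacency matrix. The projection matrices `Wq`/`Wk` and all shapes are hypothetical stand-ins, not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(z, axis=-1):
    z = z - z.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def dynamic_attention_graph(X, Wq, Wk):
    """Build a sample-specific ROI adjacency from attention scores.

    X      : (n_roi, T) ROI time series for one subject
    Wq, Wk : (T, d) hypothetical learned projection matrices
    Returns a row-stochastic (n_roi, n_roi) attention adjacency.
    """
    Q, K = X @ Wq, X @ Wk                    # project each ROI series
    scores = Q @ K.T / np.sqrt(Q.shape[1])   # scaled dot-product attention
    return softmax(scores, axis=1)

n_roi, T, d = 6, 20, 8
X = rng.standard_normal((n_roi, T))
A = dynamic_attention_graph(X, rng.standard_normal((T, d)),
                            rng.standard_normal((T, d)))
```

Because the adjacency is recomputed from each subject's own time series, the graph topology adapts per sample instead of being fixed a priori.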
2. Noh JH, Kim JH, Yang HD. Classification of Alzheimer's Progression Using fMRI Data. Sensors (Basel) 2023; 23:6330. [PMID: 37514624] [PMCID: PMC10383967] [DOI: 10.3390/s23146330]
Abstract
In the last three decades, the development of functional magnetic resonance imaging (fMRI) has significantly contributed to the understanding of the brain, functional brain mapping, and resting-state brain networks. Given the recent successes of deep learning in various fields, we propose a 3D-CNN-LSTM classification model to diagnose health conditions in the following classes: cognitively normal (CN), early mild cognitive impairment (EMCI), late mild cognitive impairment (LMCI), and Alzheimer's disease (AD). The proposed method employs spatial and temporal feature extractors: the former uses a U-Net architecture to extract spatial features, and the latter uses long short-term memory (LSTM) to extract temporal features. Prior to feature extraction, we performed four-step preprocessing to remove noise from the fMRI data. In the comparative experiments, we trained each of the three models by adjusting the time dimension. The network exhibited an average accuracy of 96.4% under five-fold cross-validation. These results show that the proposed method has high potential for identifying the progression of Alzheimer's disease by analyzing 4D fMRI data.
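The two-stage design above (a spatial extractor per volume, then a temporal model over the resulting codes) can be sketched in NumPy. The block-average "extractor" is a deliberately crude stand-in for the paper's U-Net, and the single-layer LSTM cell is generic; shapes and weights are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

def spatial_features(vol, n_blocks=4):
    """Stand-in for the U-Net spatial extractor: coarse block averages."""
    parts = np.array_split(vol.ravel(), n_blocks)
    return np.array([p.mean() for p in parts])

def lstm_step(x, h, c, W):
    """One LSTM step; W maps [x, h] to the four gate pre-activations."""
    z = W @ np.concatenate([x, h])
    i, f, g, o = np.split(z, 4)
    c = sigmoid(f) * c + sigmoid(i) * np.tanh(g)
    h = sigmoid(o) * np.tanh(c)
    return h, c

T, H = 10, 5
frames = rng.standard_normal((T, 8, 8, 8))   # toy 4D fMRI: T small volumes
W = rng.standard_normal((4 * H, 4 + H)) * 0.1
h, c = np.zeros(H), np.zeros(H)
for t in range(T):                           # temporal model over per-volume
    h, c = lstm_step(spatial_features(frames[t]), h, c, W)  # spatial codes
```

The final hidden state `h` would feed a classification head over the CN/EMCI/LMCI/AD classes.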
Affiliation(s)
- Ju-Hyeon Noh, Jun-Hyeok Kim, Hee-Deok Yang: Department of Computer Engineering, Chosun University, Gwangju 61452, Republic of Korea
3. Peng L, Wang N, Dvornek N, Zhu X, Li X. FedNI: Federated Graph Learning With Network Inpainting for Population-Based Disease Prediction. IEEE Trans Med Imaging 2023; 42:2032-2043. [PMID: 35788451] [DOI: 10.1109/tmi.2022.3188728]
Abstract
Graph Convolutional Neural Networks (GCNs) are widely used for graph analysis. Specifically, in medical applications, GCNs can be used for disease prediction on a population graph, where graph nodes represent individuals and edges represent inter-individual similarities. However, GCNs rely on vast amounts of data, which are challenging to collect for a single medical institution. In addition, a critical challenge most medical institutions continue to face is performing disease prediction in isolation on incomplete data. To address these issues, Federated Learning (FL) allows isolated local institutions to collaboratively train a global model without data sharing. In this work, we propose a framework, FedNI, that leverages network inpainting and inter-institutional data via FL. Specifically, we first federatively train a missing node and edge predictor using a graph generative adversarial network (GAN) to complete the missing information of local networks. We then train a global GCN node classifier across institutions using a federated graph learning platform. This design enables us to build more accurate machine learning models by combining federated learning with graph learning approaches. We demonstrate that our federated model outperforms local and baseline FL methods by significant margins on two public neuroimaging datasets.
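The federated aggregation step that underlies frameworks like the one above can be sketched as classic federated averaging: each site trains locally, and a server averages the weights in proportion to local sample counts. This is a generic FedAvg sketch, not FedNI's specific protocol; the site counts below are made up.

```python
import numpy as np

def fed_avg(site_weights, site_sizes):
    """Aggregate per-site model weights, weighted by local sample count."""
    total = sum(site_sizes)
    return sum(w * (n / total) for w, n in zip(site_weights, site_sizes))

# three hypothetical institutions with different amounts of local data
w_sites = [np.full(4, 1.0), np.full(4, 2.0), np.full(4, 4.0)]
global_w = fed_avg(w_sites, site_sizes=[10, 10, 20])
```

Only the weight vectors leave each institution; the raw neuroimaging data never do.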
4. Zhao C, Zhan L, Thompson PM, Huang H. Explainable Contrastive Multiview Graph Representation of Brain, Mind, and Behavior. Med Image Comput Comput Assist Interv 2022; 13431:356-365. [PMID: 39051030] [PMCID: PMC11267032] [DOI: 10.1007/978-3-031-16431-6_34]
Abstract
Understanding the intrinsic patterns of the human brain is important for making inferences about the mind and brain-behavior associations. Electrophysiological methods (i.e., MEG/EEG) provide direct measures of neural activity without the effect of vascular confounds. The blood-oxygen-level-dependent (BOLD) signal of functional MRI (fMRI) reveals spatial and temporal brain activity across different brain regions. However, it is unclear how to associate high-temporal-resolution electrophysiological measures with high-spatial-resolution fMRI signals. Here, we present a novel interpretable model for coupling the structural and functional activity of the brain based on heterogeneous contrastive graph representation. The proposed method links manifest variables of the brain (i.e., MEG, MRI, fMRI, and behavioral performance) and quantifies the intrinsic coupling strength of the different modal signals. It learns heterogeneous node and graph representations by contrasting structural and temporal views of multimodal brain data. The first experiment, with 1200 subjects from the Human Connectome Project (HCP), shows that the proposed method outperforms existing approaches in predicting individual gender and in localizing the brain regions most important for sex differences. The second experiment associates the structural and temporal views between low-level sensory regions and high-level cognitive ones. The experimental results demonstrate that the dependence between structural and temporal views varies spatially across different modal variants. The proposed method thus enables heterogeneous biomarker explanations for different brain measurements.
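Contrasting structural and temporal views, as this entry describes, is commonly done with an InfoNCE-style loss: embeddings of the two views of the same subject are pulled together, mismatched pairs pushed apart. The sketch below assumes generic per-subject view embeddings and a temperature `tau`; it is not the paper's exact objective.

```python
import numpy as np

rng = np.random.default_rng(2)

def info_nce(z1, z2, tau=0.5):
    """InfoNCE loss between two views; matching rows are positives."""
    z1 = z1 / np.linalg.norm(z1, axis=1, keepdims=True)
    z2 = z2 / np.linalg.norm(z2, axis=1, keepdims=True)
    sim = z1 @ z2.T / tau                          # cosine similarities
    log_p = sim - np.log(np.exp(sim).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_p))                # -log softmax of positives

z_struct = rng.standard_normal((8, 16))            # structural-view embeddings
z_temp = z_struct + 0.1 * rng.standard_normal((8, 16))   # aligned temporal view
loss_aligned = info_nce(z_struct, z_temp)
loss_random = info_nce(z_struct, rng.standard_normal((8, 16)))
```

A lower loss for aligned views is exactly the signal that drives the coupling-strength estimates.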
Affiliation(s)
- Chongyue Zhao, Liang Zhan, Heng Huang: Department of Electrical and Computer Engineering, University of Pittsburgh, Pittsburgh, PA, USA
- Paul M. Thompson: Imaging Genetics Center, University of Southern California, Los Angeles, CA, USA
5. Qureshi MB, Azad L, Qureshi MS, Aslam S, Aljarbouh A, Fayaz M. Brain Decoding Using fMRI Images for Multiple Subjects through Deep Learning. Comput Math Methods Med 2022; 2022:1124927. [PMID: 35273647] [PMCID: PMC8904097] [DOI: 10.1155/2022/1124927]
Abstract
Substantial information related to human cerebral conditions can be decoded through noninvasive evaluation techniques such as fMRI. Exploring the neuronal activity of the human brain can reveal a person's thoughts, such as what the subject is perceiving, thinking, or visualizing, and deep learning techniques can be used to decode the multifaceted patterns of the brain in response to external stimuli. Existing techniques can explore and classify the thoughts of a human subject from fMRI imaging data. However, fMRI scans are high-dimensional volumetric images that require long training times when fed to a deep learning network, so more efficient learning of high-dimensional, high-level features in less training time and more accurate interpretation of brain voxels with lower misclassification error are still needed. In this research, we propose an improved CNN technique in which features are functionally aligned. The optimal features are selected after dimensionality reduction, in which the high-dimensional feature vector is transformed into a low-dimensional space through auto-adjusted weights and a combination of the best activation functions. Furthermore, we reduce training time by using the Swish activation function, increasing the efficiency of the model. Finally, the experimental results are evaluated and compared with other classifiers, demonstrating the superior accuracy of the proposed model.
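The Swish activation mentioned above has a simple closed form, x * sigmoid(beta * x); it is smooth and non-monotonic, which is the property the authors exploit. A minimal NumPy definition:

```python
import numpy as np

def swish(x, beta=1.0):
    """Swish activation: x * sigmoid(beta * x)."""
    return x / (1.0 + np.exp(-beta * x))

x = np.array([-2.0, 0.0, 2.0])
y = swish(x)
```

Unlike ReLU, Swish lets small negative values pass through attenuated rather than zeroing them, which can ease gradient flow in deep networks.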
Affiliation(s)
- Muhammad Bilal Qureshi: Department of Computer Science & IT, University of Lakki Marwat, Lakki Marwat 28420, KPK, Pakistan
- Laraib Azad: Department of Computer Science, Shaheed Zulfikar Ali Bhutto Institute of Science and Technology, Islamabad 44000, Pakistan
- Muhammad Shuaib Qureshi, Ayman Aljarbouh, Muhammad Fayaz: Department of Computer Science, School of Arts and Sciences, University of Central Asia, Kyrgyzstan
- Sheraz Aslam: Department of Electrical Engineering, Computer Engineering, and Informatics, Cyprus University of Technology, Cyprus
6. Jiang Z, Wang Y, Shi C, Wu Y, Hu R, Chen S, Hu S, Wang X, Qiu B. Attention module improves both performance and interpretability of four-dimensional functional magnetic resonance imaging decoding neural network. Hum Brain Mapp 2022; 43:2683-2692. [PMID: 35212436] [PMCID: PMC9057093] [DOI: 10.1002/hbm.25813]
Abstract
Decoding brain cognitive states from neuroimaging signals is an important topic in neuroscience. In recent years, deep neural networks (DNNs) have been recruited for multiple brain-state decoding tasks and have achieved good performance. However, the open question of how to interpret the DNN black box remains unanswered. Capitalizing on advances in machine learning, we integrated attention modules into brain decoders to facilitate an in-depth interpretation of DNN channels. A four-dimensional (4D) convolution operation was also included to extract temporo-spatial interactions within the fMRI signal. The experiments showed that the proposed model obtains very high accuracy (97.4%) and outperforms previous studies on the seven task benchmarks from the Human Connectome Project (HCP) dataset. The visualization analysis further illustrated the hierarchical emergence of task-specific masks with depth. Finally, the model was retrained to regress individual traits within the HCP and to classify viewed images from the BOLD5000 dataset, respectively. Transfer learning also achieved good performance. Further visualization analysis showed that, after transfer learning, low-level attention masks remained similar to the source domain, whereas high-level attention masks changed adaptively. In conclusion, the proposed 4D model with attention modules performed well and facilitated interpretation of DNNs, which is helpful for subsequent research.
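A channel-attention module of the kind this entry integrates can be sketched in squeeze-and-excitation style: globally pool each channel, pass the summary through a small bottleneck, and gate the channels with the resulting weights. Shapes, the bottleneck width, and the weight matrices below are illustrative assumptions, not the paper's architecture.

```python
import numpy as np

rng = np.random.default_rng(3)
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

def channel_attention(feat, W1, W2):
    """Squeeze-and-excitation-style channel attention.

    feat : (C, T, X, Y, Z) hypothetical 4D conv features (channels first)
    """
    squeeze = feat.reshape(feat.shape[0], -1).mean(axis=1)   # global pooling
    gate = sigmoid(W2 @ np.tanh(W1 @ squeeze))               # per-channel gate
    return feat * gate[:, None, None, None, None], gate

C = 8
feat = rng.standard_normal((C, 4, 6, 6, 6))
W1 = rng.standard_normal((C // 2, C))
W2 = rng.standard_normal((C, C // 2))
out, gate = channel_attention(feat, W1, W2)
```

The learned gates double as an interpretability signal: channels with near-zero gates contribute little to the decoding decision.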
Affiliation(s)
- Zhoufan Jiang, Yanming Wang, ChenWei Shi, Yueyang Wu, Rongjie Hu, Shishuo Chen, Sheng Hu: Center for Biomedical Imaging, University of Science and Technology of China, Hefei, Anhui, China
- Xiaoxiao Wang, Bensheng Qiu: Center for Biomedical Imaging, University of Science and Technology of China, Hefei, Anhui, China; Institute of Artificial Intelligence, Hefei Comprehensive National Science Center, Hefei, Anhui, China
7. Du B, Cheng X, Duan Y, Ning H. fMRI Brain Decoding and Its Applications in Brain-Computer Interface: A Survey. Brain Sci 2022; 12:228. [PMID: 35203991] [PMCID: PMC8869956] [DOI: 10.3390/brainsci12020228]
Abstract
Brain neural activity decoding is an important branch of neuroscience research and a key technology for the brain-computer interface (BCI). Researchers initially developed simple linear models and machine learning algorithms to classify and recognize brain activities. With the great success of deep learning in image recognition and generation, deep neural networks (DNNs) have been engaged in reconstructing visual stimuli from human brain activity measured via functional magnetic resonance imaging (fMRI). In this paper, we review brain activity decoding models based on machine learning and deep learning algorithms. Specifically, we focus on the decoding models currently attracting the most attention: the variational auto-encoder (VAE), the generative adversarial network (GAN), and the graph convolutional network (GCN). Furthermore, brain-decoding-enabled fMRI-based BCI applications in mental and psychological disease treatment are presented to illustrate the close relationship between brain decoding and BCI. Finally, existing challenges and future research directions are addressed.
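Of the three model families the survey highlights, the GCN has the most compact core: one propagation step multiplies node features by a symmetrically normalized adjacency with self-loops. A minimal sketch of that rule, with a toy random graph standing in for a brain connectivity graph:

```python
import numpy as np

rng = np.random.default_rng(4)

def gcn_layer(A, H, W):
    """One GCN propagation step: ReLU(D^-1/2 (A+I) D^-1/2 H W)."""
    A_hat = A + np.eye(A.shape[0])             # add self-loops
    d = A_hat.sum(axis=1)
    D_inv_sqrt = np.diag(1.0 / np.sqrt(d))
    return np.maximum(D_inv_sqrt @ A_hat @ D_inv_sqrt @ H @ W, 0)

A = (rng.random((5, 5)) > 0.5).astype(float)
A = np.triu(A, 1); A = A + A.T                 # symmetric, no self-loops
H = rng.standard_normal((5, 3))                # toy node features
H1 = gcn_layer(A, H, rng.standard_normal((3, 2)))
```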
Affiliation(s)
- Bing Du, Xiaomu Cheng, Huansheng Ning: School of Computer and Communication Engineering, University of Science and Technology Beijing, Beijing 100083, China
- Yiping Duan: Department of Electronic Engineering, Tsinghua University, Beijing 100084, China
8. Mittal A, Aggarwal P, Pessoa L, Gupta A. Robust Brain State Decoding using Bidirectional Long Short Term Memory Networks in functional MRI. Proc Indian Conf Comput Vis Graph Image Process 2021; 2021:12. [PMID: 36350798] [PMCID: PMC9639335] [DOI: 10.1145/3490035.3490269]
Abstract
Decoding the brain states underlying cognitive processes by learning discriminative feature representations has recently gained considerable interest in brain imaging studies. In particular, there has been an impetus to encode the dynamics of brain functioning by analyzing the temporal information available in fMRI data. Long short-term memory (LSTM), a class of machine learning models with a "memory" component that retains previously seen temporal information, has increasingly been observed to perform well in applications with dynamic temporal behavior, including brain state decoding. Because of the dynamics and inherent latency of fMRI BOLD responses, future temporal context is crucial; however, it is neither encoded nor captured by the conventional LSTM model. This paper performs robust brain state decoding by encapsulating information from both past and future instances of fMRI data via a bidirectional LSTM, which allows the dynamics of the BOLD response to be modeled explicitly without any delay adjustment. To this end, the input sequence is fed in normal time order to one LSTM network and in reverse time order to another. The hidden activations of the forward and reverse directions are collated to build the "memory" of the model and are used to robustly predict the brain state at every time instance. Working memory data from the Human Connectome Project (HCP) were used for validation, and the model was observed to perform 18% better than its unidirectional counterpart in terms of accuracy in predicting brain states.
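The forward/reverse collation described above can be sketched directly: run one LSTM pass over the sequence, another over the reversed sequence, re-reverse the latter, and concatenate the hidden states per time step. The toy LSTM and all weights below are illustrative, not the paper's trained model.

```python
import numpy as np

rng = np.random.default_rng(5)
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

def lstm_pass(X, W, H):
    """Run a toy LSTM over sequence X (T, d); return hidden states (T, H)."""
    h, c = np.zeros(H), np.zeros(H)
    out = []
    for x in X:
        z = W @ np.concatenate([x, h])
        i, f, g, o = np.split(z, 4)
        c = sigmoid(f) * c + sigmoid(i) * np.tanh(g)
        h = sigmoid(o) * np.tanh(c)
        out.append(h)
    return np.stack(out)

T, d, H = 12, 6, 4
X = rng.standard_normal((T, d))                 # fMRI features per time point
Wf = rng.standard_normal((4 * H, d + H)) * 0.1  # forward-direction weights
Wb = rng.standard_normal((4 * H, d + H)) * 0.1  # backward-direction weights
h_fwd = lstm_pass(X, Wf, H)                     # past context
h_bwd = lstm_pass(X[::-1], Wb, H)[::-1]         # future context, re-reversed
h_bi = np.concatenate([h_fwd, h_bwd], axis=1)   # (T, 2H) joint "memory"
```

Each row of `h_bi` sees both earlier and later BOLD samples, which is how the model absorbs the hemodynamic lag without explicit delay correction.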
Affiliation(s)
- Luiz Pessoa: Laboratory of Cognition and Emotion, University of Maryland, USA
9. Design of Deep Learning Model for Task-Evoked fMRI Data Classification. Comput Intell Neurosci 2021; 2021:6660866. [PMID: 34422034] [PMCID: PMC8378948] [DOI: 10.1155/2021/6660866]
Abstract
Machine learning methods have been successfully applied to neuroimaging signals, one application being the decoding of specific task states from functional magnetic resonance imaging (fMRI) data. In this paper, we propose a model that simultaneously utilizes the spatial and temporal sequential information of fMRI data with deep neural networks to classify fMRI task states. We designed a convolutional network module and a recurrent network module to extract the spatial and temporal features of fMRI data, respectively. In particular, we also add an attention mechanism to the recurrent network module, which more effectively highlights the brain activation state at the moment of reaction. We evaluated the model using task-evoked fMRI data from the Human Connectome Project (HCP) dataset; the classification accuracy reached 94.31%, and the experimental results show that the model can effectively distinguish brain states under different task stimuli.
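The temporal attention added to the recurrent module can be sketched as softmax-weighted pooling over the recurrent hidden states, so time points near the reaction dominate the summary. The scoring vector `v` is a hypothetical learned parameter.

```python
import numpy as np

rng = np.random.default_rng(6)

def temporal_attention(H, v):
    """Weight recurrent states by softmax scores over time steps."""
    scores = H @ v
    a = np.exp(scores - scores.max())
    a = a / a.sum()                     # attention distribution over time
    return a @ H, a                     # attention-pooled summary, weights

H = rng.standard_normal((15, 8))        # 15 time steps, 8 hidden units
context, weights = temporal_attention(H, rng.standard_normal(8))
```

`context` replaces the usual last-hidden-state summary before the classification layer.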
10. Sobczak F, He Y, Sejnowski TJ, Yu X. Predicting the fMRI Signal Fluctuation with Recurrent Neural Networks Trained on Vascular Network Dynamics. Cereb Cortex 2021; 31:826-844. [PMID: 32940658] [PMCID: PMC7906791] [DOI: 10.1093/cercor/bhaa260]
Abstract
Resting-state functional MRI (rs-fMRI) studies have revealed specific low-frequency hemodynamic signal fluctuations (<0.1 Hz) in the brain, which could be related to neuronal oscillations through the neurovascular coupling mechanism. Given the vascular origin of the fMRI signal, it remains challenging to separate the neural correlates of global rs-fMRI signal fluctuations from other confounding sources. However, the slow oscillation detected from individual vessels by single-vessel fMRI correlates strongly with neural oscillations. Here, we use recurrent neural networks (RNNs) to predict the future temporal evolution of the rs-fMRI slow oscillation in both rodent and human brains. RNNs trained with vessel-specific rs-fMRI signals encode the unique oscillatory dynamics of the brain, yielding more effective predictions than a conventional autoregressive model. This RNN-based predictive modeling of rs-fMRI datasets from the Human Connectome Project (HCP) reveals brain-state-specific characteristics, demonstrating an inverse relationship between global rs-fMRI signal fluctuation and internal default-mode network (DMN) correlation. The RNN prediction method presents a unique data-driven encoding scheme to specify potential brain-state differences based on global fMRI signal fluctuation, not solely on global variance.
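The autoregressive baseline the RNN is compared against can be sketched as a least-squares AR fit followed by one-step-ahead prediction. The synthetic slow oscillation below (a noisy sinusoid) only stands in for a vessel-specific rs-fMRI trace.

```python
import numpy as np

rng = np.random.default_rng(7)

def fit_ar(x, p):
    """Fit an order-p autoregressive model by least squares."""
    X = np.column_stack([x[p - k - 1:len(x) - k - 1] for k in range(p)])
    y = x[p:]
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)
    return coef

def ar_predict(x, coef):
    """One-step-ahead prediction from the last p samples."""
    p = len(coef)
    return float(np.dot(coef, x[-1:-p - 1:-1]))

t = np.arange(300)
x = np.sin(2 * np.pi * 0.05 * t) + 0.05 * rng.standard_normal(300)
coef = fit_ar(x, p=5)
pred = ar_predict(x, coef)                # forecast of the next sample
```

The study's point is that an RNN trained on vascular dynamics beats exactly this kind of linear forecaster.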
Affiliation(s)
- Filip Sobczak: Translational Neuroimaging and Neural Control Group, High Field Magnetic Resonance Department, Max Planck Institute for Biological Cybernetics, 72076 Tuebingen, Germany; Graduate Training Centre of Neuroscience, International Max Planck Research School, University of Tuebingen, 72074 Tuebingen, Germany
- Yi He: Translational Neuroimaging and Neural Control Group, High Field Magnetic Resonance Department, Max Planck Institute for Biological Cybernetics, 72076 Tuebingen, Germany; Danish Research Centre for Magnetic Resonance, 2650 Hvidovre, Denmark
- Terrence J Sejnowski: Howard Hughes Medical Institute, Computational Neurobiology Laboratory, Salk Institute for Biological Studies, La Jolla, CA 92037, USA; Division of Biological Sciences, University of California, San Diego, La Jolla, CA 92093, USA
- Xin Yu: Translational Neuroimaging and Neural Control Group, High Field Magnetic Resonance Department, Max Planck Institute for Biological Cybernetics, 72076 Tuebingen, Germany; Athinoula A. Martinos Center for Biomedical Imaging, Massachusetts General Hospital and Harvard Medical School, Charlestown, MA 02129, USA
11. Gadgil S, Zhao Q, Pfefferbaum A, Sullivan EV, Adeli E, Pohl KM. Spatio-Temporal Graph Convolution for Resting-State fMRI Analysis. Med Image Comput Comput Assist Interv 2020; 12267:528-538. [PMID: 33257918] [PMCID: PMC7700758] [DOI: 10.1007/978-3-030-59728-3_52]
Abstract
The Blood-Oxygen-Level-Dependent (BOLD) signal of resting-state fMRI (rs-fMRI) records the temporal dynamics of intrinsic functional networks in the brain. However, existing deep learning methods applied to rs-fMRI either neglect the functional dependency between different brain regions in a network or discard the information in the temporal dynamics of brain activity. To overcome those shortcomings, we propose to formulate functional connectivity networks within the context of spatio-temporal graphs. We train a spatio-temporal graph convolutional network (ST-GCN) on short sub-sequences of the BOLD time series to model the non-stationary nature of functional connectivity. Simultaneously, the model learns the importance of graph edges within ST-GCN to gain insight into the functional connectivities contributing to the prediction. In analyzing the rs-fMRI of the Human Connectome Project (HCP, N = 1,091) and the National Consortium on Alcohol and Neurodevelopment in Adolescence (NCANDA, N = 773), ST-GCN is significantly more accurate than common approaches in predicting gender and age based on BOLD signals. Furthermore, the brain regions and functional connections significantly contributing to the predictions of our model are important markers according to the neuroscience literature.
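An ST-GCN block of the kind described above can be sketched as a per-frame graph convolution over ROIs (modulated by a learnable edge-importance mask) followed by a temporal convolution across a short BOLD window. All shapes, the toy adjacency, and the filter values are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(8)

def st_gcn_block(X, A, M, W, w_t):
    """Spatio-temporal graph convolution on a short BOLD sub-sequence.

    X : (T, n_roi, d) node features over a window
    A : (n_roi, n_roi) normalized adjacency; M : learnable edge importance
    W : (d, d_out) spatial weights; w_t : (k,) temporal filter
    """
    S = np.einsum('ij,tjd,dk->tik', A * M, X, W)   # per-frame graph conv
    k = len(w_t)                                    # temporal conv ("valid")
    return sum(w_t[i] * S[i:len(S) - k + 1 + i] for i in range(k))

T, n, d = 10, 5, 3
A = np.full((n, n), 1.0 / n)                        # toy normalized graph
X = rng.standard_normal((T, n, d))
out = st_gcn_block(X, A, M=np.ones((n, n)), W=rng.standard_normal((d, 2)),
                   w_t=np.array([0.25, 0.5, 0.25]))
```

After training, the learned mask `M` is what identifies which functional connections drive the prediction.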
Affiliation(s)
- Soham Gadgil: Computer Science Department, Stanford University, Stanford, USA
- Qingyu Zhao: School of Medicine, Stanford University, Stanford, USA
- Adolf Pfefferbaum: School of Medicine, Stanford University, Stanford, USA; Center of Health Sciences, SRI International, Menlo Park, USA
- Ehsan Adeli: Computer Science Department, Stanford University, Stanford, USA; School of Medicine, Stanford University, Stanford, USA
- Kilian M Pohl: School of Medicine, Stanford University, Stanford, USA; Center of Health Sciences, SRI International, Menlo Park, USA
12. A 3D Convolutional Encapsulated Long Short-Term Memory (3DConv-LSTM) Model for Denoising fMRI Data. Med Image Comput Comput Assist Interv 2020; 12267:479-488. [PMID: 33251531] [DOI: 10.1007/978-3-030-59728-3_47]
Abstract
Functional magnetic resonance imaging (fMRI) data are typically contaminated by noise introduced by head motion, physiological processes, and thermal effects. To mitigate noise artifacts in fMRI data, a variety of denoising methods have been developed; however, they remove noise factors derived from the whole fMRI time series and are therefore not applicable to real-time fMRI data analysis. In the present study, we develop a generally applicable, deep learning based fMRI denoising method to generate noise-free, realistic individual fMRI volumes (time points). In particular, we develop a fully data-driven 3D convolutional encapsulated Long Short-Term Memory (3DConv-LSTM) approach to generate noise-free fMRI volumes, regularized by an adversarial network that makes the generated volumes more realistic by fooling a critic network. The 3DConv-LSTM model also integrates a gate-controlled self-attention mechanism to memorize short-term dependencies and historical information within a memory pool. We have evaluated our method on both task and resting-state fMRI data. Both qualitative and quantitative results demonstrate that the proposed method outperforms state-of-the-art deep learning alternatives.
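The gate-controlled memory pool mentioned above can be sketched as a gated blend between the stored memory and each newly encoded volume: a sigmoid gate decides how much history to keep. The gate parameterization and dimensions below are assumptions for illustration, not the paper's module.

```python
import numpy as np

rng = np.random.default_rng(11)
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

def gated_memory_update(m, h, Wg):
    """Gate-controlled memory update: keep g of the old pool, 1-g of the new."""
    g = sigmoid(Wg @ np.concatenate([m, h]))
    return g * m + (1.0 - g) * h

m = np.zeros(5)                          # memory pool
for _ in range(8):                       # stream of encoded fMRI volumes
    m = gated_memory_update(m, rng.standard_normal(5),
                            rng.standard_normal((5, 10)))
```

Because the update needs only the current volume and the pool, it is compatible with the real-time setting the paper targets.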
13. Davatzikos C, Sotiras A, Fan Y, Habes M, Erus G, Rathore S, Bakas S, Chitalia R, Gastounioti A, Kontos D. Precision diagnostics based on machine learning-derived imaging signatures. Magn Reson Imaging 2019; 64:49-61. [PMID: 31071473] [PMCID: PMC6832825] [DOI: 10.1016/j.mri.2019.04.012]
Abstract
The complexity of modern multi-parametric MRI has increasingly challenged conventional interpretations of such images. Machine learning has emerged as a powerful approach to integrating diverse and complex imaging data into signatures of diagnostic and predictive value. It has also allowed us to progress from group comparisons to imaging biomarkers that offer value on an individual basis. We review several directions of research around this topic, emphasizing the use of machine learning in personalized predictions of clinical outcome, in breaking down broad umbrella diagnostic categories into more detailed and precise subtypes, and in non-invasively estimating cancer molecular characteristics. These methods and studies contribute to the field of precision medicine, by introducing more specific diagnostic and predictive biomarkers of clinical outcome, therefore pointing to better matching of treatments to patients.
Affiliation(s)
- Christos Davatzikos, Aristeidis Sotiras, Yong Fan, Mohamad Habes, Guray Erus, Saima Rathore, Spyridon Bakas, Rhea Chitalia, Aimilia Gastounioti, Despina Kontos: Center for Biomedical Image Computing and Analytics, University of Pennsylvania, United States of America
14. Li H, Fan Y. Interpretable, highly accurate brain decoding of subtly distinct brain states from functional MRI using intrinsic functional networks and long short-term memory recurrent neural networks. Neuroimage 2019; 202:116059. [PMID: 31362049] [PMCID: PMC6819260] [DOI: 10.1016/j.neuroimage.2019.116059]
Abstract
Decoding brain functional states underlying cognitive processes from functional MRI (fMRI) data using multivariate pattern analysis (MVPA) techniques has achieved promising performance for characterizing brain activation patterns and providing neurofeedback signals. However, it remains challenging to decode subtly distinct brain states for individual fMRI data points due to varying temporal durations and dependency among different cognitive processes. In this study, we develop a deep learning based framework for brain decoding by leveraging recent advances in intrinsic functional network modeling and sequence modeling using long short-term memory (LSTM) recurrent neural networks (RNNs). Particularly, subject-specific intrinsic functional networks (FNs) are computed from resting-state fMRI data and are used to characterize functional signals of task fMRI data with a compact representation for building brain decoding models, and LSTM RNNs are adopted to learn brain decoding mappings between functional profiles and brain states. Validation results on fMRI data from the HCP dataset have demonstrated that brain decoding models built on training data using the proposed method could learn discriminative latent feature representations and effectively distinguish subtly distinct working memory tasks of different subjects with significantly higher accuracy than conventional decoding models. Informative FNs of the brain decoding models identified as brain activation patterns of working memory tasks were largely consistent with the literature. The method also obtained promising decoding performance on motor and social cognition tasks. Our results suggest that LSTM RNNs in conjunction with FNs could build interpretable, highly accurate brain decoding models.
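The compact representation step described above, expressing each task-fMRI frame as loadings on subject-specific functional networks, amounts to a least-squares projection of every time point onto the FN spatial maps. The synthetic data below are constructed so the loadings are exactly recoverable; sizes are illustrative.

```python
import numpy as np

rng = np.random.default_rng(9)

def fn_profiles(task_data, FN):
    """Express each fMRI frame as loadings on intrinsic functional networks.

    task_data : (T, n_vox) task fMRI; FN : (n_fn, n_vox) spatial maps.
    Least-squares projection gives a compact (T, n_fn) decoding input.
    """
    coef, *_ = np.linalg.lstsq(FN.T, task_data.T, rcond=None)
    return coef.T

n_fn, n_vox, T = 4, 50, 30
FN = rng.standard_normal((n_fn, n_vox))    # subject-specific networks
loadings = rng.standard_normal((T, n_fn))
task = loadings @ FN                       # synthetic noise-free task data
recovered = fn_profiles(task, FN)
```

The resulting (T, n_fn) profile sequence, rather than raw voxel data, is what the LSTM decoder consumes.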
Affiliation(s)
- Hongming Li
- Center for Biomedical Image Computing and Analytics, Department of Radiology, Perelman School of Medicine, University of Pennsylvania, Philadelphia, PA, 19104, USA
- Yong Fan
- Center for Biomedical Image Computing and Analytics, Department of Radiology, Perelman School of Medicine, University of Pennsylvania, Philadelphia, PA, 19104, USA
15
Dvornek NC, Li X, Zhuang J, Duncan JS. Jointly Discriminative and Generative Recurrent Neural Networks for Learning from fMRI. Machine Learning in Medical Imaging. MLMI (Workshop) 2019; 11861:382-390. [PMID: 32274470 DOI: 10.1007/978-3-030-32692-0_44] [Citation(s) in RCA: 10] [Impact Index Per Article: 1.7] [Reference Citation Analysis] [Abstract] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 01/09/2023]
Abstract
Recurrent neural networks (RNNs) were designed for dealing with time-series data and have recently been used for creating predictive models from functional magnetic resonance imaging (fMRI) data. However, gathering large fMRI datasets for learning is a difficult task. Furthermore, network interpretability is unclear. To address these issues, we utilize multitask learning and design a novel RNN-based model that learns to discriminate between classes while simultaneously learning to generate the fMRI time-series data. Employing the long short-term memory (LSTM) structure, we develop a discriminative model based on the hidden state and a generative model based on the cell state. The addition of the generative model constrains the network to learn functional communities represented by the LSTM nodes that are both consistent with the data generation as well as useful for the classification task. We apply our approach to the classification of subjects with autism vs. healthy controls using several datasets from the Autism Brain Imaging Data Exchange. Experiments show that our jointly discriminative and generative model improves classification learning while also producing robust and meaningful functional communities for better model understanding.
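The split the authors describe (discriminative branch on the LSTM hidden state, generative branch on the cell state) amounts to a multitask loss. The NumPy sketch below combines a cross-entropy term with a next-time-point reconstruction term; the weighting `alpha` and all shapes are assumptions for illustration, not the paper's settings:

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def joint_loss(h, c, x_next, label, W_cls, W_gen, alpha=1.0):
    """Discriminative cross-entropy from h plus generative reconstruction MSE from c."""
    p = softmax(W_cls @ h)            # class probabilities from the hidden state
    ce = -np.log(p[label] + 1e-12)
    x_hat = W_gen @ c                 # predict the next fMRI time point from the cell state
    mse = np.mean((x_hat - x_next) ** 2)
    return float(ce + alpha * mse)

rng = np.random.default_rng(1)
d, n_roi = 8, 20
h, c = rng.normal(size=d), rng.normal(size=d)   # toy LSTM states
x_next = rng.normal(size=n_roi)                 # toy target time point
loss = joint_loss(h, c, x_next, label=1,
                  W_cls=rng.normal(0, 0.1, (2, d)),
                  W_gen=rng.normal(0, 0.1, (n_roi, d)))
```

Because the generative term depends on the cell state, minimizing the sum pushes the LSTM nodes toward representations that both reproduce the signal and separate the classes, which is the constraint the abstract credits for more meaningful functional communities.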
Affiliation(s)
- Nicha C Dvornek
- Department of Radiology & Biomedical Imaging, Yale School of Medicine, New Haven, CT, USA
- Xiaoxiao Li
- Department of Biomedical Engineering, Yale University, New Haven, CT, USA
- Juntang Zhuang
- Department of Biomedical Engineering, Yale University, New Haven, CT, USA
- James S Duncan
- Department of Radiology & Biomedical Imaging, Yale School of Medicine, New Haven, CT, USA
- Department of Biomedical Engineering, Yale University, New Haven, CT, USA
- Department of Electrical Engineering, Yale University, New Haven, CT, USA
- Department of Statistics and Data Science, Yale University, New Haven, CT, USA
16
Gao Y, Zhang Y, Cao Z, Guo X, Zhang J. Decoding Brain States From fMRI Signals by Using Unsupervised Domain Adaptation. IEEE J Biomed Health Inform 2019; 24:1677-1685. [PMID: 31514162 DOI: 10.1109/jbhi.2019.2940695] [Citation(s) in RCA: 15] [Impact Index Per Article: 2.5] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/09/2022]
Abstract
With the development of deep learning in medical image analysis, decoding brain states from functional magnetic resonance imaging (fMRI) signals has made significant progress. Previous studies often utilized deep neural networks to automatically classify brain activity patterns related to diverse cognitive states. However, due to the individual differences between subjects and the variation in acquisition parameters across devices, the inconsistency in data distributions degrades the performance of cross-subject decoding. Besides, most current networks were trained in a supervised way, which is not suitable for the actual scenarios in which massive amounts of data are unlabeled. To address these problems, we proposed the deep cross-subject adaptation decoding (DCAD) framework to decipher the brain states. The proposed volume-based 3D feature extraction architecture can automatically learn the common spatiotemporal features of labeled source data to generate a distinct descriptor. Then, the distance between the source and target distributions is minimized via an unsupervised domain adaptation (UDA) method, which can help to accurately decode the cognitive states across subjects. The performance of the DCAD was evaluated on task-fMRI (tfMRI) dataset from the Human Connectome Project (HCP). Experimental results showed that the proposed method achieved the state-of-the-art decoding performance with mean 81.9% and 84.9% accuracies under two conditions (4 brain states and 9 brain states respectively) of working memory task. Our findings also demonstrated that UDA can mitigate the impact of the data distribution shift, thereby providing a superior choice for increasing the performance of cross-subject decoding without depending on annotations.
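The UDA step described above minimizes a distance between source and target feature distributions. One standard choice for such a distance, shown here purely as an illustration (the DCAD paper's exact criterion may differ), is the kernel maximum mean discrepancy:

```python
import numpy as np

def rbf_mmd2(X, Y, sigma=1.0):
    """Biased estimate of squared maximum mean discrepancy under an RBF kernel."""
    def k(A, B):
        d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)  # pairwise squared distances
        return np.exp(-d2 / (2 * sigma ** 2))
    return k(X, X).mean() + k(Y, Y).mean() - 2 * k(X, Y).mean()

rng = np.random.default_rng(2)
src = rng.normal(0.0, 1.0, (50, 16))   # descriptors from labeled source subjects
tgt = rng.normal(0.5, 1.0, (50, 16))   # unlabeled target subjects, shifted distribution
gap = rbf_mmd2(src, tgt)               # positive: the two distributions differ
same = rbf_mmd2(src, src)              # zero: identical samples
```

In training, a term like `rbf_mmd2(src, tgt)` would be added to the classification loss so that the feature extractor learns subject-invariant descriptors without needing target labels.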
17
Yan W, Calhoun V, Song M, Cui Y, Yan H, Liu S, Fan L, Zuo N, Yang Z, Xu K, Yan J, Lv L, Chen J, Chen Y, Guo H, Li P, Lu L, Wan P, Wang H, Wang H, Yang Y, Zhang H, Zhang D, Jiang T, Sui J. Discriminating schizophrenia using recurrent neural network applied on time courses of multi-site FMRI data. EBioMedicine 2019; 47:543-552. [PMID: 31420302 PMCID: PMC6796503 DOI: 10.1016/j.ebiom.2019.08.023] [Citation(s) in RCA: 100] [Impact Index Per Article: 16.7] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/01/2019] [Revised: 08/09/2019] [Accepted: 08/09/2019] [Indexed: 11/22/2022] Open
Abstract
BACKGROUND Current fMRI-based classification approaches mostly use functional connectivity or spatial maps as input, instead of exploring the dynamic time courses directly, which does not leverage the full temporal information. METHODS Motivated by the ability of recurrent neural networks (RNN) in capturing dynamic information of time sequences, we propose a multi-scale RNN model, which enables classification between 558 schizophrenia and 542 healthy controls by using time courses of fMRI independent components (ICs) directly. To increase interpretability, we also propose a leave-one-IC-out looping strategy for estimating the top contributing ICs. FINDINGS Accuracies of 83.2% and 80.2% were obtained respectively for the multi-site pooling and leave-one-site-out transfer classification. Subsequently, dorsal striatum and cerebellum components contribute the top two group-discriminative time courses, which is true even when adopting different brain atlases to extract time series. INTERPRETATION This is the first attempt to apply a multi-scale RNN model directly on fMRI time courses for classification of mental disorders, and shows the potential for multi-scale RNN-based neuroimaging classifications. FUNDING: Natural Science Foundation of China, the Strategic Priority Research Program of the Chinese Academy of Sciences, National Institutes of Health Grants, National Science Foundation.
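The leave-one-IC-out looping strategy from the METHODS can be mimicked with a toy classifier: ablate one component's time course at a time and rank components by the resulting accuracy drop. Everything below (the linear scoring rule, the random data, the fact that only IC 0 carries signal) is a constructed example, not the paper's model:

```python
import numpy as np

rng = np.random.default_rng(3)
n_sub, n_ic, T = 60, 5, 30
w_true = np.array([2.0, 0.0, 0.0, 0.0, 0.0])     # by construction only IC 0 is informative
X = rng.normal(size=(n_sub, n_ic, T))            # toy IC time courses per subject
y = ((X.mean(axis=2) @ w_true) > 0).astype(int)  # group labels derived from IC 0

def accuracy(data):
    """Toy stand-in for the trained classifier: threshold a weighted mean time course."""
    pred = ((data.mean(axis=2) @ w_true) > 0).astype(int)
    return float((pred == y).mean())

full_acc = accuracy(X)
drops = []
for ic in range(n_ic):          # leave-one-IC-out loop
    X_loo = X.copy()
    X_loo[:, ic, :] = 0.0       # ablate one component's time course
    drops.append(full_acc - accuracy(X_loo))
top_ic = int(np.argmax(drops))  # component whose removal hurts accuracy most
```

In the paper the same loop would wrap the trained multi-scale RNN rather than a linear rule; the ranking by accuracy drop is what identifies the group-discriminative components.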
Affiliation(s)
- Weizheng Yan
- National Laboratory of Pattern Recognition and Brainnetome Center, Institute of Automation, Chinese Academy of Sciences, Beijing 100190, China; University of Chinese Academy of Sciences, Beijing 100049, China
- Vince Calhoun
- Tri-Institutional Center for Translational Research in Neuroimaging and Data Science (TReNDS) Center, Atlanta 30303, GA, USA
- Ming Song
- National Laboratory of Pattern Recognition and Brainnetome Center, Institute of Automation, Chinese Academy of Sciences, Beijing 100190, China; University of Chinese Academy of Sciences, Beijing 100049, China
- Yue Cui
- National Laboratory of Pattern Recognition and Brainnetome Center, Institute of Automation, Chinese Academy of Sciences, Beijing 100190, China; University of Chinese Academy of Sciences, Beijing 100049, China
- Hao Yan
- Peking University Sixth Hospital, Institute of Mental Health, Beijing 100191, China; Key Laboratory of Mental Health, Ministry of Health, Peking University, Beijing 100191, China
- Shengfeng Liu
- National Laboratory of Pattern Recognition and Brainnetome Center, Institute of Automation, Chinese Academy of Sciences, Beijing 100190, China; University of Chinese Academy of Sciences, Beijing 100049, China
- Lingzhong Fan
- National Laboratory of Pattern Recognition and Brainnetome Center, Institute of Automation, Chinese Academy of Sciences, Beijing 100190, China; University of Chinese Academy of Sciences, Beijing 100049, China
- Nianming Zuo
- National Laboratory of Pattern Recognition and Brainnetome Center, Institute of Automation, Chinese Academy of Sciences, Beijing 100190, China; University of Chinese Academy of Sciences, Beijing 100049, China
- Zhengyi Yang
- National Laboratory of Pattern Recognition and Brainnetome Center, Institute of Automation, Chinese Academy of Sciences, Beijing 100190, China; University of Chinese Academy of Sciences, Beijing 100049, China
- Kaibin Xu
- National Laboratory of Pattern Recognition and Brainnetome Center, Institute of Automation, Chinese Academy of Sciences, Beijing 100190, China; University of Chinese Academy of Sciences, Beijing 100049, China
- Jun Yan
- Peking University Sixth Hospital, Institute of Mental Health, Beijing 100191, China; Key Laboratory of Mental Health, Ministry of Health, Peking University, Beijing 100191, China
- Luxian Lv
- Department of Psychiatry, Henan Mental Hospital, The Second Affiliated Hospital of Xinxiang Medical University, Xinxiang 453002, Henan, China; Henan Key Lab of Biological Psychiatry, Xinxiang Medical University, Xinxiang 453002, Henan, China
- Jun Chen
- Department of Radiology, Renmin Hospital of Wuhan University, Wuhan 430060, Hubei, China
- Yunchun Chen
- Department of Psychiatry, Xijing Hospital, The Fourth Military Medical University, Xi'an 710032, Shaanxi, China
- Hua Guo
- Zhumadian Psychiatric Hospital, Zhumadian 463000, Henan, China
- Peng Li
- Peking University Sixth Hospital, Institute of Mental Health, Beijing 100191, China; Key Laboratory of Mental Health, Ministry of Health, Peking University, Beijing 100191, China
- Lin Lu
- Peking University Sixth Hospital, Institute of Mental Health, Beijing 100191, China; Key Laboratory of Mental Health, Ministry of Health, Peking University, Beijing 100191, China
- Ping Wan
- Zhumadian Psychiatric Hospital, Zhumadian 463000, Henan, China
- Huaning Wang
- Department of Psychiatry, Xijing Hospital, The Fourth Military Medical University, Xi'an 710032, Shaanxi, China
- Huiling Wang
- Department of Radiology, Renmin Hospital of Wuhan University, Wuhan 430060, Hubei, China
- Yongfeng Yang
- Department of Psychiatry, Henan Mental Hospital, The Second Affiliated Hospital of Xinxiang Medical University, Xinxiang 453002, Henan, China; Henan Key Lab of Biological Psychiatry, Xinxiang Medical University, Xinxiang 453002, Henan, China; Key Laboratory for NeuroInformation of Ministry of Education, School of Life Science and Technology, University of Electronic Science and Technology of China, Chengdu 610054, Sichuan, China
- Hongxing Zhang
- Department of Psychiatry, Henan Mental Hospital, The Second Affiliated Hospital of Xinxiang Medical University, Xinxiang 453002, Henan, China; Department of Psychology, Xinxiang Medical University, Xinxiang 453002, Henan, China
- Dai Zhang
- Peking University Sixth Hospital, Institute of Mental Health, Beijing 100191, China; Key Laboratory of Mental Health, Ministry of Health, Peking University, Beijing 100191, China; Center for Life Sciences/PKU-IDG/McGovern Institute for Brain Research, Peking University, Beijing 100871, China
- Tianzi Jiang
- National Laboratory of Pattern Recognition and Brainnetome Center, Institute of Automation, Chinese Academy of Sciences, Beijing 100190, China; University of Chinese Academy of Sciences, Beijing 100049, China; Key Laboratory for NeuroInformation of Ministry of Education, School of Life Science and Technology, University of Electronic Science and Technology of China, Chengdu 610054, Sichuan, China; Queensland Brain Institute, University of Queensland, Brisbane 4072, QLD, Australia; CAS Center for Excellence in Brain Science and Intelligence Technology, Institute of Automation, Chinese Academy of Sciences, Beijing 100190, China
- Jing Sui
- National Laboratory of Pattern Recognition and Brainnetome Center, Institute of Automation, Chinese Academy of Sciences, Beijing 100190, China; University of Chinese Academy of Sciences, Beijing 100049, China; CAS Center for Excellence in Brain Science and Intelligence Technology, Institute of Automation, Chinese Academy of Sciences, Beijing 100190, China
18
Li H, Fan Y. Early prediction of Alzheimer's disease dementia based on baseline hippocampal MRI and 1-year follow-up cognitive measures using deep recurrent neural networks. Proceedings. IEEE International Symposium on Biomedical Imaging 2019; 2019:368-371. [PMID: 31803346 DOI: 10.1109/isbi.2019.8759397] [Citation(s) in RCA: 16] [Impact Index Per Article: 2.7] [Reference Citation Analysis] [Abstract] [Key Words] [Subscribe] [Scholar Register] [Indexed: 01/10/2023]
Abstract
Multi-modal biological, imaging, and neuropsychological markers have demonstrated promising performance for distinguishing Alzheimer's disease (AD) patients from cognitively normal elders. However, it remains difficult to early predict when and which mild cognitive impairment (MCI) individuals will convert to AD dementia. Informed by pattern classification studies which have demonstrated that pattern classifiers built on longitudinal data could achieve better classification performance than those built on cross-sectional data, we develop a deep learning model based on recurrent neural networks (RNNs) to learn informative representation and temporal dynamics of longitudinal cognitive measures of individual subjects and combine them with baseline hippocampal MRI for building a prognostic model of AD dementia progression. Experimental results on a large cohort of MCI subjects have demonstrated that the deep learning model could learn informative measures from longitudinal data for characterizing the progression of MCI subjects to AD dementia, and the prognostic model could early predict AD progression with high accuracy.
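The prognostic design this abstract describes (an RNN summarizing longitudinal cognitive measures, fused with baseline hippocampal MRI) can be caricatured as below; a plain Elman recurrence and random weights stand in for the authors' trained deep RNN, so the dimensions and output mean nothing clinically:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def prognosis_score(cog_seq, mri_feat, U, W, v):
    """Summarize longitudinal cognitive visits with a recurrence, fuse with baseline MRI."""
    h = np.zeros(U.shape[0])
    for x in cog_seq:                  # e.g. cognitive assessments at successive visits
        h = np.tanh(U @ h + W @ x)     # plain Elman recurrence, not the paper's architecture
    joint = np.concatenate([h, mri_feat])
    return float(sigmoid(v @ joint))   # pseudo-probability of conversion to AD dementia

rng = np.random.default_rng(4)
d, n_cog, n_mri = 8, 5, 12
U = rng.normal(0, 0.3, (d, d))
W = rng.normal(0, 0.3, (d, n_cog))
v = rng.normal(0, 0.3, d + n_mri)
cog_seq = rng.normal(size=(3, n_cog))  # three follow-up cognitive assessments (random)
mri_feat = rng.normal(size=n_mri)      # baseline hippocampal MRI descriptor (random)
p = prognosis_score(cog_seq, mri_feat, U, W, v)
```

The point of the fusion step is that the recurrent summary captures the trajectory of decline while the MRI vector anchors it to baseline anatomy; a trained model would learn `U`, `W`, and `v` from the MCI cohort.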
Affiliation(s)
- Hongming Li
- Department of Radiology, Perelman School of Medicine, University of Pennsylvania, Philadelphia, PA, 19104, USA
- Yong Fan
- Department of Radiology, Perelman School of Medicine, University of Pennsylvania, Philadelphia, PA, 19104, USA
19
Li H, Fan Y. Early prediction of Alzheimer's disease dementia based on baseline hippocampal MRI and 1-year follow-up cognitive measures using deep recurrent neural networks. Proceedings. IEEE International Symposium on Biomedical Imaging 2019; 2019:368-371. [PMID: 31803346 PMCID: PMC6892161 DOI: 10.1109/isbi.2019.8759397] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.2] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/28/2023]
Abstract
Multi-modal biological, imaging, and neuropsychological markers have demonstrated promising performance for distinguishing Alzheimer's disease (AD) patients from cognitively normal elders. However, it remains difficult to early predict when and which mild cognitive impairment (MCI) individuals will convert to AD dementia. Informed by pattern classification studies which have demonstrated that pattern classifiers built on longitudinal data could achieve better classification performance than those built on cross-sectional data, we develop a deep learning model based on recurrent neural networks (RNNs) to learn informative representation and temporal dynamics of longitudinal cognitive measures of individual subjects and combine them with baseline hippocampal MRI for building a prognostic model of AD dementia progression. Experimental results on a large cohort of MCI subjects have demonstrated that the deep learning model could learn informative measures from longitudinal data for characterizing the progression of MCI subjects to AD dementia, and the prognostic model could early predict AD progression with high accuracy.
Affiliation(s)
- Hongming Li
- Department of Radiology, Perelman School of Medicine, University of Pennsylvania, Philadelphia, PA, 19104, USA
- Yong Fan
- Department of Radiology, Perelman School of Medicine, University of Pennsylvania, Philadelphia, PA, 19104, USA