1. Jung W, Jeon E, Kang E, Suk HI. EAG-RS: A Novel Explainability-Guided ROI-Selection Framework for ASD Diagnosis via Inter-Regional Relation Learning. IEEE Trans Med Imaging 2024;43:1400-1411. PMID: 38015693. DOI: 10.1109/tmi.2023.3337362.
Abstract
Deep learning models based on resting-state functional magnetic resonance imaging (rs-fMRI) have been widely used to diagnose brain diseases, particularly autism spectrum disorder (ASD). Existing studies have leveraged the functional connectivity (FC) of rs-fMRI, achieving notable classification performance. However, they have significant limitations, including the limited information captured when linear, low-order FC is used as the model input; the failure to consider individual characteristics (i.e., different symptoms or varying stages of severity) among patients with ASD; and the non-explainability of the decision process. To address these limitations, we propose a novel explainability-guided region of interest (ROI) selection (EAG-RS) framework that identifies non-linear high-order functional associations among brain regions by leveraging an explainable artificial intelligence technique and selects class-discriminative regions for brain disease identification. The proposed framework includes three steps: (i) inter-regional relation learning to estimate non-linear relations through random seed-based network masking, (ii) explainable connection-wise relevance score estimation to explore high-order relations between functional connections, and (iii) non-linear high-order FC-based diagnosis-informative ROI selection and classifier learning to identify ASD. We validated the effectiveness of the proposed method through experiments on the Autism Brain Imaging Data Exchange (ABIDE) dataset, demonstrating that it outperforms comparative methods across various evaluation metrics. Furthermore, we qualitatively analyzed the selected ROIs and identified ASD subtypes linked to previous neuroscientific studies.
2. Jeong S, Ko W, Mulyadi AW, Suk HI. Deep Efficient Continuous Manifold Learning for Time Series Modeling. IEEE Trans Pattern Anal Mach Intell 2024;46:171-184. PMID: 37768794. DOI: 10.1109/tpami.2023.3320125.
Abstract
Modeling non-Euclidean data is drawing extensive attention along with the unprecedented successes of deep neural networks in diverse fields. In particular, symmetric positive definite (SPD) matrices are being actively studied in computer vision, signal processing, and medical image analysis, owing to their ability to encode beneficial statistical representations. However, because of their rigid constraints, optimization over SPD matrices is challenging and computationally costly, especially when they are incorporated into a deep learning framework. In this paper, we propose a framework that exploits a diffeomorphism between Riemannian manifolds and a Cholesky space, by which it becomes feasible not only to solve optimization problems efficiently but also to greatly reduce computational costs. Further, for dynamic modeling of time-series data, we devise a continuous manifold learning method by systematically integrating a manifold ordinary differential equation and a gated recurrent neural network. It is worth noting that, thanks to the convenient parameterization of matrices in a Cholesky space, training our proposed network equipped with Riemannian geometric metrics is straightforward. We demonstrate through experiments on regular and irregular time-series datasets that our proposed model can be trained efficiently and reliably and that it outperforms existing manifold methods and state-of-the-art methods in various time-series tasks.
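The Cholesky-space parameterization described above can be illustrated with a short sketch (an illustrative reconstruction, not the authors' implementation): an SPD matrix maps to its lower-triangular Cholesky factor, and log-transforming the positive diagonal yields an unconstrained Euclidean parameterization over which optimization is straightforward.

```python
import numpy as np

def spd_to_cholesky(S):
    """Map an SPD matrix to its lower-triangular Cholesky factor L (S = L @ L.T)."""
    return np.linalg.cholesky(S)

def cholesky_to_params(L):
    """Unconstrained parameterization: keep the strict lower triangle,
    log-transform the positive diagonal."""
    return np.tril(L, k=-1) + np.diag(np.log(np.diag(L)))

def params_to_spd(P):
    """Inverse map: rebuild the Cholesky factor, then the SPD matrix."""
    L = np.tril(P, k=-1) + np.diag(np.exp(np.diag(P)))
    return L @ L.T

rng = np.random.default_rng(0)
A = rng.standard_normal((4, 4))
S = A @ A.T + 4.0 * np.eye(4)                 # SPD by construction
S_rec = params_to_spd(cholesky_to_params(spd_to_cholesky(S)))
print(np.allclose(S, S_rec))                  # True
```

Because the parameter matrix `P` is unconstrained, a gradient step on it always maps back to a valid SPD matrix, which is the property that makes training with standard deep learning optimizers straightforward.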
3. Sohn J, Jeon E, Jung W, Kang E, Suk HI. Module of Axis-based Nexus Attention for weakly supervised object localization. Sci Rep 2023;13:18588. PMID: 37903879; PMCID: PMC10616293. DOI: 10.1038/s41598-023-45796-8. Received April 5, 2023; accepted October 24, 2023. Open access.
Abstract
Weakly supervised object localization remains challenging: models tend to identify only the discriminative parts of an object rather than segmenting the entire object. To tackle this problem, corruption-based approaches have been devised, which train on non-discriminative regions by corrupting (e.g., erasing) the input images or intermediate feature maps. However, this approach requires an additional hyperparameter, the corruption threshold, to determine the degree of corruption, can unfavorably disrupt training, and tends to localize object regions coarsely. In this paper, we propose a novel approach, the Module of Axis-based Nexus Attention (MoANA), which adaptively activates less discriminative regions along with class-discriminative regions without an additional hyperparameter and localizes an entire object precisely. Specifically, MoANA consists of three mechanisms: (1) triple-view attention representation, (2) attention expansion, and (3) feature calibration. Unlike other attention-based methods that train a coarse attention map with the same values across elements in feature maps, MoANA trains fine-grained values in an attention map by assigning a different attention value to each element. We validated MoANA by comparing it with various methods, analyzed the effect of each component, and visualized attention maps to provide insights into the calibration.
Affiliation(s)
- Junghyo Sohn: Department of Artificial Intelligence, Korea University, Seoul 02841, Republic of Korea
- Eunjin Jeon: Department of Brain and Cognitive Engineering, Korea University, Seoul 02841, Republic of Korea
- Wonsik Jung: Department of Brain and Cognitive Engineering, Korea University, Seoul 02841, Republic of Korea
- Eunsong Kang: Department of Brain and Cognitive Engineering, Korea University, Seoul 02841, Republic of Korea
- Heung-Il Suk: Department of Artificial Intelligence and Department of Brain and Cognitive Engineering, Korea University, Seoul 02841, Republic of Korea
4. Jun E, Jeong S, Heo DW, Suk HI. Medical Transformer: Universal Encoder for 3-D Brain MRI Analysis. IEEE Trans Neural Netw Learn Syst 2023;PP:1-11 (early access). PMID: 37738193. DOI: 10.1109/tnnls.2023.3308712.
Abstract
Transfer learning has attracted considerable attention in medical image analysis because of the limited number of annotated 3-D medical datasets available for training data-driven deep learning models in the real world. We propose Medical Transformer, a novel transfer learning framework that effectively models 3-D volumetric images as sequences of 2-D image slices. To improve the high-level representation with 3-D spatial relations, we use a multiview approach that leverages information from the three planes of the 3-D volume while providing parameter-efficient training. To build a source model generally applicable to various tasks, we pretrain the model via self-supervised learning (SSL), with masked encoding vector prediction as a proxy task, on a large-scale dataset of normal, healthy brain magnetic resonance imaging (MRI) scans. Our pretrained model is evaluated on three downstream tasks widely studied in brain MRI research: 1) brain disease diagnosis; 2) brain age prediction; and 3) brain tumor segmentation. Experimental results demonstrate that Medical Transformer outperforms state-of-the-art (SOTA) transfer learning methods, reducing the number of parameters by up to approximately 92% for classification and regression tasks and 97% for the segmentation task, and it also performs well when only a fraction of the training samples is used.
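The core data representation, modeling a 3-D volume as three sequences of 2-D slices (one per anatomical plane), can be sketched as follows; the mapping of array axes to plane names is an assumption for illustration, not taken from the paper:

```python
import numpy as np

def multiview_slices(volume):
    """Split a 3-D volume into three sequences of 2-D slices, one per plane.
    Which array axis corresponds to axial/coronal/sagittal is illustrative."""
    axial    = [volume[i, :, :] for i in range(volume.shape[0])]
    coronal  = [volume[:, j, :] for j in range(volume.shape[1])]
    sagittal = [volume[:, :, k] for k in range(volume.shape[2])]
    return axial, coronal, sagittal

vol = np.zeros((8, 16, 24))          # toy stand-in for a brain MRI volume
ax, co, sa = multiview_slices(vol)
print(len(ax), ax[0].shape)          # 8 (16, 24)
```

Each slice sequence would then be fed to a 2-D encoder shared across slices, which is where the parameter efficiency the abstract mentions would come from.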
5. Lee J, Kang E, Heo DW, Suk HI. Site-Invariant Meta-Modulation Learning for Multisite Autism Spectrum Disorders Diagnosis. IEEE Trans Neural Netw Learn Syst 2023;PP:1-14 (early access). PMID: 37708014. DOI: 10.1109/tnnls.2023.3311195.
Abstract
Large amounts of fMRI data are essential to building generalized predictive models for brain disease diagnosis. To conduct extensive data analysis, it is often necessary to gather data from multiple organizations. However, the site variation inherent in multisite resting-state functional magnetic resonance imaging (rs-fMRI) leads to unfavorable heterogeneity in data distribution, negatively impacting the identification of biomarkers and the diagnostic decision. Several existing methods have alleviated this shift of domain distribution (i.e., the multisite problem). Statistical tuning schemes directly regress site disparity factors out of the data prior to model training; such methods are limited in that the data must be reprocessed, with variance re-estimated, each time a site is added. In model adjustment approaches, domain adaptation (DA) methods adjust the features or models of the source domain according to the target domain during model training. It is thus inevitable that model parameters be updated according to samples from the target site, greatly limiting practical applicability. Meanwhile, domain generalization (DG) aims to create a universal model that can be quickly adapted to multiple domains. In this study, we propose a novel framework for disease diagnosis that alleviates the multisite problem by adaptively calibrating site-specific features into site-invariant features. Specifically, it applies directly to samples from unseen sites without the need for fine-tuning. With a learning-to-learn strategy that learns how to calibrate features under various domain-shift environments, our novel modulation mechanism extracts site-invariant features. In experiments on the Autism Brain Imaging Data Exchange (ABIDE I and II) dataset, we validated the generalization ability of the proposed network, improving diagnostic accuracy on both seen and unseen multisite samples.
6. Choi JY, Lee SS, Kim NY, Park HJ, Sung YS, Lee Y, Yoon JS, Suk HI. The effect of hepatic steatosis on liver volume determined by proton density fat fraction and deep learning-measured liver volume. Eur Radiol 2023;33:5924-5932. PMID: 37012546. DOI: 10.1007/s00330-023-09603-2. Received October 3, 2022; revised February 3, 2023; accepted February 22, 2023.
Abstract
OBJECTIVES We aimed to evaluate the effect of hepatic steatosis (HS) on liver volume and to develop a formula to estimate lean liver volume correcting the HS effect. METHODS This retrospective study included healthy adult liver donors who underwent gadoxetic acid-enhanced MRI and proton density fat fraction (PDFF) measurement from 2015 to 2019. The degree of HS was graded at 5% PDFF intervals from grade 0 (no HS; PDFF < 5.5%). Liver volume was measured with hepatobiliary phase MRI using deep learning algorithm, and standard liver volume (SLV) was calculated as the reference lean liver volume. The association between liver volume and SLV ratio with PDFF grades was evaluated using Spearman's correlation (ρ). The effect of PDFF grades on liver volume was evaluated using the multivariable linear regression model. RESULTS The study population included 1038 donors (mean age, 31 ± 9 years; 689 men). Mean liver volume to SLV ratio increased according to PDFF grades (ρ = 0.234, p < 0.001). The multivariable analysis indicated that SLV (β = 1.004, p < 0.001) and PDFF grade*SLV (β = 0.044, p < 0.001) independently affected liver volume, suggesting a 4.4% increase in liver volume per one-point increment in the PDFF grade. PDFF-adjusted lean liver volume was estimated using the formula, liver volume/[1.004 + 0.044 × PDFF grade]. The mean estimated lean liver volume to SLV ratio approximated to one for all PDFF grades, with no significant association with PDFF grades (p = 0.851). CONCLUSION HS increases liver volume. The formula to estimate lean liver volume may be useful to adjust for the effect of HS on liver volume. KEY POINTS • Hepatic steatosis increases liver volume. • The presented formula to estimate lean liver volume using MRI-measured proton density fat fraction and liver volume may be useful to adjust for the effect of hepatic steatosis on measured liver volume.
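The correction formula reported above can be applied directly; the grade boundaries above grade 0 are an assumption inferred from the stated 5% PDFF intervals, not spelled out in the abstract:

```python
def pdff_grade(pdff_percent):
    """Hepatic steatosis grade at 5% PDFF intervals; grade 0 is PDFF < 5.5%.
    Boundaries above grade 0 are an assumption inferred from the abstract."""
    if pdff_percent < 5.5:
        return 0
    return int((pdff_percent - 5.5) // 5.0) + 1

def lean_liver_volume(measured_volume_ml, grade):
    """PDFF-adjusted lean liver volume: measured volume / (1.004 + 0.044 * grade),
    i.e., liver volume rises about 4.4% per one-point increment in PDFF grade."""
    return measured_volume_ml / (1.004 + 0.044 * grade)

print(pdff_grade(3.0), pdff_grade(6.0))        # 0 1
print(round(lean_liver_volume(1092.0, 2), 1))  # grade-2 liver of 1092 mL -> ~1000.0
```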
Affiliation(s)
- Ji Young Choi: Department of Radiology and Research Institute of Radiology, University of Ulsan College of Medicine, Asan Medical Center, Seoul, Republic of Korea; Department of Radiology, Kangbuk Samsung Hospital, Sungkyunkwan University College of Medicine, Seoul, Republic of Korea
- Seung Soo Lee: Department of Radiology and Research Institute of Radiology, University of Ulsan College of Medicine, Asan Medical Center, Seoul, Republic of Korea
- Na Young Kim: Department of Clinical Epidemiology and Biostatistics, University of Ulsan College of Medicine, Asan Medical Center, Seoul, Republic of Korea
- Hyo Jung Park: Department of Radiology and Research Institute of Radiology, University of Ulsan College of Medicine, Asan Medical Center, Seoul, Republic of Korea
- Yu Sub Sung: Department of Radiology and Research Institute of Radiology, University of Ulsan College of Medicine, Asan Medical Center, Seoul, Republic of Korea
- Yedaun Lee: Department of Radiology, Haeundae Paik Hospital, Inje University College of Medicine, Busan, Republic of Korea
- Jee Seok Yoon: Department of Brain and Cognitive Engineering, Korea University, Seoul, Republic of Korea
- Heung-Il Suk: Department of Brain and Cognitive Engineering and Department of Artificial Intelligence, Korea University, Seoul, Republic of Korea
7. Park C, Jung W, Suk HI. Deep joint learning of pathological region localization and Alzheimer's disease diagnosis. Sci Rep 2023;13:11664. PMID: 37468538. DOI: 10.1038/s41598-023-38240-4. Received December 9, 2022; accepted July 5, 2023. Open access.
Abstract
The identification of Alzheimer's disease (AD) using structural magnetic resonance imaging (sMRI) has been studied based on the subtle morphological changes in the brain. One typical approach is deep learning-based patch-level feature representation. In this approach, however, patches predetermined before the diagnostic model is learned can limit classification performance. To mitigate this problem, we propose the BrainBagNet with a position-based gate (PG), which exploits the position information of brain images represented as 3-D coordinates. Our proposed method derives patch-level class evidence from both the MR scan and position information for image-level prediction. To validate the effectiveness of our proposed framework, we conducted comprehensive experiments comparing it with state-of-the-art methods on two publicly available datasets: the Alzheimer's Disease Neuroimaging Initiative (ADNI) and the Australian Imaging, Biomarkers and Lifestyle (AIBL) datasets. Our experimental results demonstrate that the proposed method outperforms existing competing methods in classification performance for both AD diagnosis and mild cognitive impairment (MCI) conversion prediction. In addition, we analyzed the results from diverse perspectives to obtain further insights into the underlying mechanisms and strengths of our framework. Based on these experiments, we believe our framework can advance deep learning-based patch-level feature representation studies for AD diagnosis and MCI conversion prediction, while providing valuable insights, such as interpretability and the ability to capture subtle changes, into the underlying pathological processes of AD and MCI, benefiting both researchers and clinicians.
Affiliation(s)
- Changhyun Park: Department of Brain and Cognitive Engineering, Korea University, Seoul 02841, Republic of Korea
- Wonsik Jung: Department of Brain and Cognitive Engineering, Korea University, Seoul 02841, Republic of Korea
- Heung-Il Suk: Department of Brain and Cognitive Engineering and Department of Artificial Intelligence, Korea University, Seoul 02841, Republic of Korea
8. Mulyadi AW, Jung W, Oh K, Yoon JS, Lee KH, Suk HI. Estimating explainable Alzheimer's disease likelihood map via clinically-guided prototype learning. Neuroimage 2023;273:120073. PMID: 37037063. DOI: 10.1016/j.neuroimage.2023.120073. Received December 27, 2022; revised March 3, 2023; accepted March 30, 2023. Open access.
Abstract
Identifying Alzheimer's disease (AD) involves a deliberate diagnostic process owing to its innate traits of irreversibility with subtle and gradual progression. These characteristics make AD biomarker identification from structural brain imaging (e.g., structural MRI) scans quite challenging. Using clinically-guided prototype learning, we propose a novel deep learning approach, eXplainable AD Likelihood Map Estimation (XADLiME), for AD progression modeling over 3D sMRIs. Specifically, we establish a set of topologically-aware prototypes onto the clusters of latent clinical features, uncovering an AD spectrum manifold. Considering this pseudo map as an enriched reference, we employ an estimating network to approximate the AD likelihood map over a 3D sMRI scan. Additionally, we promote the explainability of such a likelihood map by revealing a comprehensible overview from clinical and morphological perspectives. During inference, the estimated likelihood map serves as a substitute for unseen sMRI scans, effectively conducting the downstream task while providing thoroughly explainable states.
Affiliation(s)
- Ahmad Wisnu Mulyadi: Department of Brain and Cognitive Engineering, Korea University, Seoul 02841, Republic of Korea
- Wonsik Jung: Department of Brain and Cognitive Engineering, Korea University, Seoul 02841, Republic of Korea
- Kwanseok Oh: Department of Artificial Intelligence, Korea University, Seoul 02841, Republic of Korea
- Jee Seok Yoon: Department of Brain and Cognitive Engineering, Korea University, Seoul 02841, Republic of Korea
- Kun Ho Lee: Gwangju Alzheimer's & Related Dementia Cohort Research Center, Chosun University, Gwangju 61452, Republic of Korea; Department of Biomedical Science, Chosun University, Gwangju 61452, Republic of Korea; Korea Brain Research Institute, Daegu 41062, Republic of Korea
- Heung-Il Suk: Department of Artificial Intelligence and Department of Brain and Cognitive Engineering, Korea University, Seoul 02841, Republic of Korea
9. Oh K, Yoon JS, Suk HI. Learn-Explain-Reinforce: Counterfactual Reasoning and its Guidance to Reinforce an Alzheimer's Disease Diagnosis Model. IEEE Trans Pattern Anal Mach Intell 2023;45:4843-4857. PMID: 35947563. DOI: 10.1109/tpami.2022.3197845.
Abstract
Existing studies on disease diagnostic models focus either on diagnostic model learning for performance improvement or on the visual explanation of a trained diagnostic model. We propose a novel learn-explain-reinforce (LEAR) framework that unifies diagnostic model learning, visual explanation generation (explanation unit), and trained diagnostic model reinforcement (reinforcement unit) guided by the visual explanation. For the visual explanation, we generate a counterfactual map that transforms an input sample to be identified as an intended target label. For example, a counterfactual map can localize hypothetical abnormalities within a normal brain image that may cause it to be diagnosed with Alzheimer's disease (AD). We believe that the generated counterfactual maps represent data-driven knowledge about a target task, i.e., AD diagnosis using structural MRI, which can be a vital source of information to reinforce the generalization of the trained diagnostic model. To this end, we devise an attention-based feature refinement module with the guidance of the counterfactual maps. The explanation and reinforcement units are reciprocal and can be operated iteratively. Our proposed approach was validated via qualitative and quantitative analysis on the ADNI dataset. Its comprehensibility and fidelity were demonstrated through ablation studies and comparisons with existing methods.
10. Kim WS, Heo DW, Shen J, Tsogt U, Odkhuu S, Kim SW, Suk HI, Ham BJ, Rami FZ, Kang CY, Sui J, Chung YC. Stage-Specific Brain Aging in First-Episode Schizophrenia and Treatment-Resistant Schizophrenia. Int J Neuropsychopharmacol 2023;26:207-216. PMID: 36545813; PMCID: PMC10032294. DOI: 10.1093/ijnp/pyac080. Received May 3, 2022; revised November 18, 2022; accepted December 20, 2022. Open access.
Abstract
BACKGROUND Brain age is a popular brain-based biomarker that offers a powerful strategy for using neuroscience in clinical practice. We investigated the brain-predicted age difference (brain-PAD) in patients with schizophrenia (SCZ), first-episode schizophrenia spectrum disorders (FE-SSDs), and treatment-resistant schizophrenia (TRS) using structural magnetic resonance imaging data. The association between brain-PAD and clinical parameters was also assessed. METHODS We developed brain age prediction models relating 77 average structural brain measures to age in a training sample of healthy controls (HCs), using ridge regression, support vector regression, and relevance vector regression. The models trained on the controls were applied to test samples of the controls and the 3 patient groups to obtain brain-based age estimates. Correlations between brain-PAD and clinical measures were then tested in the patient groups. RESULTS Regardless of the regression metric, the best-performing model on the training HCs was support vector regression and the worst was relevance vector regression. Accelerated brain aging was identified in patients with SCZ, FE-SSDs, and TRS compared with the HCs. A significant difference in brain-PAD was observed between FE-SSDs and TRS using the ridge regression algorithm. Symptom severity, the Social and Occupational Functioning Assessment Scale, chlorpromazine equivalents, and cognitive function were correlated with brain-PAD in the patient groups. CONCLUSIONS These findings suggest additional progressive neuronal changes in the brain after SCZ onset. Therefore, pharmacological or psychosocial interventions targeting brain health should be developed and provided during the early course of SCZ.
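The train-on-controls scheme in METHODS can be sketched with one of the three regressors (ridge regression, here in closed form) on synthetic data; the feature construction and the simulated +3-year acceleration are illustrative assumptions, not the study's data:

```python
import numpy as np

rng = np.random.default_rng(42)
n_controls, n_patients, n_features = 200, 50, 77   # 77 mirrors the study's measure count

# Synthetic stand-in: brain measures that scale with age, plus noise.
age_hc = rng.uniform(20, 70, n_controls)
W = rng.standard_normal(n_features)
X_hc = np.outer(age_hc, W) + rng.standard_normal((n_controls, n_features))

def fit_ridge(X, y, alpha=1.0):
    """Closed-form ridge regression on centered data; returns (coef, intercept)."""
    Xm, ym = X.mean(axis=0), y.mean()
    Xc, yc = X - Xm, y - ym
    coef = np.linalg.solve(Xc.T @ Xc + alpha * np.eye(X.shape[1]), Xc.T @ yc)
    return coef, ym - Xm @ coef

coef, intercept = fit_ridge(X_hc, age_hc)          # normative model: controls only

# Apply to a patient group; brain-PAD = predicted age - chronological age.
age_pt = rng.uniform(20, 70, n_patients)
X_pt = np.outer(age_pt + 3.0, W) + rng.standard_normal((n_patients, n_features))  # simulated acceleration
brain_pad = X_pt @ coef + intercept - age_pt
print(brain_pad.mean() > 0)                        # True: accelerated aging on average
```

A positive mean brain-PAD in a patient group, as produced here by construction, is the pattern the study interprets as accelerated brain aging.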
Affiliation(s)
- Woo-Sung Kim: Department of Psychiatry, Jeonbuk National University Medical School, Jeonju, Korea
- Da-Woon Heo: Department of Brain and Cognitive Engineering, Korea University, Seoul, Korea; Department of Artificial Intelligence, Korea University, Seoul, Korea
- Jie Shen: Department of Psychiatry, Jeonbuk National University Medical School, Jeonju, Korea; Research Institute of Clinical Medicine of Jeonbuk National University-Biomedical Research Institute of Jeonbuk National University Hospital, Jeonju, Korea
- Uyanga Tsogt: Department of Psychiatry, Jeonbuk National University Medical School, Jeonju, Korea; Research Institute of Clinical Medicine of Jeonbuk National University-Biomedical Research Institute of Jeonbuk National University Hospital, Jeonju, Korea
- Soyolsaikhan Odkhuu: Department of Psychiatry, Jeonbuk National University Medical School, Jeonju, Korea; Research Institute of Clinical Medicine of Jeonbuk National University-Biomedical Research Institute of Jeonbuk National University Hospital, Jeonju, Korea
- Sung-Wan Kim: Research Institute of Clinical Medicine of Jeonbuk National University-Biomedical Research Institute of Jeonbuk National University Hospital, Jeonju, Korea; Department of Psychiatry, Chonnam National University Medical School, Gwangju, Korea
- Heung-Il Suk: Department of Brain and Cognitive Engineering, Korea University, Seoul, Korea; Department of Artificial Intelligence, Korea University, Seoul, Korea
- Byung-Joo Ham: Department of Psychiatry, Korea University Anam Hospital, Korea University College of Medicine, Seoul, Korea
- Fatima Zahra Rami: Department of Psychiatry, Jeonbuk National University Medical School, Jeonju, Korea; Research Institute of Clinical Medicine of Jeonbuk National University-Biomedical Research Institute of Jeonbuk National University Hospital, Jeonju, Korea
- Chae Yeong Kang: Research Institute of Clinical Medicine of Jeonbuk National University-Biomedical Research Institute of Jeonbuk National University Hospital, Jeonju, Korea
- Jing Sui: Tri-Institutional Center for Translational Research in Neuroimaging and Data Science (TReNDS), Georgia State University, Georgia Institute of Technology, Emory University, Atlanta, Georgia, USA; State Key Laboratory of Cognitive Neuroscience and Learning, Beijing Normal University, Beijing, China
- Young-Chul Chung: Department of Psychiatry, Jeonbuk National University Medical School, Jeonju, Korea; Department of Psychiatry, Jeonbuk National University Hospital, Jeonju, Korea; Research Institute of Clinical Medicine of Jeonbuk National University-Biomedical Research Institute of Jeonbuk National University Hospital, Jeonju, Korea
11. Jeon E, Ko W, Yoon JS, Suk HI. Mutual Information-Driven Subject-Invariant and Class-Relevant Deep Representation Learning in BCI. IEEE Trans Neural Netw Learn Syst 2023;34:739-749. PMID: 34357871. DOI: 10.1109/tnnls.2021.3100583.
Abstract
In recent years, deep learning-based feature representation methods have shown a promising impact on electroencephalography (EEG)-based brain-computer interfaces (BCIs). Nonetheless, owing to high intra- and inter-subject variabilities, many studies on decoding EEG were designed in a subject-specific manner using calibration samples, with little concern for practical use, as calibration is hampered by time-consuming steps and a large data requirement. To this end, recent studies have adopted transfer learning strategies, especially domain adaptation techniques, among which adversarial learning-based transfer learning has shown potential in BCIs. Meanwhile, adversarial learning-based domain adaptation methods are known to be prone to negative transfer, which disrupts learning generalized feature representations applicable to diverse domains, for example, subjects or sessions in BCIs. In this article, we propose a novel framework that learns class-relevant and subject-invariant feature representations in an information-theoretic manner, without using adversarial learning. Specifically, we devise two operational components in a deep network that explicitly estimate mutual information between feature representations: 1) one decomposes features in an intermediate layer into class-relevant and class-irrelevant ones, and 2) the other enriches class-discriminative feature representations. On two large EEG datasets, we validated the effectiveness of our proposed framework by comparing its performance with several competing methods. Furthermore, we conducted rigorous analyses through an ablation study of the network's components, explained our model's decisions on input EEG signals via layer-wise relevance propagation, and visualized the distribution of learned features via t-SNE.
12. Suk HI, Liu M, Cao X, Kim J. Editorial: Advances in deep learning methods for medical image analysis. Front Radiol 2023;2:1097533. PMID: 37492688; PMCID: PMC10365016. DOI: 10.3389/fradi.2022.1097533. Received November 14, 2022; accepted December 6, 2022.
Affiliation(s)
- Heung-Il Suk: Department of Artificial Intelligence, Korea University, Seoul, South Korea; Department of Brain and Cognitive Engineering, Korea University, Seoul, South Korea
- Mingxia Liu: Department of Radiology and BRIC, University of North Carolina at Chapel Hill, Chapel Hill, NC, United States
- Xiaohuan Cao: Shanghai United Imaging Intelligence Co., Ltd., Shanghai, China
- Jaeil Kim: School of Computer Science & Engineering, Kyungpook National University, Daegu, South Korea
13. Heo S, Lee SS, Kim SY, Lim YS, Park HJ, Yoon JS, Suk HI, Sung YS, Park B, Lee JS. Prediction of Decompensation and Death in Advanced Chronic Liver Disease Using Deep Learning Analysis of Gadoxetic Acid-Enhanced MRI. Korean J Radiol 2022;23:1269-1280. PMID: 36447415; PMCID: PMC9747270. DOI: 10.3348/kjr.2022.0494. Received July 20, 2022; revised September 12, 2022; accepted October 11, 2022. Open access.
Abstract
OBJECTIVE This study aimed to evaluate the usefulness of quantitative indices obtained from deep learning analysis of gadoxetic acid-enhanced hepatobiliary phase (HBP) MRI, and their longitudinal changes, in predicting decompensation and death in patients with advanced chronic liver disease (ACLD). MATERIALS AND METHODS We included patients who underwent baseline and 1-year follow-up MRI from a prospective cohort that underwent gadoxetic acid-enhanced MRI for hepatocellular carcinoma surveillance between November 2011 and August 2012 at a tertiary medical center. Baseline liver condition was categorized as non-ACLD, compensated ACLD, or decompensated ACLD. The liver-to-spleen signal intensity ratio (LS-SIR) and liver-to-spleen volume ratio (LS-VR) were automatically measured on the HBP images using a deep learning algorithm, and their percentage changes at the 1-year follow-up (ΔLS-SIR and ΔLS-VR) were calculated. The associations of the MRI indices with hepatic decompensation and a composite endpoint of liver-related death or transplantation were evaluated using a competing risk analysis with multivariable Fine and Gray regression models, including baseline parameters alone and both baseline and follow-up parameters. RESULTS Our study included 280 patients (153 male; mean age ± standard deviation, 57 ± 7.95 years), of whom 32, 186, and 62 had non-ACLD, compensated ACLD, and decompensated ACLD, respectively. Patients were followed for 11-117 months (median, 104 months). In patients with compensated ACLD, baseline LS-SIR (sub-distribution hazard ratio [sHR], 0.81; p = 0.034) and LS-VR (sHR, 0.71; p = 0.01) were independently associated with hepatic decompensation. ΔLS-VR (sHR, 0.54; p = 0.002) was predictive of hepatic decompensation after adjusting for baseline variables. ΔLS-VR was also an independent predictor of liver-related death or transplantation in patients with compensated ACLD (sHR, 0.46; p = 0.026) and decompensated ACLD (sHR, 0.61; p = 0.023).
CONCLUSION MRI indices automatically derived from the deep learning analysis of gadoxetic acid-enhanced HBP MRI can be used as prognostic markers in patients with ACLD.
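The longitudinal indices in this study (ΔLS-SIR and ΔLS-VR) are percentage changes of the baseline measurements at the 1-year follow-up. A minimal sketch of that calculation; the function name and the numeric values are illustrative only, not taken from the paper:

```python
def percent_change(baseline: float, follow_up: float) -> float:
    """Percentage change of an index at follow-up relative to baseline."""
    return 100.0 * (follow_up - baseline) / baseline

# Illustrative values only: a liver-to-spleen volume ratio falling
# from 4.0 at baseline to 3.0 at the 1-year follow-up MRI.
delta_ls_vr = percent_change(4.0, 3.0)
print(delta_ls_vr)  # -25.0
```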
Affiliation(s)
- Subin Heo
- Department of Radiology and Research Institute of Radiology, Asan Medical Center, University of Ulsan College of Medicine, Seoul, Korea; Department of Radiology, Ajou University School of Medicine, Suwon, Korea
- Seung Soo Lee
- Department of Radiology and Research Institute of Radiology, Asan Medical Center, University of Ulsan College of Medicine, Seoul, Korea
- So Yeon Kim
- Department of Radiology and Research Institute of Radiology, Asan Medical Center, University of Ulsan College of Medicine, Seoul, Korea
- Young-Suk Lim
- Department of Gastroenterology, Liver Center, Asan Medical Center, University of Ulsan College of Medicine, Seoul, Korea
- Hyo Jung Park
- Department of Radiology and Research Institute of Radiology, Asan Medical Center, University of Ulsan College of Medicine, Seoul, Korea
- Jee Seok Yoon
- Department of Brain and Cognitive Engineering, Korea University, Seoul, Korea
- Heung-Il Suk
- Department of Brain and Cognitive Engineering, Korea University, Seoul, Korea; Department of Artificial Intelligence, Korea University, Seoul, Korea
- Yu Sub Sung
- Clinical Research Center, Asan Medical Center, Seoul, Korea
- Bumwoo Park
- Health Innovation Big Data Center, Asan Institute for Life Sciences, Asan Medical Center, Seoul, Korea
- Ji Sung Lee
- Department of Clinical Epidemiology and Biostatistics, Asan Medical Center, University of Ulsan College of Medicine, Seoul, Korea
14
Kim WS, Heo DW, Shen J, Tsogt U, Odkhuu S, Lee J, Kang E, Kim SW, Suk HI, Chung YC. Altered functional connectivity in psychotic disorder not otherwise specified. Psychiatry Res 2022; 317:114871. [PMID: 36209668] [DOI: 10.1016/j.psychres.2022.114871] [Received: 04/20/2022] [Revised: 09/16/2022] [Accepted: 09/28/2022] [Indexed: 01/05/2023]
Abstract
BACKGROUND Few studies have investigated functional connectivity (FC) in patients with psychotic disorder not otherwise specified (PNOS). We sought to identify distinct FC differentiating PNOS from schizophrenia (SZ). METHODS In total, 49 patients with PNOS, 42 with SZ, and 55 healthy controls (HC) matched for age, sex, and education underwent functional magnetic resonance imaging (fMRI) brain scans and clinical evaluation. Using six functional networks consisting of 40 regions of interest (ROIs), we conducted ROI-to-ROI and intra- and inter-network FC analyses using resting-state fMRI (rs-fMRI) data. Correlations of altered FC with symptomatology were explored. RESULTS We found common brain connectomics in PNOS and SZ, including thalamo-cortical (especially superior temporal gyrus) hyperconnectivity, thalamo-cerebellar hypoconnectivity, and reduced within-thalamic connectivity compared to HC. Additionally, features differentiating the two patient groups included hyperconnectivity between the thalamic subregion and anterior cingulate cortex in PNOS compared to SZ, and hyperconnectivity of the thalamic subregions with the posterior cingulate cortex and precentral gyrus in SZ compared to PNOS. CONCLUSIONS These findings suggest that PNOS and SZ exhibit both common and differentiating changes in neuronal connectivity. Furthermore, they may support the hypothesis that PNOS should be treated as a separate clinical syndrome with distinct neural connectomics.
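ROI-to-ROI functional connectivity of the kind analyzed here is conventionally estimated as the Pearson correlation between ROI time series. A minimal sketch on synthetic data; this is a generic illustration of the FC matrix construction, not the authors' pipeline:

```python
import numpy as np

rng = np.random.default_rng(0)
n_timepoints, n_rois = 150, 40                    # e.g., 40 ROIs drawn from six networks
ts = rng.standard_normal((n_timepoints, n_rois))  # synthetic rs-fMRI ROI time series

# ROI-to-ROI FC matrix: Pearson correlation across time.
fc = np.corrcoef(ts, rowvar=False)                # (40, 40), symmetric, unit diagonal

# Intra-network FC: mean correlation among ROIs of one network (ROIs 0-9 here).
intra = fc[:10, :10][np.triu_indices(10, k=1)].mean()
print(fc.shape)  # (40, 40)
```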
Affiliation(s)
- Woo-Sung Kim
- Department of Psychiatry, Jeonbuk National University Medical School, Jeonju, Korea; Research Institute of Clinical Medicine of Jeonbuk National University-Biomedical Research Institute of Jeonbuk National University Hospital, Jeonju, Korea
- Da-Woon Heo
- Machine Intelligence Laboratory, Department of Artificial Intelligence, Korea University, Seoul, Korea
- Jie Shen
- Department of Psychiatry, Jeonbuk National University Medical School, Jeonju, Korea; Research Institute of Clinical Medicine of Jeonbuk National University-Biomedical Research Institute of Jeonbuk National University Hospital, Jeonju, Korea
- Uyanga Tsogt
- Department of Psychiatry, Jeonbuk National University Medical School, Jeonju, Korea; Research Institute of Clinical Medicine of Jeonbuk National University-Biomedical Research Institute of Jeonbuk National University Hospital, Jeonju, Korea
- Soyolsaikhan Odkhuu
- Department of Psychiatry, Jeonbuk National University Medical School, Jeonju, Korea; Research Institute of Clinical Medicine of Jeonbuk National University-Biomedical Research Institute of Jeonbuk National University Hospital, Jeonju, Korea
- Jaein Lee
- Machine Intelligence Laboratory, Department of Brain & Cognitive Engineering, Korea University, Seoul, Korea
- Eunsong Kang
- Machine Intelligence Laboratory, Department of Brain & Cognitive Engineering, Korea University, Seoul, Korea
- Sung-Wan Kim
- Department of Psychiatry, Chonnam National University Medical School, Gwangju, Korea
- Heung-Il Suk
- Machine Intelligence Laboratory, Department of Artificial Intelligence, Korea University, Seoul, Korea; Machine Intelligence Laboratory, Department of Brain & Cognitive Engineering, Korea University, Seoul, Korea
- Young-Chul Chung
- Department of Psychiatry, Jeonbuk National University Medical School, Jeonju, Korea; Department of Psychiatry, Jeonbuk National University Hospital, Jeonju, Korea; Research Institute of Clinical Medicine of Jeonbuk National University-Biomedical Research Institute of Jeonbuk National University Hospital, Jeonju, Korea
15
Phyo J, Ko W, Jeon E, Suk HI. TransSleep: Transitioning-Aware Attention-Based Deep Neural Network for Sleep Staging. IEEE Trans Cybern 2022; PP:4500-4510. [PMID: 36063512] [DOI: 10.1109/tcyb.2022.3198997] [Indexed: 06/15/2023]
Abstract
Sleep staging is essential for sleep assessment and plays a vital role as a health indicator. Many recent studies have devised various machine/deep learning methods for sleep staging. However, two key challenges hinder the practical use of those methods: 1) effectively capturing salient waveforms in sleep signals and 2) correctly classifying confusing stages in transitioning epochs. In this study, we propose a novel deep neural-network structure, TransSleep, that captures distinctive local temporal patterns and distinguishes confusing stages using two auxiliary tasks. In particular, TransSleep captures salient waveforms in sleep signals by an attention-based multiscale feature extractor and correctly classifies confusing stages in transitioning epochs, while modeling contextual relationships with two auxiliary tasks. Results show that TransSleep achieves promising performance in automatic sleep staging. The validity of TransSleep is demonstrated by its state-of-the-art performance on two publicly available datasets: 1) Sleep-EDF and 2) MASS. Furthermore, we performed ablations to analyze our results from different perspectives. Based on our overall results, we believe that TransSleep has immense potential to provide new insights into deep-learning-based sleep staging.
16
Abstract
Electronic health records (EHR) consist of longitudinal clinical observations characterized by sparsity, irregularity, and high dimensionality, which become major obstacles in drawing reliable downstream clinical outcomes. Although a great number of imputation methods exist to tackle these issues, most of them ignore correlated features and temporal dynamics, and entirely set aside the uncertainty. Since missing value estimates involve the risk of being inaccurate, it is appropriate for a method to handle less certain information differently from reliable data. In that regard, we can use the uncertainties in estimating the missing values as a fidelity score to be further utilized to alleviate the risk of biased missing value estimates. In this work, we propose a novel variational-recurrent imputation network, which unifies an imputation and a prediction network by taking into account the correlated features, temporal dynamics, and uncertainty. Specifically, we leverage a deep generative model in the imputation, which is based on the distribution among variables, and a recurrent imputation network to exploit the temporal relations, in conjunction with the utilization of uncertainty. We validated the effectiveness of our proposed model on two publicly available real-world EHR datasets: 1) PhysioNet Challenge 2012 and 2) MIMIC-III, and compared the results with other competing state-of-the-art methods in the literature.
17
Ko W, Jung W, Jeon E, Suk HI. A Deep Generative-Discriminative Learning for Multimodal Representation in Imaging Genetics. IEEE Trans Med Imaging 2022; 41:2348-2359. [PMID: 35344489] [DOI: 10.1109/tmi.2022.3162870] [Indexed: 06/14/2023]
Abstract
Imaging genetics, one of the foremost emerging topics in the medical imaging field, analyzes the inherent relations between neuroimaging and genetic data. As deep learning has gained widespread acceptance in many applications, pioneering studies employed deep learning frameworks for imaging genetics. However, existing approaches suffer from some limitations. First, they often adopt a simple strategy for joint learning of phenotypic and genotypic features. Second, their findings have not been extended to biomedical applications, e.g., degenerative brain disease diagnosis and cognitive score prediction. Finally, existing studies perform insufficient and inappropriate analyses from the perspective of data science and neuroscience. In this work, we propose a novel deep learning framework to simultaneously tackle the aforementioned issues. Our proposed framework learns to effectively represent the neuroimaging and the genetic data jointly, and achieves state-of-the-art performance when used for Alzheimer's disease and mild cognitive impairment identification. Furthermore, unlike the existing methods, the framework enables learning the relation between imaging phenotypes and genotypes in a nonlinear way without any prior neuroscientific knowledge. To demonstrate the validity of our proposed framework, we conducted experiments on a publicly available dataset and analyzed the results from diverse perspectives. Based on our experimental results, we believe that the proposed framework has immense potential to provide new insights and perspectives in deep learning-based imaging genetics studies.
18
Yoon JS, Roh MC, Suk HI. A Plug-in Method for Representation Factorization in Connectionist Models. IEEE Trans Neural Netw Learn Syst 2022; 33:3792-3803. [PMID: 33566769] [DOI: 10.1109/tnnls.2021.3054480] [Indexed: 06/12/2023]
Abstract
In this article, we focus on decomposing latent representations in generative adversarial networks, or learned feature representations in deep autoencoders, into semantically controllable factors in a semisupervised manner, without modifying the original trained models. Particularly, we propose factors' decomposer-entangler network (FDEN) that learns to decompose a latent representation into mutually independent factors. Given a latent representation, the proposed framework draws a set of interpretable factors, each aligned to independent factors of variation by minimizing their total correlation by information-theoretic means. As a plug-in method, we have applied our proposed FDEN to the existing networks of adversarially learned inference and pioneer network and performed computer vision tasks of image-to-image translation in semantic ways, e.g., changing styles while keeping the identity of a subject, and object classification in a few-shot learning scheme. We have also validated the effectiveness of the proposed method with various ablation studies through qualitative, quantitative, and statistical examination.
19
Lee Y, Jun E, Choi J, Suk HI. Multi-view Integrative Attention-based Deep Representation Learning for Irregular Clinical Time-series Data. IEEE J Biomed Health Inform 2022; 26:4270-4280. [PMID: 35511839] [DOI: 10.1109/jbhi.2022.3172549] [Indexed: 11/09/2022]
Abstract
Electronic health record (EHR) data are sparse and irregular, as they are recorded at irregular time intervals and different clinical variables are measured at each observation point. In this work, to handle irregular multivariate time-series data, we consider the human knowledge of the aspects to be measured and the time to measure them in different situations, known as multi-view features, which are indirectly represented in the data. We propose a scheme to realize multi-view feature integration learning via a self-attention mechanism. Specifically, we devise a novel multi-integration attention module (MIAM) to extract complex information that is inherent in irregular time-series data. We explicitly learn the relationships among the observed values, missing indicators, and time intervals between consecutive observations in a simultaneous manner. In addition, we build an attention-based decoder as a missing value imputer that helps empower the representation learning of the interrelations among multi-view observations for the prediction task; this decoder operates only in the training phase, so the final model is implemented in an imputation-free manner. We validated the effectiveness of our method on the public MIMIC-III and PhysioNet Challenge 2012 datasets by comparing with and outperforming the state-of-the-art methods in three downstream tasks, i.e., prediction of in-hospital mortality, prediction of the length of stay, and phenotyping. Moreover, we conduct a layer-wise relevance propagation (LRP) analysis based on case studies to highlight the explainability of the trained model.
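The multi-view integration described in this abstract builds on standard scaled dot-product self-attention applied jointly to observed values, missing indicators, and time intervals. The following is a generic sketch of that building block on synthetic inputs, not the MIAM architecture itself; all shapes and weight initializations are illustrative assumptions:

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def self_attention(x, wq, wk, wv):
    """Scaled dot-product self-attention over a (time, features) sequence."""
    q, k, v = x @ wq, x @ wk, x @ wv
    scores = q @ k.T / np.sqrt(k.shape[-1])
    return softmax(scores, axis=-1) @ v

rng = np.random.default_rng(0)
t, d = 24, 8  # 24 observation points, 8 clinical variables per point
# Three "views" of an irregular series: observed values, missing-value
# indicators, and time intervals since the previous observation (all synthetic).
values, masks, deltas = (rng.standard_normal((t, d)) for _ in range(3))
x = np.concatenate([values, masks, deltas], axis=-1)       # (24, 24) joint input
wq, wk, wv = (rng.standard_normal((3 * d, d)) * 0.1 for _ in range(3))
out = self_attention(x, wq, wk, wv)
print(out.shape)  # (24, 8)
```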
20
Ko W, Jeon E, Yoon JS, Suk HI. Semi-supervised generative and discriminative adversarial learning for motor imagery-based brain-computer interface. Sci Rep 2022; 12:4587. [PMID: 35301366] [PMCID: PMC8931045] [DOI: 10.1038/s41598-022-08490-9] [Received: 07/02/2021] [Accepted: 02/28/2022] [Indexed: 11/22/2022]
Abstract
Convolutional neural networks (CNNs), which can recognize structural/configuration patterns in data with different architectures, have been studied for feature extraction. However, challenges remain regarding leveraging advanced deep learning methods in brain-computer interfaces (BCIs). We focus on the problems of small-sized training samples and interpretability of the learned parameters, and leverage a semi-supervised generative and discriminative learning framework that effectively utilizes synthesized samples together with real samples to discover class-discriminative features. Our framework learns the distributional characteristics of EEG signals in an embedding space using a generative model. By using artificially generated and real EEG signals, our framework finds class-discriminative spatio-temporal feature representations that help to correctly discriminate input EEG signals. It is noteworthy that the framework facilitates the exploitation of real, unlabeled samples to better uncover the underlying patterns inherent in a user's EEG signals. To validate our framework, we conducted experiments comparing our method with conventional linear models by utilizing variants of three existing CNN architectures as generator networks and measuring the performance on three public datasets. Our framework exhibited statistically significant improvements over the competing methods. We investigated the learned network via activation pattern maps and visualized generated artificial samples to empirically justify the stability and neurophysiological plausibility of our model.
Affiliation(s)
- Wonjun Ko
- Department of Brain and Cognitive Engineering, Korea University, Seoul, 02841, Republic of Korea
- Eunjin Jeon
- Department of Brain and Cognitive Engineering, Korea University, Seoul, 02841, Republic of Korea
- Jee Seok Yoon
- Department of Brain and Cognitive Engineering, Korea University, Seoul, 02841, Republic of Korea
- Heung-Il Suk
- Department of Brain and Cognitive Engineering, Korea University, Seoul, 02841, Republic of Korea; Department of Artificial Intelligence, Korea University, Seoul, 02841, Republic of Korea
21
Park HJ, Yoon JS, Lee SS, Suk HI, Park B, Sung YS, Hong SB, Ryu H. Deep Learning-Based Assessment of Functional Liver Capacity Using Gadoxetic Acid-Enhanced Hepatobiliary Phase MRI. Korean J Radiol 2022; 23:720-731. [PMID: 35434977] [PMCID: PMC9240292] [DOI: 10.3348/kjr.2021.0892] [Received: 06/14/2021] [Revised: 01/12/2022] [Accepted: 01/13/2022] [Indexed: 11/15/2022]
Affiliation(s)
- Hyo Jung Park
- Department of Radiology and Research Institute of Radiology, University of Ulsan College of Medicine, Asan Medical Center, Seoul, Korea
- Jee Seok Yoon
- Department of Brain and Cognitive Engineering, Korea University, Seoul, Korea
- Seung Soo Lee
- Department of Radiology and Research Institute of Radiology, University of Ulsan College of Medicine, Asan Medical Center, Seoul, Korea
- Heung-Il Suk
- Department of Brain and Cognitive Engineering, Korea University, Seoul, Korea; Department of Artificial Intelligence, Korea University, Seoul, Korea
- Bumwoo Park
- Department of Radiology and Research Institute of Radiology, University of Ulsan College of Medicine, Asan Medical Center, Seoul, Korea
- Yu Sub Sung
- Department of Radiology and Research Institute of Radiology, University of Ulsan College of Medicine, Asan Medical Center, Seoul, Korea
- Seung Baek Hong
- Department of Radiology, Pusan National University Hospital, Busan, Korea
- Hwaseong Ryu
- Department of Radiology, Pusan National University Yangsan Hospital, Yangsan, Korea
22
Kwon JH, Lee SS, Yoon JS, Suk HI, Sung YS, Kim HS, Lee CM, Kim KM, Lee SJ, Kim SY. Liver-to-Spleen Volume Ratio Automatically Measured on CT Predicts Decompensation in Patients with B Viral Compensated Cirrhosis. Korean J Radiol 2021; 22:1985-1995. [PMID: 34564961] [PMCID: PMC8628160] [DOI: 10.3348/kjr.2021.0348] [Received: 04/23/2021] [Revised: 06/03/2021] [Accepted: 06/30/2021] [Indexed: 01/05/2023]
Abstract
Objective Although the liver-to-spleen volume ratio (LSVR) based on CT reflects portal hypertension, its prognostic role in cirrhotic patients has not been proven. We evaluated the utility of LSVR, automatically measured from CT images using a deep learning algorithm, as a predictor of hepatic decompensation and transplantation-free survival in patients with hepatitis B viral (HBV)-compensated cirrhosis. Materials and Methods A deep learning algorithm was used to measure the LSVR in a cohort of 1027 consecutive patients (mean age, 50.5 years; 675 male and 352 female) with HBV-compensated cirrhosis who underwent liver CT (2007–2010). Associations of LSVR with hepatic decompensation and transplantation-free survival were evaluated using multivariable Cox proportional hazards and competing risk analyses, accounting for either the Child-Pugh score (CPS) or Model for End Stage Liver Disease (MELD) score and other variables. The risk of liver-related events was estimated using Kaplan-Meier analysis and the Aalen-Johansen estimator. Results After adjustment for either CPS or MELD and other variables, LSVR was identified as a significant independent predictor of hepatic decompensation (hazard ratio per one-unit increase in LSVR, 0.71 and 0.68 for the CPS and MELD models, respectively; p < 0.001) and transplantation-free survival (hazard ratio per one-unit increase in LSVR, 0.8 and 0.77, respectively; p < 0.001). Patients with an LSVR of < 2.9 (n = 381) had significantly higher 3-year risks of hepatic decompensation (16.7% vs. 2.5%, p < 0.001) and liver-related death or transplantation (10.0% vs. 1.1%, p < 0.001) than those with an LSVR ≥ 2.9 (n = 646). When patients were stratified according to CPS (Child-Pugh A vs. B–C) and MELD (< 10 vs. ≥ 10), an LSVR of < 2.9 was still associated with a higher risk of liver-related events than an LSVR of ≥ 2.9 for all Child-Pugh (p ≤ 0.045) and MELD (p ≤ 0.009) stratifications.
Conclusion The LSVR measured on CT can predict hepatic decompensation and transplantation-free survival in patients with HBV-compensated cirrhosis.
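The competing-risk quantities in this abstract (decompensation vs. liver-related death or transplantation) are conventionally estimated with the Aalen-Johansen cumulative incidence function, which the study cites. A from-scratch sketch on toy follow-up data, under right censoring; this is a generic illustration, not the study's implementation:

```python
def cumulative_incidence(times, events, cause):
    """Aalen-Johansen cumulative incidence for one cause.

    events: 0 = censored; positive integers label competing event causes.
    Returns [(event_time, CIF up to that time), ...].
    """
    order = sorted(range(len(times)), key=lambda i: times[i])
    n_at_risk = len(times)
    surv = 1.0   # overall Kaplan-Meier survival just before the current time
    cif = 0.0
    out = []
    i = 0
    while i < len(order):
        t = times[order[i]]
        d_cause = d_any = censored = 0
        while i < len(order) and times[order[i]] == t:   # group tied times
            e = events[order[i]]
            if e == cause:
                d_cause += 1
            if e != 0:
                d_any += 1
            else:
                censored += 1
            i += 1
        cif += surv * d_cause / n_at_risk
        surv *= 1.0 - d_any / n_at_risk
        n_at_risk -= d_any + censored
        out.append((t, cif))
    return out

# Toy follow-up (months): 1 = decompensation, 2 = competing death, 0 = censored.
curve = cumulative_incidence([12, 24, 36], [1, 2, 0], cause=1)
print(curve)  # cause-1 incidence reaches 1/3 at month 12 and stays flat
```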
Affiliation(s)
- Ji Hye Kwon
- Department of Radiology, Good-Jang Hospital, Seoul, Korea
- Seung Soo Lee
- Department of Radiology and Research Institute of Radiology, University of Ulsan College of Medicine, Asan Medical Center, Seoul, Korea
- Jee Seok Yoon
- Department of Brain and Cognitive Engineering, Korea University, Seoul, Korea
- Heung-Il Suk
- Department of Brain and Cognitive Engineering, Korea University, Seoul, Korea; Department of Artificial Intelligence, Korea University, Seoul, Korea
- Yu Sub Sung
- Department of Radiology and Research Institute of Radiology, University of Ulsan College of Medicine, Asan Medical Center, Seoul, Korea
- Ho Sung Kim
- Department of Radiology and Research Institute of Radiology, University of Ulsan College of Medicine, Asan Medical Center, Seoul, Korea
- Chul-Min Lee
- Department of Radiology, Hanyang University Medical Center, Hanyang University School of Medicine, Seoul, Korea
- Kang Mo Kim
- Department of Gastroenterology, University of Ulsan College of Medicine, Asan Medical Center, Seoul, Korea
- So Jung Lee
- Department of Radiology and Research Institute of Radiology, University of Ulsan College of Medicine, Asan Medical Center, Seoul, Korea
- So Yeon Kim
- Department of Radiology and Research Institute of Radiology, University of Ulsan College of Medicine, Asan Medical Center, Seoul, Korea
23
Jun E, Mulyadi AW, Choi J, Suk HI. Uncertainty-Gated Stochastic Sequential Model for EHR Mortality Prediction. IEEE Trans Neural Netw Learn Syst 2021; 32:4052-4062. [PMID: 32841128] [DOI: 10.1109/tnnls.2020.3016670] [Indexed: 06/11/2023]
Abstract
Electronic health records (EHRs) are characterized as nonstationary, heterogeneous, noisy, and sparse data; therefore, it is challenging to learn the regularities or patterns inherent within them. In particular, sparseness caused mostly by many missing values has attracted the attention of researchers who have attempted to find a better use of all available samples for determining the solution of a primary target task through defining a secondary imputation problem. Methodologically, existing methods, either deterministic or stochastic, have applied different assumptions to impute missing values. However, once the missing values are imputed, most existing methods do not consider the fidelity or confidence of the imputed values in the modeling of downstream tasks. Undoubtedly, an erroneous or improper imputation of missing variables can cause difficulties in the modeling as well as a degraded performance. In this study, we present a novel variational recurrent network that: 1) estimates the distribution of missing variables (e.g., the mean and variance) allowing to represent uncertainty in the imputed values; 2) updates hidden states by explicitly applying fidelity based on a variance of the imputed values during a recurrence (i.e., uncertainty propagation over time); and 3) predicts the possibility of in-hospital mortality. It is noteworthy that our model can conduct these procedures in a single stream and learn all network parameters jointly in an end-to-end manner. We validated the effectiveness of our method using the public data sets of MIMIC-III and PhysioNet challenge 2012 by comparing with and outperforming other state-of-the-art methods for mortality prediction considered in our experiments. In addition, we identified the behavior of the model that well represented the uncertainties for the imputed estimates, which showed a high correlation between the uncertainties and mean absolute error (MAE) scores for imputation.
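The abstract's second point, damping the influence of uncertain imputations during the recurrence, can be illustrated with a simple fidelity gate: an imputed value's contribution to the state update shrinks as its estimated variance grows. A schematic sketch only; the gating form (1/(1+variance)) and all names are assumptions for illustration, not the paper's exact formulation:

```python
import numpy as np

def fidelity_gated_update(h, x_obs, mask, mu, var, w_h, w_x):
    """One recurrent step in which imputed values are down-weighted by variance.

    mask[i] = 1 if feature i was observed, else 0; (mu, var) parameterize the
    model's estimate of the missing features.
    """
    beta = 1.0 / (1.0 + var)                          # fidelity: high variance -> low weight
    x_hat = mask * x_obs + (1 - mask) * beta * mu     # uncertainty-damped input
    return np.tanh(h @ w_h + x_hat @ w_x)             # vanilla RNN cell body

rng = np.random.default_rng(0)
d_h, d_x = 4, 3
h = np.zeros(d_h)
w_h = rng.standard_normal((d_h, d_h)) * 0.1
w_x = rng.standard_normal((d_x, d_h)) * 0.1
x_obs = np.array([0.5, 0.0, 0.0])
mask = np.array([1.0, 0.0, 0.0])                      # only feature 0 observed
mu = np.array([0.0, 2.0, 2.0])
var = np.array([0.0, 100.0, 0.0])                     # feature 1 imputed with low confidence
h_next = fidelity_gated_update(h, x_obs, mask, mu, var, w_h, w_x)
print(h_next.shape)  # (4,)
```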
24
Kim DW, Ha J, Lee SS, Kwon JH, Kim NY, Sung YS, Yoon JS, Suk HI, Lee Y, Kang BK. Population-based and Personalized Reference Intervals for Liver and Spleen Volumes in Healthy Individuals and Those with Viral Hepatitis. Radiology 2021; 301:339-347. [PMID: 34402668] [DOI: 10.1148/radiol.2021204183] [Indexed: 12/14/2022]
Abstract
Background Reference intervals guiding volumetric assessment of the liver and spleen have yet to be established. Purpose To establish population-based and personalized reference intervals for liver volume, spleen volume, and liver-to-spleen volume ratio (LSVR). Materials and Methods This retrospective study consecutively included healthy adult liver donors from 2001 to 2013 (reference group) and from 2014 to 2016 (healthy validation group) and patients with viral hepatitis from 2007 to 2017. Liver volume, spleen volume, and LSVR were measured with CT by using a deep learning algorithm. In the reference group, the reference intervals for the volume indexes were determined by using the population-based (ranges encompassing the central 95% of donors) and personalized (quantile regression modeling of the 2.5th and 97.5th percentiles as a function of age, sex, height, and weight) approaches. The validity of the reference intervals was evaluated in the healthy validation group and the viral hepatitis group. Results The reference and healthy validation groups had 2989 donors (mean age ± standard deviation, 30 years ± 9; 1828 men) and 472 donors (mean age, 30 years ± 9; 334 men), respectively. The viral hepatitis group had 158 patients (mean age, 48 years ± 12; 95 men). The population-based reference intervals were 824.5-1700.0 cm3 for liver volume, 81.1-322.0 cm3 for spleen volume, and 3.96-13.78 for LSVR. Formulae and a web calculator (https://i-pacs.com/calculators) were presented to calculate the personalized reference intervals. In the healthy validation group, both the population-based and personalized reference intervals were used to classify the volume indexes of 94%-96% of the donors as falling within the reference interval. 
In the viral hepatitis group, when compared with the population-based reference intervals, the personalized reference intervals helped identify more patients with volume indexes outside the reference interval (liver volume, 21.5% [34 of 158] vs 13.3% [21 of 158], P = .01; spleen volume, 29.1% [46 of 158] vs 22.2% [35 of 158], P = .01; LSVR, 35.4% [56 of 158] vs 26.6% [42 of 158], P < .001). Conclusion Reference intervals derived from a deep learning approach in healthy adults may enable evidence-based assessments of liver and spleen volume in clinical practice. © RSNA, 2021 Online supplemental material is available for this article. See also the editorial by Ringl in this issue.
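The population-based interval described above is the central 95% of the reference cohort, i.e., the 2.5th-97.5th percentile range of the measured volumes. A minimal sketch on synthetic liver volumes; the distribution parameters are illustrative only, not the study's data:

```python
import numpy as np

rng = np.random.default_rng(0)
# Synthetic liver volumes (cm^3) for a healthy reference cohort of 2989 donors.
liver_vol = rng.normal(loc=1250.0, scale=220.0, size=2989)

# Population-based reference interval: central 95% of the cohort.
lo, hi = np.percentile(liver_vol, [2.5, 97.5])
within = np.mean((liver_vol >= lo) & (liver_vol <= hi))
print(round(float(within), 3))  # ~0.95 by construction
```

The personalized approach in the paper instead models these two percentiles as functions of age, sex, height, and weight via quantile regression, so each individual gets their own interval.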
Affiliation(s)
- Dong Wook Kim
- From the Department of Radiology and Research Institute of Radiology (D.W.K., J.H., S.S.L., J.H.K., Y.S.S.) and Department of Clinical Epidemiology and Biostatistics (N.Y.K.), University of Ulsan College of Medicine, Asan Medical Center, 88 Olympic-ro 43-gil, Songpa-gu, Seoul 05505, Republic of Korea; Department of Brain and Cognitive Engineering (J.S.Y., H.I.S.) and Department of Artificial Intelligence (H.I.S.), Korea University, Seoul, Republic of Korea; Department of Radiology, Haeundae Paik Hospital, Inje University College of Medicine, Busan, Republic of Korea (Y.L.); and Department of Radiology, Hanyang University Medical Center, Hanyang University School of Medicine, Seoul, Republic of Korea (B.K.K.)
| | - Jiyeon Ha
- From the Department of Radiology and Research Institute of Radiology (D.W.K., J.H., S.S.L., J.H.K., Y.S.S.) and Department of Clinical Epidemiology and Biostatistics (N.Y.K.), University of Ulsan College of Medicine, Asan Medical Center, 88 Olympic-ro 43-gil, Songpa-gu, Seoul 05505, Republic of Korea; Department of Brain and Cognitive Engineering (J.S.Y., H.I.S.) and Department of Artificial Intelligence (H.I.S.), Korea University, Seoul, Republic of Korea; Department of Radiology, Haeundae Paik Hospital, Inje University College of Medicine, Busan, Republic of Korea (Y.L.); and Department of Radiology, Hanyang University Medical Center, Hanyang University School of Medicine, Seoul, Republic of Korea (B.K.K.)
- Seung Soo Lee
- Ji Hye Kwon
- Na Young Kim
- Yu Sub Sung
- Jee Seok Yoon
- Heung-Il Suk
- Yedaun Lee
- Bo-Kyeong Kang
|
25
|
Min BK, Kim HS, Ko W, Ahn MH, Suk HI, Pantazis D, Knight RT. Electrophysiological Decoding of Spatial and Color Processing in Human Prefrontal Cortex. Neuroimage 2021; 237:118165. [PMID: 34000400 PMCID: PMC8344402 DOI: 10.1016/j.neuroimage.2021.118165] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/28/2021] [Revised: 04/30/2021] [Accepted: 05/11/2021] [Indexed: 11/16/2022] Open
Abstract
The prefrontal cortex (PFC) plays a pivotal role in goal-directed cognition, yet its representational code remains an open problem, with decoding techniques ineffective in disentangling task-relevant variables from PFC. Here we applied regularized linear discriminant analysis to human scalp EEG data and were able to distinguish a mental-rotation task from a color-perception task with 87% decoding accuracy. Dorsal and ventral areas in lateral PFC provided the dominant features dissociating the two tasks. Our findings show that EEG can reliably decode two independent task states from PFC and emphasize the dorsal/ventral functional specificity of PFC in processing the 'where' (rotation) task versus the 'what' (color) task.
Affiliation(s)
- Byoung-Kyong Min
- Department of Brain and Cognitive Engineering, Korea University, Seoul 02841, Korea; Department of Artificial Intelligence, Korea University, Seoul 02841, Korea.
- Hyun-Seok Kim
- Biomedical Engineering Research Center, Asan Institute of Life Science, Asan Medical Center, Seoul 05505, Korea
- Wonjun Ko
- Department of Brain and Cognitive Engineering, Korea University, Seoul 02841, Korea
- Min-Hee Ahn
- Laboratory of Brain and Cognitive Science for Convergence Medicine, College of Medicine, Hallym University, Anyang 14068, Korea
- Heung-Il Suk
- Department of Brain and Cognitive Engineering, Korea University, Seoul 02841, Korea; Department of Artificial Intelligence, Korea University, Seoul 02841, Korea
- Dimitrios Pantazis
- McGovern Institute for Brain Research, Massachusetts Institute of Technology, Cambridge, MA 02139, USA
- Robert T Knight
- Department of Psychology, Helen Wills Neuroscience Institute, UC Berkeley, Berkeley, CA 94720, USA
|
26
|
Ko W, Jeon E, Jeong S, Phyo J, Suk HI. A Survey on Deep Learning-Based Short/Zero-Calibration Approaches for EEG-Based Brain-Computer Interfaces. Front Hum Neurosci 2021; 15:643386. [PMID: 34140883 PMCID: PMC8204721 DOI: 10.3389/fnhum.2021.643386] [Citation(s) in RCA: 10] [Impact Index Per Article: 3.3] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/18/2020] [Accepted: 04/27/2021] [Indexed: 11/28/2022] Open
Abstract
Brain-computer interfaces (BCIs) utilizing machine learning techniques are an emerging technology that enables a communication pathway between a user and an external system, such as a computer. Owing to its practicality, electroencephalography (EEG) is one of the most widely used measurements for BCI. However, EEG has complex patterns, and EEG-based BCIs mostly involve a costly and time-consuming calibration phase; thus, acquiring sufficient EEG data is rarely possible. Recently, deep learning (DL) has had a theoretical and practical impact on BCI research because of its use in learning representations of the complex patterns inherent in EEG. Moreover, algorithmic advances in DL facilitate short/zero-calibration in BCI, thereby reducing the data acquisition phase. These advances include data augmentation (DA), which increases the number of training samples without acquiring additional data, and transfer learning (TL), which takes advantage of representative knowledge obtained from one dataset to address the so-called data insufficiency problem in other datasets. In this study, we review DL-based short/zero-calibration methods for BCI. Further, we elaborate on methodological and algorithmic trends, highlight intriguing approaches in the literature, and discuss directions for further research. In particular, we survey generative model-based and geometric manipulation-based DA methods. Additionally, we categorize TL techniques in DL-based BCIs into explicit and implicit methods. Our systematization reveals advances in both DA and TL methods. Among the studies reviewed herein, ~45% of DA studies used generative model-based techniques, whereas ~45% of TL studies used an explicit knowledge-transfer strategy. Moreover, based on our literature review, we recommend an appropriate DA strategy for DL-based BCIs and discuss trends in the TL methods used in DL-based BCIs.
Affiliation(s)
- Wonjun Ko
- Department of Brain and Cognitive Engineering, Korea University, Seoul, South Korea
- Eunjin Jeon
- Department of Brain and Cognitive Engineering, Korea University, Seoul, South Korea
- Seungwoo Jeong
- Department of Artificial Intelligence, Korea University, Seoul, South Korea
- Jaeun Phyo
- Department of Brain and Cognitive Engineering, Korea University, Seoul, South Korea
- Heung-Il Suk
- Department of Brain and Cognitive Engineering, Korea University, Seoul, South Korea; Department of Artificial Intelligence, Korea University, Seoul, South Korea
|
27
|
Jung W, Jun E, Suk HI. Deep recurrent model for individualized prediction of Alzheimer's disease progression. Neuroimage 2021; 237:118143. [PMID: 33991694 DOI: 10.1016/j.neuroimage.2021.118143] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/07/2020] [Revised: 03/15/2021] [Accepted: 04/13/2021] [Indexed: 01/27/2023] Open
Abstract
Alzheimer's disease (AD) is one of the major causes of dementia; it is characterized by slow progression over several years, and no curative treatments are available. In this regard, there have been efforts to identify the risk of developing AD as early as possible. While many previous works considered cross-sectional analysis, more recent studies have focused on the diagnosis and prognosis of AD with longitudinal or time series data, by way of disease progression modeling. Under the same problem settings, in this work we propose a novel computational framework that can predict the phenotypic measurements of MRI biomarkers and trajectories of clinical status, along with cognitive scores, at multiple future time points. However, handling time series data generally involves many unexpected missing observations. In regard to such an unfavorable situation, we define a secondary problem of estimating those missing values and tackle it systematically by taking account of the temporal and multivariate relations inherent in time series data. Concretely, we propose a deep recurrent network that jointly tackles the four problems of (i) missing value imputation, (ii) phenotypic measurement forecasting, (iii) trajectory estimation of a cognitive score, and (iv) clinical status prediction of a subject based on his/her longitudinal imaging biomarkers. Notably, the learnable parameters of all modules in our predictive models are trained in an end-to-end manner by taking morphological features and cognitive scores as input, with a carefully defined loss function. In our experiments on The Alzheimer's Disease Prediction Of Longitudinal Evolution (TADPOLE) challenge cohort, we measured performance on various metrics and compared our method to competing methods in the literature. Exhaustive analyses and ablation studies were also conducted to confirm the effectiveness of our method.
Affiliation(s)
- Wonsik Jung
- Department of Brain and Cognitive Engineering, Korea University, Seoul 02841, Republic of Korea
- Eunji Jun
- Department of Brain and Cognitive Engineering, Korea University, Seoul 02841, Republic of Korea
- Heung-Il Suk
- Department of Brain and Cognitive Engineering, Korea University, Seoul 02841, Republic of Korea; Department of Artificial Intelligence, Korea University, Seoul 02841, Republic of Korea.
|
28
|
|
29
|
Lee J, Ko W, Kang E, Suk HI. A unified framework for personalized regions selection and functional relation modeling for early MCI identification. Neuroimage 2021; 236:118048. [PMID: 33878379 DOI: 10.1016/j.neuroimage.2021.118048] [Citation(s) in RCA: 7] [Impact Index Per Article: 2.3] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/05/2021] [Accepted: 04/02/2021] [Indexed: 12/21/2022] Open
Abstract
Resting-state functional magnetic resonance imaging (rs-fMRI) has been widely adopted to investigate functional abnormalities in brain diseases. Rs-fMRI data are unsupervised in nature because the psychological and neurological labels are coarse-grained, and no accurate region-wise label is provided along with the complex co-activities of multiple regions. To the best of our knowledge, most studies regarding univariate group analysis or multivariate pattern recognition for brain disease identification have focused on discovering functional characteristics shared across subjects; however, they have paid less attention to individual properties of neural activities that result from different symptoms or degrees of abnormality. In this work, we propose a novel framework that can identify subjects with early-stage mild cognitive impairment (eMCI) and consider individual variability by learning functional relations from automatically selected regions of interest (ROIs) for each subject concurrently. In particular, we devise a deep neural network composed of a temporal embedding module, an ROI selection module, and a disease-identification module. Notably, the ROI selection module is equipped with a reinforcement learning mechanism so that it adaptively selects ROIs to facilitate the learning of discriminative feature representations from temporally embedded blood-oxygen-level-dependent (BOLD) signals. Furthermore, our method allows us to capture the functional relations of a subject-specific ROI subset through the use of a graph-based neural network. Our method considers individual characteristics for diagnosis, as opposed to most conventional methods that identify the same biomarkers across subjects within a group. Based on the ADNI cohort, we validate the effectiveness of our method by presenting the superior performance of our network in eMCI identification. Furthermore, we provide insightful neuroscientific interpretations by analyzing the regions selected for eMCI classification.
Affiliation(s)
- Jiyeon Lee
- Department of Brain and Cognitive Engineering, Korea University, Republic of Korea
- Wonjun Ko
- Department of Brain and Cognitive Engineering, Korea University, Republic of Korea
- Eunsong Kang
- Department of Brain and Cognitive Engineering, Korea University, Republic of Korea
- Heung-Il Suk
- Department of Brain and Cognitive Engineering, Korea University, Republic of Korea; Department of Artificial Intelligence, Korea University, Republic of Korea.
|
30
|
Lee CM, Lee SS, Choi WM, Kim KM, Sung YS, Lee S, Lee SJ, Yoon JS, Suk HI. An index based on deep learning-measured spleen volume on CT for the assessment of high-risk varix in B-viral compensated cirrhosis. Eur Radiol 2020; 31:3355-3365. [PMID: 33128186 DOI: 10.1007/s00330-020-07430-3] [Citation(s) in RCA: 10] [Impact Index Per Article: 2.5] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/09/2020] [Revised: 09/05/2020] [Accepted: 10/14/2020] [Indexed: 01/11/2023]
Abstract
OBJECTIVES Deep learning enables automated liver and spleen volume measurement on CT. The purpose of this study was to develop an index combining liver and spleen volumes and clinical factors for detecting high-risk varices in B-viral compensated cirrhosis.
METHODS This retrospective study included 419 patients with B-viral compensated cirrhosis who underwent endoscopy and CT from 2007 to 2008 (derivation cohort, n = 239) and from 2009 to 2010 (validation cohort, n = 180). The liver and spleen volumes were measured on CT images using a deep learning algorithm. Multivariable logistic regression analysis of the derivation cohort developed an index to detect endoscopically confirmed high-risk varix. The cumulative 5-year risk of varix bleeding was evaluated with patients stratified by their index values.
RESULTS The index of spleen volume-to-platelet ratio was devised from the derivation cohort. In the validation cohort, the cutoff index value for balanced sensitivity and specificity (> 3.78) resulted in a sensitivity of 69.4% and a specificity of 78.5% for detecting high-risk varix, and the cutoff index value for high sensitivity (> 1.63) detected all high-risk varices. The index stratified all patients into low (index value ≤ 1.63; n = 118), intermediate (n = 162), and high (index value > 3.78; n = 139) risk groups with cumulative 5-year incidences of varix bleeding of 0%, 1.0%, and 12.0%, respectively (p < .001).
CONCLUSION The spleen volume-to-platelet ratio obtained using deep learning-based CT analysis is useful for detecting high-risk varices and assessing the risk of varix bleeding.
KEY POINTS
• The criterion of spleen volume-to-platelet ratio > 1.63 detected all high-risk varices in the validation cohort, while the absence of visible varix did not exclude all high-risk varices.
• Visual varix grade ≥ 2 detected high-risk varix with high specificity (96.5-100%).
• Combining spleen volume-to-platelet ratio ≤ 1.63 and visual varix grade of 0 identified low-risk patients who had no high-risk varix and no varix bleeding on 5-year follow-up.
Affiliation(s)
- Chul-Min Lee
- Department of Radiology and Research Institute of Radiology, University of Ulsan College of Medicine, Asan Medical Center, 88 Olympic-ro 43-gil, Songpa-gu, Seoul, 05505, South Korea; Department of Radiology, Hanyang University Medical Center, Hanyang University School of Medicine, 222-1 Wangsimni-ro, Seongdong-gu, Seoul, 04763, South Korea
- Seung Soo Lee
- Department of Radiology and Research Institute of Radiology, University of Ulsan College of Medicine, Asan Medical Center, 88 Olympic-ro 43-gil, Songpa-gu, Seoul, 05505, South Korea
- Won-Mook Choi
- Department of Gastroenterology, University of Ulsan College of Medicine, Asan Medical Center, Seoul, South Korea
- Kang Mo Kim
- Department of Gastroenterology, University of Ulsan College of Medicine, Asan Medical Center, Seoul, South Korea
- Yu Sub Sung
- Clinical Research Center, Asan Medical Center, Seoul, South Korea
- Sunho Lee
- SmartCareworks Inc., 1201, 6, Changgyeonggung-ro, Jung-gu, Seoul, 04559, South Korea
- So Jung Lee
- Department of Radiology and Research Institute of Radiology, University of Ulsan College of Medicine, Asan Medical Center, 88 Olympic-ro 43-gil, Songpa-gu, Seoul, 05505, South Korea
- Jee Seok Yoon
- Department of Brain and Cognitive Engineering, Korea University, 145 Anam-ro, Anam-dong, Seongbuk-gu, Seoul, 02841, South Korea
- Heung-Il Suk
- Department of Brain and Cognitive Engineering, Korea University, 145 Anam-ro, Anam-dong, Seongbuk-gu, Seoul, 02841, South Korea; Department of Artificial Intelligence, Korea University, 145 Anam-ro, Anam-dong, Seongbuk-gu, Seoul, South Korea
|
31
|
Jun E, Na KS, Kang W, Lee J, Suk HI, Ham BJ. Identifying resting-state effective connectivity abnormalities in drug-naïve major depressive disorder diagnosis via graph convolutional networks. Hum Brain Mapp 2020; 41:4997-5014. [PMID: 32813309 PMCID: PMC7643383 DOI: 10.1002/hbm.25175] [Citation(s) in RCA: 11] [Impact Index Per Article: 2.8] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/23/2019] [Revised: 07/13/2020] [Accepted: 08/01/2020] [Indexed: 02/06/2023] Open
Abstract
Major depressive disorder (MDD) is a leading cause of disability; its symptoms interfere with social, occupational, interpersonal, and academic functioning. However, the diagnosis of MDD is still made by a phenomenological approach. The advent of neuroimaging techniques has allowed numerous studies to use resting-state functional magnetic resonance imaging (rs-fMRI) and estimate functional connectivity for brain-disease identification. Recently, attempts have been made to investigate effective connectivity (EC), which represents causal relations among regions of interest. Meanwhile, to identify meaningful phenotypes for clinical diagnosis, graph-based approaches such as graph convolutional networks (GCNs) have recently been leveraged to explore complex pairwise similarities in imaging/nonimaging features among subjects. In this study, we validate the use of EC for MDD identification by estimating its measures via group sparse representation along with a structured equation modeling approach in a whole-brain data-driven manner from rs-fMRI. To distinguish drug-naïve MDD patients from healthy controls, we utilize spectral GCNs based on a population graph to successfully integrate EC and nonimaging phenotypic information. Furthermore, we devise a novel sensitivity analysis method to investigate the discriminant connections for MDD identification in our trained GCNs. Our experimental results validated the effectiveness of our method in various scenarios, and we identified altered connectivities associated with the diagnosis of MDD.
Affiliation(s)
- Eunji Jun
- Department of Brain and Cognitive Engineering, Korea University, Seoul, Republic of Korea
- Kyoung-Sae Na
- Department of Psychiatry, Gachon University Gil Medical Center, Incheon, Republic of Korea
- Wooyoung Kang
- Department of Biomedical Sciences, Korea University College of Medicine, Seoul, Republic of Korea
- Jiyeon Lee
- Department of Brain and Cognitive Engineering, Korea University, Seoul, Republic of Korea
- Heung-Il Suk
- Department of Brain and Cognitive Engineering, Korea University, Seoul, Republic of Korea; Department of Artificial Intelligence, Korea University, Seoul, Republic of Korea
- Byung-Joo Ham
- Department of Psychiatry, Korea University Anam Hospital, Korea University College of Medicine, Seoul, Republic of Korea
|
32
|
Shi Y, Suk HI, Gao Y, Lee SW, Shen D. Leveraging Coupled Interaction for Multimodal Alzheimer's Disease Diagnosis. IEEE Trans Neural Netw Learn Syst 2020; 31:186-200. [PMID: 30908241 DOI: 10.1109/tnnls.2019.2900077] [Citation(s) in RCA: 16] [Impact Index Per Article: 4.0] [Reference Citation Analysis] [What about the content of this article? (0)] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/09/2023]
Abstract
As the worldwide population ages, accurate computer-aided diagnosis of Alzheimer's disease (AD) at an early stage has come to be regarded as a crucial step in neurodegeneration care. Because previous methods extracted only low-level features from neuroimaging data, they regarded computer-aided diagnosis as a classification problem and ignored latent feature-wise relations. However, it is known that multiple regions of the human brain are anatomically and functionally interlinked according to the current neuroscience perspective. Thus, it is reasonable to assume that the features extracted from different brain regions are related to each other to some extent. Also, the complementary information between different neuroimaging modalities could benefit multimodal fusion. To this end, in this paper we consider leveraging coupled interactions at the feature level and the modality level for diagnosis. First, we propose capturing feature-level coupled interactions using a coupled feature representation. Then, to model modality-level coupled interactions, we present two novel methods: 1) coupled boosting (CB), which models the correlation of pairwise coupled-diversity on both inconsistently and incorrectly classified samples between different modalities, and 2) the coupled metric ensemble (CME), which learns an informative feature projection from different modalities by integrating the intra-relations and inter-relations of training samples. We systematically evaluated our methods with the AD Neuroimaging Initiative dataset. In comparison with baseline learning-based methods and state-of-the-art methods specially developed for AD/MCI (mild cognitive impairment) diagnosis, our methods achieved the best performance, with accuracies of 95.0% and 80.7% (CB) and 94.9% and 79.9% (CME) for AD/NC (normal control) and MCI/NC identification, respectively.
|
33
|
Kim BC, Yoon JS, Choi JS, Suk HI. Multi-scale gradual integration CNN for false positive reduction in pulmonary nodule detection. Neural Netw 2019; 115:1-10. [PMID: 30909118 DOI: 10.1016/j.neunet.2019.03.003] [Citation(s) in RCA: 37] [Impact Index Per Article: 7.4] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/03/2018] [Revised: 12/24/2018] [Accepted: 03/07/2019] [Indexed: 12/22/2022]
Abstract
Lung cancer is a dangerous disease worldwide, and its early detection is crucial for reducing mortality risk. In this regard, there has been great interest in developing computer-aided systems for detecting pulmonary nodules as early as possible on thoracic CT scans. In general, a nodule detection system involves two steps: (i) candidate nodule detection at high sensitivity, which captures many false positives, and (ii) false positive reduction from the candidates. However, due to the high variation of nodule morphological characteristics and the possibility of mistaking them for neighboring organs, candidate nodule detection remains a challenge. In this study, we propose a novel Multi-scale Gradual Integration Convolutional Neural Network (MGI-CNN), designed with three main strategies: (1) to use multi-scale inputs with different levels of contextual information, (2) to use abstract information inherent in different input scales with gradual integration, and (3) to learn multi-stream feature integration in an end-to-end manner. To verify the efficacy of the proposed network, we conducted exhaustive experiments on the LUNA16 challenge datasets, comparing the performance of the proposed method with state-of-the-art methods in the literature. On two candidate subsets of the LUNA16 dataset, i.e., V1 and V2, our method achieved an average CPM of 0.908 (V1) and 0.942 (V2), outperforming comparable methods by a large margin. Our MGI-CNN is implemented in Python using TensorFlow, and the source code is available from https://github.com/ku-milab/MGICNN.
Affiliation(s)
- Bum-Chae Kim
- Department of Brain and Cognitive Engineering, Korea University, Seoul, South Korea
- Jee Seok Yoon
- Department of Brain and Cognitive Engineering, Korea University, Seoul, South Korea
- Jun-Sik Choi
- Department of Brain and Cognitive Engineering, Korea University, Seoul, South Korea
- Heung-Il Suk
- Department of Brain and Cognitive Engineering, Korea University, Seoul, South Korea.
|
34
|
Affiliation(s)
- Li Wang
- Department of Radiology and BRIC, University of North Carolina at Chapel Hill, USA.
- Yinghuan Shi
- Department of Computer Science and Technology, Nanjing University, PR China
- Heung-Il Suk
- Department of Brain and Cognitive Engineering, Korea University, Republic of Korea
- Alison Noble
- Biomedical Engineering, University of Oxford, UK
|
35
|
Zhu X, Suk HI, Shen D. Group sparse reduced rank regression for neuroimaging genetic study. World Wide Web 2019; 22:673-688. [PMID: 31607788 PMCID: PMC6788769 DOI: 10.1007/s11280-018-0637-3] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [Key Words] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Received: 03/09/2018] [Revised: 07/19/2018] [Accepted: 09/07/2018] [Indexed: 06/10/2023]
Abstract
Neuroimaging genetic studies usually need to deal with the high dimensionality of both brain imaging data and genetic data, often resulting in the curse of dimensionality. In this paper, we propose a group sparse reduced rank regression model that takes into account the relations of both the phenotypes and the genotypes for neuroimaging genetic study. Specifically, we design a group sparsity constraint as well as a reduced rank constraint to simultaneously conduct subspace learning and feature selection. The group sparsity constraint conducts feature selection to identify genotypes highly related to the neuroimaging data, while the reduced rank constraint considers the relations among the neuroimaging data to conduct subspace learning within the feature selection model. Furthermore, an alternating optimization algorithm is proposed to solve the resulting objective function and is proved to achieve fast convergence. Experimental results on the Alzheimer's Disease Neuroimaging Initiative (ADNI) dataset showed that the proposed method is superior to the alternative methods under comparison in predicting the phenotype data from the genotype data.
Affiliation(s)
- Xiaofeng Zhu
- Guangxi Key Lab of Multi-source Information Mining and Security, Guangxi Normal University, Guilin 541004, Guangxi, People’s Republic of China
- Institute of Natural and Mathematical Sciences, Massey University, Auckland 0745, New Zealand
- BRIC Center of the University of North Carolina at Chapel Hill, Chapel Hill, NC 27599, USA
- Heung-Il Suk
- Department of Brain and Cognitive Engineering, Korea University, Seoul, Korea
- Dinggang Shen
- Department of Brain and Cognitive Engineering, Korea University, Seoul, Korea
- BRIC Center of the University of North Carolina at Chapel Hill, Chapel Hill, NC 27599, USA
|
36
|
Abstract
In this paper, we propose a novel feature selection method by jointly considering (1) 'task-specific' relations between response variables (e.g., clinical labels in this work) and neuroimaging features and (2) 'self-representation' relations among neuroimaging features in a sparse regression framework. Specifically, the task-specific relation is devised to learn the relative importance of features for representation of response variables by a linear combination of the input features in a supervised manner, while the self-representation relation is used to take into account the inherent information among neuroimaging features such that any feature can be represented by a weighted sum of the other features, regardless of the label information, in an unsupervised manner. By integrating these two different relations along with a group sparsity constraint, we formulate a new sparse linear regression model for class-discriminative feature selection. The selected features are used to train a support vector machine for classification. To validate the effectiveness of the proposed method, we conducted experiments on the Alzheimer's Disease Neuroimaging Initiative (ADNI) dataset; experimental results showed superiority of the proposed method over the state-of-the-art methods considered in this work.
Affiliation(s)
- Xiaofeng Zhu
- Department of Radiology and BRIC, The University of North Carolina at Chapel Hill, Chapel Hill, USA
- Heung-Il Suk
- Department of Brain and Cognitive Engineering, Korea University, Seoul, Republic of Korea
- Seong-Whan Lee
- Department of Brain and Cognitive Engineering, Korea University, Seoul, Republic of Korea
- Dinggang Shen
- Department of Radiology and BRIC, The University of North Carolina at Chapel Hill, Chapel Hill, USA
- Department of Brain and Cognitive Engineering, Korea University, Seoul, Republic of Korea
37
Jun E, Kang E, Choi J, Suk HI. Modeling regional dynamics in low-frequency fluctuation and its application to Autism spectrum disorder diagnosis. Neuroimage 2019; 184:669-686. [PMID: 30248456 DOI: 10.1016/j.neuroimage.2018.09.043] [Citation(s) in RCA: 6] [Impact Index Per Article: 1.2] [Reference Citation Analysis] [What about the content of this article? (0)] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/24/2018] [Revised: 09/14/2018] [Accepted: 09/17/2018] [Indexed: 01/07/2023] Open
38
Abstract
Fusing information from different imaging modalities is crucial for more accurate identification of the brain state because imaging data of different modalities can provide complementary perspectives on the complex nature of brain disorders. However, most existing fusion methods often extract features independently from each modality, and then simply concatenate them into a long vector for classification, without appropriate consideration of the correlation among modalities. In this paper, we propose a novel method to transform the original features from different modalities to a common space by canonical correlation analysis, where the transformed features become comparable and their relations easier to identify. We then perform sparse multi-task learning for discriminative feature selection by using the canonical features as regressors and penalizing a loss function with a canonical regularizer. In our experiments on the Alzheimer's Disease Neuroimaging Initiative (ADNI) dataset, we use Magnetic Resonance Imaging (MRI) and Positron Emission Tomography (PET) images to jointly predict clinical scores of Alzheimer's Disease Assessment Scale-Cognitive subscale (ADAS-Cog) and Mini-Mental State Examination (MMSE) and also identify multi-class disease status for Alzheimer's disease diagnosis. The experimental results showed that the proposed canonical feature selection method helped enhance the performance of both clinical score prediction and disease status identification, outperforming the state-of-the-art methods.
Affiliation(s)
- Xiaofeng Zhu
- Department of Radiology and BRIC, The University of North Carolina at Chapel Hill, Chapel Hill, NC, USA
- Heung-Il Suk
- Department of Brain and Cognitive Engineering, Korea University, Seongbuk-gu, Republic of Korea
- Seong-Whan Lee
- Department of Brain and Cognitive Engineering, Korea University, Seongbuk-gu, Republic of Korea
- Dinggang Shen
- Department of Radiology and BRIC, The University of North Carolina at Chapel Hill, Chapel Hill, NC, USA
- Department of Brain and Cognitive Engineering, Korea University, Seongbuk-gu, Republic of Korea
39
Abstract
In this paper, we propose a novel sparse regression method for Brain-Wide and Genome-Wide association study. Specifically, we impose a low-rank constraint on the weight coefficient matrix and then decompose it into two low-rank matrices, which find relationships in genetic features and in brain imaging features, respectively. We also introduce a sparse acyclic digraph with a sparsity-inducing penalty to further take into account the correlations among the genetic variables, which makes it possible to identify the representative SNPs that are highly associated with the brain imaging features. We optimize our objective function by jointly tackling low-rank regression and variable selection in a unified framework. In our method, the low-rank constraint allows us to conduct variable selection with the low-rank representations of the data, and the learned sparse weight coefficients allow discarding unimportant variables at the end. The experimental results on the Alzheimer's Disease Neuroimaging Initiative (ADNI) dataset showed that the proposed method could select the important SNPs to more accurately estimate the brain imaging features than the state-of-the-art methods.
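The core of the abstract, a weight matrix factored as W = A B with a sparsity penalty on the genetic-side factor, can be illustrated by alternating minimization: an exact least-squares step for B and a proximal gradient step for A. This is a minimal sketch of the low-rank-plus-sparsity idea under made-up data; the acyclic-digraph penalty and other details of the paper's model are omitted.

```python
import numpy as np

def lowrank_sparse_regress(X, Y, rank=2, lam=0.5, n_iter=50):
    """Alternate over the factors of W = A @ B: exact B-step, then a
    soft-thresholded gradient step on A to encourage SNP-wise sparsity."""
    d, q = X.shape[1], Y.shape[1]
    rng = np.random.default_rng(0)
    A = rng.standard_normal((d, rank)) * 0.01
    B = rng.standard_normal((rank, q)) * 0.01
    for _ in range(n_iter):
        # B-step: least squares given the projected design X @ A.
        B = np.linalg.lstsq(X @ A, Y, rcond=None)[0]
        # A-step: gradient step on ||Y - X A B||^2, then soft-threshold.
        G = -2 * X.T @ (Y - X @ A @ B) @ B.T
        step = 1.0 / (2 * np.linalg.norm(X, 2) ** 2
                      * max(np.linalg.norm(B, 2) ** 2, 1e-8))
        A = A - step * G
        A = np.sign(A) * np.maximum(np.abs(A) - step * lam, 0.0)
    return A, B

rng = np.random.default_rng(1)
X = rng.standard_normal((100, 50))                    # 50 hypothetical SNPs
W_true = np.zeros((50, 4))
W_true[:3] = rng.standard_normal((3, 4))              # only 3 SNPs matter
Y = X @ W_true + 0.1 * rng.standard_normal((100, 4))  # 4 imaging phenotypes
A, B = lowrank_sparse_regress(X, Y)
fit_err = np.linalg.norm(Y - X @ A @ B)
```

Because W is never formed directly, its rank can never exceed the chosen factor rank, and the thresholding on A zeroes whole SNP rows, which is the variable-selection behavior the abstract targets.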
Affiliation(s)
- Xiaofeng Zhu
- Department of Radiology and BRIC, University of North Carolina at Chapel Hill, Chapel Hill, NC 27514, and also with the Guangxi Key Lab of Multi-source Information Mining & Security, Guangxi Normal University, Guilin, Guangxi 541000, China
- Heung-Il Suk
- Department of Brain and Cognitive Engineering, Korea University, Seoul 03760, Republic of Korea
- Heng Huang
- Electrical and Computer Engineering, University of Pittsburgh, USA
- Dinggang Shen
- Department of Radiology and BRIC, University of North Carolina at Chapel Hill, Chapel Hill, NC 27514, and also with the Department of Brain and Cognitive Engineering, Korea University, Seoul 03760, Republic of Korea
40
Kam TE, Suk HI, Lee SW. Multiple functional networks modeling for autism spectrum disorder diagnosis. Hum Brain Mapp 2017; 38:5804-5821. [PMID: 28845892 DOI: 10.1002/hbm.23769] [Citation(s) in RCA: 28] [Impact Index Per Article: 4.0] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/26/2016] [Revised: 07/25/2017] [Accepted: 08/07/2017] [Indexed: 11/07/2022] Open
Abstract
Despite countless studies on autism spectrum disorder (ASD), diagnosis relies on specific behavioral criteria and neuroimaging biomarkers for the disorder are still relatively scarce and irrelevant for diagnostic workup. Many researchers have focused on functional networks of brain activities using resting-state functional magnetic resonance imaging (rsfMRI) to diagnose brain diseases, including ASD. Although some existing methods are able to reveal the abnormalities in functional networks, they are either highly dependent on prior assumptions for modeling these networks or do not focus on latent functional connectivities (FCs) by considering discriminative relations among FCs in a nonlinear way. In this article, we propose a novel framework to model multiple networks of rsfMRI with data-driven approaches. Specifically, we construct large-scale functional networks with hierarchical clustering and find discriminative connectivity patterns between ASD and normal controls (NC). We then learn features and classifiers for each cluster through discriminative restricted Boltzmann machines (DRBMs). In the testing phase, each DRBM determines whether a test sample is ASD or NC, based on which we make a final decision with a majority voting strategy. We assess the diagnostic performance of the proposed method using public datasets and describe the effectiveness of our method by comparing it to competing methods. We also rigorously analyze FCs learned by DRBMs on each cluster and discover dominant FCs that play a major role in discriminating between ASD and NC. Hum Brain Mapp 38:5804-5821, 2017. © 2017 Wiley Periodicals, Inc.
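The overall pipeline (cluster functional-connectivity features into networks, train one classifier per cluster, then combine decisions by majority vote) can be sketched as follows. This is a loose illustration on synthetic data: logistic regression stands in for the paper's discriminative RBMs, and Ward hierarchical clustering stands in for its large-scale network construction.

```python
import numpy as np
from scipy.cluster.hierarchy import fcluster, linkage
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n, d = 120, 40                                   # subjects x FC features
X = rng.standard_normal((n, d))
y = (X[:, :10].mean(axis=1) > 0).astype(int)     # hypothetical ASD/NC labels

# Cluster the *features* (FC edges) hierarchically into a few networks.
Zl = linkage(X.T, method='ward')
cluster_id = fcluster(Zl, t=4, criterion='maxclust')

# One classifier per cluster (the paper trains a discriminative RBM here).
clfs, members = [], []
for c in np.unique(cluster_id):
    idx = np.flatnonzero(cluster_id == c)
    members.append(idx)
    clfs.append(LogisticRegression().fit(X[:, idx], y))

# Majority vote across the per-cluster decisions.
votes = np.stack([clf.predict(X[:, idx]) for clf, idx in zip(clfs, members)])
pred = (votes.mean(axis=0) > 0.5).astype(int)
acc = (pred == y).mean()
```

The vote makes the final decision robust to any single uninformative cluster, which mirrors the testing-phase strategy described in the abstract.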
Affiliation(s)
- Tae-Eui Kam
- Department of Computer Science and Engineering, Korea University, Seoul, Republic of Korea
- Heung-Il Suk
- Department of Brain and Cognitive Engineering, Korea University, Seoul, Republic of Korea
- Seong-Whan Lee
- Department of Brain and Cognitive Engineering, Korea University, Seoul, Republic of Korea
41
Abstract
This review covers computer-assisted analysis of images in the field of medical imaging. Recent advances in machine learning, especially with regard to deep learning, are helping to identify, classify, and quantify patterns in medical images. At the core of these advances is the ability to exploit hierarchical feature representations learned solely from data, instead of features designed by hand according to domain-specific knowledge. Deep learning is rapidly becoming the state of the art, leading to enhanced performance in various medical applications. We introduce the fundamentals of deep learning methods and review their successes in image registration, detection of anatomical and cellular structures, tissue segmentation, computer-aided disease diagnosis and prognosis, and so on. We conclude by discussing research issues and suggesting future directions for further improvement.
Affiliation(s)
- Dinggang Shen
- Department of Radiology, University of North Carolina, Chapel Hill, North Carolina 27599
- Department of Brain and Cognitive Engineering, Korea University, Seoul 02841, Republic of Korea
- Guorong Wu
- Department of Radiology, University of North Carolina, Chapel Hill, North Carolina 27599
- Heung-Il Suk
- Department of Brain and Cognitive Engineering, Korea University, Seoul 02841, Republic of Korea
42
Suk HI, Lee SW, Shen D. Deep ensemble learning of sparse regression models for brain disease diagnosis. Med Image Anal 2017; 37:101-113. [PMID: 28167394 PMCID: PMC5808465 DOI: 10.1016/j.media.2017.01.008] [Citation(s) in RCA: 123] [Impact Index Per Article: 17.6] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/19/2016] [Revised: 01/14/2017] [Accepted: 01/23/2017] [Indexed: 01/18/2023]
Abstract
Recent studies on brain imaging analysis witnessed the core roles of machine learning techniques in computer-assisted intervention for brain disease diagnosis. Of various machine-learning techniques, sparse regression models have proved their effectiveness in handling high-dimensional data but with a small number of training samples, especially in medical problems. In the meantime, deep learning methods have achieved great success, outperforming state-of-the-art performance in various applications. In this paper, we propose a novel framework that combines the two conceptually different methods of sparse regression and deep learning for Alzheimer's disease/mild cognitive impairment diagnosis and prognosis. Specifically, we first train multiple sparse regression models, each of which is trained with different values of a regularization control parameter. Thus, our multiple sparse regression models potentially select different feature subsets from the original feature set; thereby they have different powers to predict the response values, i.e., clinical label and clinical scores in our work. By regarding the response values from our sparse regression models as target-level representations, we then build a deep convolutional neural network for clinical decision making, which thus we call 'Deep Ensemble Sparse Regression Network.' To the best of our knowledge, this is the first work that combines sparse regression models with a deep neural network. In our experiments with the ADNI cohort, we validated the effectiveness of the proposed method by achieving the highest diagnostic accuracies in three classification tasks. We also rigorously analyzed our results and compared with the previous studies on the ADNI cohort in the literature.
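The ensemble idea (train sparse regressors across a grid of regularization values, stack their predicted responses as "target-level representations", and feed those to a final deep model) can be sketched compactly. Here logistic regression stands in for the paper's convolutional network, and the data, label rule, and alpha grid are all hypothetical.

```python
import numpy as np
from sklearn.linear_model import Lasso, LogisticRegression

rng = np.random.default_rng(0)
n, d = 100, 50
X = rng.standard_normal((n, d))
y = (X[:, :5].sum(axis=1) > 0).astype(int)       # hypothetical AD/MCI labels

# Step 1: an ensemble of sparse regressors, one per regularization value,
# each potentially selecting a different feature subset.
alphas = [0.01, 0.05, 0.1, 0.5]
reps = np.column_stack([
    Lasso(alpha=a).fit(X, y).predict(X) for a in alphas
])                                               # target-level representations

# Step 2: a final model on the stacked responses (a CNN in the paper;
# logistic regression keeps this sketch minimal).
clf = LogisticRegression().fit(reps, y)
acc = clf.score(reps, y)
```

Stacking the predicted responses rather than the raw features is what makes this an ensemble over regularization strengths: the final model learns how much to trust each sparsity level.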
Affiliation(s)
- Heung-Il Suk
- Department of Brain and Cognitive Engineering, Korea University, Seoul 02841, Republic of Korea
- Seong-Whan Lee
- Department of Brain and Cognitive Engineering, Korea University, Seoul 02841, Republic of Korea
- Dinggang Shen
- Department of Brain and Cognitive Engineering, Korea University, Seoul 02841, Republic of Korea; Biomedical Research Imaging Center and Department of Radiology, University of North Carolina at Chapel Hill, NC 27599, USA
43
Zhu X, Suk HI, Thung KH, Zhu Y, Wu G, Shen D. Joint Discriminative and Representative Feature Selection for Alzheimer's Disease Diagnosis. Mach Learn Med Imaging 2016; 10019:77-85. [PMID: 28956028 PMCID: PMC5612439 DOI: 10.1007/978-3-319-47157-0_10] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/07/2023]
Abstract
Neuroimaging data have been widely used to derive possible biomarkers for Alzheimer's Disease (AD) diagnosis. As only certain brain regions are related to AD progression, many feature selection methods have been proposed to identify informative features (i.e., brain regions) to build an accurate prediction model. These methods mostly only focus on the feature-target relationship to select features which are discriminative to the targets (e.g., diagnosis labels). However, since the brain regions are anatomically and functionally connected, there could be useful intrinsic relationships among features. In this paper, by utilizing both the feature-target and feature-feature relationships, we propose a novel sparse regression model to select informative features which are discriminative to the targets and also representative to the features. We argue that the features which are representative (i.e., can be used to represent many other features) are important, as they signify strong "connection" with other ROIs, and could be related to the disease progression. We use our model to select features for both binary and multi-class classification tasks, and the experimental results on the Alzheimer's Disease Neuroimaging Initiative (ADNI) dataset show that the proposed method outperforms other comparison methods considered in this work.
Affiliation(s)
- Xiaofeng Zhu
- Department of Radiology and BRIC, University of North Carolina at Chapel Hill, Chapel Hill, USA
- Heung-Il Suk
- Department of Brain and Cognitive Engineering, Korea University, Seongbuk-gu, Republic of Korea
- Kim-Han Thung
- Department of Radiology and BRIC, University of North Carolina at Chapel Hill, Chapel Hill, USA
- Yingying Zhu
- Department of Radiology and BRIC, University of North Carolina at Chapel Hill, Chapel Hill, USA
- Guorong Wu
- Department of Radiology and BRIC, University of North Carolina at Chapel Hill, Chapel Hill, USA
- Dinggang Shen
- Department of Radiology and BRIC, University of North Carolina at Chapel Hill, Chapel Hill, USA
44
Zhu X, Suk HI, Huang H, Shen D. Structured Sparse Low-Rank Regression Model for Brain-Wide and Genome-Wide Associations. Med Image Comput Comput Assist Interv 2016; 9900:344-352. [PMID: 28530001 PMCID: PMC5436308 DOI: 10.1007/978-3-319-46720-7_40] [Citation(s) in RCA: 12] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 12/24/2022]
Abstract
With the advances of neuroimaging techniques and genome sequencing, phenotype and genotype data have been utilized to study brain diseases (known as imaging genetics). One of the most important topics in imaging genetics is to discover the genetic basis of phenotypic markers and their associations. In such studies, linear regression models have been playing an important role by providing interpretable results. However, due to their modeling characteristics, they are limited in effectively utilizing inherent information among the phenotypes and genotypes, which is helpful for better understanding their associations. In this work, we propose a structured sparse low-rank regression method to explicitly consider the correlations within the imaging phenotypes and the genotypes simultaneously for Brain-Wide and Genome-Wide Association (BW-GWA) study. Specifically, we impose the low-rank constraint as well as the structured sparse constraint on both phenotypes and genotypes. By using the Alzheimer's Disease Neuroimaging Initiative (ADNI) dataset, we conducted experiments of predicting the phenotype data from genotype data and achieved performance improvement by 12.75% on average in terms of the root-mean-square error over the state-of-the-art methods.
Affiliation(s)
- Xiaofeng Zhu
- Department of Radiology and BRIC, University of North Carolina at Chapel Hill, Chapel Hill, USA
- Heung-Il Suk
- Department of Brain and Cognitive Engineering, Korea University, Seoul, Republic of Korea
- Heng Huang
- Computer Science and Engineering, University of Texas at Arlington, Arlington, USA
- Dinggang Shen
- Department of Radiology and BRIC, University of North Carolina at Chapel Hill, Chapel Hill, USA
45
Park KH, Suk HI, Lee SW. Position-Independent Decoding of Movement Intention for Proportional Myoelectric Interfaces. IEEE Trans Neural Syst Rehabil Eng 2016; 24:928-939. [DOI: 10.1109/tnsre.2015.2481461] [Citation(s) in RCA: 21] [Impact Index Per Article: 2.6] [Reference Citation Analysis] [What about the content of this article? (0)] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/09/2022]
46
Kim KT, Suk HI, Lee SW. Commanding a Brain-Controlled Wheelchair Using Steady-State Somatosensory Evoked Potentials. IEEE Trans Neural Syst Rehabil Eng 2016; 26:654-665. [PMID: 27514060 DOI: 10.1109/tnsre.2016.2597854] [Citation(s) in RCA: 42] [Impact Index Per Article: 5.3] [Reference Citation Analysis] [What about the content of this article? (0)] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/10/2022]
Abstract
In this work, we propose a novel brain-controlled wheelchair, one of the major applications of brain-machine interfaces (BMIs), that allows an individual with mobility impairments to perform daily living activities independently. Specifically, we propose to use a steady-state somatosensory evoked potential (SSSEP) paradigm, which elicits brain responses to tactile stimulation of specific frequencies, for a user's intention to control a wheelchair. In our system, a user had three possible commands by concentrating on one of three vibration stimuli, which were attached to the left hand, right hand, and right foot, to selectively control the wheelchair. The three stimuli were associated with three wheelchair commands: turn-left, turn-right, and move-forward. From a machine learning perspective, we also devise a novel feature representation by combining spatial and spectral characteristics of brain signals. In order to validate the effectiveness of the proposed SSSEP-based system, we considered two different tasks: 1) a simple obstacle-avoidance task within a limited time; and 2) a driving task along the predefined trajectory of about 40 m length, where there were a narrow pathway, a door, and obstacles. In both experiments, we recruited 12 subjects and compared the average time of motor imagery (MI) and SSSEP-based controls to complete the task. With the SSSEP-based control, all subjects successfully completed the task without making any collision, while four subjects failed it with MI-based control. It is also noteworthy that in terms of the average time to complete the task, the SSSEP-based control outperformed the MI-based control. In the other more challenging task, all subjects successfully reached the target location.
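A minimal decoding sketch for an SSSEP-style paradigm: band-pass filter each trial around each stimulation frequency, take log band power per channel as a spectral feature, and classify with LDA. The sampling rate, stimulation frequencies, and synthetic signals below are all assumptions, and the paper's spatial-filtering component is omitted.

```python
import numpy as np
from scipy.signal import butter, filtfilt
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

fs = 250                                 # hypothetical sampling rate (Hz)
stim_freqs = [20, 26, 32]                # hypothetical tactile stimulation frequencies

def bandpower_features(trials, fs, freqs, width=2.0):
    """Log band power at each stimulation frequency, per channel.
    A purely spectral feature; the paper also exploits spatial structure."""
    feats = []
    for f in freqs:
        b, a = butter(4, [(f - width) / (fs / 2), (f + width) / (fs / 2)],
                      btype='band')
        filt = filtfilt(b, a, trials, axis=-1)
        feats.append(np.log(filt.var(axis=-1) + 1e-12))
    return np.concatenate(feats, axis=-1)

# Synthetic trials (n_trials, n_channels, n_samples): each class carries
# extra power at its own stimulation frequency on one channel.
rng = np.random.default_rng(0)
t = np.arange(500) / fs
trials, labels = [], []
for cls, f in enumerate(stim_freqs):
    for _ in range(30):
        sig = rng.standard_normal((4, t.size))
        sig[0] += 2 * np.sin(2 * np.pi * f * t)   # class-specific oscillation
        trials.append(sig)
        labels.append(cls)
trials, labels = np.stack(trials), np.array(labels)

Xf = bandpower_features(trials, fs, stim_freqs)
lda = LinearDiscriminantAnalysis().fit(Xf, labels)
acc = lda.score(Xf, labels)
```

Each of the three predicted classes would then map to one wheelchair command (turn-left, turn-right, move-forward), matching the three-stimulus design in the abstract.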
47
Li Z, Suk HI, Shen D, Li L. Sparse Multi-Response Tensor Regression for Alzheimer's Disease Study With Multivariate Clinical Assessments. IEEE Trans Med Imaging 2016; 35:1927-1936. [PMID: 26960221 PMCID: PMC5154176 DOI: 10.1109/tmi.2016.2538289] [Citation(s) in RCA: 9] [Impact Index Per Article: 1.1] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 05/30/2023]
Abstract
Alzheimer's disease (AD) is a progressive and irreversible neurodegenerative disorder that has recently seen serious increase in the number of affected subjects. In the last decade, neuroimaging has been shown to be a useful tool to understand AD and its prodromal stage, amnestic mild cognitive impairment (MCI). The majority of AD/MCI studies have focused on disease diagnosis, by formulating the problem as classification with a binary outcome of AD/MCI or healthy controls. There have recently emerged studies that associate image scans with continuous clinical scores that are expected to contain richer information than a binary outcome. However, very few studies aim at modeling multiple clinical scores simultaneously, even though it is commonly conceived that multivariate outcomes provide correlated and complementary information about the disease pathology. In this article, we propose a sparse multi-response tensor regression method to model multiple outcomes jointly as well as to model multiple voxels of an image jointly. The proposed method is particularly useful to both infer clinical scores and thus disease diagnosis, and to identify brain subregions that are highly relevant to the disease outcomes. We conducted experiments on the Alzheimer's Disease Neuroimaging Initiative (ADNI) dataset, and showed that the proposed method enhances the performance and clearly outperforms the competing solutions.
Affiliation(s)
- Zhou Li
- Department of Statistics, North Carolina State University, Raleigh, NC 27695, USA
- Heung-Il Suk
- Department of Brain and Cognitive Engineering, Korea University, Seoul 02841, South Korea
- Dinggang Shen
- Biomedical Research Imaging Center (BRIC) and Department of Radiology, University of North Carolina at Chapel Hill, Chapel Hill, NC 27599, USA, and also with the Department of Brain and Cognitive Engineering, Korea University, Seoul 02841, South Korea
- Lexin Li
- Division of Biostatistics, University of California at Berkeley, Berkeley, CA 94720, USA
48
Abstract
The high feature-dimension and low sample-size problem is one of the major challenges in the study of computer-aided Alzheimer's disease (AD) diagnosis. To circumvent this problem, feature selection and subspace learning have been playing core roles in the literature. Generally, feature selection methods are preferable in clinical applications due to their ease for interpretation, but subspace learning methods can usually achieve more promising results. In this paper, we combine two different methodological approaches to discriminative feature selection in a unified framework. Specifically, we utilize two subspace learning methods, namely, linear discriminant analysis and locality preserving projection, which have proven their effectiveness in a variety of fields, to select class-discriminative and noise-resistant features. Unlike previous methods in neuroimaging studies that mostly focused on a binary classification, the proposed feature selection method is further applicable for multiclass classification in AD diagnosis. Extensive experiments on the Alzheimer's disease neuroimaging initiative dataset showed the effectiveness of the proposed method over other state-of-the-art methods.
Affiliation(s)
- Xiaofeng Zhu
- Department of Radiology and Biomedical Research Imaging Center (BRIC), The University of North Carolina at Chapel Hill, NC, USA
- Heung-Il Suk
- Department of Brain and Cognitive Engineering, Korea University, Republic of Korea
- Seong-Whan Lee
- Department of Brain and Cognitive Engineering, Korea University, Republic of Korea
- Dinggang Shen
- Department of Radiology and Biomedical Research Imaging Center (BRIC), The University of North Carolina at Chapel Hill, NC, USA
- Department of Brain and Cognitive Engineering, Korea University, Republic of Korea
49
Suk HI, Wee CY, Lee SW, Shen D. State-space model with deep learning for functional dynamics estimation in resting-state fMRI. Neuroimage 2016; 129:292-307. [PMID: 26774612 DOI: 10.1016/j.neuroimage.2016.01.005] [Citation(s) in RCA: 153] [Impact Index Per Article: 19.1] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/22/2015] [Revised: 01/02/2016] [Accepted: 01/04/2016] [Indexed: 12/16/2022] Open
Abstract
Studies on resting-state functional Magnetic Resonance Imaging (rs-fMRI) have shown that different brain regions still actively interact with each other while a subject is at rest, and such functional interaction is not stationary but changes over time. In terms of a large-scale brain network, in this paper, we focus on time-varying patterns of functional networks, i.e., functional dynamics, inherent in rs-fMRI, which is one of the emerging issues along with the network modelling. Specifically, we propose a novel methodological architecture that combines deep learning and state-space modelling, and apply it to rs-fMRI based Mild Cognitive Impairment (MCI) diagnosis. We first devise a Deep Auto-Encoder (DAE) to discover hierarchical non-linear functional relations among regions, by which we transform the regional features into an embedding space, whose bases are complex functional networks. Given the embedded functional features, we then use a Hidden Markov Model (HMM) to estimate dynamic characteristics of functional networks inherent in rs-fMRI via internal states, which are unobservable but can be inferred from observations statistically. By building a generative model with an HMM, we estimate the likelihood of the input features of rs-fMRI as belonging to the corresponding status, i.e., MCI or normal healthy control, based on which we identify the clinical label of a testing subject. In order to validate the effectiveness of the proposed method, we performed experiments on two different datasets and compared with state-of-the-art methods in the literature. We also analyzed the functional networks learned by DAE, estimated the functional connectivities by decoding hidden states in HMM, and investigated the estimated functional connectivities by means of a graph-theoretic approach.
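The generative pipeline in this abstract (embed regional features, model each class's latent-state dynamics, and label a test subject by the higher sequence likelihood) can be loosely approximated with simple stand-ins: PCA in place of the deep auto-encoder, and a per-class Gaussian mixture in place of the HMM (so transition dynamics are ignored, an i.i.d. approximation). The synthetic subjects and all parameter choices below are hypothetical.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)

def subject(shift):
    # Hypothetical regional time series (T time points x R regions) that
    # alternates between two latent "functional states" every 20 frames.
    states = (np.arange(200) // 20) % 2
    base = np.where(states[:, None] == 0, 0.0, shift)
    return base + rng.standard_normal((200, 10))

train_nc = [subject(0.5) for _ in range(5)]    # normal controls
train_mci = [subject(1.5) for _ in range(5)]   # MCI: different state pattern

# Step 1: shared embedding of regional features (PCA stands in for the DAE).
pca = PCA(n_components=3).fit(np.vstack(train_nc + train_mci))
def emb(s):
    return pca.transform(s)

# Step 2: per-class generative state model (GaussianMixture stands in for
# the HMM; state transitions are not modeled in this sketch).
gm_nc = GaussianMixture(n_components=2, random_state=0).fit(
    np.vstack([emb(s) for s in train_nc]))
gm_mci = GaussianMixture(n_components=2, random_state=0).fit(
    np.vstack([emb(s) for s in train_mci]))

# Step 3: label a test subject by the higher average log-likelihood.
test = subject(1.5)                            # an MCI-like test subject
ll_nc, ll_mci = gm_nc.score(emb(test)), gm_mci.score(emb(test))
pred = 'MCI' if ll_mci > ll_nc else 'NC'
```

Replacing the mixture with a fitted HMM would restore the temporal-dynamics modeling that is central to the paper; the classification-by-likelihood step would stay the same.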
Affiliation(s)
- Heung-Il Suk
- Department of Brain and Cognitive Engineering, Korea University, Republic of Korea
- Chong-Yaw Wee
- Department of Biomedical Engineering, National University of Singapore, Singapore
- Seong-Whan Lee
- Department of Brain and Cognitive Engineering, Korea University, Republic of Korea
- Dinggang Shen
- Department of Brain and Cognitive Engineering, Korea University, Republic of Korea; Biomedical Research Imaging Center, Department of Radiology, University of North Carolina at Chapel Hill, USA
50