1. Mu S, Lu W, Yu G, Zheng L, Qiu J. Deep learning-based grading of white matter hyperintensities enables identification of potential markers in multi-sequence MRI data. Computer Methods and Programs in Biomedicine 2024; 243:107904. [PMID: 37924768] [DOI: 10.1016/j.cmpb.2023.107904]
Abstract
BACKGROUND White matter hyperintensities (WMHs) are widely seen in the aging population and are associated with cerebrovascular risk factors and age-related cognitive decline. At present, the structural atrophy and functional alterations that coexist with WMHs lack comprehensive investigation. This study developed a WMHs risk prediction model to evaluate WMHs according to Fazekas scales and to locate potential high-risk regions across the entire brain. METHODS The WMHs risk prediction model consisted of the following steps: the T2 fluid-attenuated inversion recovery (T2-FLAIR) image of each participant was first segmented into 1000 tiles of size 32 × 32 × 1, features were extracted from the tiles using a ResNet18-based feature extractor, and a 1D convolutional neural network (CNN) scored all tiles based on the extracted features. Finally, a multi-layer perceptron (MLP) predicted the Fazekas scales from the tile scores. The model was trained on T2-FLAIR images; after prediction, tiles with abnormal scores in the test set were selected, and their corresponding gray matter (GM) volume, white matter (WM) volume, fractional anisotropy (FA), mean diffusivity (MD), and cerebral blood flow (CBF) were evaluated via longitudinal and multi-sequence magnetic resonance imaging (MRI) data analysis. RESULTS The proposed model predicted Fazekas ratings from the tile scores of T2-FLAIR images with accuracies of 0.656 and 0.621 in the training and test sets, respectively. The longitudinal MRI validation revealed that most of the high-risk tiles predicted in the baseline images had WMHs at the corresponding positions in the longitudinal images. The validation on multi-sequence MRI demonstrated that WMHs were associated with GM and WM atrophy and with WM microstructural and perfusion alterations in high-risk tiles, and multi-modal MRI measures of most high-risk tiles showed significant associations with Mini-Mental State Examination (MMSE) scores. CONCLUSION The proposed WMHs risk prediction model not only accurately evaluates WMH severity according to Fazekas scales but also uncovers potential markers of WMHs across modalities. It has the potential to be used for early detection of WMH-related alterations across the entire brain and of WMH-induced cognitive decline.
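The tile-then-score-then-grade pipeline described in the abstract can be sketched end to end. This is an illustrative reconstruction only: mean tile intensity stands in for the ResNet18 + 1D-CNN scorer, and the grade cut-offs are made-up values, not the trained MLP.

```python
import numpy as np

def tile_image(img, tile=32):
    """Split a 2-D slice into non-overlapping tile x tile patches."""
    h, w = img.shape
    patches = []
    for r in range(0, h - tile + 1, tile):
        for c in range(0, w - tile + 1, tile):
            patches.append(img[r:r + tile, c:c + tile])
    return np.stack(patches)

def score_tiles(tiles):
    """Stand-in for the learned tile scorer: mean intensity per tile."""
    return tiles.reshape(len(tiles), -1).mean(axis=1)

def fazekas_grade(scores, thresholds=(0.2, 0.4, 0.6)):
    """Map the fraction of high-scoring tiles to a 0-3 grade (illustrative cut-offs)."""
    frac_abnormal = float((scores > 0.5).mean())
    return int(np.searchsorted(thresholds, frac_abnormal, side="right"))

rng = np.random.default_rng(0)
slice_ = rng.random((256, 256)) * 0.4   # mostly "normal" tissue
slice_[:128, :128] = 0.9                # a bright mock "lesion" region
tiles = tile_image(slice_)
scores = score_tiles(tiles)
grade = fazekas_grade(scores)
```

Aggregating per-tile scores before grading is what lets the method localise high-risk tiles as well as produce a whole-image rating.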
Affiliation(s)
- Si Mu
- College of Mechanical and Electronic Engineering, Shandong Agricultural University, Tai'an, Shandong, 271000, China
- Weizhao Lu
- Department of Radiology, the Second Affiliated Hospital of Shandong First Medical University, Tai'an, Shandong, 271000, China
- Guanghui Yu
- Department of Radiology, the Second Affiliated Hospital of Shandong First Medical University, Tai'an, Shandong, 271000, China
- Lei Zheng
- Department of Radiology, Rushan Hospital of Chinese Medicine, Rushan, Shandong, 264500, China
- Jianfeng Qiu
- School of Radiology, Shandong First Medical University & Shandong Academy of Medical Sciences, Tai'an, Shandong, 271000, China; Science and Technology Innovation Center, Shandong First Medical University & Shandong Academy of Medical Sciences, Jinan, 250000, China
2. Wang R, Bashyam V, Yang Z, Yu F, Tassopoulou V, Chintapalli SS, Skampardoni I, Sreepada LP, Sahoo D, Nikita K, Abdulkadir A, Wen J, Davatzikos C. Applications of generative adversarial networks in neuroimaging and clinical neuroscience. Neuroimage 2023; 269:119898. [PMID: 36702211] [PMCID: PMC9992336] [DOI: 10.1016/j.neuroimage.2023.119898]
Abstract
Generative adversarial networks (GANs) are a powerful class of deep learning models that have been successfully applied in numerous fields. They belong to the broader family of generative methods, which learn to generate realistic data with a probabilistic model by learning distributions from real samples. In the clinical context, GANs have shown enhanced capabilities in capturing spatially complex, nonlinear, and potentially subtle disease effects compared with traditional generative methods. This review critically appraises the existing literature on applications of GANs in imaging studies of various neurological conditions, including Alzheimer's disease, brain tumors, brain aging, and multiple sclerosis. We provide an intuitive explanation of the GAN methods used in each application and discuss the main challenges, open questions, and promising future directions of leveraging GANs in neuroimaging. We aim to bridge the gap between advanced deep learning methods and neurology research by highlighting how GANs can support clinical decision making and contribute to a better understanding of the structural and functional patterns of brain diseases.
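The adversarial idea underlying all of the reviewed methods — a generator trained against a discriminator until its samples match the real data distribution — can be illustrated with a deliberately tiny numpy sketch: a 1-D linear generator and a logistic discriminator learning a Gaussian. Every architecture and hyperparameter here is a toy choice for illustration, not taken from any reviewed paper.

```python
import numpy as np

rng = np.random.default_rng(1)
sigmoid = lambda s: 1.0 / (1.0 + np.exp(-s))

# Generator: x = wg*z + bg ; Discriminator: D(x) = sigmoid(wd*x + bd)
wg, bg = 1.0, 0.0
wd, bd = 0.1, 0.0
lr, batch = 0.05, 64

for step in range(2000):
    z = rng.standard_normal(batch)
    real = 3.0 + rng.standard_normal(batch)   # target distribution N(3, 1)
    fake = wg * z + bg

    # Discriminator step: minimise -[log D(real) + log(1 - D(fake))]
    sr, sf = wd * real + bd, wd * fake + bd
    gr = sigmoid(sr) - 1.0                    # dL/ds for the real term
    gf = sigmoid(sf)                          # dL/ds for the fake term
    wd -= lr * np.mean(gr * real + gf * fake)
    bd -= lr * np.mean(gr + gf)

    # Generator step: non-saturating loss -log D(fake)
    sf = wd * fake + bd
    gG = (sigmoid(sf) - 1.0) * wd             # gradient flows back through D
    wg -= lr * np.mean(gG * z)
    bg -= lr * np.mean(gG)
```

After training, the generator's output mean (here simply `bg`, since `z` is zero-mean) should have moved toward the real mean of 3 — the same adversarial pressure that, at scale, lets GANs capture subtle disease effects in images.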
Affiliation(s)
- Rongguang Wang
- Center for AI and Data Science for Integrated Diagnostics, University of Pennsylvania, Philadelphia, USA; Center for Biomedical Image Computing and Analytics, University of Pennsylvania, Philadelphia, USA.
- Vishnu Bashyam
- Center for AI and Data Science for Integrated Diagnostics, University of Pennsylvania, Philadelphia, USA; Center for Biomedical Image Computing and Analytics, University of Pennsylvania, Philadelphia, USA
- Zhijian Yang
- Center for AI and Data Science for Integrated Diagnostics, University of Pennsylvania, Philadelphia, USA; Center for Biomedical Image Computing and Analytics, University of Pennsylvania, Philadelphia, USA
- Fanyang Yu
- Center for AI and Data Science for Integrated Diagnostics, University of Pennsylvania, Philadelphia, USA; Center for Biomedical Image Computing and Analytics, University of Pennsylvania, Philadelphia, USA
- Vasiliki Tassopoulou
- Center for AI and Data Science for Integrated Diagnostics, University of Pennsylvania, Philadelphia, USA; Center for Biomedical Image Computing and Analytics, University of Pennsylvania, Philadelphia, USA
- Sai Spandana Chintapalli
- Center for AI and Data Science for Integrated Diagnostics, University of Pennsylvania, Philadelphia, USA; Center for Biomedical Image Computing and Analytics, University of Pennsylvania, Philadelphia, USA
- Ioanna Skampardoni
- Center for AI and Data Science for Integrated Diagnostics, University of Pennsylvania, Philadelphia, USA; Center for Biomedical Image Computing and Analytics, University of Pennsylvania, Philadelphia, USA; School of Electrical and Computer Engineering, National Technical University of Athens, Athens, Greece
- Lasya P Sreepada
- Center for AI and Data Science for Integrated Diagnostics, University of Pennsylvania, Philadelphia, USA; Center for Biomedical Image Computing and Analytics, University of Pennsylvania, Philadelphia, USA
- Dushyant Sahoo
- Center for AI and Data Science for Integrated Diagnostics, University of Pennsylvania, Philadelphia, USA; Center for Biomedical Image Computing and Analytics, University of Pennsylvania, Philadelphia, USA
- Konstantina Nikita
- School of Electrical and Computer Engineering, National Technical University of Athens, Athens, Greece
- Ahmed Abdulkadir
- Center for AI and Data Science for Integrated Diagnostics, University of Pennsylvania, Philadelphia, USA; Center for Biomedical Image Computing and Analytics, University of Pennsylvania, Philadelphia, USA; Department of Clinical Neurosciences, Lausanne University Hospital and University of Lausanne, Lausanne, Switzerland
- Junhao Wen
- Center for AI and Data Science for Integrated Diagnostics, University of Pennsylvania, Philadelphia, USA; Center for Biomedical Image Computing and Analytics, University of Pennsylvania, Philadelphia, USA
- Christos Davatzikos
- Center for AI and Data Science for Integrated Diagnostics, University of Pennsylvania, Philadelphia, USA; Center for Biomedical Image Computing and Analytics, University of Pennsylvania, Philadelphia, USA; Department of Radiology, Perelman School of Medicine, University of Pennsylvania, Philadelphia, USA
3. A Systematic Literature Review on Applications of GAN-Synthesized Images for Brain MRI. Future Internet 2022. [DOI: 10.3390/fi14120351]
Abstract
With the advances in brain imaging, magnetic resonance imaging (MRI) is evolving into a popular radiological tool for clinical diagnosis. Deep learning (DL) methods can detect abnormalities in brain images without an extensive manual feature-extraction process. Beyond augmentation, generative adversarial network (GAN)-synthesized images have many applications in this field, such as image translation, registration, super-resolution, denoising, motion correction, segmentation, reconstruction, and contrast enhancement. The existing literature was reviewed systematically to understand the role of GAN-synthesized images in brain disease diagnosis. The Web of Science and Scopus databases were searched extensively for relevant studies from the last six years to write this systematic literature review (SLR). Predefined inclusion and exclusion criteria were used to filter the search results, and data extraction was based on predefined research questions (RQs). This SLR identifies the loss functions used in the above applications and the software used to process brain MRIs. A comparative study of existing evaluation metrics for GAN-synthesized images helps in choosing the proper metric for an application. GAN-synthesized images will play a crucial role in the clinical sector in the coming years, and this paper provides a baseline for other researchers in the field.
4. Prediction of Lung Nodule Progression with an Uncertainty-Aware Hierarchical Probabilistic Network. Diagnostics (Basel) 2022; 12:2639. [DOI: 10.3390/diagnostics12112639]
Abstract
Predicting whether a lung nodule will grow, remain stable, or regress over time, especially early in its follow-up, would help doctors prescribe personalized treatments and plan surgery better. However, the multifactorial nature of lung tumour progression hampers the identification of growth patterns. In this work, we propose a deep hierarchical generative and probabilistic network that, given an initial image of the nodule, predicts whether it will grow, quantifies its future size, and provides its expected semantic appearance at a future time. Unlike previous solutions, our approach also estimates the uncertainty in the predictions arising from the intrinsic noise in medical images and the inter-observer variability in the annotations. Evaluation on an independent test set reported a mean absolute error of 1.74 mm for future tumour growth size, a nodule segmentation Dice coefficient of 78%, and a tumour growth accuracy of 84% on predictions made up to 24 months ahead. In the absence of comparable methods for predicting future lung tumour growth together with its associated uncertainty, we adapted equivalent deterministic and alternative generative networks (i.e., probabilistic U-Net, Bayesian test-time dropout, and Pix2Pix). Our method outperformed all of them, corroborating the adequacy of our approach.
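The test-time-dropout baseline mentioned above estimates uncertainty by running several stochastic forward passes and reading off the spread of the outputs. A minimal numpy sketch of that idea, with a hypothetical linear model standing in for the network (the weights, inputs, and dropout rate are all illustrative):

```python
import numpy as np

def stochastic_predict(x, w, rng, p_drop=0.5, n_samples=200):
    """Monte-Carlo test-time dropout, in miniature: each pass randomly drops
    weights, and the mean/std over passes give a prediction and an
    uncertainty estimate."""
    preds = []
    for _ in range(n_samples):
        mask = (rng.random(w.shape) > p_drop).astype(float)
        preds.append(x @ (w * mask) / (1.0 - p_drop))  # inverted-dropout rescaling
    preds = np.array(preds)
    return preds.mean(axis=0), preds.std(axis=0)

rng = np.random.default_rng(0)
w = np.array([0.5, -1.0, 2.0])     # hypothetical trained weights
x = np.array([[1.0, 2.0, 3.0]])    # one input sample
mean, std = stochastic_predict(x, w, rng)
```

The mean plays the role of the growth prediction and the standard deviation the role of the per-prediction uncertainty that the paper reports alongside it.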
5. Li X, Jiang Y, Rodriguez-Andina JJ, Luo H, Yin S, Kaynak O. When medical images meet generative adversarial network: recent development and research opportunities. Discover Artificial Intelligence 2021. [DOI: 10.1007/s44163-021-00006-0]
Abstract
Deep learning techniques have promoted the rise of artificial intelligence (AI) and performed well in computer vision. Medical image analysis is an important application of deep learning that is expected to greatly reduce the workload of doctors, contributing to more sustainable health systems. However, most current AI methods for medical image analysis are based on supervised learning, which requires a large amount of annotated data; the number of medical images available is usually small, and acquiring medical image annotations is an expensive process. The generative adversarial network (GAN), an unsupervised method that has become very popular in recent years, can simulate the distribution of real data and generate approximations of real data. GANs open exciting new avenues for medical image generation, expanding the number of medical images available to deep learning methods; generated data can address the problems of insufficient data and imbalanced data categories. Adversarial training is another contribution of GANs to medical imaging and has been applied to many tasks, such as classification, segmentation, and detection. This paper surveys the research status of GANs in medical imaging and analyzes several GAN methods commonly applied in this area, covering both medical image synthesis and adversarial learning for other medical image tasks. Open challenges and future research directions are also discussed.
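The augmentation use case described above — padding out a scarce or imbalanced class with synthetic samples — can be sketched without a trained GAN by using interpolation between minority-class samples as a stand-in generator (a SMOTE-style heuristic; all data here are synthetic toy vectors, and a real pipeline would draw the new samples from a trained generator instead):

```python
import numpy as np

def synth_augment(X_min, n_new, rng):
    """Stand-in for a trained generator: interpolate between random pairs of
    minority-class samples to create plausible new ones."""
    i = rng.integers(0, len(X_min), n_new)
    j = rng.integers(0, len(X_min), n_new)
    t = rng.random((n_new, 1))                 # random interpolation weights
    return X_min[i] + t * (X_min[j] - X_min[i])

rng = np.random.default_rng(0)
X_major = rng.normal(0.0, 1.0, (100, 4))   # 100 majority-class samples
X_minor = rng.normal(2.0, 1.0, (10, 4))    # only 10 minority-class samples
X_new = synth_augment(X_minor, 90, rng)
X_balanced = np.vstack([X_minor, X_new])   # minority class now matches the majority
```

The synthetic samples stay within the minority class's region of feature space, which is the property that makes generated data useful for rebalancing a training set.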
6. Rastogi A, Weissert R, Bhaskar SMM. Emerging role of white matter lesions in cerebrovascular disease. Eur J Neurosci 2021; 54:5531-5559. [PMID: 34233379] [DOI: 10.1111/ejn.15379]
Abstract
White matter lesions have been implicated in stroke, dementia, intracerebral haemorrhage, several other cerebrovascular conditions, migraine, various neuroimmunological diseases such as multiple sclerosis, disorders of metabolism, mitochondrial diseases, and others. While much is understood with respect to neuroimmunological conditions, our knowledge of the pathophysiology of these lesions, and of their role in and implications for the management of cerebrovascular diseases or stroke, especially in the elderly, is limited. Several clinical assessment tools are available for delineating white matter lesions in clinical practice; however, their incorporation into clinical decision-making, and specifically into patient prognosis and management, remains suboptimal in standards of care. This article provides an overview of current knowledge and recent advances in the pathophysiology and the clinical and radiological assessment of white matter lesions, with a focus on their development, progression, and clinical implications in cerebrovascular diseases. Key indications for clinical practice and recommendations for future areas of research are also discussed. Finally, a conceptual proposal on putative mechanisms underlying the pathogenesis of white matter lesions in cerebrovascular disease is presented. Understanding the pathophysiology of white matter lesions and how they mediate outcomes is important for developing therapeutic strategies.
Affiliation(s)
- Aarushi Rastogi
- South Western Sydney Clinical School, University of New South Wales (UNSW), Liverpool, New South Wales, Australia; Neurovascular Imaging Laboratory, Clinical Sciences Stream, Ingham Institute for Applied Medical Research, Sydney, New South Wales, Australia
- Robert Weissert
- Department of Neurology, Regensburg University Hospital, University of Regensburg, Regensburg, Germany
- Sonu Menachem Maimonides Bhaskar
- South Western Sydney Clinical School, University of New South Wales (UNSW), Liverpool, New South Wales, Australia; Neurovascular Imaging Laboratory, Clinical Sciences Stream, Ingham Institute for Applied Medical Research, Sydney, New South Wales, Australia; NSW Brain Clot Bank, NSW Health Pathology, Sydney, New South Wales, Australia; Department of Neurology and Neurophysiology, Liverpool Hospital and South Western Sydney Local Health District, Sydney, New South Wales, Australia
7. Xia T, Chartsias A, Wang C, Tsaftaris SA. Learning to synthesise the ageing brain without longitudinal data. Med Image Anal 2021; 73:102169. [PMID: 34311421] [DOI: 10.1016/j.media.2021.102169]
Abstract
How will my face look when I get older? Or, for a more challenging question: how will my brain look when I get older? To answer this question, one must devise (and learn from data) a multivariate auto-regressive function that, given an image and a desired target age, generates an output image. While collecting data for faces may be easy, collecting longitudinal brain data is not trivial. We propose a deep learning-based method that learns to simulate subject-specific brain ageing trajectories without relying on longitudinal data. Our method synthesises images conditioned on two factors: age (a continuous variable) and the status of Alzheimer's disease (AD, an ordinal variable). With an adversarial formulation, we learn the joint distribution of brain appearance, age, and AD status, and define reconstruction losses to address the challenging problem of preserving subject identity. We compare with several benchmarks on two widely used datasets, evaluating the quality and realism of synthesised images using ground-truth longitudinal data and a pre-trained age predictor. We show that, despite using cross-sectional data, our model learns patterns of gray matter atrophy in the middle temporal gyrus of patients with AD. To demonstrate generalisation, we train on one dataset and evaluate predictions on the other. In conclusion, our model can separate age, disease influence, and anatomy using only 2D cross-sectional data, which should be useful for large studies of neurodegenerative disease that aim to combine several data sources. To facilitate such future studies by the community at large, our code is available at https://github.com/xiat0616/BrainAgeing.
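The two-factor conditioning described above (continuous age plus ordinal AD status) is commonly implemented by concatenating the covariates with the image representation before decoding. A hypothetical minimal sketch of that wiring, with random weights standing in for the trained generator (the dimensions and the age normalisation are illustrative assumptions, not the paper's architecture):

```python
import numpy as np

def conditional_generate(feat, age, ad_status, W):
    """Concatenate image features with the two conditioning covariates,
    then apply a linear 'decoder' W as a stand-in for the generator."""
    cond = np.concatenate([feat, [age / 100.0, float(ad_status)]])
    return W @ cond

rng = np.random.default_rng(0)
W = rng.standard_normal((8, 6))    # hypothetical decoder: 4 features + 2 covariates -> 8 outputs
feat = rng.standard_normal(4)      # fixed subject representation (preserves identity)
young = conditional_generate(feat, 30.0, 0, W)   # same subject, age 30, no AD
old = conditional_generate(feat, 80.0, 1, W)     # same subject, age 80, with AD
```

Because the subject features are held fixed while only the covariates change, the difference between the two outputs reflects the modelled effects of ageing and disease, which is the mechanism the method exploits to synthesise subject-specific trajectories.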
Affiliation(s)
- Tian Xia
- Institute for Digital Communications, School of Engineering, University of Edinburgh, West Mains Rd, Edinburgh EH9 3FB, UK.
- Agisilaos Chartsias
- Institute for Digital Communications, School of Engineering, University of Edinburgh, West Mains Rd, Edinburgh EH9 3FB, UK
- Chengjia Wang
- The BHF Centre for Cardiovascular Science, Edinburgh EH16 4TJ, UK
- Sotirios A Tsaftaris
- Institute for Digital Communications, School of Engineering, University of Edinburgh, West Mains Rd, Edinburgh EH9 3FB, UK; The Alan Turing Institute, London NW1 2DB, UK