1. Roca V, Kuchcinski G, Pruvo JP, Manouvriez D, Lopes R. IGUANe: A 3D generalizable CycleGAN for multicenter harmonization of brain MR images. Med Image Anal 2025; 99:103388. PMID: 39546981; DOI: 10.1016/j.media.2024.103388.
Abstract
In MRI studies, the aggregation of imaging data from multiple acquisition sites enhances sample size but may introduce site-related variabilities that hinder consistency in subsequent analyses. Deep learning methods for image translation have emerged as a solution for harmonizing MR images across sites. In this study, we introduce IGUANe (Image Generation with Unified Adversarial Networks), an original 3D model that leverages the strengths of domain translation and straightforward application of style transfer methods for multicenter brain MR image harmonization. IGUANe extends CycleGAN by integrating an arbitrary number of domains for training through a many-to-one architecture. The framework based on domain pairs enables the implementation of sampling strategies that prevent confusion between site-related and biological variabilities. During inference, the model can be applied to any image, even from an unknown acquisition site, making it a universal generator for harmonization. Trained on a dataset comprising T1-weighted images from 11 different scanners, IGUANe was evaluated on data from unseen sites. The assessments included the transformation of MR images with traveling subjects, the preservation of pairwise distances between MR images within domains, the evolution of volumetric patterns related to age and Alzheimer's disease (AD), and the performance in age regression and patient classification tasks. Comparisons with other harmonization and normalization methods suggest that IGUANe better preserves individual information in MR images and is more suitable for maintaining and reinforcing variabilities related to age and AD. Future studies may further assess IGUANe in other multicenter contexts, either using the same model or retraining it for applications to different image modalities. Codes and the trained IGUANe model are available at https://github.com/RocaVincent/iguane_harmonization.git.
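The many-to-one harmonization described above builds on CycleGAN's cycle-consistency objective: translating an image to the reference domain and back should reproduce the original. A minimal, self-contained sketch of that reconstruction term, with toy one-dimensional "generators" standing in for the networks (illustrative only, not the authors' model):

```python
def g_ab(x):
    # Toy mapping from a source site to the reference domain
    # (a simple affine intensity map, so the cycle is checkable by hand).
    return [2.0 * v + 1.0 for v in x]

def g_ba(y):
    # Toy mapping back from the reference domain (exact inverse here).
    return [(v - 1.0) / 2.0 for v in y]

def cycle_consistency_loss(x):
    """Mean absolute error after a full A -> B -> A cycle."""
    recon = g_ba(g_ab(x))
    return sum(abs(r - v) for r, v in zip(recon, x)) / len(x)

loss = cycle_consistency_loss([0.2, 0.5, 0.9])
# Near-zero here, since the two toy maps are exact inverses; in a real
# CycleGAN the generators are learned and this term penalizes any
# information the translation fails to preserve.
```

In IGUANe this idea is extended to many source domains sharing a single reference-domain generator; the sketch above shows only the single-pair case.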
Affiliations
- Vincent Roca
- Univ. Lille, CNRS, Inserm, CHU Lille, Institut Pasteur de Lille, US 41 - UAR 2014 - PLBS, F-59000 Lille, France
- Grégory Kuchcinski
- Univ. Lille, CNRS, Inserm, CHU Lille, Institut Pasteur de Lille, US 41 - UAR 2014 - PLBS, F-59000 Lille, France; Univ. Lille, Inserm, CHU Lille, U1172 - LilNCog - Lille Neuroscience & Cognition, F-59000 Lille, France; CHU Lille, Department of Neuroradiology, F-59000 Lille, France
- Jean-Pierre Pruvo
- Univ. Lille, CNRS, Inserm, CHU Lille, Institut Pasteur de Lille, US 41 - UAR 2014 - PLBS, F-59000 Lille, France; Univ. Lille, Inserm, CHU Lille, U1172 - LilNCog - Lille Neuroscience & Cognition, F-59000 Lille, France; CHU Lille, Department of Neuroradiology, F-59000 Lille, France
- Dorian Manouvriez
- Univ. Lille, CNRS, Inserm, CHU Lille, Institut Pasteur de Lille, US 41 - UAR 2014 - PLBS, F-59000 Lille, France
- Renaud Lopes
- Univ. Lille, CNRS, Inserm, CHU Lille, Institut Pasteur de Lille, US 41 - UAR 2014 - PLBS, F-59000 Lille, France; Univ. Lille, Inserm, CHU Lille, U1172 - LilNCog - Lille Neuroscience & Cognition, F-59000 Lille, France; CHU Lille, Department of Nuclear Medicine, F-59000 Lille, France
2. Shahzadi M, Rafique H, Waheed A, Naz H, Waheed A, Zokirova FR, Khan H. Artificial intelligence for chimeric antigen receptor-based therapies: a comprehensive review of current applications and future perspectives. Ther Adv Vaccines Immunother 2024; 12:25151355241305856. PMID: 39691280; PMCID: PMC11650588; DOI: 10.1177/25151355241305856.
Abstract
Using artificial intelligence (AI) to enhance chimeric antigen receptor (CAR)-based therapies' design, production, and delivery is a novel and promising approach. This review provides an overview of the current applications and challenges of AI for CAR-based therapies and suggests some directions for future research and development. This paper examines some of the recent advances of AI for CAR-based therapies, for example, using deep learning (DL) to design CARs that target multiple antigens and avoid antigen escape; using natural language processing to extract relevant information from clinical reports and literature; using computer vision to analyze the morphology and phenotype of CAR cells; using reinforcement learning to optimize the dose and schedule of CAR infusion; and using AI to predict the efficacy and toxicity of CAR-based therapies. These applications demonstrate the potential of AI to improve the quality and efficiency of CAR-based therapies and to provide personalized and precise treatments for cancer patients. However, there are also some challenges and limitations of using AI for CAR-based therapies, for example, the lack of high-quality and standardized data; the need for validation and verification of AI models; the risk of bias and error in AI outputs; the ethical, legal, and social issues of using AI for health care; and the possible impact of AI on the human role and responsibility in cancer immunotherapy. It is important to establish a multidisciplinary collaboration among researchers, clinicians, regulators, and patients to address these challenges and to ensure the safe and responsible use of AI for CAR-based therapies.
Affiliations
- Muqadas Shahzadi
- Department of Zoology, Faculty of Life Sciences, University of Okara, Okara, Pakistan
- Hamad Rafique
- College of Food Engineering and Nutritional Science, Shaanxi Normal University, Xi’an, Shaanxi, China
- Ahmad Waheed
- Department of Zoology, Faculty of Life Sciences, University of Okara, 2 KM Lahore Road, Renala Khurd, Okara 56130, Punjab, Pakistan
- Hina Naz
- Department of Zoology, Faculty of Life Sciences, University of Okara, Okara, Pakistan
- Atifa Waheed
- Department of Biology, Faculty of Life Sciences, University of Okara, Okara, Pakistan
- Humera Khan
- Department of Biochemistry, Sahiwal Medical College, Sahiwal, Pakistan
3. Jonnalagedda P, Weinberg B, Min TL, Bhanu S, Bhanu B. Computational modeling of tumor invasion from limited and diverse data in Glioblastoma. Comput Med Imaging Graph 2024; 117:102436. PMID: 39342741; DOI: 10.1016/j.compmedimag.2024.102436.
Abstract
For diseases with high morbidity rates such as Glioblastoma Multiforme, the prognostic and treatment planning pipeline requires a comprehensive analysis of imaging, clinical, and molecular data. Many mutations have been shown to correlate strongly with the median survival rate and response to therapy of patients. Studies have demonstrated that these mutations manifest as specific visual biomarkers in tumor imaging modalities such as MRI. To minimize the number of invasive procedures on a patient and for the overall resource optimization for the prognostic and treatment planning process, the correlation of imaging and molecular features has garnered much interest. While the tumor mass is the most significant feature, the impacted tissue surrounding the tumor is also a significant biomarker contributing to the visual manifestation of mutations - which has not been studied as extensively. The pattern of tumor growth impacts the surrounding tissue accordingly, which is a reflection of tumor properties as well. Modeling how the tumor growth impacts the surrounding tissue can reveal important information about the patterns of tumor enhancement, which in turn has significant diagnostic and prognostic value. This paper presents the first work to automate the computational modeling of the impacted tissue surrounding the tumor using generative deep learning. The paper isolates and quantifies the impact of the Tumor Invasion (TI) on surrounding tissue based on change in mutation status, subsequently assessing its prognostic value. Furthermore, a TI Generative Adversarial Network (TI-GAN) is proposed to model the tumor invasion properties. Extensive qualitative and quantitative analyses, cross-dataset testing, and radiologist blind tests are carried out to demonstrate that TI-GAN can realistically model the tumor invasion under practical challenges of medical datasets such as limited data and high intra-class heterogeneity.
Affiliations
- Padmaja Jonnalagedda
- Department of Electrical and Computer Engineering, University of California, Riverside, United States of America
- Brent Weinberg
- Department of Radiology and Imaging Sciences, Emory University, Atlanta GA, United States of America
- Taejin L Min
- Department of Radiology and Imaging Sciences, Emory University, Atlanta GA, United States of America
- Shiv Bhanu
- Department of Radiology, Riverside Community Hospital, Riverside CA, United States of America
- Bir Bhanu
- Department of Electrical and Computer Engineering, University of California, Riverside, United States of America
4. Huang L, Zhou J, Jiao J, Zhou S, Chang C, Wang Y, Guo Y. Standardization of ultrasound images across various centers: M2O-DiffGAN bridging the gaps among unpaired multi-domain ultrasound images. Med Image Anal 2024; 95:103187. PMID: 38705056; DOI: 10.1016/j.media.2024.103187.
Abstract
The domain shift problem is commonplace in ultrasound image analysis: differences in imaging settings across diverse medical centers lead to poor generalizability of deep learning-based methods. Multi-Source Domain Transformation (MSDT) provides a promising way to tackle the performance degradation caused by domain shift, and is more practical and challenging than conventional single-source transformation tasks. An effective unsupervised domain combination strategy is required to handle multiple domains without annotations, and the fidelity and quality of the generated images are also important for ensuring the accuracy of computer-aided diagnosis. However, existing MSDT approaches underperform in both of these areas. In this paper, an efficient domain transformation model named M2O-DiffGAN is introduced to achieve a unified mapping from multiple unlabeled source domains to the target domain. A cycle-consistent "many-to-one" adversarial learning architecture models the various unlabeled domains jointly. A conditional adversarial diffusion process is employed to generate high-fidelity images, combined with an adversarial projector that captures reverse transition probabilities over large step sizes to accelerate sampling. Given the limited perceptual information of ultrasound images, an ultrasound-specific content loss helps capture more perceptual features for synthesizing high-quality images. Extensive comparisons on six clinical datasets covering thyroid, carotid and breast imaging demonstrate that M2O-DiffGAN outperforms state-of-the-art algorithms in bridging domain gaps and improving the generalization of downstream analysis methods, improving the mean MI, Bhattacharyya coefficient, Dice and IoU assessments by 0.390, 0.120, 0.245 and 0.250, respectively, which suggests promising clinical applications.
Affiliations
- Lihong Huang
- Department of Electronic Engineering, School of Information Science and Technology, Fudan University, Shanghai, China
- Jin Zhou
- Fudan University Shanghai Cancer Center, Shanghai, China
- Jing Jiao
- Department of Electronic Engineering, School of Information Science and Technology, Fudan University, Shanghai, China
- Shichong Zhou
- Fudan University Shanghai Cancer Center, Shanghai, China
- Cai Chang
- Fudan University Shanghai Cancer Center, Shanghai, China
- Yuanyuan Wang
- Department of Electronic Engineering, School of Information Science and Technology, Fudan University, Shanghai, China; Key Laboratory of Medical Imaging Computing and Computer Assisted Intervention of Shanghai, Shanghai, China
- Yi Guo
- Department of Electronic Engineering, School of Information Science and Technology, Fudan University, Shanghai, China; Key Laboratory of Medical Imaging Computing and Computer Assisted Intervention of Shanghai, Shanghai, China
5. Hognon C, Conze PH, Bourbonne V, Gallinato O, Colin T, Jaouen V, Visvikis D. Contrastive image adaptation for acquisition shift reduction in medical imaging. Artif Intell Med 2024; 148:102747. PMID: 38325919; DOI: 10.1016/j.artmed.2023.102747.
Abstract
Domain shift, known as acquisition shift in medical imaging, is responsible for potentially harmful differences between the development and deployment conditions of medical image analysis techniques. There is a growing need in the community for advanced methods that mitigate this issue better than conventional approaches. In this paper, we consider configurations in which a learning-based pixel-level adaptor can be exposed during training to a large variability of unlabeled images, i.e. sufficient to span the acquisition shift expected during the training or testing of a downstream task model. We leverage the ability of convolutional architectures to efficiently learn domain-agnostic features, and train a many-to-one unsupervised mapping between a source collection of heterogeneous images from multiple unknown domains subject to acquisition shift and a homogeneous subset of this source set of lower cardinality, potentially consisting of a single image. To this end, we propose a new cycle-free image-to-image architecture based on a combination of three loss functions: a contrastive PatchNCE loss, an adversarial loss and an edge-preserving loss, allowing for rich domain adaptation to the target image even under strong domain imbalance and in low-data regimes. Experiments demonstrate the value of the proposed contrastive image adaptation approach for regularizing downstream deep supervised segmentation and cross-modality synthesis models.
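The PatchNCE term mentioned in this abstract is an InfoNCE-style contrastive loss over image patches: a patch of the translated image should be most similar to the patch at the same spatial location in the input, and dissimilar to patches elsewhere. A minimal numerical sketch of the InfoNCE computation (the two-dimensional feature vectors and temperature value are made up for illustration):

```python
import math

def info_nce(query, positive, negatives, temperature=0.07):
    """InfoNCE: cross-entropy of identifying the positive among candidates."""
    def dot(a, b):
        return sum(x * y for x, y in zip(a, b))
    # Similarity logits; the positive is placed at index 0.
    logits = [dot(query, positive) / temperature]
    logits += [dot(query, n) / temperature for n in negatives]
    # Numerically stable log-softmax cross-entropy against index 0.
    m = max(logits)
    log_denom = m + math.log(sum(math.exp(l - m) for l in logits))
    return log_denom - logits[0]

q = [1.0, 0.0]                     # feature of an output patch
pos = [0.9, 0.1]                   # same location in the input (positive)
negs = [[0.0, 1.0], [-1.0, 0.0]]   # other locations (negatives)
loss = info_nce(q, pos, negs)
# The loss is small when query and positive align and shrinks further
# as negatives become less similar to the query.
```

In the actual method the features come from the encoder of the translation network rather than hand-picked vectors; only the loss shape is shown here.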
Affiliations
- Clément Hognon
- UMR U1101 Inserm LaTIM, IMT Atlantique, Université de Bretagne Occidentale, France; SOPHiA Genetics, Pessac, France
- Pierre-Henri Conze
- UMR U1101 Inserm LaTIM, IMT Atlantique, Université de Bretagne Occidentale, France
- Vincent Bourbonne
- UMR U1101 Inserm LaTIM, IMT Atlantique, Université de Bretagne Occidentale, France
- Vincent Jaouen
- UMR U1101 Inserm LaTIM, IMT Atlantique, Université de Bretagne Occidentale, France
- Dimitris Visvikis
- UMR U1101 Inserm LaTIM, IMT Atlantique, Université de Bretagne Occidentale, France
6. Roca V, Kuchcinski G, Pruvo JP, Manouvriez D, Leclerc X, Lopes R. A three-dimensional deep learning model for inter-site harmonization of structural MR images of the brain: Extensive validation with a multicenter dataset. Heliyon 2023; 9:e22647. PMID: 38107313; PMCID: PMC10724680; DOI: 10.1016/j.heliyon.2023.e22647.
Abstract
In multicenter MRI studies, pooling the imaging data can introduce site-related variabilities and can therefore bias the subsequent analyses. To harmonize the intensity distributions of brain MR images in a multicenter dataset, unsupervised deep learning methods can be employed. Here, we developed a model based on cycle-consistent adversarial networks for the harmonization of T1-weighted brain MR images. In contrast to previous works, it was designed to process three-dimensional whole-brain images in a stable manner while optimizing computation resources. Using six different MRI datasets for healthy adults (n=1525 in total) with different acquisition parameters, we tested the model in (i) three pairwise harmonizations with site effects of various sizes, (ii) an overall harmonization of the six datasets with different age distributions, and (iii) a traveling-subject dataset. Our results for intensity distributions, brain volumes, image quality metrics and radiomic features indicated that the MRI characteristics at the various sites had been effectively homogenized. Next, brain age prediction experiments and the observed correlation between the gray-matter volume and age showed that thanks to an appropriate training strategy and despite biological differences between the dataset populations, the model reinforced biological patterns. Furthermore, radiologic analyses of the harmonized images attested to the conservation of the radiologic information in the original images. The robustness of the harmonization model (as judged with various datasets and metrics) demonstrates its potential for application in retrospective multicenter studies.
Affiliations
- Vincent Roca
- Univ. Lille, CNRS, Inserm, CHU Lille, Institut Pasteur de Lille, US 41 - UAR 2014 - PLBS, F-59000 Lille, France
- Grégory Kuchcinski
- Univ. Lille, CNRS, Inserm, CHU Lille, Institut Pasteur de Lille, US 41 - UAR 2014 - PLBS, F-59000 Lille, France; Univ. Lille, Inserm, CHU Lille, U1172 - LilNCog - Lille Neurosciences & Cognition, F-59000 Lille, France; CHU Lille, Department of Neuroradiology, F-59000 Lille, France
- Jean-Pierre Pruvo
- Univ. Lille, CNRS, Inserm, CHU Lille, Institut Pasteur de Lille, US 41 - UAR 2014 - PLBS, F-59000 Lille, France; Univ. Lille, Inserm, CHU Lille, U1172 - LilNCog - Lille Neurosciences & Cognition, F-59000 Lille, France; CHU Lille, Department of Neuroradiology, F-59000 Lille, France
- Dorian Manouvriez
- Univ. Lille, CNRS, Inserm, CHU Lille, Institut Pasteur de Lille, US 41 - UAR 2014 - PLBS, F-59000 Lille, France
- Xavier Leclerc
- Univ. Lille, CNRS, Inserm, CHU Lille, Institut Pasteur de Lille, US 41 - UAR 2014 - PLBS, F-59000 Lille, France; Univ. Lille, Inserm, CHU Lille, U1172 - LilNCog - Lille Neurosciences & Cognition, F-59000 Lille, France; CHU Lille, Department of Neuroradiology, F-59000 Lille, France
- Renaud Lopes
- Univ. Lille, CNRS, Inserm, CHU Lille, Institut Pasteur de Lille, US 41 - UAR 2014 - PLBS, F-59000 Lille, France; Univ. Lille, Inserm, CHU Lille, U1172 - LilNCog - Lille Neurosciences & Cognition, F-59000 Lille, France
7. van Tulder G, de Bruijne M. Unpaired, unsupervised domain adaptation assumes your domains are already similar. Med Image Anal 2023; 87:102825. PMID: 37116296; DOI: 10.1016/j.media.2023.102825.
Abstract
Unsupervised domain adaptation is a popular method in medical image analysis, but it can be tricky to make it work: without labels to link the domains, domains must be matched using feature distributions. If there is no additional information, this often leaves a choice between multiple possibilities to map the data that may be equally likely but not equally correct. In this paper we explore the fundamental problems that may arise in unsupervised domain adaptation, and discuss conditions that might still make it work. Focusing on medical image analysis, we argue that images from different domains may have similar class balance, similar intensities, similar spatial structure, or similar textures. We demonstrate how these implicit conditions can affect domain adaptation performance in experiments with synthetic data, MNIST digits, and medical images. We observe that practical success of unsupervised domain adaptation relies on existing similarities in the data, and is anything but guaranteed in the general case. Understanding these implicit assumptions is a key step in identifying potential problems in domain adaptation and improving the reliability of the results.
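The core ambiguity this abstract describes is easy to reproduce numerically: when two different mappings produce identical feature distributions, distribution matching alone cannot tell them apart. A toy illustration with a symmetric one-dimensional "feature" distribution (all values are hypothetical):

```python
def matches_distribution(mapped, target):
    # Distribution matching sees only the multiset of values,
    # not which sample went where.
    return sorted(mapped) == sorted(target)

source = [-2.0, -1.0, 1.0, 2.0]   # symmetric toy feature distribution
target = [-2.0, -1.0, 1.0, 2.0]

identity = list(source)            # map each sample to itself
flipped = [-x for x in source]     # map each sample to its negation

both_match = (matches_distribution(identity, target)
              and matches_distribution(flipped, target))
# Both mappings match the target distribution exactly, yet they send
# every sample to a different point. An unsupervised objective based
# only on distribution matching cannot prefer the correct one without
# extra assumptions (similar intensities, spatial structure, etc.).
```

This is the one-dimensional analogue of the class-swap and intensity-flip failure modes the paper demonstrates on MNIST and medical images.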
Affiliations
- Gijs van Tulder
- Data Science group, Faculty of Science, Radboud University, Postbus 9010, 6500 GL Nijmegen, The Netherlands; Biomedical Imaging Group, Erasmus MC, Postbus 2040, 3000 CA Rotterdam, The Netherlands
- Marleen de Bruijne
- Biomedical Imaging Group, Erasmus MC, Postbus 2040, 3000 CA Rotterdam, The Netherlands; Department of Computer Science, University of Copenhagen, Universitetsparken 1, 2100 Copenhagen, Denmark
8. Wang R, Bashyam V, Yang Z, Yu F, Tassopoulou V, Chintapalli SS, Skampardoni I, Sreepada LP, Sahoo D, Nikita K, Abdulkadir A, Wen J, Davatzikos C. Applications of generative adversarial networks in neuroimaging and clinical neuroscience. Neuroimage 2023; 269:119898. PMID: 36702211; PMCID: PMC9992336; DOI: 10.1016/j.neuroimage.2023.119898.
Abstract
Generative adversarial networks (GANs) are a powerful class of deep learning models that have been successfully applied in numerous fields. They belong to the broader family of generative methods, which learn to generate realistic data with a probabilistic model by learning distributions from real samples. In the clinical context, GANs have shown enhanced capabilities in capturing spatially complex, nonlinear, and potentially subtle disease effects compared to traditional generative methods. This review critically appraises the existing literature on applications of GANs in imaging studies of various neurological conditions, including Alzheimer's disease, brain tumors, brain aging, and multiple sclerosis. We provide an intuitive explanation of the GAN methods used in each application and discuss the main challenges, open questions, and promising future directions of leveraging GANs in neuroimaging. We aim to bridge the gap between advanced deep learning methods and neurology research by highlighting how GANs can support clinical decision making and contribute to a better understanding of the structural and functional patterns of brain diseases.
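For readers new to the framework this review covers, the adversarial objective can be written down in a few lines: the discriminator minimizes binary cross-entropy on real-versus-generated labels, while the generator is trained to increase that loss. A minimal numeric sketch with scalar "images" and a logistic discriminator (illustrative values, not a training loop from the review):

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def discriminator_loss(d_real, d_fake):
    """Binary cross-entropy: real samples labeled 1, generated labeled 0."""
    return -(math.log(d_real) + math.log(1.0 - d_fake))

# A discriminator that outputs sigmoid(w * x). Suppose real data sits
# near x = 1 and generated data near x = -1, so any w > 0 separates them.
w = 2.0
d_real = sigmoid(w * 1.0)    # confident "real" on real data
d_fake = sigmoid(w * -1.0)   # confident "fake" on generated data
loss = discriminator_loss(d_real, d_fake)
# Training the generator pushes d_fake toward 1, increasing this loss;
# training the discriminator decreases it -- the minimax game at the
# heart of a GAN.
```

A discriminator that cannot separate the two (e.g. w = 0, outputting 0.5 everywhere) incurs a strictly higher loss, which is what drives it to improve.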
Affiliations
- Rongguang Wang
- Center for AI and Data Science for Integrated Diagnostics, University of Pennsylvania, Philadelphia, USA; Center for Biomedical Image Computing and Analytics, University of Pennsylvania, Philadelphia, USA
- Vishnu Bashyam
- Center for AI and Data Science for Integrated Diagnostics, University of Pennsylvania, Philadelphia, USA; Center for Biomedical Image Computing and Analytics, University of Pennsylvania, Philadelphia, USA
- Zhijian Yang
- Center for AI and Data Science for Integrated Diagnostics, University of Pennsylvania, Philadelphia, USA; Center for Biomedical Image Computing and Analytics, University of Pennsylvania, Philadelphia, USA
- Fanyang Yu
- Center for AI and Data Science for Integrated Diagnostics, University of Pennsylvania, Philadelphia, USA; Center for Biomedical Image Computing and Analytics, University of Pennsylvania, Philadelphia, USA
- Vasiliki Tassopoulou
- Center for AI and Data Science for Integrated Diagnostics, University of Pennsylvania, Philadelphia, USA; Center for Biomedical Image Computing and Analytics, University of Pennsylvania, Philadelphia, USA
- Sai Spandana Chintapalli
- Center for AI and Data Science for Integrated Diagnostics, University of Pennsylvania, Philadelphia, USA; Center for Biomedical Image Computing and Analytics, University of Pennsylvania, Philadelphia, USA
- Ioanna Skampardoni
- Center for AI and Data Science for Integrated Diagnostics, University of Pennsylvania, Philadelphia, USA; Center for Biomedical Image Computing and Analytics, University of Pennsylvania, Philadelphia, USA; School of Electrical and Computer Engineering, National Technical University of Athens, Athens, Greece
- Lasya P Sreepada
- Center for AI and Data Science for Integrated Diagnostics, University of Pennsylvania, Philadelphia, USA; Center for Biomedical Image Computing and Analytics, University of Pennsylvania, Philadelphia, USA
- Dushyant Sahoo
- Center for AI and Data Science for Integrated Diagnostics, University of Pennsylvania, Philadelphia, USA; Center for Biomedical Image Computing and Analytics, University of Pennsylvania, Philadelphia, USA
- Konstantina Nikita
- School of Electrical and Computer Engineering, National Technical University of Athens, Athens, Greece
- Ahmed Abdulkadir
- Center for AI and Data Science for Integrated Diagnostics, University of Pennsylvania, Philadelphia, USA; Center for Biomedical Image Computing and Analytics, University of Pennsylvania, Philadelphia, USA; Department of Clinical Neurosciences, Lausanne University Hospital and University of Lausanne, Lausanne, Switzerland
- Junhao Wen
- Center for AI and Data Science for Integrated Diagnostics, University of Pennsylvania, Philadelphia, USA; Center for Biomedical Image Computing and Analytics, University of Pennsylvania, Philadelphia, USA
- Christos Davatzikos
- Center for AI and Data Science for Integrated Diagnostics, University of Pennsylvania, Philadelphia, USA; Center for Biomedical Image Computing and Analytics, University of Pennsylvania, Philadelphia, USA; Department of Radiology, Perelman School of Medicine, University of Pennsylvania, Philadelphia, USA
9. Wen G, Shim V, Holdsworth SJ, Fernandez J, Qiao M, Kasabov N, Wang A. Machine Learning for Brain MRI Data Harmonisation: A Systematic Review. Bioengineering (Basel) 2023; 10:397. PMID: 37106584; PMCID: PMC10135601; DOI: 10.3390/bioengineering10040397.
Abstract
BACKGROUND Magnetic resonance imaging (MRI) data collected from multiple centres can be heterogeneous due to factors such as the scanner used and the site location. To reduce this heterogeneity, the data need to be harmonised. In recent years, machine learning (ML) has been used to solve many types of problems related to MRI data, showing great promise. OBJECTIVE This study examines how well various ML algorithms perform in harmonising MRI data, both implicitly and explicitly, by summarising the findings of relevant peer-reviewed articles. It also provides guidelines for the use of current methods and identifies potential future research directions. METHOD This review covers articles retrieved from the PubMed, Web of Science, and IEEE databases through June 2022. Data from the studies were analysed according to the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) criteria, and quality assessment questions were derived to assess the quality of the included publications. RESULTS A total of 41 articles published between 2015 and 2022 were identified and analysed. MRI data were found to be harmonised either implicitly (n = 21) or explicitly (n = 20). Three MRI modalities were identified: structural MRI (n = 28), diffusion MRI (n = 7) and functional MRI (n = 6). CONCLUSION Various ML techniques have been employed to harmonise different types of MRI data. There is currently a lack of consistent evaluation methods and metrics across studies, which future work should address. Harmonisation of MRI data using ML shows promise in improving performance on downstream ML tasks, but caution should be exercised when using ML-harmonised data for direct interpretation.
Affiliations
- Grace Wen
- Auckland Bioengineering Institute, University of Auckland, Auckland 1142, New Zealand
- Vickie Shim
- Auckland Bioengineering Institute, University of Auckland, Auckland 1142, New Zealand; Centre for Brain Research, University of Auckland, Auckland 1142, New Zealand
- Samantha Jane Holdsworth
- Centre for Brain Research, University of Auckland, Auckland 1142, New Zealand; Mātai Medical Research Institute, Tairāwhiti-Gisborne 4010, New Zealand; Department of Anatomy & Medical Imaging, Faculty of Medical and Health Sciences, University of Auckland, Auckland 1142, New Zealand
- Justin Fernandez
- Auckland Bioengineering Institute, University of Auckland, Auckland 1142, New Zealand
- Miao Qiao
- Department of Computer Science, University of Auckland, Auckland 1142, New Zealand
- Nikola Kasabov
- Auckland Bioengineering Institute, University of Auckland, Auckland 1142, New Zealand; Knowledge Engineering and Discovery Research Institute, Auckland University of Technology, Auckland 1010, New Zealand; Intelligent Systems Research Centre, Ulster University, Londonderry BT52 1SA, UK; Institute for Information and Communication Technologies, Bulgarian Academy of Sciences, 1113 Sofia, Bulgaria
- Alan Wang
- Auckland Bioengineering Institute, University of Auckland, Auckland 1142, New Zealand; Centre for Brain Research, University of Auckland, Auckland 1142, New Zealand; Department of Anatomy & Medical Imaging, Faculty of Medical and Health Sciences, University of Auckland, Auckland 1142, New Zealand
10. A Systematic Literature Review on Applications of GAN-Synthesized Images for Brain MRI. Future Internet 2022. DOI: 10.3390/fi14120351.
Abstract
With the advances in brain imaging, magnetic resonance imaging (MRI) is evolving as a popular radiological tool in clinical diagnosis. Deep learning (DL) methods can detect abnormalities in brain images without an extensive manual feature extraction process. Generative adversarial network (GAN)-synthesized images have many applications in this field besides augmentation, such as image translation, registration, super-resolution, denoising, motion correction, segmentation, reconstruction, and contrast enhancement. The existing literature was reviewed systematically to understand the role of GAN-synthesized dummy images in brain disease diagnosis. Web of Science and Scopus databases were extensively searched to find relevant studies from the last 6 years to write this systematic literature review (SLR). Predefined inclusion and exclusion criteria helped in filtering the search results. Data extraction is based on related research questions (RQ). This SLR identifies various loss functions used in the above applications and software to process brain MRIs. A comparative study of existing evaluation metrics for GAN-synthesized images helps choose the proper metric for an application. GAN-synthesized images will have a crucial role in the clinical sector in the coming years, and this paper gives a baseline for other researchers in the field.
11
A stability-enhanced CycleGAN for effective domain transformation of unpaired ultrasound images. Biomed Signal Process Control 2022. [DOI: 10.1016/j.bspc.2022.103831]
12
Chen X, Lei Y, Su J, Yang H, Ni W, Yu J, Gu Y, Mao Y. A Review of Artificial Intelligence in Cerebrovascular Disease Imaging: Applications and Challenges. Curr Neuropharmacol 2022; 20:1359-1382. [PMID: 34749621] [PMCID: PMC9881077] [DOI: 10.2174/1570159x19666211108141446]
Abstract
BACKGROUND A variety of emerging medical imaging technologies based on artificial intelligence have been widely applied to many diseases, yet they remain of limited use in the cerebrovascular field even though these diseases can have catastrophic consequences. OBJECTIVE This work discusses the current challenges and future directions of artificial intelligence in cerebrovascular disease by reviewing the existing literature on computer-aided detection, prediction and treatment of cerebrovascular diseases. METHODS Focusing on artificial intelligence applications in four representative cerebrovascular diseases (intracranial aneurysm, arteriovenous malformation, arteriosclerosis and moyamoya disease), this paper systematically reviews studies published between 2006 and 2021 in five databases: National Center for Biotechnology Information, Elsevier ScienceDirect, IEEE Xplore Digital Library, Web of Science and Springer Link. Three refinement steps were then applied to the literature identified in these databases. RESULTS Most of the included publications involved computer-aided detection and prediction of aneurysms, while studies of arteriovenous malformation, arteriosclerosis and moyamoya disease have trended upward in recent years. Both conventional machine learning and deep learning algorithms were utilized in these publications, with conventional machine learning techniques accounting for the larger share. CONCLUSION Algorithms related to artificial intelligence, especially deep learning, are promising tools for medical image analysis and will enhance the performance of computer-aided detection, prediction and treatment of cerebrovascular diseases.
Affiliation(s)
- Xi Chen
- School of Information Science and Technology, Fudan University, Shanghai, China (these authors contributed equally to this work)
- Yu Lei
- Department of Neurosurgery, Huashan Hospital, Fudan University, Shanghai, China (these authors contributed equally to this work)
- Jiabin Su
- Department of Neurosurgery, Huashan Hospital, Fudan University, Shanghai, China
- Heng Yang
- Department of Neurosurgery, Huashan Hospital, Fudan University, Shanghai, China
- Wei Ni
- Department of Neurosurgery, Huashan Hospital, Fudan University, Shanghai, China
- Jinhua Yu
- School of Information Science and Technology, Fudan University, Shanghai 200433, China (corresponding author)
- Yuxiang Gu
- Department of Neurosurgery, Huashan Hospital, Fudan University, Shanghai 200040, China (corresponding author)
- Ying Mao
- Department of Neurosurgery, Huashan Hospital, Fudan University, Shanghai, China
13
Marti-Bonmati L, Koh DM, Riklund K, Bobowicz M, Roussakis Y, Vilanova JC, Fütterer JJ, Rimola J, Mallol P, Ribas G, Miguel A, Tsiknakis M, Lekadir K, Tsakou G. Considerations for artificial intelligence clinical impact in oncologic imaging: an AI4HI position paper. Insights Imaging 2022; 13:89. [PMID: 35536446] [PMCID: PMC9091068] [DOI: 10.1186/s13244-022-01220-9]
Abstract
To achieve clinical impact in daily oncological practice, emerging AI-based cancer imaging research needs a clearly defined medical focus, AI methods, and outcomes to be estimated. AI-supported cancer imaging should predict major relevant clinical endpoints, aiming to extract associations and draw inferences in a fair, robust, and trustworthy way. AI-assisted solutions as medical devices, developed using multicenter heterogeneous datasets, should be targeted to have an impact on the clinical care pathway. When designing an AI-based research study in oncologic imaging, ensuring clinical impact requires careful consideration of key aspects, including target population selection, sample size definition, use of standards and common data elements, balanced dataset splitting, appropriate validation methodology, adequate ground truth, and careful selection of clinical endpoints. Endpoints may be pathology hallmarks, disease behavior, treatment response, or patient prognosis. Ethical, safety, and privacy considerations are also mandatory before clinical validation is performed. The Artificial Intelligence for Health Imaging (AI4HI) Clinical Working Group has discussed, and presents in this paper, some indicative machine learning (ML)-enabled decision-support solutions currently under research in the AI4HI projects, as well as the main considerations and requirements that AI solutions should meet from a clinical perspective so that they can be adopted into clinical practice. If effectively designed, implemented, and validated, cancer imaging AI-supported tools will have the potential to revolutionize the field of precision medicine in oncology.
Affiliation(s)
- Luis Marti-Bonmati
- Radiology Department and Biomedical Imaging Research Group (GIBI230), La Fe Polytechnics and University Hospital and Health Research Institute, Valencia, Spain
- Dow-Mu Koh
- Department of Radiology, Royal Marsden Hospital and Division of Radiotherapy and Imaging, Institute of Cancer Research, London, UK; Department of Radiology, The Royal Marsden NHS Trust, London, UK
- Katrine Riklund
- Department of Radiation Sciences, Diagnostic Radiology, Umeå University, 901 85, Umeå, Sweden
- Maciej Bobowicz
- 2nd Department of Radiology, Medical University of Gdansk, 17 Smoluchowskiego Str, 80-214, Gdansk, Poland
- Yiannis Roussakis
- Department of Medical Physics, German Oncology Center, 4108, Limassol, Cyprus
- Joan C Vilanova
- Department of Radiology, Clínica Girona, Institute of Diagnostic Imaging (IDI)-Girona, Faculty of Medicine, University of Girona, Girona, Spain
- Jurgen J Fütterer
- Department of Radiology and Nuclear Medicine, Radboud University Medical Center, Nijmegen, The Netherlands
- Jordi Rimola
- CIBERehd, Barcelona Clinic Liver Cancer (BCLC) Group, Department of Radiology, Hospital Clínic, University of Barcelona, Barcelona, Spain
- Pedro Mallol
- Radiology Department and Biomedical Imaging Research Group (GIBI230), La Fe Polytechnics and University Hospital and Health Research Institute, Valencia, Spain
- Gloria Ribas
- Radiology Department and Biomedical Imaging Research Group (GIBI230), La Fe Polytechnics and University Hospital and Health Research Institute, Valencia, Spain
- Ana Miguel
- Radiology Department and Biomedical Imaging Research Group (GIBI230), La Fe Polytechnics and University Hospital and Health Research Institute, Valencia, Spain
- Manolis Tsiknakis
- Foundation for Research and Technology Hellas, Institute of Computer Science, Computational Biomedicine Lab (CBML), FORTH-ICS Heraklion, Crete, Greece
- Karim Lekadir
- Departament de Matemàtiques i Informàtica, Artificial Intelligence in Medicine Lab (BCN-AIM), Universitat de Barcelona, Barcelona, Spain
- Gianna Tsakou
- Maggioli S.P.A., Research and Development Lab, Athens, Greece
14
Bashyam VM, Doshi J, Erus G, Srinivasan D, Abdulkadir A, Habes M, Fan Y, Masters CL, Maruff P, Zhuo C, Völzke H, Johnson SC, Fripp J, Koutsouleris N, Satterthwaite TD, Wolf DH, Gur RE, Gur RC, Morris JC, Albert MS, Grabe HJ, Resnick SM, Bryan RN, Wittfeld K, Bülow R, Wolk DA, Shou H, Nasrallah IM, Davatzikos C. Deep Generative Medical Image Harmonization for Improving Cross-Site Generalization in Deep Learning Predictors. J Magn Reson Imaging 2022; 55:908-916. [PMID: 34564904] [PMCID: PMC8844038] [DOI: 10.1002/jmri.27908]
Abstract
BACKGROUND In the medical imaging domain, deep learning-based methods have yet to see widespread clinical adoption, in part due to limited generalization performance across imaging devices and acquisition protocols. The deviation between estimated brain age and biological age is an established biomarker of brain health, and such models may benefit from increased cross-site generalizability. PURPOSE To develop and evaluate a deep learning-based image harmonization method to improve cross-site generalizability of deep learning age prediction. STUDY TYPE Retrospective. POPULATION Eight thousand eight hundred and seventy-six subjects from six sites. Harmonization models were trained using all subjects. Age prediction models were trained using 2739 subjects from a single site and tested using the remaining 6137 subjects from the other sites. FIELD STRENGTH/SEQUENCE Brain imaging with magnetization prepared rapid acquisition with gradient echo or spoiled gradient echo sequences at 1.5 T and 3 T. ASSESSMENT StarGAN v2 was used to perform a canonical mapping from diverse datasets to a reference domain, reducing site-based variation while preserving semantic information. Generalization performance of deep learning age prediction was evaluated using harmonized, histogram-matched, and unharmonized data. STATISTICAL TESTS Mean absolute error (MAE) and Pearson correlation between estimated age and biological age quantified the performance of the age prediction model. RESULTS Our results indicated a substantial improvement in age prediction on out-of-sample data, with the overall MAE improving from 15.81 (±0.21) years to 11.86 (±0.11) years with histogram matching and to 7.21 (±0.22) years with generative adversarial network (GAN)-based harmonization. In the multisite case, across the five out-of-sample sites, MAE improved from 9.78 (±6.69) years to 7.74 (±3.03) years with histogram normalization and to 5.32 (±4.07) years with GAN-based harmonization.
DATA CONCLUSION While further research is needed, GAN-based medical image harmonization appears to be a promising tool for improving cross-site deep learning generalization. LEVEL OF EVIDENCE 4 TECHNICAL EFFICACY: Stage 1.
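The evaluation described above reduces to two statistics on held-out subjects. A minimal sketch of computing MAE and Pearson correlation between predicted and chronological ages (array names are illustrative, not taken from the paper):

```python
import numpy as np

def age_prediction_metrics(predicted, chronological):
    """Mean absolute error and Pearson correlation between
    predicted and chronological ages."""
    predicted = np.asarray(predicted, dtype=float)
    chronological = np.asarray(chronological, dtype=float)
    mae = np.mean(np.abs(predicted - chronological))
    r = np.corrcoef(predicted, chronological)[0, 1]
    return mae, r

# Toy example: predictions off by a constant 5 years.
true_age = np.array([60.0, 65.0, 70.0, 75.0])
pred_age = true_age + 5.0
mae, r = age_prediction_metrics(pred_age, true_age)
# A constant offset inflates MAE but leaves correlation perfect,
# which is why the abstract reports both statistics.
```

Reporting both metrics separates calibration error (MAE) from ranking quality (Pearson r), mirroring the paper's statistical tests.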
Affiliation(s)
- Vishnu M. Bashyam
- Artificial Intelligence in Biomedical Imaging Lab, University of Pennsylvania, Philadelphia, PA, USA
- Jimit Doshi
- Artificial Intelligence in Biomedical Imaging Lab, University of Pennsylvania, Philadelphia, PA, USA
- Guray Erus
- Artificial Intelligence in Biomedical Imaging Lab, University of Pennsylvania, Philadelphia, PA, USA
- Dhivya Srinivasan
- Artificial Intelligence in Biomedical Imaging Lab, University of Pennsylvania, Philadelphia, PA, USA
- Ahmed Abdulkadir
- Artificial Intelligence in Biomedical Imaging Lab, University of Pennsylvania, Philadelphia, PA, USA
- Mohamad Habes
- Biggs Alzheimer’s Institute, University of Texas San Antonio Health Science Center, USA
- Yong Fan
- Artificial Intelligence in Biomedical Imaging Lab, University of Pennsylvania, Philadelphia, PA, USA
- Colin L. Masters
- Florey Institute of Neuroscience and Mental Health, University of Melbourne
- Paul Maruff
- Florey Institute of Neuroscience and Mental Health, University of Melbourne
- Chuanjun Zhuo
- Tianjin Mental Health Center, Nankai University Affiliated Tianjin Anding Hospital, Tianjin, China
- Department of Psychiatry, Tianjin Medical University, Tianjin, China
- Henry Völzke
- Institute for Community Medicine, University Medicine Greifswald, Germany
- German Centre for Cardiovascular Research, Partner Site Greifswald, Germany
- Sterling C. Johnson
- Wisconsin Alzheimer’s Institute, University of Wisconsin School of Medicine and Public Health
- Jurgen Fripp
- CSIRO Health and Biosecurity, Australian e-Health Research Centre, CSIRO
- Theodore D. Satterthwaite
- Artificial Intelligence in Biomedical Imaging Lab, University of Pennsylvania, Philadelphia, PA, USA
- Department of Psychiatry, University of Pennsylvania
- Raquel E. Gur
- Department of Psychiatry, University of Pennsylvania
- Department of Radiology, University of Pennsylvania
- Ruben C. Gur
- Department of Psychiatry, University of Pennsylvania
- Department of Radiology, University of Pennsylvania
- John C. Morris
- Department of Neurology, Washington University in St. Louis
- Marilyn S. Albert
- Department of Neurology, Johns Hopkins University School of Medicine
- Hans J. Grabe
- Department of Psychiatry and Psychotherapy, University Medicine Greifswald, Germany
- German Center for Neurodegenerative Diseases (DZNE), Site Rostock/Greifswald, Germany
- Susan M. Resnick
- Laboratory of Behavioral Neuroscience, National Institute on Aging
- R. Nick Bryan
- Department of Diagnostic Medicine, University of Texas at Austin
- Katharina Wittfeld
- Department of Psychiatry and Psychotherapy, University Medicine Greifswald, Germany
- German Center for Neurodegenerative Diseases (DZNE), Site Rostock/Greifswald, Germany
- Robin Bülow
- Institute of Diagnostic Radiology and Neuroradiology, University Medicine Greifswald, Germany
- Haochang Shou
- Department of Biostatistics, Epidemiology and Informatics, University of Pennsylvania
- Christos Davatzikos
- Artificial Intelligence in Biomedical Imaging Lab, University of Pennsylvania, Philadelphia, PA, USA
15
Bonmatí LM, Miguel A, Suárez A, Aznar M, Beregi JP, Fournier L, Neri E, Laghi A, França M, Sardanelli F, Penzkofer T, Lambin P, Blanquer I, Menzel M, Seymour K, Figueiras S, Krischak K, Martínez R, Mirsky Y, Yang G, Alberich-Bayarri Á. CHAIMELEON Project: Creation of a Pan-European Repository of Health Imaging Data for the Development of AI-Powered Cancer Management Tools. Front Oncol 2022; 12:742701. [PMID: 35280732] [PMCID: PMC8913333] [DOI: 10.3389/fonc.2022.742701]
Abstract
The CHAIMELEON project aims to set up a pan-European repository of health imaging data, tools and methodologies, with the ambition to set a standard and provide resources for future AI experimentation in cancer management. The four-year, EU-funded project tackles some of the most ambitious research in biomedical imaging, artificial intelligence and cancer treatment, addressing the four cancer types with the highest prevalence worldwide: lung, breast, prostate and colorectal. To allow this, clinical partners and external collaborators will populate the repository with multimodality (MR, CT, PET/CT) imaging and related clinical data. AI developers will then build a multimodal analytical data engine facilitating the interpretation, extraction and exploitation of the information stored in the repository. The development and implementation of AI-powered pipelines will advance the automation of data deidentification, curation, annotation, integrity securing and image harmonization. By the end of the project, the usability and performance of the repository as a tool fostering AI experimentation will be technically validated, including a validation subphase by world-class European AI developers participating in Open Challenges to the AI community. Upon successful validation of the repository, a set of selected AI tools will undergo early in-silico validation in observational clinical studies coordinated by leading experts in the partner hospitals. Tool performance will be assessed, including external independent validation, on hallmark clinical decisions in response to some of the currently most important clinical endpoints in cancer. The project brings together a consortium of 18 European partners including hospitals, universities, R&D centers and private research companies, constituting an ecosystem of infrastructures, biobanks, AI/in-silico experimentation and cloud computing technologies in oncology.
Affiliation(s)
- Luis Martí Bonmatí
- Medical Imaging Department, La Fe University and Polytechnic Hospital & Biomedical Imaging Research Group (Grupo de Investigación Biomédica en Imagen, GIBI2) at La Fe University and Polytechnic Hospital and Health Research Institute, Valencia, Spain (corresponding author)
- Ana Miguel
- Medical Imaging Department, La Fe University and Polytechnic Hospital & Biomedical Imaging Research Group (Grupo de Investigación Biomédica en Imagen, GIBI2) at La Fe University and Polytechnic Hospital and Health Research Institute, Valencia, Spain
- Laure Fournier
- Collège des enseignants en radiologie de France, Paris, France
- Emanuele Neri
- Diagnostic Radiology 3, Department of Translational Research, University of Pisa, Pisa, Italy
- Andrea Laghi
- Medicina Traslazionale e Oncologia, Sant'Andrea Hospital, Sapienza University of Rome, Rome, Italy
- Manuela França
- Department of Radiology, Centro Hospitalar Universitário do Porto, Porto, Portugal
- Francesco Sardanelli
- Servizio di Diagnostica per Immagini, Istituto di Ricovero e Cura a Carattere Scientifico (IRCCS) Policlinico San Donato, San Donato Milanese, Italy
- Tobias Penzkofer
- Department of Radiology, Charité-Universitätsmedizin Berlin, Berlin, Germany
- Phillipe Lambin
- Department of Precision Medicine, Maastricht University, Maastricht, Netherlands
- Ignacio Blanquer
- Computing Science Department, Universitat Politècnica de València, València, Spain
- Marion I. Menzel
- GE Healthcare, München, Germany; Department of Physics, Technical University of Munich, Garching, Germany
- Katharina Krischak
- European Institute for Biomedical Imaging Research, EIBIR gemeinnützige GmbH, Vienna, Austria
- Ricard Martínez
- Departamento de Derecho Constitucional, Ciencia Política y Administración, Universitat de València, València, Spain
- Yisroel Mirsky
- Software and Information Systems Engineering, Ben-Gurion University of the Negev, Beer Sheva, Israel
- Guang Yang
- National Heart and Lung Institute, Imperial College London, London, United Kingdom
16
Wu G, Chen X, Shi Z, Zhang D, Hu Z, Mao Y, Wang Y, Yu J. Convolutional neural network with coarse-to-fine resolution fusion and residual learning structures for cross-modality image synthesis. Biomed Signal Process Control 2022. [DOI: 10.1016/j.bspc.2021.103199]
17
Li Y, Chen J, Wei D, Zhu Y, Wu J, Xiong J, Gang Y, Sun W, Xu H, Qian T, Ma K, Zheng Y. Mix-and-Interpolate: A Training Strategy to Deal With Source-Biased Medical Data. IEEE J Biomed Health Inform 2022; 26:172-182. [PMID: 34637384] [PMCID: PMC8908883] [DOI: 10.1109/jbhi.2021.3119325]
Abstract
As of March 31, 2021, the coronavirus disease 2019 (COVID-19) had reportedly infected more than 127 million people and caused over 2.5 million deaths worldwide. Timely diagnosis of COVID-19 is crucial for the management of individual patients as well as containment of the highly contagious disease. Having realized the clinical value of non-contrast chest computed tomography (CT) for the diagnosis of COVID-19, deep learning (DL)-based automated methods have been proposed to aid radiologists in reading the huge quantities of CT exams produced during the pandemic. In this work, we address an overlooked problem in training deep convolutional neural networks for COVID-19 classification using real-world multi-source data, namely, the data source bias problem: certain sources of data comprise only a single class, and training with such source-biased data may make DL models learn to distinguish data sources instead of COVID-19. To overcome this problem, we propose MIx-aNd-Interpolate (MINI), a conceptually simple, easy-to-implement, efficient yet effective training strategy. MINI generates volumes of the absent class by combining samples collected from different hospitals, which enlarges the sample space of the original source-biased dataset. Experimental results on a large collection of real patient data (1,221 COVID-19 and 1,520 negative CT images, the latter consisting of 786 community-acquired pneumonia and 734 non-pneumonia) from eight hospitals and health institutions show that: 1) MINI improves COVID-19 classification performance over the baseline (which does not deal with the source bias), and 2) MINI is superior to competing methods in terms of the extent of improvement.
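The abstract does not give MINI's exact formulation; as a hedged illustration, the general idea of synthesizing samples of an absent class by interpolating volumes from different hospitals can be sketched in mixup style (all names and the beta-distributed mixing coefficient are illustrative assumptions, not the paper's recipe):

```python
import numpy as np

rng = np.random.default_rng(0)

def mix_and_interpolate(vol_a, vol_b, alpha=0.5):
    """Convex combination of two volumes (in a real setup, their labels
    would be mixed with the same coefficient) to enlarge the sample
    space of a source-biased training set."""
    lam = float(rng.beta(alpha, alpha))  # mixing coefficient in [0, 1]
    mixed = lam * vol_a + (1.0 - lam) * vol_b
    return mixed, lam

# Two toy "CT volumes" from different hospitals.
vol_hospital_a = np.zeros((4, 4, 4))
vol_hospital_b = np.ones((4, 4, 4))
mixed, lam = mix_and_interpolate(vol_hospital_a, vol_hospital_b)
# Every voxel of `mixed` equals 1 - lam, i.e. lies between the inputs.
```

Because the interpolated volume carries intensity statistics from both sources, a classifier can no longer use source identity as a shortcut for the class label.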
Affiliation(s)
- Dong Wei
- Tencent Jarvis Lab, Shenzhen 518000, China
- Yadong Gang
- Department of Radiology, Zhongnan Hospital of Wuhan University, Wuhan 430071, China
- Wenbo Sun
- Department of Radiology, Zhongnan Hospital of Wuhan University, Wuhan 430071, China
- Haibo Xu
- Department of Radiology, Zhongnan Hospital of Wuhan University, Wuhan 430071, China
- Kai Ma
- Tencent Jarvis Lab, Shenzhen 518000, China
18
Chen AA, Beer JC, Tustison NJ, Cook PA, Shinohara RT, Shou H. Mitigating site effects in covariance for machine learning in neuroimaging data. Hum Brain Mapp 2021; 43:1179-1195. [PMID: 34904312] [PMCID: PMC8837590] [DOI: 10.1002/hbm.25688]
Abstract
To acquire larger samples for answering complex questions in neuroscience, researchers have increasingly turned to multi-site neuroimaging studies. However, these studies are hindered by differences in images acquired across sites. Such effects have been shown to bias comparisons between sites, mask biologically meaningful associations, and even introduce spurious associations. To address this, the field has focused on harmonizing data by removing site-related effects in the mean and variance of measurements. Contemporaneously with the rise of multi-center imaging, the use of machine learning (ML) in neuroimaging has become commonplace. ML approaches can provide improved sensitivity, specificity, and power because they model the joint relationship across measurements in the brain. In this work, we demonstrate that methods removing site effects in mean and variance may not be sufficient for ML, because they fail to address how correlations between measurements can vary across sites. Data from the Alzheimer's Disease Neuroimaging Initiative are used to show that considerable differences in covariance exist across sites and that popular harmonization techniques do not address this issue. We then propose a novel harmonization method called Correcting Covariance Batch Effects (CovBat) that removes site effects in mean, variance, and covariance. We apply CovBat and show that within-site correlation matrices are successfully harmonized. Furthermore, we find that ML methods are unable to distinguish scanner manufacturer after the proposed harmonization is applied, and that the CovBat-harmonized data retain accurate prediction of disease group.
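CovBat's distinguishing step, harmonizing covariance via principal components of the residuals, is beyond a short sketch, but the location/scale adjustment it builds on (removing per-site differences in mean and variance, a simplification of ComBat without its empirical-Bayes shrinkage) can be illustrated as follows (function and variable names are illustrative):

```python
import numpy as np

def harmonize_mean_variance(features, sites):
    """Remove per-site mean and variance from a (subjects x features)
    matrix, then restore the pooled mean and variance. This is the
    location/scale step only; it does NOT harmonize covariance."""
    features = np.asarray(features, dtype=float)
    sites = np.asarray(sites)
    grand_mean = features.mean(axis=0)
    grand_std = features.std(axis=0)
    out = np.empty_like(features)
    for s in np.unique(sites):
        idx = sites == s
        site_mean = features[idx].mean(axis=0)
        site_std = features[idx].std(axis=0)
        # Standardize within site, then map to the pooled distribution.
        out[idx] = (features[idx] - site_mean) / site_std * grand_std + grand_mean
    return out

rng = np.random.default_rng(0)
site_a = rng.normal(0.0, 1.0, size=(50, 3))   # site with low mean/variance
site_b = rng.normal(5.0, 2.0, size=(50, 3))   # site with high mean/variance
feats = np.vstack([site_a, site_b])
labels = np.array([0] * 50 + [1] * 50)
harmonized = harmonize_mean_variance(feats, labels)
# After harmonization, both sites share the pooled mean and variance,
# but between-feature correlations can still differ by site,
# which is exactly the gap CovBat targets.
```

The closing comment is the abstract's point: matching means and variances leaves site-specific correlation structure untouched, which ML models can still exploit.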
Affiliation(s)
- Andrew A Chen
- Penn Statistics in Imaging and Visualization Center, Department of Biostatistics, Epidemiology, and Informatics, University of Pennsylvania, Philadelphia, Pennsylvania, USA; Center for Biomedical Image Computing and Analytics, University of Pennsylvania, Philadelphia, Pennsylvania, USA
- Joanne C Beer
- Penn Statistics in Imaging and Visualization Center, Department of Biostatistics, Epidemiology, and Informatics, University of Pennsylvania, Philadelphia, Pennsylvania, USA
- Nicholas J Tustison
- Department of Radiology and Medical Imaging, University of Virginia, Charlottesville, Virginia, USA
- Philip A Cook
- Department of Radiology, University of Pennsylvania, Philadelphia, Pennsylvania, USA
- Russell T Shinohara
- Penn Statistics in Imaging and Visualization Center, Department of Biostatistics, Epidemiology, and Informatics, University of Pennsylvania, Philadelphia, Pennsylvania, USA; Center for Biomedical Image Computing and Analytics, University of Pennsylvania, Philadelphia, Pennsylvania, USA
- Haochang Shou
- Penn Statistics in Imaging and Visualization Center, Department of Biostatistics, Epidemiology, and Informatics, University of Pennsylvania, Philadelphia, Pennsylvania, USA; Center for Biomedical Image Computing and Analytics, University of Pennsylvania, Philadelphia, Pennsylvania, USA
19
Liu C, Qiao M, Jiang F, Guo Y, Jin Z, Wang Y. TN-USMA Net: Triple normalization-based gastrointestinal stromal tumors classification on multicenter EUS images with ultrasound-specific pretraining and meta attention. Med Phys 2021; 48:7199-7214. [PMID: 34412155] [DOI: 10.1002/mp.15172]
Abstract
PURPOSE Accurate risk stratification of gastrointestinal stromal tumors (GISTs) on multicenter endoscopic ultrasound (EUS) images plays a pivotal role in the surgical decision-making process. This study focuses on automatically classifying higher-risk and lower-risk GISTs in a multicenter setting with limited data. METHODS We retrospectively enrolled 914 patients with GISTs (1824 EUS images in total) from 18 hospitals in China. We propose a triple normalization-based deep learning framework with ultrasound-specific pretraining and meta attention, the TN-USMA model. The triple normalization module consists of intensity normalization, size normalization, and spatial resolution normalization. First, image intensity is standardized, and same-size regions of interest (ROIs) and same-resolution tumor masks are generated in parallel. A transfer learning strategy is then used to mitigate data scarcity: the same-size ROIs are fed into a deep architecture with ultrasound-specific pretrained weights, obtained via self-supervised learning on a large volume of unlabeled ultrasound images. Meanwhile, tumor size features are calculated from the same-resolution masks. The size features, together with two demographic features, are integrated into the model before the final classification layer using a meta attention mechanism to further enhance feature representations. The diagnostic performance of the proposed method was compared with one radiomics-based method and two state-of-the-art deep learning methods using four evaluation metrics: accuracy, area under the receiver operating characteristic curve (AUC), sensitivity, and specificity.
RESULTS The proposed TN-USMA model achieves an overall accuracy of 0.834 (95% confidence interval [CI]: 0.772, 0.885), an AUC of 0.881 (95% CI: 0.825, 0.924), a sensitivity of 0.844 (95% CI: 0.672, 0.947), and a specificity of 0.832 (95% CI: 0.762, 0.888). The AUC significantly outperforms that of the other two deep learning approaches (p < 0.05, DeLong's test). Moreover, performance is stable under different multicenter dataset partitions. CONCLUSIONS The proposed TN-USMA model can successfully differentiate higher-risk GISTs from lower-risk ones. It is accurate, robust, generalizable, and efficient for potential clinical applications.
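The exact intensity-standardization recipe used in TN-USMA is not specified in this abstract; a common choice that the description is consistent with is per-image z-score normalization, sketched below (purely illustrative):

```python
import numpy as np

def normalize_intensity(image):
    """Standardize image intensities to zero mean and unit variance,
    so images from different scanners share a common intensity scale."""
    image = np.asarray(image, dtype=float)
    return (image - image.mean()) / image.std()

# Toy 2x2 "ultrasound ROI" with arbitrary intensity units.
roi = np.array([[10.0, 20.0],
                [30.0, 40.0]])
z = normalize_intensity(roi)
# z has mean 0 and standard deviation 1, regardless of the
# original scanner's intensity range.
```

Size and spatial-resolution normalization would then operate on the ROI geometry rather than its intensities, which is why the abstract describes them as parallel steps.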
Affiliation(s)
- Chengcheng Liu
- Department of Electronic Engineering, Fudan University, Shanghai, China
- Mengyun Qiao
- Department of Electronic Engineering, Fudan University, Shanghai, China
- Fei Jiang
- Department of Gastroenterology, Changhai Hospital, Shanghai, China
- Yi Guo
- Department of Electronic Engineering, Fudan University, Shanghai, China
- Zhendong Jin
- Department of Gastroenterology, Changhai Hospital, Shanghai, China
- Yuanyuan Wang
- Department of Electronic Engineering, Fudan University, Shanghai, China
20
Modanwal G, Vellal A, Mazurowski MA. Normalization of breast MRIs using cycle-consistent generative adversarial networks. Comput Methods Programs Biomed 2021; 208:106225. [PMID: 34198016] [DOI: 10.1016/j.cmpb.2021.106225]
Abstract
OBJECTIVES Dynamic Contrast-Enhanced Magnetic Resonance Imaging (DCE-MRI) is widely used to complement ultrasound examinations and x-ray mammography for early detection and diagnosis of breast cancer. However, images generated by different MRI scanners (e.g., GE Healthcare and Siemens) differ in both intensity and noise distribution, preventing algorithms trained on MRIs from one scanner from generalizing to data from other scanners. In this work, we propose a method to solve this problem by normalizing images across scanners. METHODS MRI normalization is challenging because it requires normalizing intensity values and mapping noise distributions between scanners. We utilize a cycle-consistent generative adversarial network to learn a bidirectional mapping and perform normalization between MRIs produced by GE Healthcare and Siemens scanners in an unpaired setting. Initial experiments demonstrated that the traditional CycleGAN architecture struggles to preserve the anatomical structures of the breast during normalization. We therefore propose two technical innovations to preserve both the shape of the breast and the tissue structures within it. First, we incorporate a mutual information loss during training to ensure anatomical consistency. Second, we propose a modified discriminator architecture with a smaller field-of-view to ensure the preservation of finer details in the breast tissue. RESULTS Quantitative and qualitative evaluations show that the second innovation consistently preserves the breast shape and tissue structures while also performing the proper intensity normalization and noise distribution mapping. CONCLUSION Our results demonstrate that the proposed model can successfully learn a bidirectional mapping and perform normalization between MRIs produced by different vendors, potentially enabling improved diagnosis and detection of breast cancer.
All the data used in this study are publicly available at https://wiki.cancerimagingarchive.net/pages/viewpage.action?pageId=70226903.
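The two losses at the heart of this approach — CycleGAN's cycle-consistency term and the added mutual information term — can be sketched in a few lines of NumPy. The toy generators `G` and `F` below are hypothetical affine intensity maps standing in for the learned networks, not the paper's models:

```python
import numpy as np

def cycle_consistency_loss(x, G, F):
    """L1 cycle loss ||F(G(x)) - x||_1, as used in CycleGAN."""
    return float(np.mean(np.abs(F(G(x)) - x)))

def mutual_information(a, b, bins=32):
    """Histogram estimate of mutual information between two images' intensities."""
    joint, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
    pxy = joint / joint.sum()
    px = pxy.sum(axis=1, keepdims=True)   # marginal over columns
    py = pxy.sum(axis=0, keepdims=True)   # marginal over rows
    nz = pxy > 0                          # avoid log(0)
    return float(np.sum(pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])))

# Hypothetical "scanner translation": an affine intensity shift stands in
# for the learned GE <-> Siemens generators.
G = lambda x: 1.2 * x + 0.1    # GE -> Siemens direction
F = lambda x: (x - 0.1) / 1.2  # Siemens -> GE direction (exact inverse)

rng = np.random.default_rng(0)
img = rng.random((64, 64))
print(cycle_consistency_loss(img, G, F))  # ~0: the cycle is consistent
# MI with the translated image stays high; MI with unrelated noise is near zero.
print(mutual_information(img, G(img)), mutual_information(img, rng.random((64, 64))))
```

Minimizing the cycle loss keeps the mapping invertible, while penalizing low mutual information between input and output is one way to discourage anatomical drift during translation.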
Affiliation(s)
- Adithya Vellal
- Department of Computer Science, Duke University, Durham, NC, USA
21
Wang Q, Liu W, Chen X, Wang X, Chen G, Zhu X. Quantification of scar collagen texture and prediction of scar development via second harmonic generation images and a generative adversarial network. BIOMEDICAL OPTICS EXPRESS 2021; 12:5305-5319. [PMID: 34513258 PMCID: PMC8407811 DOI: 10.1364/boe.431096] [Citation(s) in RCA: 11] [Impact Index Per Article: 2.8] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Received: 05/25/2021] [Revised: 07/12/2021] [Accepted: 07/16/2021] [Indexed: 05/29/2023]
Abstract
The texture of human scar tissue, widely used in medical analysis, is irregular and highly varied. Quantitative detection and analysis of scar texture, as enabled by image analysis technology, is of great significance to clinical practice. However, existing methods suffer from various shortcomings, such as an inability to fully extract texture features. Hence, this study proposes the integration of second harmonic generation (SHG) imaging and a deep learning algorithm. Combined with Tamura texture features, a regression model of the scar texture can be constructed as a novel method of computer-aided diagnosis to assist clinical diagnosis. Based on the wavelet packet transform (WPT) and a generative adversarial network (GAN), the model is trained with scar texture images of different ages. Generalized Boosted Regression Trees (GBRT) are also adopted to perform regression analysis, and the extracted features are then used to predict the age of the scar. The experimental results obtained with our proposed model are better than those of previously published methods. This work thus contributes to a better understanding of the mechanism behind scar development and, possibly, to the further development of SHG for skin analysis and clinical practice.
Affiliation(s)
- Qing Wang
- Key Laboratory of OptoElectronic Science and Technology for Medicine of Ministry of Education, Fujian Provincial Key Laboratory of Photonics Technology, Fujian Normal University, Fuzhou 350007, China
- Fujian Provincial Engineering Technology Research Center of Photoelectric Sensing Application, Fujian Normal University, Fuzhou 350007, China
- Weiping Liu
- Key Laboratory of OptoElectronic Science and Technology for Medicine of Ministry of Education, Fujian Provincial Key Laboratory of Photonics Technology, Fujian Normal University, Fuzhou 350007, China
- Fujian Provincial Engineering Technology Research Center of Photoelectric Sensing Application, Fujian Normal University, Fuzhou 350007, China
- Xinghong Chen
- Key Laboratory of OptoElectronic Science and Technology for Medicine of Ministry of Education, Fujian Provincial Key Laboratory of Photonics Technology, Fujian Normal University, Fuzhou 350007, China
- Fujian Provincial Engineering Technology Research Center of Photoelectric Sensing Application, Fujian Normal University, Fuzhou 350007, China
- Xiumei Wang
- Key Laboratory of OptoElectronic Science and Technology for Medicine of Ministry of Education, Fujian Provincial Key Laboratory of Photonics Technology, Fujian Normal University, Fuzhou 350007, China
- Fujian Provincial Engineering Technology Research Center of Photoelectric Sensing Application, Fujian Normal University, Fuzhou 350007, China
- Guannan Chen
- Key Laboratory of OptoElectronic Science and Technology for Medicine of Ministry of Education, Fujian Provincial Key Laboratory of Photonics Technology, Fujian Normal University, Fuzhou 350007, China
- Fujian Provincial Engineering Technology Research Center of Photoelectric Sensing Application, Fujian Normal University, Fuzhou 350007, China
- Xiaoqin Zhu
- Key Laboratory of OptoElectronic Science and Technology for Medicine of Ministry of Education, Fujian Provincial Key Laboratory of Photonics Technology, Fujian Normal University, Fuzhou 350007, China
- Fujian Provincial Engineering Technology Research Center of Photoelectric Sensing Application, Fujian Normal University, Fuzhou 350007, China
22
Hu Z, Zhuang Q, Xiao Y, Wu G, Shi Z, Chen L, Wang Y, Yu J. MIL normalization -- prerequisites for accurate MRI radiomics analysis. Comput Biol Med 2021; 133:104403. [PMID: 33932645 DOI: 10.1016/j.compbiomed.2021.104403] [Citation(s) in RCA: 12] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/31/2021] [Revised: 04/11/2021] [Accepted: 04/11/2021] [Indexed: 01/15/2023]
Abstract
The quality of magnetic resonance (MR) images obtained with different instruments and imaging parameters varies greatly. A large number of heterogeneous images are collected, and they suffer from acquisition variation. Such imaging quality differences have a great impact on radiomics analysis. The main differences in MR images include modality mismatch (M), intensity distribution variance (I), and layer-spacing differences (L), referred to as MIL differences in this paper for convenience. An MIL normalization system is proposed to reconstruct uneven MR images into high-quality data with complete modality, a uniform intensity distribution, and consistent layer spacing. Three retrospective glioma datasets were analyzed in this study: BraTs (285 cases), TCGA (112 cases) and HuaShan (403 cases). They were used to test the effect of MIL normalization on three different radiomics tasks: tumor segmentation, pathological grading and genetic diagnosis. MIL normalization comprises three components: multimodal synthesis based on an encoder-decoder network, intensity normalization based on CycleGAN, and layer-spacing unification based on Statistical Parametric Mapping (SPM). The Dice similarity coefficient, areas under the curve (AUC) and six other indicators were calculated and compared after different normalization steps. Compared to non-normalization, the MIL normalization system improved the Dice coefficient of segmentation by 9% (P < .001), the AUC of pathological grading by 32% (P < .001), and the AUC of IDH1 status prediction by 25% (P < .001). The proposed MIL normalization system provides high-quality standardized data, a prerequisite for accurate radiomics analysis.
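Of the three MIL steps, the intensity step ("I") is the easiest to illustrate. The paper learns this mapping with a CycleGAN; the closed-form z-score rescaling below is only a classical baseline sketch showing what the step is meant to achieve (the function name and the two scanners' intensity statistics are invented for illustration):

```python
import numpy as np

def zscore_normalize(vol, mask=None):
    """Z-score intensity normalization, optionally restricted to a tissue mask.
    A classical baseline for the 'I' step; the paper itself learns the mapping
    with a CycleGAN rather than this closed-form rescaling."""
    vals = vol[mask] if mask is not None else vol
    return (vol - vals.mean()) / (vals.std() + 1e-8)

rng = np.random.default_rng(1)
# Two hypothetical scanners with very different intensity scales (arbitrary units).
scanner_a = 300 + 50 * rng.standard_normal((8, 64, 64))
scanner_b = 1200 + 200 * rng.standard_normal((8, 64, 64))
a, b = zscore_normalize(scanner_a), zscore_normalize(scanner_b)
print(a.mean(), b.mean())  # both ~0: intensities now live on a common scale
```

After normalization both volumes have zero mean and unit variance, so downstream radiomics features are computed on a comparable intensity scale regardless of scanner.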
Affiliation(s)
- Zhaoyu Hu
- School of Information Science and Technology, Fudan University, Shanghai, China
- Qiyuan Zhuang
- Department of Neurosurgery, Huashan Hospital, Fudan University, Shanghai, China
- Yang Xiao
- Department of Biomedical Engineering, Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen, China
- Guoqing Wu
- School of Information Science and Technology, Fudan University, Shanghai, China
- Zhifeng Shi
- Department of Neurosurgery, Huashan Hospital, Fudan University, Shanghai, China
- Liang Chen
- Department of Neurosurgery, Huashan Hospital, Fudan University, Shanghai, China
- Yuanyuan Wang
- School of Information Science and Technology, Fudan University, Shanghai, China
- Jinhua Yu
- School of Information Science and Technology, Fudan University, Shanghai, China.
23
Jeong YJ, Park HS, Jeong JE, Yoon HJ, Jeon K, Cho K, Kang DY. Restoration of amyloid PET images obtained with short-time data using a generative adversarial networks framework. Sci Rep 2021; 11:4825. [PMID: 33649403 PMCID: PMC7921674 DOI: 10.1038/s41598-021-84358-8] [Citation(s) in RCA: 8] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/05/2020] [Accepted: 02/15/2021] [Indexed: 11/15/2022] Open
Abstract
Our purpose in this study is to evaluate the clinical feasibility of deep-learning techniques for F-18 florbetaben (FBB) positron emission tomography (PET) image reconstruction using data acquired in a short time. We reconstructed raw FBB PET data of 294 patients acquired over 20 and 2 min into standard-time scanning PET (PET20m) and short-time scanning PET (PET2m) images. We generated a standard-time scanning PET-like image (sPET20m) from a PET2m image using a deep-learning network, and performed qualitative and quantitative analyses to assess whether the sPET20m images were suitable for clinical applications. In our internal validation, sPET20m images showed substantial improvement on all quality metrics compared with the PET2m images, and there was only a small mean difference between the standardized uptake value ratios of sPET20m and PET20m images. A Turing test showed that physicians could not reliably distinguish generated PET images from real PET images. Three nuclear medicine physicians could interpret the generated PET images with high accuracy and agreement. We obtained similar quantitative results in temporal and external validations. Thus, deep-learning techniques can generate interpretable PET images from low-quality PET images acquired with a short scanning time. Although more clinical validation is needed, we confirmed that short-scanning protocols combined with a deep-learning technique may be usable in clinical applications.
Affiliation(s)
- Young Jin Jeong
- Department of Nuclear Medicine, Dong-A University Hospital, Dong-A University College of Medicine, 1, 3ga, Dongdaesin-dong, Seo-gu, Busan, 602-715, South Korea; Institute of Convergence Bio-Health, Dong-A University, Busan, Republic of Korea
- Hyoung Suk Park
- National Institute for Mathematical Science, Daejeon, Republic of Korea
- Ji Eun Jeong
- Department of Nuclear Medicine, Dong-A University Hospital, Dong-A University College of Medicine, 1, 3ga, Dongdaesin-dong, Seo-gu, Busan, 602-715, South Korea
- Hyun Jin Yoon
- Department of Nuclear Medicine, Dong-A University Hospital, Dong-A University College of Medicine, 1, 3ga, Dongdaesin-dong, Seo-gu, Busan, 602-715, South Korea
- Kiwan Jeon
- National Institute for Mathematical Science, Daejeon, Republic of Korea
- Kook Cho
- College of General Education, Dong-A University, Busan, Republic of Korea
- Do-Young Kang
- Department of Nuclear Medicine, Dong-A University Hospital, Dong-A University College of Medicine, 1, 3ga, Dongdaesin-dong, Seo-gu, Busan, 602-715, South Korea; Institute of Convergence Bio-Health, Dong-A University, Busan, Republic of Korea; Department of Translational Biomedical Sciences, Dong-A University, Busan, Republic of Korea
24
Zheng R, Shi C, Wang C, Shi N, Qiu T, Chen W, Shi Y, Wang H. Imaging-Based Staging of Hepatic Fibrosis in Patients with Hepatitis B: A Dynamic Radiomics Model Based on Gd-EOB-DTPA-Enhanced MRI. Biomolecules 2021; 11:307. [PMID: 33670596 PMCID: PMC7922315 DOI: 10.3390/biom11020307] [Citation(s) in RCA: 6] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/18/2021] [Revised: 02/13/2021] [Accepted: 02/17/2021] [Indexed: 12/12/2022] Open
Abstract
Accurate grading of liver fibrosis can effectively assess the severity of liver disease and help doctors make an appropriate diagnosis. This study aimed to perform the automatic staging of hepatic fibrosis in patients with hepatitis B who underwent gadolinium ethoxybenzyl diethylenetriamine pentaacetic acid (Gd-EOB-DTPA)-enhanced magnetic resonance imaging, using dynamic radiomics analysis. The proposed dynamic radiomics model combined imaging features from multi-phase dynamic contrast-enhanced (DCE) images and time-domain information. Imaging features were extracted from the deep learning-based segmented liver volume, and time-domain features were further explored to analyze the variation in features during contrast enhancement. Model construction and evaluation were based on a 132-case data set. The proposed model achieved remarkable performance in significant fibrosis (fibrosis stage S1 vs. S2-S4; accuracy (ACC) = 0.875, area under the curve (AUC) = 0.867), advanced fibrosis (S1-S2 vs. S3-S4; ACC = 0.825, AUC = 0.874), and cirrhosis (S1-S3 vs. S4; ACC = 0.850, AUC = 0.900) classifications in the test set. It outperformed the conventional single-phase and multi-phase DCE-based radiomics models, normalized liver enhancement, and several serological indicators. Time-domain features were found to play an important role in the classification models. The dynamic radiomics model can be applied for highly accurate automatic hepatic fibrosis staging.
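The reported ACC/AUC figures come from standard binary-classification evaluation. As a reminder of what the AUC measures, here is a minimal rank-based implementation (equivalent to the normalized Mann-Whitney U statistic); the toy labels and scores are invented for illustration:

```python
import numpy as np

def auc(y_true, scores):
    """Rank-based AUC: the fraction of (positive, negative) pairs whose
    positive case received the higher score, with ties counting half."""
    y_true, scores = np.asarray(y_true), np.asarray(scores)
    pos, neg = scores[y_true == 1], scores[y_true == 0]
    diff = pos[:, None] - neg[None, :]        # all pairwise score differences
    correct = (diff > 0).sum() + 0.5 * (diff == 0).sum()
    return correct / (len(pos) * len(neg))

# Toy staging example: 1 = advanced fibrosis, scores from a hypothetical model.
y = [0, 0, 1, 1, 1, 0]
s = [0.1, 0.4, 0.35, 0.8, 0.9, 0.2]
print(auc(y, s))  # 8 of 9 pairs correctly ordered -> 0.888...
```

An AUC of 0.874 for advanced fibrosis thus means roughly 87% of patient pairs with discordant stages are ranked correctly by the model's score.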
Affiliation(s)
- Rencheng Zheng
- Institute of Science and Technology for Brain-Inspired Intelligence, Fudan University, Shanghai 200433, China;
- Key Laboratory of Computational Neuroscience and Brain-Inspired Intelligence (Fudan University), Ministry of Education, Shanghai 200433, China
- Chunzi Shi
- Department of Radiology, Shanghai Public Health Clinical Center, Fudan University, Shanghai 201052, China; (C.S.); (N.S.); (T.Q.)
- Chengyan Wang
- Human Phenome Institute, Fudan University, Shanghai 200433, China;
- Nannan Shi
- Department of Radiology, Shanghai Public Health Clinical Center, Fudan University, Shanghai 201052, China; (C.S.); (N.S.); (T.Q.)
- Tian Qiu
- Department of Radiology, Shanghai Public Health Clinical Center, Fudan University, Shanghai 201052, China; (C.S.); (N.S.); (T.Q.)
- Weibo Chen
- Market Solutions Center, Philips Healthcare, Shanghai 200072, China;
- Yuxin Shi
- Department of Radiology, Shanghai Public Health Clinical Center, Fudan University, Shanghai 201052, China; (C.S.); (N.S.); (T.Q.)
- He Wang
- Institute of Science and Technology for Brain-Inspired Intelligence, Fudan University, Shanghai 200433, China;
- Key Laboratory of Computational Neuroscience and Brain-Inspired Intelligence (Fudan University), Ministry of Education, Shanghai 200433, China
- Human Phenome Institute, Fudan University, Shanghai 200433, China;
25
Deep learning-based solvability of underdetermined inverse problems in medical imaging. Med Image Anal 2021; 69:101967. [PMID: 33517242 DOI: 10.1016/j.media.2021.101967] [Citation(s) in RCA: 8] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/02/2020] [Revised: 12/28/2020] [Accepted: 01/06/2021] [Indexed: 11/23/2022]
Abstract
Recently, with significant developments in deep learning techniques, solving underdetermined inverse problems has become a major concern in the medical imaging domain. Underdetermined problems are motivated by the desire to provide high-resolution medical images with as little data as possible, optimizing data collection for minimal acquisition time, cost-effectiveness, and low invasiveness. Typical examples include undersampled magnetic resonance imaging (MRI), interior tomography, and sparse-view computed tomography (CT), where deep learning techniques have achieved excellent performance. However, there is a lack of mathematical analysis of why deep learning methods perform well. This study aims to explain how the structure of the training data enables deep learning to solve highly underdetermined problems. We present a particular low-dimensional solution model to highlight the advantage of deep learning methods over conventional methods, where the two approaches use the prior information of the solution in completely different ways. We also analyze whether deep learning methods can learn the desired reconstruction map from training data in the three settings (undersampled MRI, sparse-view CT, interior tomography). This paper also discusses the nonlinearity structure of underdetermined linear systems and conditions for learning (the so-called M-RIP condition).
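The undersampled-MRI case can be made concrete in a few lines of NumPy: keeping a random quarter of k-space leaves far fewer measurements than unknowns, and the naive zero-filled inverse cannot recover the image; this is the gap that learned priors aim to close. The random mask and the 25% sampling rate below are arbitrary choices for illustration:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 64
x = rng.random((n, n))                 # unknown image: n*n unknowns
k = np.fft.fft2(x)                     # fully sampled k-space
mask = rng.random((n, n)) < 0.25       # keep ~25% of frequencies
y = k[mask]                            # measurements: far fewer than unknowns
print(y.size, n * n)                   # underdetermined system

# Zero-filled reconstruction: the naive inverse that learned priors improve on.
k_zf = np.zeros_like(k)
k_zf[mask] = y
x_zf = np.fft.ifft2(k_zf).real
print(np.linalg.norm(x_zf - x))        # large residual: information was lost
```

Because infinitely many images share the observed k-space samples, any successful reconstruction must inject prior information about the solution, whether hand-crafted (sparsity) or learned from training data.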
26
Elazab A, Wang C, Gardezi SJS, Bai H, Hu Q, Wang T, Chang C, Lei B. GP-GAN: Brain tumor growth prediction using stacked 3D generative adversarial networks from longitudinal MR Images. Neural Netw 2020; 132:321-332. [DOI: 10.1016/j.neunet.2020.09.004] [Citation(s) in RCA: 24] [Impact Index Per Article: 4.8] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/30/2019] [Revised: 08/27/2020] [Accepted: 09/06/2020] [Indexed: 01/28/2023]
27
Wang G, Gong E, Banerjee S, Martin D, Tong E, Choi J, Chen H, Wintermark M, Pauly JM, Zaharchuk G. Synthesize High-Quality Multi-Contrast Magnetic Resonance Imaging From Multi-Echo Acquisition Using Multi-Task Deep Generative Model. IEEE TRANSACTIONS ON MEDICAL IMAGING 2020; 39:3089-3099. [PMID: 32286966 DOI: 10.1109/tmi.2020.2987026] [Citation(s) in RCA: 24] [Impact Index Per Article: 4.8] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/11/2023]
Abstract
A multi-echo saturation recovery sequence can provide redundant information to synthesize multi-contrast magnetic resonance imaging. Traditional synthesis methods, such as GE's MAGiC platform, employ a model-fitting approach to generate parameter-weighted contrasts. However, model over-simplification, as well as imperfections in the acquisition, can lead to undesirable reconstruction artifacts, especially in T2-FLAIR contrast. To improve the image quality, in this study, a multi-task deep learning model is developed to synthesize multi-contrast neuroimaging jointly using both signal relaxation relationships and spatial information. Compared with previous deep learning-based synthesis, the correlation between different destination contrasts is utilized to enhance reconstruction quality. To improve model generalizability and evaluate clinical significance, the proposed model was trained and tested on a large multi-center dataset, including healthy subjects and patients with pathology. Results from both quantitative comparison and a clinical reader study demonstrate that the multi-task formulation leads to more efficient and accurate contrast synthesis than previous methods.
28
Ali MB, Gu IYH, Berger MS, Pallud J, Southwell D, Widhalm G, Roux A, Vecchio TG, Jakola AS. Domain Mapping and Deep Learning from Multiple MRI Clinical Datasets for Prediction of Molecular Subtypes in Low Grade Gliomas. Brain Sci 2020; 10:E463. [PMID: 32708419 PMCID: PMC7408150 DOI: 10.3390/brainsci10070463] [Citation(s) in RCA: 13] [Impact Index Per Article: 2.6] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/25/2020] [Revised: 07/07/2020] [Accepted: 07/15/2020] [Indexed: 01/17/2023] Open
Abstract
Brain tumors such as low grade gliomas (LGG) are molecularly classified, which requires the surgical collection of tissue samples. Pre-surgical, non-operative identification of LGG molecular type could improve patient counseling and treatment decisions. However, radiographic approaches to LGG molecular classification are currently lacking, as clinicians are unable to reliably predict LGG molecular type from magnetic resonance imaging (MRI) studies. Machine learning approaches may improve the prediction of LGG molecular classification through MRI; however, the development of these techniques requires large annotated data sets. Merging clinical data from different hospitals to increase case numbers is needed, but the use of different scanners and settings can affect the results, and simply combining them into a large dataset often has a significant negative impact on performance. This calls for efficient domain adaptation methods. Despite some previous studies on domain adaptation, mapping MR images from different datasets to a common domain without affecting subtle molecular-biomarker information has not been reported yet. In this paper, we propose an effective domain adaptation method based on the Cycle Generative Adversarial Network (CycleGAN). The dataset is further enlarged by augmenting more MRIs using another GAN approach. Further, because brain tumor segmentation requires time and anatomical expertise to place an exact boundary around the tumor, we instead use a tight bounding box. Finally, an efficient deep feature learning method, a multi-stream convolutional autoencoder (CAE) with feature fusion, is proposed for the prediction of molecular subtypes (1p/19q-codeletion and IDH mutation). The experiments were conducted on a total of 161 patients with FLAIR and T1-weighted contrast-enhanced (T1ce) MRIs from two institutions in the USA and France. The proposed scheme achieves a test accuracy of 74.81% on 1p/19q codeletion and 81.19% on IDH mutation, a marked improvement over the results obtained without domain mapping, and performs comparably to several state-of-the-art methods.
Affiliation(s)
- Muhaddisa Barat Ali
- Department of Electrical Engineering, Chalmers University of Technology, 41296 Gothenburg, Sweden; (M.B.A.); (I.Y.-H.G.)
- Irene Yu-Hua Gu
- Department of Electrical Engineering, Chalmers University of Technology, 41296 Gothenburg, Sweden; (M.B.A.); (I.Y.-H.G.)
- Mitchel S. Berger
- Department of Neurological Surgery, University of California San Francisco, San Francisco, CA 94143-0112, USA; (M.S.B.); (D.S.)
- Johan Pallud
- Department of Neurosurgery, GHU Paris—Sainte-Anne Hospital, University of Paris, F-75014 Paris, France; (J.P.); (A.R.)
- Derek Southwell
- Department of Neurological Surgery, University of California San Francisco, San Francisco, CA 94143-0112, USA; (M.S.B.); (D.S.)
- Georg Widhalm
- Department of Neurosurgery, University Hospital of Vienna, 1090 Vienna, Austria;
- Alexandre Roux
- Department of Neurosurgery, GHU Paris—Sainte-Anne Hospital, University of Paris, F-75014 Paris, France; (J.P.); (A.R.)
- Tomás Gomez Vecchio
- Department of Clinical Neurosciences, Institution of Neuroscience and Physiology, Sahlgrenska Academy, 41345 Gothenburg, Sweden;
- Asgeir Store Jakola
- Department of Clinical Neurosciences, Institution of Neuroscience and Physiology, Sahlgrenska Academy, 41345 Gothenburg, Sweden;
29
Zhou Z, Wang Y, Guo Y, Jiang X, Qi Y. Ultrafast Plane Wave Imaging With Line-Scan-Quality Using an Ultrasound-Transfer Generative Adversarial Network. IEEE J Biomed Health Inform 2020; 24:943-956. [DOI: 10.1109/jbhi.2019.2950334] [Citation(s) in RCA: 6] [Impact Index Per Article: 1.2] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/07/2022]