1
Loizillon S, Bottani S, Maire A, Ströer S, Chougar L, Dormont D, Colliot O, Burgos N. Automatic quality control of brain 3D FLAIR MRIs for a clinical data warehouse. Med Image Anal 2025;103:103617. PMID: 40344945. DOI: 10.1016/j.media.2025.103617.
Abstract
Clinical data warehouses, which have arisen over the last decade, bring together the medical data of millions of patients and offer the potential to train and validate machine learning models in real-world scenarios. The quality of MRIs collected in clinical data warehouses differs significantly from that generally observed in research datasets, reflecting the variability inherent to clinical practice. Consequently, the use of clinical data requires the implementation of robust quality control tools. Using a substantial number of pre-existing manually labelled T1-weighted MR images (5,500) alongside a smaller set of newly labelled FLAIR images (926), we present a novel semi-supervised adversarial domain adaptation architecture designed to exploit representations shared between MRI sequences through a common feature extractor, while accounting for the specificities of FLAIR through a sequence-specific classification head. The architecture thus consists of a common invariant feature extractor, a domain classifier, and two classification heads specific to the source and target sequences, all designed to deal effectively with potential class distribution shifts between the source and target data. The primary objectives of this paper were: (1) to identify images that are not proper 3D FLAIR brain MRIs; (2) to rate overall image quality. For the first objective, our approach demonstrated excellent results, with a balanced accuracy of 89%, comparable to that of human raters. For the second objective, our approach achieved good performance, although lower than that of human raters. Nevertheless, the automatic approach accurately identified bad-quality images (balanced accuracy >79%). In conclusion, our proposed approach overcomes the initial barrier of heterogeneous image quality in clinical data warehouses, thereby facilitating the development of new research using clinical routine 3D FLAIR brain images.
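The architecture sketched in this abstract (shared feature extractor, domain classifier, and per-sequence classification heads) can be illustrated at toy scale. The snippet below is a hypothetical NumPy forward pass, not the authors' implementation: layer sizes, weight names, and the two-class quality output are illustrative assumptions, and the gradient-reversal training of the domain classifier is only indicated in comments.

```python
import numpy as np

rng = np.random.default_rng(0)

def relu(x):
    return np.maximum(x, 0.0)

def softmax(z):
    e = np.exp(z - z.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

# Hypothetical sizes: 32-dim image descriptors, 16-dim shared features,
# binary quality labels (good / bad).
D_IN, D_FEAT, N_CLASSES = 32, 16, 2

W_shared = 0.1 * rng.normal(size=(D_IN, D_FEAT))      # shared feature extractor
W_t1 = 0.1 * rng.normal(size=(D_FEAT, N_CLASSES))     # source (T1w) head
W_flair = 0.1 * rng.normal(size=(D_FEAT, N_CLASSES))  # target (FLAIR) head
# Domain classifier; during training it would sit behind a gradient-reversal
# layer so the shared features become sequence-invariant.
W_dom = 0.1 * rng.normal(size=(D_FEAT, 2))

def forward(x, domain):
    feat = relu(x @ W_shared)                   # shared representation
    head = W_t1 if domain == "t1" else W_flair  # sequence-specific head
    return softmax(feat @ head), softmax(feat @ W_dom)

x = rng.normal(size=(4, D_IN))                  # four dummy image descriptors
quality_probs, domain_probs = forward(x, "flair")
print(quality_probs.shape, domain_probs.shape)  # (4, 2) (4, 2)
```

The point of the wiring is that both sequences share one representation (so the scarce FLAIR labels benefit from the abundant T1 labels) while each keeps its own decision boundary.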
Affiliation(s)
- Sophie Loizillon: Sorbonne Université, Institut du Cerveau - Paris Brain Institute - ICM, CNRS, Inria, Inserm, AP-HP, Hôpital de la Pitié-Salpêtrière, Paris 75013, France
- Simona Bottani: Sorbonne Université, Institut du Cerveau - Paris Brain Institute - ICM, CNRS, Inria, Inserm, AP-HP, Hôpital de la Pitié-Salpêtrière, Paris 75013, France
- Aurélien Maire: AP-HP, Innovation & Données - Département des Services Numériques, Paris 75012, France
- Sebastian Ströer: AP-HP, Hôpital de la Pitié Salpêtrière, Department of Neuroradiology, Paris 75013, France
- Lydia Chougar: AP-HP, Hôpital de la Pitié Salpêtrière, Department of Neuroradiology, Paris 75013, France
- Didier Dormont: AP-HP, Hôpital de la Pitié Salpêtrière, Department of Neuroradiology, Paris 75013, France; Sorbonne Université, Institut du Cerveau - Paris Brain Institute - ICM, CNRS, Inria, Inserm, AP-HP, Hôpital de la Pitié-Salpêtrière, DMU DIAMENT, Paris 75013, France
- Olivier Colliot: Sorbonne Université, Institut du Cerveau - Paris Brain Institute - ICM, CNRS, Inria, Inserm, AP-HP, Hôpital de la Pitié-Salpêtrière, Paris 75013, France
- Ninon Burgos: Sorbonne Université, Institut du Cerveau - Paris Brain Institute - ICM, CNRS, Inria, Inserm, AP-HP, Hôpital de la Pitié-Salpêtrière, Paris 75013, France
2
Barba T, Robert M, Hot A. [Artificial intelligence in healthcare: A survival guide for internists]. Rev Med Interne 2025:S0248-8663(25)00047-5. PMID: 39984315. DOI: 10.1016/j.revmed.2025.02.002.
Abstract
Artificial intelligence (AI) is experiencing considerable growth in medicine, driven by the explosion of available biomedical data and the emergence of new algorithmic architectures. Applications are rapidly multiplying, from diagnostic assistance to disease progression prediction, paving the way for more personalized medicine. The recent advent of large language models, such as ChatGPT, has drawn particular interest from the medical community thanks to their ease of use, but has also raised questions about their reliability in medical contexts. This review presents the fundamental concepts of medical AI, specifically distinguishing traditional discriminative approaches from new generative models. We detail the different exploitable data sources and the methodological pitfalls to avoid when developing these tools. Finally, we address the practical and ethical implications of this technological revolution, emphasizing the importance of the medical community's appropriation of these tools.
Affiliation(s)
- Thomas Barba: Service de médecine interne, hôpital Édouard-Herriot, 5, place d'Arsonval, 69003 Lyon, France
- Marie Robert: Service de médecine interne, hôpital Édouard-Herriot, 5, place d'Arsonval, 69003 Lyon, France
- Arnaud Hot: Service de médecine interne, hôpital Édouard-Herriot, 5, place d'Arsonval, 69003 Lyon, France
3
Suc G, Dewavrin T, Mesnier J, Brochet E, Sallah K, Dupont A, Ou P, Para M, Arangalage D, Urena M, Iung B. Cardiac magnetic resonance imaging-derived right ventricular volume and function, and association with outcomes in isolated tricuspid regurgitation. Arch Cardiovasc Dis 2025;118:43-51. PMID: 39489659. DOI: 10.1016/j.acvd.2024.09.006.
Abstract
BACKGROUND In patients with significant tricuspid regurgitation, cardiac magnetic resonance imaging (CMR) is the preferred method for the evaluation of right ventricular function and volumes. However, validated thresholds are lacking. AIM The aim of this study was to evaluate CMR assessment of right ventricular volumes in patients with significant (moderate or severe) tricuspid regurgitation, and to define its association with outcomes. METHODS The PRONOVAL study is a retrospective multicentre study using the clinical data warehouse of Greater Paris University Hospitals (AP-HP). Patients were screened for CMR in the PMSI (Programme de médicalisation des systèmes d'information). Hospitalization reports were analysed by natural language processing to include patients with tricuspid regurgitation. Exclusion criteria were left heart valvular disease, heart transplantation and cardiac amyloidosis. The primary outcome was a composite criterion of death or tricuspid surgery. RESULTS Between September 2017 and September 2021, 151 patients with isolated tricuspid regurgitation were screened. Right ventricular function and volumes were available in 86 (57.0%) CMR reports (the complete CMR group). In the complete CMR group, tricuspid regurgitation was severe in 62 patients (72.1%). Median age was 67.0 years (interquartile range 58.0-75.8). Median right ventricular indexed end-diastolic volume was 98.0 mL/m2 (interquartile range 66.8-118.5). At 2-year follow-up, six patients (9.2%) had undergone tricuspid valve surgery, and 12 patients (18.5%) had died. Right ventricular indexed end-diastolic volume was associated with death or surgery at 2 years, with an area under the receiver operating characteristic curve of 0.76 (95% confidence interval 0.75-0.77) for a threshold of 119 mL/m2. CONCLUSION Right ventricular indexed end-diastolic volume >119 mL/m2 was found to be an independent indicator of death or surgery in patients with significant tricuspid regurgitation.
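The threshold-finding step reported here (an ROC curve over right ventricular indexed end-diastolic volume yielding a cut-off of 119 mL/m2) can be reproduced in miniature. The sketch below uses synthetic toy numbers, not PRONOVAL data, and picks the cut-off by the Youden index, which is one common choice the abstract does not necessarily specify.

```python
import numpy as np

def roc_auc_and_threshold(scores, labels):
    """Rank-based ROC AUC plus the Youden-optimal cut-off (max TPR - FPR)."""
    scores = np.asarray(scores, dtype=float)
    labels = np.asarray(labels, dtype=int)
    order = np.argsort(-scores)                 # sort by descending score
    s, y = scores[order], labels[order]
    pos, neg = y.sum(), len(y) - y.sum()
    tpr = np.cumsum(y) / pos                    # sensitivity at each cut
    fpr = np.cumsum(1 - y) / neg                # 1 - specificity at each cut
    auc = np.sum(np.diff(fpr) * (tpr[1:] + tpr[:-1]) / 2)  # trapezoid rule
    best = int(np.argmax(tpr - fpr))            # Youden index
    return auc, s[best]

# Toy RV indexed end-diastolic volumes (mL/m2) and 2-year event labels;
# illustrative numbers only, not the PRONOVAL cohort.
volumes = [70, 85, 95, 100, 110, 120, 125, 140, 150, 160]
events  = [ 0,  0,  0,   0,   1,   0,   1,   1,   1,   1]
auc, thr = roc_auc_and_threshold(volumes, events)
print(auc, thr)  # high AUC; the cut-off lands among the larger volumes
```

With these toy numbers the rank AUC is 0.96 and the Youden cut-off is 125 mL/m2; on real data one would also report a confidence interval, as the paper does.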
Affiliation(s)
- Gaspard Suc: Cardiology, Bichat Hospital, AP-HP, 75018 Paris, France; UMRS 1148, Inserm, 75018 Paris, France; Université Paris Cité, 75006 Paris, France
- Thibault Dewavrin: Department of Epidemiology, Biostatistics and Clinical Research, Bichat Hospital, AP-HP, 75018 Paris, France
- Jules Mesnier: Cardiology, Bichat Hospital, AP-HP, 75018 Paris, France; UMRS 1148, Inserm, 75018 Paris, France; Université Paris Cité, 75006 Paris, France
- Eric Brochet: Cardiology, Bichat Hospital, AP-HP, 75018 Paris, France; UMRS 1148, Inserm, 75018 Paris, France; Université Paris Cité, 75006 Paris, France
- Kankoe Sallah: Department of Epidemiology, Biostatistics and Clinical Research, Bichat Hospital, AP-HP, 75018 Paris, France
- Axelle Dupont: Department of Epidemiology, Biostatistics and Clinical Research, Bichat Hospital, AP-HP, 75018 Paris, France
- Phalla Ou: Cardiology, Bichat Hospital, AP-HP, 75018 Paris, France; UMRS 1148, Inserm, 75018 Paris, France; Université Paris Cité, 75006 Paris, France
- Marylou Para: UMRS 1148, Inserm, 75018 Paris, France; Université Paris Cité, 75006 Paris, France; Cardiac Surgery, Bichat Hospital, AP-HP, 75018 Paris, France
- Dimitri Arangalage: Cardiology, Bichat Hospital, AP-HP, 75018 Paris, France; UMRS 1148, Inserm, 75018 Paris, France; Université Paris Cité, 75006 Paris, France
- Marina Urena: Cardiology, Bichat Hospital, AP-HP, 75018 Paris, France; UMRS 1148, Inserm, 75018 Paris, France; Université Paris Cité, 75006 Paris, France
- Bernard Iung: Cardiology, Bichat Hospital, AP-HP, 75018 Paris, France; UMRS 1148, Inserm, 75018 Paris, France; Université Paris Cité, 75006 Paris, France
4
Cheng-Zarate D, Burns J, Ngo C, Haryanto A, Duncan G, Taniar D, Wybrow M. Creating a data warehouse to support monitoring of NSQHS blood management standard from EMR data. BMC Med Inform Decis Mak 2024;24:353. PMID: 39574142. PMCID: PMC11583751. DOI: 10.1186/s12911-024-02732-8.
Abstract
BACKGROUND Blood management is an important aspect of healthcare and vital for the well-being of patients. For effective blood management, it is essential to determine the quality and documentation of the processes for blood transfusions in the Electronic Medical Records (EMR) system. The EMR system stores information on most activities performed in a digital hospital; as a result, it is difficult to get an overview of all the data. The National Safety and Quality Health Service (NSQHS) Standards define metrics that assess the care quality of health entities such as hospitals. To produce these metrics, data needs to be analysed historically. However, data in the EMR is not designed for the kind of analytical queries needed to feed clinical decision support tools. Thus, another system needs to be implemented to store and calculate the metrics for the blood management national standard. METHODS In this paper, we propose a clinical data warehouse that stores data transformed from the EMR, making it possible to verify that the hospital is compliant with the Australian NSQHS Standard for blood management. First, the data needed was explored and evaluated. Next, a schema for the clinical data warehouse was designed for the efficient storage of EMR data. Once the schema was defined, data was extracted from the EMR and preprocessed to fit the schema design. Finally, the data warehouse allows the data to be consumed by decision support tools. RESULTS We worked with Eastern Health, a major Australian health service, to implement the data warehouse, which allowed us to easily query and supply data to be ingested by clinical decision support systems. Additionally, this implementation provides the flexibility to recompute the metrics whenever data is updated. Finally, a dashboard was implemented to display important metrics defined by the NSQHS Standards on blood management.
CONCLUSIONS This study prioritises streamlined data modeling and processing, in contrast to conventional dashboard-centric approaches. It ensures data readiness for decision-making tools, offering insights to clinicians and validating hospital compliance with national standards in blood management through efficient design.
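A warehouse schema of the kind described (a fact table of transfusion events surrounded by dimension tables, queried for NSQHS-style metrics) can be sketched with SQLite. The table and column names below are hypothetical, not Eastern Health's actual design, and the consent-documentation rate merely stands in for a real NSQHS metric.

```python
import sqlite3

# Minimal star-schema sketch: one dimension table, one fact table.
# All names and rows are illustrative assumptions.
db = sqlite3.connect(":memory:")
db.executescript("""
CREATE TABLE dim_patient (
    patient_id INTEGER PRIMARY KEY,
    sex TEXT);
CREATE TABLE fact_transfusion (
    id INTEGER PRIMARY KEY,
    patient_id INTEGER REFERENCES dim_patient(patient_id),
    consent_documented INTEGER,   -- 1 if required consent was recorded
    product TEXT);
INSERT INTO dim_patient VALUES (1, 'F'), (2, 'M');
INSERT INTO fact_transfusion VALUES
    (1, 1, 1, 'RBC'),
    (2, 1, 0, 'RBC'),
    (3, 2, 1, 'platelets');
""")

# Example compliance metric: share of transfusions with documented consent.
# Because the warehouse stores history, this can be recomputed at any time.
rate, = db.execute(
    "SELECT AVG(consent_documented) FROM fact_transfusion").fetchone()
print(round(rate, 2))  # 0.67
```

The analytical query is a one-liner here precisely because the schema was designed for aggregation, which is the paper's argument for transforming EMR data rather than querying it in place.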
Affiliation(s)
- David Cheng-Zarate: Faculty of Information Technology, Monash University, Melbourne, Australia
- Cathy Ngo: Eastern Health, Melbourne, Australia; Faculty of Medicine, Nursing and Health Sciences, Monash University, Melbourne, Australia
- Agnes Haryanto: Faculty of Information Technology, Monash University, Melbourne, Australia
- Gregory Duncan: Faculty of Medicine, Nursing and Health Sciences, Monash University, Melbourne, Australia
- David Taniar: Faculty of Information Technology, Monash University, Melbourne, Australia
- Michael Wybrow: Faculty of Information Technology, Monash University, Melbourne, Australia
5
Roecher E, Mösch L, Zweerings J, Thiele FO, Caspers S, Gaebler AJ, Eisner P, Sarkheil P, Mathiak K. Motion Artifact Detection for T1-Weighted Brain MR Images Using Convolutional Neural Networks. Int J Neural Syst 2024:2450052. PMID: 38989919. DOI: 10.1142/s0129065724500527.
Abstract
Quality assessment (QA) of magnetic resonance imaging (MRI) encompasses several factors such as noise, contrast, homogeneity, and imaging artifacts. Quality evaluation is often not standardized and relies on the expertise and vigilance of personnel, posing limitations especially with large datasets. Machine learning based on convolutional neural networks (CNNs) is a promising approach to address these challenges by performing automated inspection of MR images. In this study, a CNN for the detection of random head motion (RHM) artifacts in T1-weighted MRI, as one aspect of image quality, is proposed. A two-step approach first identified images exhibiting pronounced motion artifacts and then evaluated the feasibility of a more detailed three-class classification. The dataset consisted of 420 T1-weighted whole-brain image volumes with isotropic resolution. Human experts assigned each volume to one of three classes of artifact prominence. Results demonstrate an accuracy of 95% for the identification of images with a pronounced artifact load. The addition of an intermediate class retained an accuracy of 76%. The findings highlight the potential of CNN-based approaches to increase the efficiency of post-hoc QA in large datasets by flagging images with potentially relevant artifact loads for closer inspection.
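The two-step scheme described here (a binary screen for pronounced artifacts, then a finer three-class grading) reduces to a small piece of control flow. In the sketch below the two trained CNNs are replaced by random stand-ins, so only the routing logic, not any real model, is illustrated.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical stand-ins for the two trained CNNs (random outputs).
def cnn_binary(volume):
    """Step-1 model: probability of a pronounced motion artifact."""
    return float(rng.uniform())

def cnn_three_class(volume):
    """Step-2 model: probabilities for none / intermediate / pronounced."""
    p = rng.uniform(size=3)
    return p / p.sum()

def two_step_qc(volume, flag_at=0.5):
    """Step 1 flags clearly corrupted scans; step 2 grades the rest."""
    if cnn_binary(volume) >= flag_at:
        return "pronounced"
    grades = ["none", "intermediate", "pronounced"]
    return grades[int(np.argmax(cnn_three_class(volume)))]

labels = [two_step_qc(None) for _ in range(5)]
print(labels)
```

The design choice mirrors the paper's finding: the coarse binary decision is the reliable one (95% accuracy), so it runs first and only ambiguous scans reach the harder three-class stage.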
Affiliation(s)
- Erik Roecher: Department of Psychiatry, Psychotherapy and Psychosomatics, Faculty of Medicine, RWTH Aachen, Germany
- Lucas Mösch: Department of Psychiatry, Psychotherapy and Psychosomatics, Faculty of Medicine, RWTH Aachen, Germany
- Jana Zweerings: Department of Psychiatry, Psychotherapy and Psychosomatics, Faculty of Medicine, RWTH Aachen, Germany
- Svenja Caspers: Institute for Anatomy I, Medical Faculty & University Hospital Düsseldorf, Heinrich Heine University Düsseldorf, Düsseldorf, Germany; Institute of Neuroscience and Medicine (INM-1), Research Centre Jülich, Jülich, Germany
- Arnim Johannes Gaebler: Department of Psychiatry, Psychotherapy and Psychosomatics, Faculty of Medicine, RWTH Aachen, Germany; JARA-BRAIN, Jülich Aachen Research Alliance (JARA), Translational Brain Medicine, Germany; Institute of Neurophysiology, Faculty of Medicine, RWTH Aachen, Germany
- Patrick Eisner: Department of Psychiatry, Psychotherapy and Psychosomatics, Faculty of Medicine, RWTH Aachen, Germany
- Pegah Sarkheil: Department of Psychiatry, Psychotherapy and Psychosomatics, Faculty of Medicine, RWTH Aachen, Germany
- Klaus Mathiak: Department of Psychiatry, Psychotherapy and Psychosomatics, Faculty of Medicine, RWTH Aachen, Germany; JARA-BRAIN, Jülich Aachen Research Alliance (JARA), Translational Brain Medicine, Germany
6
Rouhi R, Niyoteka S, Carré A, Achkar S, Laurent PA, Ba MB, Veres C, Henry T, Vakalopoulou M, Sun R, Espenel S, Mrissa L, Laville A, Chargari C, Deutsch E, Robert C. Automatic gross tumor volume segmentation with failure detection for safe implementation in locally advanced cervical cancer. Phys Imaging Radiat Oncol 2024;30:100578. PMID: 38912007. PMCID: PMC11192799. DOI: 10.1016/j.phro.2024.100578.
Abstract
Background and Purpose Automatic segmentation methods have greatly changed the radiotherapy (RT) workflow, but still need to be extended to target volumes. In this paper, deep learning (DL) models were compared for gross tumor volume (GTV) segmentation in locally advanced cervical cancer (LACC), and a novel investigation into failure detection was introduced by utilizing radiomic features. Methods and Materials We trained eight DL models (UNet, VNet, SegResNet, SegResNetVAE) for 2D and 3D segmentation. Ensembling individually trained models during cross-validation generated the final segmentation. To detect failures, binary classifiers were trained using radiomic features extracted from the segmented GTVs as inputs, aiming to classify contours according to whether their Dice similarity coefficient (DSC) satisfies DSC < T or DSC ≥ T. Two distinct cohorts of T2-weighted (T2W) pre-RT MR images captured in 2D sequences were used: a retrospective cohort of 115 LACC patients from 30 scanners, and a prospective cohort of 51 patients from 7 scanners used for testing. Results Segmentation by 2D-SegResNet achieved the best DSC, surface DSC (SDSC3mm), and 95th-percentile Hausdorff distance (95HD): DSC = 0.72 ± 0.16, SDSC3mm = 0.66 ± 0.17, and 95HD = 14.6 ± 9.0 mm, with no missing segmentations (M = 0) on the test cohort. Failure detection with a logistic regression (LR) classifier achieved precision P = 0.88, recall R = 0.75, F1-score F = 0.81, and accuracy A = 0.86 on the test cohort with a threshold T = 0.67 on DSC values. Conclusions Our study revealed that segmentation accuracy varies slightly among DL methods, with 2D networks outperforming 3D networks on 2D MRI sequences. Doctors found the time-saving aspect advantageous. The proposed failure detection could guide doctors in sensitive cases.
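The failure-detection idea (a binary classifier on radiomic features predicting whether a contour's DSC falls below the threshold T) can be sketched with synthetic data. The features, labels, and plain gradient-descent logistic regression below are illustrative stand-ins, not the paper's radiomics pipeline.

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic stand-ins for radiomic feature vectors of segmented GTVs and
# the binary target "DSC < T" (T = 0.67 in the paper); 1 = failed contour.
X = rng.normal(size=(200, 5))
w_true = np.array([1.5, -2.0, 0.0, 0.7, 0.0])
y = (X @ w_true + 0.3 * rng.normal(size=200) > 0).astype(float)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Plain logistic regression trained by gradient descent (no dependencies).
w, b = np.zeros(5), 0.0
for _ in range(2000):
    p = sigmoid(X @ w + b)
    w -= 0.1 * X.T @ (p - y) / len(y)
    b -= 0.1 * float(np.mean(p - y))

acc = float(np.mean((sigmoid(X @ w + b) > 0.5) == y))
print(round(acc, 2))  # near-separable toy data, so accuracy is high
```

On real contours the interesting quantity is the recall on failed cases (the paper reports R = 0.75), since missed failures are what reach the clinician unflagged.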
Affiliation(s)
- Rahimeh Rouhi: Université Paris-Saclay, Institut Gustave Roussy, Inserm, Radiothérapie Moléculaire et Innovation Thérapeutique, 94800 Villejuif, France; Department of Radiation Oncology, Gustave Roussy Cancer Campus, Villejuif, France
- Stéphane Niyoteka: Université Paris-Saclay, Institut Gustave Roussy, Inserm, Radiothérapie Moléculaire et Innovation Thérapeutique, 94800 Villejuif, France; Department of Radiation Oncology, Gustave Roussy Cancer Campus, Villejuif, France
- Alexandre Carré: Université Paris-Saclay, Institut Gustave Roussy, Inserm, Radiothérapie Moléculaire et Innovation Thérapeutique, 94800 Villejuif, France; Department of Radiation Oncology, Gustave Roussy Cancer Campus, Villejuif, France
- Samir Achkar: Department of Radiation Oncology, Gustave Roussy Cancer Campus, Villejuif, France
- Pierre-Antoine Laurent: Université Paris-Saclay, Institut Gustave Roussy, Inserm, Radiothérapie Moléculaire et Innovation Thérapeutique, 94800 Villejuif, France; Department of Radiation Oncology, Gustave Roussy Cancer Campus, Villejuif, France
- Mouhamadou Bachir Ba: Department of Radiation Oncology, Gustave Roussy Cancer Campus, Villejuif, France; Radiotherapy Department of the University Hospital Center of Dalal Jamm, Guédiawaye, Senegal
- Cristina Veres: Université Paris-Saclay, Institut Gustave Roussy, Inserm, Radiothérapie Moléculaire et Innovation Thérapeutique, 94800 Villejuif, France; Department of Radiation Oncology, Gustave Roussy Cancer Campus, Villejuif, France
- Théophraste Henry: Université Paris-Saclay, Institut Gustave Roussy, Inserm, Radiothérapie Moléculaire et Innovation Thérapeutique, 94800 Villejuif, France; Department of Medical Imaging, Gustave Roussy Cancer Campus, Villejuif, France
- Maria Vakalopoulou: Laboratoire Mathématiques et Informatique pour la Complexité et les Systèmes, CentraleSupélec, Université Paris-Saclay, Gif-sur-Yvette, France
- Roger Sun: Université Paris-Saclay, Institut Gustave Roussy, Inserm, Radiothérapie Moléculaire et Innovation Thérapeutique, 94800 Villejuif, France; Department of Radiation Oncology, Gustave Roussy Cancer Campus, Villejuif, France
- Sophie Espenel: Department of Radiation Oncology, Gustave Roussy Cancer Campus, Villejuif, France
- Linda Mrissa: Department of Radiation Oncology, Gustave Roussy Cancer Campus, Villejuif, France
- Adrien Laville: Université Paris-Saclay, Institut Gustave Roussy, Inserm, Radiothérapie Moléculaire et Innovation Thérapeutique, 94800 Villejuif, France
- Cyrus Chargari: Université Paris-Saclay, Institut Gustave Roussy, Inserm, Radiothérapie Moléculaire et Innovation Thérapeutique, 94800 Villejuif, France; Department of Radiation Oncology, Gustave Roussy Cancer Campus, Villejuif, France
- Eric Deutsch: Université Paris-Saclay, Institut Gustave Roussy, Inserm, Radiothérapie Moléculaire et Innovation Thérapeutique, 94800 Villejuif, France; Department of Radiation Oncology, Gustave Roussy Cancer Campus, Villejuif, France
- Charlotte Robert: Université Paris-Saclay, Institut Gustave Roussy, Inserm, Radiothérapie Moléculaire et Innovation Thérapeutique, 94800 Villejuif, France; Department of Radiation Oncology, Gustave Roussy Cancer Campus, Villejuif, France
7
Bottani S, Thibeau-Sutre E, Maire A, Ströer S, Dormont D, Colliot O, Burgos N. Contrast-enhanced to non-contrast-enhanced image translation to exploit a clinical data warehouse of T1-weighted brain MRI. BMC Med Imaging 2024;24:67. PMID: 38504179. PMCID: PMC10953143. DOI: 10.1186/s12880-024-01242-3.
Abstract
BACKGROUND Clinical data warehouses provide access to massive amounts of medical images, but these images are often heterogeneous. They can, for instance, include images acquired with or without the injection of a gadolinium-based contrast agent. Harmonizing such data sets is thus fundamental to guarantee unbiased results, for example when performing differential diagnosis. Furthermore, classical neuroimaging software tools for feature extraction are typically applied only to images without gadolinium. The objective of this work is to evaluate how image translation can be used to exploit a highly heterogeneous data set containing both contrast-enhanced and non-contrast-enhanced images from a clinical data warehouse. METHODS We propose and compare different 3D U-Net and conditional GAN models to convert contrast-enhanced T1-weighted (T1ce) into non-contrast-enhanced (T1nce) brain MRI. These models were trained using 230 image pairs and tested on 77 image pairs from the clinical data warehouse of the Greater Paris area. RESULTS Validation using standard image similarity measures demonstrated that the similarity between real and synthetic T1nce images was higher than between real T1nce and T1ce images for all the models compared. The best-performing models were further validated on a segmentation task. We showed that tissue volumes extracted from synthetic T1nce images were closer to those of real T1nce images than volumes extracted from T1ce images. CONCLUSION We showed that deep learning models initially developed with research-quality data can synthesize T1nce from T1ce images of clinical quality, and that reliable features can be extracted from the synthetic images, demonstrating the ability of such methods to help exploit data sets coming from clinical data warehouses.
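The similarity-based validation in the RESULTS (synthetic T1nce closer to the real T1nce than the T1ce source is) can be illustrated with a toy PSNR comparison. PSNR is one standard image similarity measure; the abstract does not name the exact metrics used, and all volumes below are synthetic stand-ins.

```python
import numpy as np

def psnr(a, b, data_range=1.0):
    """Peak signal-to-noise ratio between two equally shaped volumes."""
    mse = np.mean((a - b) ** 2)
    return float("inf") if mse == 0 else 10 * np.log10(data_range**2 / mse)

rng = np.random.default_rng(3)
t1nce_real = rng.uniform(size=(8, 8, 8))   # toy "real" non-contrast volume
# A good translation model leaves only small residual error...
t1nce_syn = np.clip(t1nce_real + 0.01 * rng.normal(size=(8, 8, 8)), 0, 1)
# ...whereas the contrast-enhanced source differs much more.
t1ce = np.clip(t1nce_real + 0.20 * rng.normal(size=(8, 8, 8)), 0, 1)

# The paper's finding in miniature: synthetic T1nce scores higher
# similarity against real T1nce than the T1ce source does.
print(psnr(t1nce_real, t1nce_syn) > psnr(t1nce_real, t1ce))  # True
```

The same comparison logic extends to the paper's second validation: run the downstream tool (tissue segmentation) on both images and compare the extracted volumes rather than raw intensities.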
Affiliation(s)
- Simona Bottani: Sorbonne Université, Institut du Cerveau - Paris Brain Institute - ICM, CNRS, Inria, Inserm, AP-HP, Hôpital de la Pitié-Salpêtrière, Paris, 75013, France
- Elina Thibeau-Sutre: Sorbonne Université, Institut du Cerveau - Paris Brain Institute - ICM, CNRS, Inria, Inserm, AP-HP, Hôpital de la Pitié-Salpêtrière, Paris, 75013, France
- Aurélien Maire: Innovation & Données - Département des Services Numériques, AP-HP, Paris, 75013, France
- Sebastian Ströer: Hôpital Pitié Salpêtrière, Department of Neuroradiology, AP-HP, Paris, 75012, France
- Didier Dormont: Sorbonne Université, Institut du Cerveau - Paris Brain Institute - ICM, CNRS, Inria, Inserm, AP-HP, Hôpital de la Pitié-Salpêtrière, DMU DIAMENT, Paris, 75013, France
- Olivier Colliot: Sorbonne Université, Institut du Cerveau - Paris Brain Institute - ICM, CNRS, Inria, Inserm, AP-HP, Hôpital de la Pitié-Salpêtrière, Paris, 75013, France
- Ninon Burgos: Sorbonne Université, Institut du Cerveau - Paris Brain Institute - ICM, CNRS, Inria, Inserm, AP-HP, Hôpital de la Pitié-Salpêtrière, Paris, 75013, France
8
Thadikemalla VSG, Focke NK, Tummala S. A 3D Sparse Autoencoder for Fully Automated Quality Control of Affine Registrations in Big Data Brain MRI Studies. J Imaging Inform Med 2024;37:412-427. PMID: 38343221. PMCID: PMC10976877. DOI: 10.1007/s10278-023-00933-7.
Abstract
This paper presents a fully automated pipeline using a sparse convolutional autoencoder for quality control (QC) of affine registrations in large-scale T1-weighted (T1w) and T2-weighted (T2w) magnetic resonance imaging (MRI) studies. A customized 3D convolutional encoder-decoder (autoencoder) framework is proposed, and the network is trained in a fully unsupervised manner. For cross-validating the proposed model, we used 1000 correctly aligned MRI images of the Human Connectome Project Young Adult (HCP-YA) dataset. We proposed that the quality of the registration is proportional to the reconstruction error of the autoencoder. Further, to make the method applicable to unseen datasets, we proposed a dataset-specific optimal threshold calculation (using the reconstruction error) from ROC analysis, which requires a subset of correctly aligned images and artificially generated misalignments specific to that dataset. The calculated optimal threshold is then used to test the quality of the remaining affine registrations from the corresponding dataset. The proposed framework was tested on four unseen datasets: the Autism Brain Imaging Data Exchange (ABIDE I, 215 subjects), Information eXtraction from Images (IXI, 577 subjects), the Open Access Series of Imaging Studies (OASIS4, 646 subjects), and the "Food and Brain" study (77 subjects). The framework achieved excellent performance for T1w and T2w affine registrations, with an accuracy of 100% for HCP-YA. On the four unseen datasets, it obtained accuracies of 81.81% for ABIDE I (T1w only), 93.45% (T1w) and 81.75% (T2w) for OASIS4, 92.59% for the "Food and Brain" study (T1w only), and 88-97% for IXI (both T1w and T2w, stratified by scanner vendor and magnetic field strength). Moreover, the real failures from the "Food and Brain" and OASIS4 datasets were detected with sensitivities of 100% and 80% for T1w and T2w, respectively, and AUCs of >0.88 were obtained in all scenarios during threshold calculation on the four test sets.
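The core QC rule (score each registration by autoencoder reconstruction error, then flag those above a dataset-specific threshold) can be sketched with a linear stand-in for the 3D convolutional autoencoder. Data, dimensions, and the max-error threshold rule below are illustrative assumptions; the paper derives its threshold from ROC analysis instead.

```python
import numpy as np

rng = np.random.default_rng(4)

# Toy stand-in: correctly registered volumes (flattened) lie near a
# low-dimensional subspace; misaligned ones do not. All data synthetic.
basis = rng.normal(size=(3, 20))
aligned = rng.normal(size=(50, 3)) @ basis + 0.05 * rng.normal(size=(50, 20))
misaligned = rng.normal(size=(20, 20))

# Linear "autoencoder" fitted on aligned data only (unsupervised):
# project onto the top three principal components and reconstruct.
mu = aligned.mean(axis=0)
_, _, Vt = np.linalg.svd(aligned - mu, full_matrices=False)
P = Vt[:3].T @ Vt[:3]                    # rank-3 reconstruction operator

def recon_err(X):
    resid = (X - mu) - (X - mu) @ P      # reconstruction residual
    return np.mean(resid ** 2, axis=1)

# Dataset-specific threshold: worst error seen on known-good registrations.
thr = recon_err(aligned).max()
flagged = recon_err(misaligned) > thr
print(int(flagged.sum()), "of", len(flagged), "misalignments flagged")
```

Because the model is trained only on well-aligned data, anything it reconstructs poorly is by construction out-of-distribution, which is exactly why reconstruction error works as a QC score.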
Affiliation(s)
- Venkata Sainath Gupta Thadikemalla: Department of Electronics and Communication Engineering, Velagapudi Ramakrishna Siddhartha Engineering College, Vijayawada, Andhra Pradesh, India
- Niels K Focke: Clinic for Neurology, University Medical Center, Göttingen, Germany
- Sudhakar Tummala: Department of Electronics and Communication Engineering, School of Engineering and Sciences, SRM University-AP, Andhra Pradesh, India
9
Hendriks J, Mutsaerts HJ, Joules R, Peña-Nogales Ó, Rodrigues PR, Wolz R, Burchell GL, Barkhof F, Schrantee A. A systematic review of (semi-)automatic quality control of T1-weighted MRI scans. Neuroradiology 2024;66:31-42. PMID: 38047983. PMCID: PMC10761394. DOI: 10.1007/s00234-023-03256-0.
Abstract
PURPOSE Artifacts in magnetic resonance imaging (MRI) scans degrade image quality and thus negatively affect the outcome measures of clinical and research scanning. Considering the time-consuming and subjective nature of visual quality control (QC), multiple (semi-)automatic QC algorithms have been developed. This systematic review presents an overview of the available (semi-)automatic QC algorithms and software packages designed for raw, structural T1-weighted (T1w) MRI datasets. The objective of this review was to identify the differences among these algorithms in terms of their features of interest, performance, and benchmarks. METHODS We queried PubMed, EMBASE (Ovid), and Web of Science databases on the fifth of January 2023, and cross-checked reference lists of retrieved papers. Bias assessment was performed using PROBAST (Prediction model Risk Of Bias ASsessment Tool). RESULTS A total of 18 distinct algorithms were identified, demonstrating significant variations in methods, features, datasets, and benchmarks. The algorithms were categorized into rule-based, classical machine learning-based, and deep learning-based approaches. Numerous unique features were defined, which can be roughly divided into features capturing entropy, contrast, and normative measures. CONCLUSION Due to dataset-specific optimization, it is challenging to draw broad conclusions about comparative performance. Additionally, large variations exist in the used datasets and benchmarks, further hindering direct algorithm comparison. The findings emphasize the need for standardization and comparative studies for advancing QC in MR imaging. Efforts should focus on identifying a dataset-independent measure as well as algorithm-independent methods for assessing the relative performance of different approaches.
Collapse
Affiliation(s)
- Janine Hendriks
- Department of Radiology and Nuclear Medicine, Amsterdam UMC, Location VUmc, PK -1, De Boelelaan 1117, Amsterdam, 1081 HV, The Netherlands.
- Henk-Jan Mutsaerts
- Department of Radiology and Nuclear Medicine, Amsterdam UMC, Location VUmc, PK -1, De Boelelaan 1117, Amsterdam, 1081 HV, The Netherlands
- Robin Wolz
- IXICO Plc, London, EC1A 9PN, UK
- Imperial College London, London, SW7 2BX, UK
- George L Burchell
- Medical Library, Vrije Universiteit Amsterdam, Amsterdam, 1081 HV, The Netherlands
- Frederik Barkhof
- Department of Radiology and Nuclear Medicine, Amsterdam UMC, Location VUmc, PK -1, De Boelelaan 1117, Amsterdam, 1081 HV, The Netherlands
- Queen Square Institute of Neurology and Centre for Medical Image Computing, University College London, London, WC1N 3BG, UK
- Anouk Schrantee
- Department of Radiology and Nuclear Medicine, Amsterdam UMC, Location AMC, Amsterdam, 1105 AZ, The Netherlands
10
Bottani S, Burgos N, Maire A, Saracino D, Ströer S, Dormont D, Colliot O. Evaluation of MRI-based machine learning approaches for computer-aided diagnosis of dementia in a clinical data warehouse. Med Image Anal 2023; 89:102903. [PMID: 37523918 DOI: 10.1016/j.media.2023.102903] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/30/2022] [Revised: 06/01/2023] [Accepted: 07/12/2023] [Indexed: 08/02/2023]
Abstract
A variety of algorithms have been proposed for computer-aided diagnosis of dementia from anatomical brain MRI. These approaches achieve high accuracy when applied to research data sets but their performance on real-life clinical routine data has not been evaluated yet. The aim of this work was to study the performance of such approaches on clinical routine data, based on a hospital data warehouse, and to compare the results to those obtained on a research data set. The clinical data set was extracted from the hospital data warehouse of the Greater Paris area, which includes 39 different hospitals. The research set was composed of data from the Alzheimer's Disease Neuroimaging Initiative data set. In the clinical set, the population of interest was identified by exploiting the diagnostic codes from the 10th revision of the International Classification of Diseases that are assigned to each patient. We studied how the imbalance of the training sets, in terms of contrast agent injection and image quality, may bias the results. We demonstrated that computer-aided diagnosis performance was strongly biased upwards (over 17 percentage points of balanced accuracy) by the confounders of image quality and contrast agent injection, a phenomenon known as the Clever Hans effect or shortcut learning. When these biases were removed, the performance was very poor. In any case, the performance was considerably lower than on the research data set. Our study highlights that there are still considerable challenges for translating dementia computer-aided diagnosis systems to clinical routine.
Affiliation(s)
- Simona Bottani
- Sorbonne Université, Institut du Cerveau - Paris Brain Institute - ICM, CNRS, Inria, Inserm, AP-HP, Hôpital de la Pitié-Salpêtrière, Paris, 75013, France
- Ninon Burgos
- Sorbonne Université, Institut du Cerveau - Paris Brain Institute - ICM, CNRS, Inria, Inserm, AP-HP, Hôpital de la Pitié-Salpêtrière, Paris, 75013, France
- Dario Saracino
- Sorbonne Université, Institut du Cerveau - Paris Brain Institute - ICM, CNRS, Inria, Inserm, AP-HP, Hôpital de la Pitié-Salpêtrière, Paris, 75013, France; IM2A, Reference Centre for Rare or Early-Onset Dementias, Département de Neurologie, AP-HP, Hôpital de la Pitié Salpêtrière, Paris, 75013, France
- Sebastian Ströer
- AP-HP, Hôpital de la Pitié Salpêtrière, Department of Neuroradiology, Paris, 75013, France
- Didier Dormont
- AP-HP, Hôpital de la Pitié Salpêtrière, Department of Neuroradiology, Paris, 75013, France; Sorbonne Université, Institut du Cerveau - Paris Brain Institute - ICM, CNRS, Inria, Inserm, AP-HP, Hôpital de la Pitié-Salpêtrière, DMU DIAMENT, Paris, 75013, France
- Olivier Colliot
- Sorbonne Université, Institut du Cerveau - Paris Brain Institute - ICM, CNRS, Inria, Inserm, AP-HP, Hôpital de la Pitié-Salpêtrière, Paris, 75013, France.
11
Kushol R, Wilman AH, Kalra S, Yang YH. DSMRI: Domain Shift Analyzer for Multi-Center MRI Datasets. Diagnostics (Basel) 2023; 13:2947. [PMID: 37761314 PMCID: PMC10527875 DOI: 10.3390/diagnostics13182947] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/21/2023] [Revised: 09/05/2023] [Accepted: 09/12/2023] [Indexed: 09/29/2023] Open
Abstract
In medical research and clinical applications, the utilization of MRI datasets from multiple centers has become increasingly prevalent. However, inherent variability between these centers presents challenges due to domain shift, which can impact the quality and reliability of the analysis. Regrettably, the absence of adequate tools for domain shift analysis hinders the development and validation of domain adaptation and harmonization techniques. To address this issue, this paper presents a novel Domain Shift analyzer for MRI (DSMRI) framework designed explicitly for domain shift analysis in multi-center MRI datasets. The proposed model assesses the degree of domain shift within an MRI dataset by leveraging various MRI-quality-related metrics derived from the spatial domain. DSMRI also incorporates features from the frequency domain to capture low- and high-frequency information about the image. It further includes the wavelet domain features by effectively measuring the sparsity and energy present in the wavelet coefficients. Furthermore, DSMRI introduces several texture features, thereby enhancing the robustness of the domain shift analysis process. The proposed framework includes visualization techniques such as t-SNE and UMAP to demonstrate that similar data are grouped closely while dissimilar data are in separate clusters. Additionally, quantitative analysis is used to measure the domain shift distance, domain classification accuracy, and the ranking of significant features. The effectiveness of the proposed approach is demonstrated using experimental evaluations on seven large-scale multi-site neuroimaging datasets.
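The frequency-domain features described above can be illustrated with a plain FFT: smooth images concentrate spectral energy at low frequencies, while noise spreads it across the spectrum. A sketch under assumptions (the `cutoff` fraction, the radial normalisation, and the toy images are illustrative, not DSMRI's actual feature definitions):

```python
import numpy as np

def frequency_energy_split(img, cutoff=0.25):
    """Fraction of spectral power in the low vs high frequency band.
    `cutoff` is a fraction of the (normalised) Nyquist radius."""
    F = np.fft.fftshift(np.fft.fft2(img))
    power = np.abs(F) ** 2
    h, w = img.shape
    yy, xx = np.mgrid[0:h, 0:w]
    cy, cx = h / 2, w / 2
    r = np.sqrt(((yy - cy) / cy) ** 2 + ((xx - cx) / cx) ** 2)
    low = power[r <= cutoff].sum()
    high = power[r > cutoff].sum()
    total = low + high
    return float(low / total), float(high / total)

rng = np.random.default_rng(1)
smooth = np.outer(np.hanning(64), np.hanning(64))  # slowly varying image
noise = rng.normal(size=(64, 64))                  # broadband noise

low_s, high_s = frequency_energy_split(smooth)
low_n, high_n = frequency_energy_split(noise)
# Smooth image: low_s close to 1; noise: energy spread, so high_n >> high_s.
```

Per-image fractions like these, computed across a multi-site dataset, are the kind of scalar features that can then be embedded with t-SNE or UMAP to visualise site clusters.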
Affiliation(s)
- Rafsanjany Kushol
- Department of Computing Science, University of Alberta, Edmonton, AB T6G 2R3, Canada
- Alan H. Wilman
- Departments of Radiology and Diagnostic Imaging and Biomedical Engineering, University of Alberta, Edmonton, AB T6G 2R3, Canada
- Sanjay Kalra
- Department of Computing Science, University of Alberta, Edmonton, AB T6G 2R3, Canada
- Division of Neurology, Department of Medicine, University of Alberta, Edmonton, AB T6G 2R3, Canada
- Yee-Hong Yang
- Department of Computing Science, University of Alberta, Edmonton, AB T6G 2R3, Canada
12
Vakli P, Weiss B, Szalma J, Barsi P, Gyuricza I, Kemenczky P, Somogyi E, Nárai Á, Gál V, Hermann P, Vidnyánszky Z. Automatic brain MRI motion artifact detection based on end-to-end deep learning is similarly effective as traditional machine learning trained on image quality metrics. Med Image Anal 2023; 88:102850. [PMID: 37263108 DOI: 10.1016/j.media.2023.102850] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/20/2022] [Revised: 04/28/2023] [Accepted: 05/22/2023] [Indexed: 06/03/2023]
Abstract
Head motion artifacts in magnetic resonance imaging (MRI) are an important confounding factor concerning brain research as well as clinical practice. For this reason, several machine learning-based methods have been developed for the automatic quality control of structural MRI scans. Deep learning offers a promising solution to this problem, however, given its data-hungry nature and the scarcity of expert-annotated datasets, its advantage over traditional machine learning methods in identifying motion-corrupted brain scans is yet to be determined. In the present study, we investigated the relative advantage of the two methods in structural MRI quality control. To this end, we collected publicly available T1-weighted images and scanned subjects in our own lab under conventional and active head motion conditions. The quality of the images was rated by a team of radiologists from the point of view of clinical diagnostic use. We present a relatively simple, lightweight 3D convolutional neural network trained in an end-to-end manner that achieved a test set (N = 411) balanced accuracy of 94.41% in classifying brain scans into clinically usable or unusable categories. A support vector machine trained on image quality metrics achieved a balanced accuracy of 88.44% on the same test set. Statistical comparison of the two models yielded no significant difference in terms of confusion matrices, error rates, or receiver operating characteristic curves. Our results suggest that these machine learning methods are similarly effective in identifying severe motion artifacts in brain MRI scans, and underline the efficacy of end-to-end deep learning-based systems in brain MRI quality control, allowing the rapid evaluation of diagnostic utility without the need for elaborate image pre-processing.
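Both models above are scored with balanced accuracy, which, unlike plain accuracy, is not inflated when one class (here, usable scans) dominates. A minimal sketch (the binary label coding and the always-"usable" baseline are illustrative):

```python
import numpy as np

def balanced_accuracy(y_true, y_pred):
    """Mean of sensitivity and specificity (binary labels: 1 = unusable)."""
    y_true = np.asarray(y_true)
    y_pred = np.asarray(y_pred)
    sensitivity = np.mean(y_pred[y_true == 1] == 1)
    specificity = np.mean(y_pred[y_true == 0] == 0)
    return float((sensitivity + specificity) / 2)

# Imbalanced test set: 9 usable scans, 1 unusable scan.
y_true = [0] * 9 + [1]
y_pred = [0] * 10  # a degenerate classifier that calls everything usable

print(balanced_accuracy(y_true, y_pred))  # 0.5, although plain accuracy is 0.9
```

This is why a 94.41% vs 88.44% balanced-accuracy gap is meaningful even on a quality-control test set where unusable scans are the minority.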
Affiliation(s)
- Pál Vakli
- Brain Imaging Centre, Research Centre for Natural Sciences, Budapest 1117, Hungary.
- Béla Weiss
- Brain Imaging Centre, Research Centre for Natural Sciences, Budapest 1117, Hungary.
- János Szalma
- Brain Imaging Centre, Research Centre for Natural Sciences, Budapest 1117, Hungary
- Péter Barsi
- Brain Imaging Centre, Research Centre for Natural Sciences, Budapest 1117, Hungary
- István Gyuricza
- Brain Imaging Centre, Research Centre for Natural Sciences, Budapest 1117, Hungary
- Péter Kemenczky
- Brain Imaging Centre, Research Centre for Natural Sciences, Budapest 1117, Hungary
- Eszter Somogyi
- Brain Imaging Centre, Research Centre for Natural Sciences, Budapest 1117, Hungary
- Ádám Nárai
- Brain Imaging Centre, Research Centre for Natural Sciences, Budapest 1117, Hungary
- Viktor Gál
- Brain Imaging Centre, Research Centre for Natural Sciences, Budapest 1117, Hungary
- Petra Hermann
- Brain Imaging Centre, Research Centre for Natural Sciences, Budapest 1117, Hungary
- Zoltán Vidnyánszky
- Brain Imaging Centre, Research Centre for Natural Sciences, Budapest 1117, Hungary.
13
Zhang H, Liu Y, Wang Y, Ma Y, Niu N, Jing H, Huo L. Deep learning model for automatic image quality assessment in PET. BMC Med Imaging 2023; 23:75. [PMID: 37277706 DOI: 10.1186/s12880-023-01017-2] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/06/2022] [Accepted: 04/27/2023] [Indexed: 06/07/2023] Open
Abstract
BACKGROUND A variety of external factors might seriously degrade PET image quality and lead to inconsistent results. The aim of this study is to explore a potential PET image quality assessment (QA) method with deep learning (DL). METHODS A total of 89 PET images were acquired from Peking Union Medical College Hospital (PUMCH) in China in this study. Ground-truth quality for images was assessed by two senior radiologists and classified into five grades (grade 1, grade 2, grade 3, grade 4, and grade 5), with grade 5 being the best image quality. After preprocessing, the Dense Convolutional Network (DenseNet) was trained to automatically recognize optimal- and poor-quality PET images. Accuracy (ACC), sensitivity, specificity, receiver operating characteristic curve (ROC), and area under the ROC curve (AUC) were used to evaluate the diagnostic properties of all models. All model indicators were assessed using fivefold cross-validation. An image QA tool was developed based on our deep learning model; a PET QA report can be automatically obtained after inputting PET images. RESULTS Four tasks were generated. Task 2 showed the worst performance in AUC, ACC, specificity, and sensitivity among the four tasks; task 1 showed unstable performance between training and testing; and task 3 showed low specificity in both training and testing. Task 4 showed the best diagnostic properties and discriminative performance between poor-quality (grade 1, grade 2) and good-quality (grade 3, grade 4, grade 5) images. The automated quality assessment of task 4 achieved ACC = 0.77, specificity = 0.71, and sensitivity = 0.83 in the training set, and ACC = 0.85, specificity = 0.79, and sensitivity = 0.91 in the test set. The ROC of task 4 had an AUC of 0.86 in the training set and 0.91 in the test set. The image QA tool could output basic information about the images, scan and reconstruction parameters, typical instances of PET images, and the deep learning score.
CONCLUSIONS This study highlights the feasibility of the assessment of image quality in PET images using a deep learning model, which may assist with accelerating clinical research by reliably assessing image quality.
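Task 4's grouping of the five grades into a binary poor/good label, plus the fivefold splitting used for evaluation, can be sketched as follows (the grade vector, seed, and helper names are illustrative, not the authors' code):

```python
import numpy as np

def binarize_grades(grades, poor_max=2):
    """Task-4-style grouping: grades 1-2 -> poor (1), grades 3-5 -> good (0)."""
    return np.asarray([1 if g <= poor_max else 0 for g in grades])

def fivefold_indices(n, seed=0):
    """Shuffled sample indices split into five roughly equal folds."""
    idx = np.random.default_rng(seed).permutation(n)
    return np.array_split(idx, 5)

labels = binarize_grades([1, 2, 3, 4, 5, 3, 2, 4])
folds = fivefold_indices(len(labels))

print(binarize_grades([1, 2, 3, 4, 5]).tolist())  # [1, 1, 0, 0, 0]
# Each sample appears in exactly one fold; each fold serves once as test set.
```

With only 89 images, this kind of cross-validation is what makes the reported training/test metrics interpretable.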
Affiliation(s)
- Haiqiong Zhang
- Department of Nuclear Medicine, State Key Laboratory of Complex Severe and Rare Diseases, Beijing Key Laboratory of Molecular Targeted Diagnosis and Therapy in Nuclear Medicine, Peking Union Medical College Hospital, Chinese Academy of Medical Sciences, Beijing, 100730, China
- Medical Science Research Center, Peking Union Medical College Hospital, Chinese Academy of Medical Sciences, Beijing, 100730, China
- Yu Liu
- Department of Nuclear Medicine, State Key Laboratory of Complex Severe and Rare Diseases, Beijing Key Laboratory of Molecular Targeted Diagnosis and Therapy in Nuclear Medicine, Peking Union Medical College Hospital, Chinese Academy of Medical Sciences, Beijing, 100730, China
- Yanmei Wang
- GE Healthcare China, Shanghai, 200040, China
- Yanru Ma
- Department of Nuclear Medicine, State Key Laboratory of Complex Severe and Rare Diseases, Beijing Key Laboratory of Molecular Targeted Diagnosis and Therapy in Nuclear Medicine, Peking Union Medical College Hospital, Chinese Academy of Medical Sciences, Beijing, 100730, China
- Na Niu
- Department of Nuclear Medicine, State Key Laboratory of Complex Severe and Rare Diseases, Beijing Key Laboratory of Molecular Targeted Diagnosis and Therapy in Nuclear Medicine, Peking Union Medical College Hospital, Chinese Academy of Medical Sciences, Beijing, 100730, China
- Hongli Jing
- Department of Nuclear Medicine, State Key Laboratory of Complex Severe and Rare Diseases, Beijing Key Laboratory of Molecular Targeted Diagnosis and Therapy in Nuclear Medicine, Peking Union Medical College Hospital, Chinese Academy of Medical Sciences, Beijing, 100730, China
- Li Huo
- Department of Nuclear Medicine, State Key Laboratory of Complex Severe and Rare Diseases, Beijing Key Laboratory of Molecular Targeted Diagnosis and Therapy in Nuclear Medicine, Peking Union Medical College Hospital, Chinese Academy of Medical Sciences, Beijing, 100730, China.
14
Liu C, Huang F, Qiu A. Monte Carlo Ensemble Neural Network for the diagnosis of Alzheimer's disease. Neural Netw 2023; 159:14-24. [PMID: 36525914 DOI: 10.1016/j.neunet.2022.10.032] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/07/2022] [Revised: 10/13/2022] [Accepted: 10/31/2022] [Indexed: 11/25/2022]
Abstract
Convolutional neural networks (CNNs) have been increasingly used in the computer-aided diagnosis of Alzheimer's Disease (AD). This study takes advantage of the fast computation of 2D-slice CNNs and of ensemble approaches to develop a Monte Carlo Ensemble Neural Network (MCENN) by introducing Monte Carlo sampling and an ensemble neural network integrated with ResNet50. Our goals are to improve 2D-slice CNN performance and to design an MCENN model insensitive to image resolution. Unlike traditional ensemble approaches with multiple base learners, our MCENN model incorporates one neural network learner and generates a large number of possible classification decisions via Monte Carlo sampling of feature importance within the combined slices. This can overcome the main weakness of 2D-slice CNNs, the lack of 3D brain anatomical information, and yields a neural network that learns the 3D relevance of features across multiple slices. Brain images from the Alzheimer's Disease Neuroimaging Initiative (ADNI, 7199 scans), the Open Access Series of Imaging Studies-3 (OASIS-3, 1992 scans), and a clinical sample (239 scans) are used to evaluate the performance of the MCENN model for the classification of cognitively normal (CN) individuals, patients with mild cognitive impairment (MCI), and patients with AD. Our MCENN with a small number of slices and minimal image processing (rigid transformation, intensity normalization, skull stripping) achieves an AD classification accuracy of 90%, better than existing 2D-slice CNNs (accuracy: 63%∼84%) and 3D CNNs (accuracy: 74%∼88%). Furthermore, the MCENN is robust when trained on the ADNI dataset and applied to the OASIS-3 dataset and the clinical sample. Our experiments show that the AD classification accuracy of the MCENN model is comparable when using high- and low-resolution brain images, suggesting the insensitivity of the MCENN to image resolution.
Hence, the MCENN does not require high-resolution 3D brain structural images and comprehensive image processing, which supports its potential use in a clinical setting.
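The core idea, a single trained learner whose predictions are averaged over many Monte Carlo feature subsets rather than an ensemble of separately trained base learners, can be sketched abstractly (the masking scheme, keep rate, and toy learner are illustrative assumptions, not the MCENN's actual sampling of feature importance):

```python
import numpy as np

def mc_ensemble_predict(predict_fn, x, n_samples=50, keep=0.7, seed=0):
    """Average one learner's outputs over many random feature masks.

    Only `predict_fn` is trained; the ensemble diversity comes from
    the Monte Carlo masks, not from multiple base learners.
    """
    rng = np.random.default_rng(seed)
    d = x.shape[0]
    probs = []
    for _ in range(n_samples):
        mask = rng.random(d) < keep  # randomly drop ~30% of features
        probs.append(predict_fn(x * mask))
    return float(np.mean(probs))

# Toy learner: class probability rises with the mean of the (masked) features.
def toy_learner(v):
    return 1.0 / (1.0 + np.exp(-v.mean()))

x = np.ones(100)
p = mc_ensemble_predict(toy_learner, x)
# p is close to 1/(1+exp(-0.7)) since ~70% of features survive each mask
```

Averaging over many cheap stochastic passes is what lets a single 2D-slice learner approximate an ensemble's variance reduction without training several networks.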
Affiliation(s)
- Chaoqiang Liu
- Department of Biomedical Engineering, National University of Singapore, Singapore
- Fei Huang
- School of Computer Engineering and Science, Shanghai University, China
- Anqi Qiu
- Department of Biomedical Engineering, National University of Singapore, Singapore; NUS (Suzhou) Research Institute, National University of Singapore, China; School of Computer Engineering and Science, Shanghai University, China; Institute of Data Science, National University of Singapore, Singapore; The N.1 Institute for Health, National University of Singapore, Singapore; The Johns Hopkins University, MD, USA.