1. Zotova D, Pinon N, Trombetta R, Bouet R, Jung J, Lartizien C. GAN-based synthetic FDG PET images from T1 brain MRI can serve to improve performance of deep unsupervised anomaly detection models. Computer Methods and Programs in Biomedicine 2025; 265:108727. [PMID: 40187100] [DOI: 10.1016/j.cmpb.2025.108727]
Abstract
BACKGROUND AND OBJECTIVE Research in the cross-modal medical image translation domain has been very productive over the past few years in tackling the scarce availability of large curated multi-modality datasets, with promising performance from GAN-based architectures. However, only a few of these studies have assessed the task-based performance of these synthetic data, especially for the training of deep models. METHODS We design and compare different GAN-based frameworks for generating synthetic brain [18F]fluorodeoxyglucose (FDG) PET images from T1-weighted MRI data. We first perform standard qualitative and quantitative visual quality evaluation. Then, we further explore the impact of using these fake PET data in the training of a deep unsupervised anomaly detection (UAD) model designed to detect subtle epilepsy lesions in T1 MRI and FDG PET images. We introduce novel diagnostic task-oriented quality metrics of the synthetic FDG PET data tailored to our unsupervised detection task, then use these fake data to train a use-case UAD model combining deep representation learning based on siamese autoencoders with an OC-SVM density support estimation model. This model is trained on normal subjects only and allows the detection of any deviation from the pattern of the normal population. We compare the detection performance of models trained on 35 real T1 MR images of normal subjects paired either with the 35 true PET images or with 35 synthetic PET images generated by the best-performing generative models. Performance analysis is conducted on 17 exams of epilepsy patients undergoing surgery. RESULTS The best-performing GAN-based models generate realistic fake PET images of control subjects, with SSIM and PSNR values around 0.9 and 23.8, respectively, that are in distribution (ID) with regard to the true control dataset. The best UAD model trained on these synthetic normative PET data reaches 74% sensitivity. CONCLUSION Our results confirm that GAN-based models are the best suited for MR T1 to FDG PET translation, outperforming transformer or diffusion models. We also demonstrate the diagnostic value of these synthetic data for the training of UAD models and evaluation on clinical exams of epilepsy patients. Our code and the normative image dataset are available.
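The SSIM/PSNR visual-quality check reported here is straightforward to reproduce on any pair of co-registered real and synthetic volumes; a minimal sketch, assuming the volumes are available as NumPy arrays (function and variable names are illustrative, not taken from the authors' released code):

```python
# Minimal sketch: the SSIM/PSNR image-quality metrics cited above (~0.9 / ~23.8),
# computed on co-registered real vs synthetic PET volumes. Names are illustrative.
import numpy as np
from skimage.metrics import structural_similarity, peak_signal_noise_ratio

def pet_quality(real: np.ndarray, fake: np.ndarray) -> tuple[float, float]:
    """Return (SSIM, PSNR) for a pair of co-registered 3D volumes."""
    rng = float(real.max() - real.min())  # dynamic range of the reference volume
    ssim = structural_similarity(real, fake, data_range=rng)
    psnr = peak_signal_noise_ratio(real, fake, data_range=rng)
    return ssim, psnr

# Toy usage with random volumes standing in for co-registered PET data.
real = np.random.rand(64, 64, 64).astype(np.float32)
fake = real + 0.05 * np.random.randn(64, 64, 64).astype(np.float32)
print(pet_quality(real, fake))
```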
Affiliation(s)
- Daria Zotova
- INSA Lyon, Université Claude Bernard Lyon 1, CNRS, Inserm, CREATIS UMR 5220, U1294, Lyon, F-69621, France
- Nicolas Pinon
- INSA Lyon, Université Claude Bernard Lyon 1, CNRS, Inserm, CREATIS UMR 5220, U1294, Lyon, F-69621, France
- Robin Trombetta
- INSA Lyon, Université Claude Bernard Lyon 1, CNRS, Inserm, CREATIS UMR 5220, U1294, Lyon, F-69621, France
- Romain Bouet
- Lyon Neuroscience Research Center, INSERM U1028, CNRS UMR5292, Univ Lyon 1, Bron, 69500, France
- Julien Jung
- Lyon Neuroscience Research Center, INSERM U1028, CNRS UMR5292, Univ Lyon 1, Bron, 69500, France
- Carole Lartizien
- INSA Lyon, Université Claude Bernard Lyon 1, CNRS, Inserm, CREATIS UMR 5220, U1294, Lyon, F-69621, France
2. Tagawa H, Fushimi Y, Fujimoto K, Nakajima S, Okuchi S, Sakata A, Otani S, Wicaksono KP, Wang Y, Ikeda S, Ito S, Umehana M, Shimotake A, Kuzuya A, Nakamoto Y. Generation of high-resolution MPRAGE-like images from 3D head MRI localizer (AutoAlign Head) images using a deep learning-based model. Jpn J Radiol 2025; 43:761-769. [PMID: 39794660] [DOI: 10.1007/s11604-024-01728-8]
Abstract
PURPOSE Magnetization prepared rapid gradient echo (MPRAGE) is a useful three-dimensional (3D) T1-weighted sequence, but is not a priority in routine brain examinations. We hypothesized that converting 3D MRI localizer (AutoAlign Head) images to MPRAGE-like images with deep learning (DL) would be beneficial for diagnosing and researching dementia and neurodegenerative diseases. We aimed to establish and evaluate a DL-based model for generating MPRAGE-like images from MRI localizers. MATERIALS AND METHODS Brain MRI examinations including MPRAGE taken at a single institution for investigation of mild cognitive impairment, dementia, and epilepsy between January 2020 and December 2022 were included retrospectively. Images taken in 2020 or 2021 were assigned to the training and validation datasets, and images from 2022 were used for the test dataset. Using the training and validation sets, we selected one model based on visual evaluation by radiologists with reference to the image quality metrics peak signal-to-noise ratio (PSNR), structural similarity index measure (SSIM), and Learned Perceptual Image Patch Similarity (LPIPS). The test dataset was evaluated by visual assessment and quality metrics. Voxel-based morphometric analysis was also performed: we evaluated Dice scores, and volume differences between generated and original images for major structures were calculated as absolute symmetrized percent change. RESULTS Training, validation, and test datasets comprised 340 patients (mean age, 56.1 ± 24.4 years; 195 women), 36 patients (67.3 ± 18.3 years; 20 women), and 193 patients (59.5 ± 24.4 years; 111 women), respectively. The test dataset showed: PSNR, 35.4 ± 4.91; SSIM, 0.871 ± 0.058; and LPIPS, 0.045 ± 0.017. No overfitting was observed. Dice scores for the segmentation of main structures ranged from 0.788 (left amygdala) to 0.926 (left ventricle). Quadratic weighted Cohen kappa values of visual scores for the medial temporal lobe between original and generated images were 0.80-0.88. CONCLUSION Images generated using our DL-based model can be used for post-processing and visual evaluation of medial temporal lobe atrophy.
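The volume-agreement measure used here, absolute symmetrized percent change, has a simple closed form; a minimal sketch with toy volumes rather than the study's measurements:

```python
# Minimal sketch of the absolute symmetrized percent change (ASPC) used above to
# compare structure volumes between original and generated images. The formula
# 200*|v1 - v2|/(v1 + v2) is the standard definition; the inputs are toy values.
def aspc(vol_original: float, vol_generated: float) -> float:
    return 200.0 * abs(vol_original - vol_generated) / (vol_original + vol_generated)

# e.g., a 3.8 mL structure in the original vs 3.6 mL in the generated image
print(f"{aspc(3.8, 3.6):.2f}%")  # ~5.41%
```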
Affiliation(s)
- Hiroshi Tagawa
- Department of Diagnostic Imaging and Nuclear Medicine, Graduate School of Medicine, Kyoto University, 54 Shogoin Kawahara-Cho, Sakyo-Ku, Kyoto, 606-8507, Japan
- Yasutaka Fushimi
- Department of Diagnostic Imaging and Nuclear Medicine, Graduate School of Medicine, Kyoto University, 54 Shogoin Kawahara-Cho, Sakyo-Ku, Kyoto, 606-8507, Japan
- Koji Fujimoto
- Department of Advanced Imaging in Medical Magnetic Resonance, Graduate School of Medicine, Kyoto University, Kyoto, 606-8507, Japan
- Satoshi Nakajima
- Department of Diagnostic Imaging and Nuclear Medicine, Graduate School of Medicine, Kyoto University, 54 Shogoin Kawahara-Cho, Sakyo-Ku, Kyoto, 606-8507, Japan
- Sachi Okuchi
- Department of Diagnostic Imaging and Nuclear Medicine, Graduate School of Medicine, Kyoto University, 54 Shogoin Kawahara-Cho, Sakyo-Ku, Kyoto, 606-8507, Japan
- Akihiko Sakata
- Department of Diagnostic Imaging and Nuclear Medicine, Graduate School of Medicine, Kyoto University, 54 Shogoin Kawahara-Cho, Sakyo-Ku, Kyoto, 606-8507, Japan
- Sayo Otani
- Department of Diagnostic Imaging and Nuclear Medicine, Graduate School of Medicine, Kyoto University, 54 Shogoin Kawahara-Cho, Sakyo-Ku, Kyoto, 606-8507, Japan
- Yang Wang
- Department of Diagnostic Imaging and Nuclear Medicine, Graduate School of Medicine, Kyoto University, 54 Shogoin Kawahara-Cho, Sakyo-Ku, Kyoto, 606-8507, Japan
- Satoshi Ikeda
- Department of Diagnostic Imaging and Nuclear Medicine, Graduate School of Medicine, Kyoto University, 54 Shogoin Kawahara-Cho, Sakyo-Ku, Kyoto, 606-8507, Japan
- Shuichi Ito
- Department of Diagnostic Imaging and Nuclear Medicine, Graduate School of Medicine, Kyoto University, 54 Shogoin Kawahara-Cho, Sakyo-Ku, Kyoto, 606-8507, Japan
- Masaki Umehana
- Department of Diagnostic Imaging and Nuclear Medicine, Graduate School of Medicine, Kyoto University, 54 Shogoin Kawahara-Cho, Sakyo-Ku, Kyoto, 606-8507, Japan
- Akira Kuzuya
- Department of Neurology, Graduate School of Medicine, Kyoto University, Kyoto, 606-8507, Japan
- Yuji Nakamoto
- Department of Diagnostic Imaging and Nuclear Medicine, Graduate School of Medicine, Kyoto University, 54 Shogoin Kawahara-Cho, Sakyo-Ku, Kyoto, 606-8507, Japan
3. Currie GM, Hawk KE, Rohren EM. Generative Artificial Intelligence Biases, Limitations and Risks in Nuclear Medicine: An Argument for Appropriate Use Framework and Recommendations. Semin Nucl Med 2025; 55:423-436. [PMID: 38851934] [DOI: 10.1053/j.semnuclmed.2024.05.005]
Abstract
Generative artificial intelligence (AI) algorithms for both text-to-text and text-to-image applications have seen rapid and widespread adoption in the general and medical communities. While limitations of generative AI have been widely reported, there remain valuable applications in patient and professional communities. Here, the limitations and biases of both text-to-text and text-to-image generative AI are explored using purported applications in medical imaging as case examples. A direct comparison of the capabilities of four common text-to-image generative AI algorithms is reported, and a recommendation of DALL-E 3 as the most appropriate for use is justified. The risks of use and biases are outlined, and appropriate-use guidelines are framed for generative AI in nuclear medicine. Generative AI text-to-text and text-to-image generation includes inherent biases, particularly around gender and ethnicity, that could misrepresent nuclear medicine. The assimilation of generative AI tools into medical education, image interpretation, patient education, health promotion, and marketing in nuclear medicine risks propagating errors and amplifying biases. Mitigation strategies should reside inside appropriate use criteria and minimum standards for quality and professionalism for the application of generative AI in nuclear medicine.
Affiliation(s)
- Geoffrey M Currie
- School of Dentistry and Medical Sciences, Charles Sturt University, Wagga Wagga, Australia; Dept of Radiology, Baylor College of Medicine, Houston.
- K Elizabeth Hawk
- School of Dentistry and Medical Sciences, Charles Sturt University, Wagga Wagga, Australia; Dept of Radiology, Stanford University, Stanford
- Eric M Rohren
- School of Dentistry and Medical Sciences, Charles Sturt University, Wagga Wagga, Australia; Dept of Radiology, Baylor College of Medicine, Houston
4. Solak M, Tören M, Asan B, Kaba E, Beyazal M, Çeliker FB. Generative Adversarial Network Based Contrast Enhancement: Synthetic Contrast Brain Magnetic Resonance Imaging. Acad Radiol 2025; 32:2220-2232. [PMID: 39694785] [DOI: 10.1016/j.acra.2024.11.021]
Abstract
RATIONALE AND OBJECTIVES Magnetic resonance imaging (MRI) is a vital tool for diagnosing neurological disorders, frequently utilising gadolinium-based contrast agents (GBCAs) to enhance resolution and specificity. However, GBCAs present certain risks, including side effects, increased costs, and repeated exposure. This study proposes an innovative approach using generative adversarial networks (GANs) for virtual contrast enhancement in brain MRI, with the aim of reducing or eliminating GBCAs, minimising associated risks, and enhancing imaging efficiency while preserving diagnostic quality. MATERIAL AND METHODS In this study, 10,235 images were acquired on a 3.0 Tesla MRI scanner from 81 participants (54 females, 27 males; mean age 35 years, range 19-68 years). T1-weighted and contrast-enhanced images were obtained following the administration of a standard dose of a GBCA. To generate synthetic contrast-enhanced T1-weighted images, a CycleGAN model, a variant of the GAN architecture, was trained to process pre- and post-contrast images. The dataset was divided into three subsets: 80% for training, 10% for validation, and 10% for testing. TensorBoard was employed to monitor for image deterioration throughout the training phase, and the image processing and training procedures were optimised. The radiologists were presented with a non-contrast input image and asked to choose between the real contrast-enhanced image and the synthetic MR image generated by CycleGAN for that non-contrast image (visual Turing test). RESULTS The performance of the CycleGAN model was evaluated using a combination of quantitative and qualitative analyses. For the entire dataset, in the test set, the mean square error (MSE) was 0.0038, while the structural similarity index (SSIM) was 0.58. Among the submodels, the most successful achieved an MSE of 0.0053 with an SSIM of 0.8. The qualitative evaluation was validated through a visual Turing test conducted by four radiologists with varying levels of clinical experience. CONCLUSION The findings of this study support the efficacy of the CycleGAN model in generating synthetic contrast-enhanced T1-weighted brain MR images. Both quantitative and qualitative evaluations demonstrated excellent performance, confirming the model's ability to produce realistic synthetic images. This method shows promise in potentially eliminating the need for intravenous contrast agents, thereby minimising the associated risks of their use.
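For orientation, the core of a CycleGAN objective like the one used here combines adversarial losses with a cycle-consistency term; the sketch below is a generic single-step illustration (module names and the cycle weight are assumptions, not the authors' configuration):

```python
# Hedged sketch of one CycleGAN generator step for pre-contrast (A) <->
# contrast-enhanced (B) T1 images. G_ab/G_ba are any image-to-image generators,
# D_a/D_b any patch discriminators; lambda_cyc = 10.0 is an illustrative default.
import torch
import torch.nn.functional as F

def cyclegan_generator_loss(G_ab, G_ba, D_a, D_b, real_a, real_b, lambda_cyc=10.0):
    fake_b = G_ab(real_a)            # synthetic contrast-enhanced image
    fake_a = G_ba(real_b)            # synthetic pre-contrast image
    # least-squares adversarial losses (LSGAN formulation)
    adv = F.mse_loss(D_b(fake_b), torch.ones_like(D_b(fake_b))) + \
          F.mse_loss(D_a(fake_a), torch.ones_like(D_a(fake_a)))
    # cycle consistency: A -> B -> A and B -> A -> B should reconstruct the inputs
    cyc = F.l1_loss(G_ba(fake_b), real_a) + F.l1_loss(G_ab(fake_a), real_b)
    return adv + lambda_cyc * cyc
```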
Affiliation(s)
- Merve Solak
- Recep Tayyip Erdogan University, Department of Radiology, Rize, Turkey (M.S., E.K., M.B., F.B.C.)
- Murat Tören
- Recep Tayyip Erdogan University, Department of Electrical and Electronics Engineering, Rize, Turkey (M.T., B.A.)
- Berkutay Asan
- Recep Tayyip Erdogan University, Department of Electrical and Electronics Engineering, Rize, Turkey (M.T., B.A.)
- Esat Kaba
- Recep Tayyip Erdogan University, Department of Radiology, Rize, Turkey (M.S., E.K., M.B., F.B.C.)
- Mehmet Beyazal
- Recep Tayyip Erdogan University, Department of Radiology, Rize, Turkey (M.S., E.K., M.B., F.B.C.)
- Fatma Beyazal Çeliker
- Recep Tayyip Erdogan University, Department of Radiology, Rize, Turkey (M.S., E.K., M.B., F.B.C.)
5. Remtulla R, Samet A, Kulbay M, Akdag A, Hocini A, Volniansky A, Kahn Ali S, Qian CX. A Future Picture: A Review of Current Generative Adversarial Neural Networks in Vitreoretinal Pathologies and Their Future Potentials. Biomedicines 2025; 13:284. [PMID: 40002698] [PMCID: PMC11852121] [DOI: 10.3390/biomedicines13020284]
Abstract
Machine learning has transformed ophthalmology, particularly in predictive and discriminatory models for vitreoretinal pathologies. However, generative modeling, especially generative adversarial networks (GANs), remains underexplored. GANs consist of two neural networks, the generator and the discriminator, that work in opposition to synthesize highly realistic images. These synthetic images can enhance diagnostic accuracy, expand the capabilities of imaging technologies, and predict treatment responses. GANs have already been applied to fundus imaging, optical coherence tomography (OCT), and fluorescein angiography (FA). Despite their potential, GANs face challenges in reliability and accuracy. This review explores GAN architecture, their advantages over other deep learning models, and their clinical applications in retinal disease diagnosis and treatment monitoring. Furthermore, we discuss the limitations of current GAN models and propose novel applications combining GANs with OCT, OCT-angiography, fluorescein angiography, fundus imaging, electroretinograms, visual fields, and indocyanine green angiography.
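The generator/discriminator opposition described above can be made concrete in a few lines; a toy training loop on random vectors, with all sizes and hyperparameters illustrative:

```python
# Hedged sketch of adversarial training: tiny MLPs and random 1D "images" so the
# loop runs end to end. Architectures and learning rates are illustrative only.
import torch
import torch.nn as nn

latent, dim = 16, 64
G = nn.Sequential(nn.Linear(latent, 128), nn.ReLU(), nn.Linear(128, dim), nn.Tanh())
D = nn.Sequential(nn.Linear(dim, 128), nn.LeakyReLU(0.2), nn.Linear(128, 1))
opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
bce = nn.BCEWithLogitsLoss()

for step in range(200):
    real = torch.randn(32, dim)                   # stand-in for a real image batch
    fake = G(torch.randn(32, latent))
    # discriminator learns to separate real from fake
    loss_d = bce(D(real), torch.ones(32, 1)) + bce(D(fake.detach()), torch.zeros(32, 1))
    opt_d.zero_grad(); loss_d.backward(); opt_d.step()
    # generator learns to make the discriminator call its output real
    loss_g = bce(D(fake), torch.ones(32, 1))
    opt_g.zero_grad(); loss_g.backward(); opt_g.step()
```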
Affiliation(s)
- Raheem Remtulla
- Department of Ophthalmology & Visual Sciences, McGill University, Montreal, QC H4A 3SE, Canada; (R.R.); (M.K.)
- Adam Samet
- Department of Ophthalmology & Visual Sciences, McGill University, Montreal, QC H4A 3SE, Canada; (R.R.); (M.K.)
- Merve Kulbay
- Department of Ophthalmology & Visual Sciences, McGill University, Montreal, QC H4A 3SE, Canada; (R.R.); (M.K.)
- Centre de Recherche de l’Hôpital Maisonneuve-Rosemont, Université de Montréal, Montreal, QC H1T 2M4, Canada
- Arjin Akdag
- Faculty of Medicine and Health Sciences, McGill University, Montreal, QC H3G 2M1, Canada
- Adam Hocini
- Faculty of Medicine, Université de Montréal, Montreal, QC H3T 1J4, Canada
- Anton Volniansky
- Department of Psychiatry, Université Laval, Quebec City, QC G1V 0A6, Canada
- Shigufa Kahn Ali
- Centre de Recherche de l’Hôpital Maisonneuve-Rosemont, Université de Montréal, Montreal, QC H1T 2M4, Canada
- Department of Ophthalmology, Centre Universitaire d’Ophtalmologie (CUO), Hôpital Maisonneuve-Rosemont, University of Montreal, Montreal, QC H1T 2M4, Canada
- Cynthia X. Qian
- Centre de Recherche de l’Hôpital Maisonneuve-Rosemont, Université de Montréal, Montreal, QC H1T 2M4, Canada
- Department of Ophthalmology, Centre Universitaire d’Ophtalmologie (CUO), Hôpital Maisonneuve-Rosemont, University of Montreal, Montreal, QC H1T 2M4, Canada
6. Mishra V. Five Dimensions of AI Readiness (AIR-5D) Framework - A Preparedness Assessment Tool for Healthcare Organizations. Hosp Top 2024:1-8. [PMID: 39543793] [DOI: 10.1080/00185868.2024.2427641]
Abstract
Background: Artificial Intelligence (AI) has transformative potential in healthcare, and it is very useful in areas such as drug discovery, diagnostics, and patient management. However, there is a lack of tools to assess healthcare organizations' readiness to adopt AI technologies. This study introduces the AI Readiness Five Dimension (AIR-5D) framework, addressing this gap. Methods: The AIR-5D framework was developed using a two-step process: identifying dimensions of AI readiness from the literature and weighting these dimensions through expert focus groups. The Analytical Hierarchy Process (AHP) was employed to calculate the weights, ensuring consistency and reliability. Results: The results identified five key dimensions: Opportunity Discovery (0.44), Data Management (0.22), IT Environment and Security (0.194), Risk, Privacy and Governance (0.101), and Adoption of Technology (0.043). "Opportunity Discovery" was the most critical dimension, while "Adoption of Technology" ranked lowest. Six case studies demonstrated varying AI readiness (scores between 3 and 4 on a 5-point scale), highlighting challenges in moving beyond AI collaboration to optimization. Conclusions: The AIR-5D framework offers a structured approach for healthcare organizations to assess and enhance their AI readiness. It emphasizes the importance of understanding value, robust data management, and strategic alignment in successful AI adoption.
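The AHP weighting step can be illustrated compactly: weights come from the principal eigenvector of a pairwise-comparison matrix, with a consistency ratio as the sanity check. The matrix below is invented for illustration, not the study's expert judgments:

```python
# Hedged sketch of the Analytic Hierarchy Process used to weight five dimensions:
# principal-eigenvector weights plus Saaty's consistency ratio. Toy judgments.
import numpy as np

A = np.array([  # pairwise "how much more important is row than column" judgments
    [1,   2,   3,   4,   9],
    [1/2, 1,   2,   3,   5],
    [1/3, 1/2, 1,   2,   4],
    [1/4, 1/3, 1/2, 1,   3],
    [1/9, 1/5, 1/4, 1/3, 1],
])
vals, vecs = np.linalg.eig(A)
k = np.argmax(vals.real)
w = np.abs(vecs[:, k].real)
w /= w.sum()                            # normalized priority weights
ci = (vals[k].real - len(A)) / (len(A) - 1)
cr = ci / 1.12                          # random index RI = 1.12 for a 5x5 matrix
print(np.round(w, 3), f"CR={cr:.3f}")   # CR < 0.10 is conventionally acceptable
```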
Affiliation(s)
- Vinaytosh Mishra
- Thumbay Institute for AI in Healthcare, Gulf Medical University, Ajman, United Arab Emirates
7. Park SH, Han K, Lee JG. Conceptual review of outcome metrics and measures used in clinical evaluation of artificial intelligence in radiology. La Radiologia Medica 2024; 129:1644-1655. [PMID: 39225919] [DOI: 10.1007/s11547-024-01886-9]
Abstract
Artificial intelligence (AI) has numerous applications in radiology. Clinical research studies to evaluate the AI models are also diverse. Consequently, diverse outcome metrics and measures are employed in the clinical evaluation of AI, presenting a challenge for clinical radiologists. This review aims to provide conceptually intuitive explanations of the outcome metrics and measures that are most frequently used in clinical research, specifically tailored for clinicians. While we briefly discuss performance metrics for AI models in binary classification, detection, or segmentation tasks, our primary focus is on less frequently addressed topics in published literature. These include metrics and measures for evaluating multiclass classification; those for evaluating generative AI models, such as models used in image generation or modification and large language models; and outcome measures beyond performance metrics, including patient-centered outcome measures. Our explanations aim to guide clinicians in the appropriate use of these metrics and measures.
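As a concrete taste of the multiclass metrics the review explains, the snippet below computes a macro-averaged one-vs-rest AUROC and a quadratic weighted kappa on toy labels (both are standard scikit-learn calls; the data are invented):

```python
# Hedged sketch of two multiclass evaluation measures: macro one-vs-rest AUROC
# for probabilistic outputs, and quadratic weighted kappa for ordinal agreement.
import numpy as np
from sklearn.metrics import roc_auc_score, cohen_kappa_score

y_true = np.array([0, 1, 2, 2, 1, 0, 2, 1])
y_score = np.random.default_rng(0).dirichlet(np.ones(3), size=8)  # per-class probs
print(roc_auc_score(y_true, y_score, multi_class="ovr", average="macro"))

y_pred = y_score.argmax(axis=1)
print(cohen_kappa_score(y_true, y_pred, weights="quadratic"))
```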
Affiliation(s)
- Seong Ho Park
- Department of Radiology and Research Institute of Radiology, Asan Medical Center, University of Ulsan College of Medicine, 88, Olympic-Ro 43-Gil, Songpa-Gu, Seoul, 05505, South Korea.
- Kyunghwa Han
- Department of Radiology, Research Institute of Radiological Science and Center for Clinical Imaging Data Science, Yonsei University College of Medicine, Seoul, South Korea
- June-Goo Lee
- Biomedical Engineering Research Center, Asan Institute for Life Sciences, University of Ulsan College of Medicine, Seoul, South Korea
8. Avanzo M, Stancanello J, Pirrone G, Drigo A, Retico A. The Evolution of Artificial Intelligence in Medical Imaging: From Computer Science to Machine and Deep Learning. Cancers (Basel) 2024; 16:3702. [PMID: 39518140] [PMCID: PMC11545079] [DOI: 10.3390/cancers16213702]
Abstract
Artificial intelligence (AI), the wide spectrum of technologies aiming to give machines or computers the ability to perform human-like cognitive functions, began in the 1940s with the first abstract models of intelligent machines. Soon after, in the 1950s and 1960s, machine learning algorithms such as neural networks and decision trees ignited significant enthusiasm. More recent advancements include the refinement of learning algorithms, the development of convolutional neural networks to efficiently analyze images, and methods to synthesize new images. This renewed enthusiasm was also due to the increase in computational power with graphics processing units and the availability of large digital databases to be mined by neural networks. AI soon began to be applied in medicine, first through expert systems designed to support the clinician's decision and later with neural networks for the detection, classification, or segmentation of malignant lesions in medical images. A recent prospective clinical trial demonstrated the non-inferiority of AI alone compared with a double reading by two radiologists on screening mammography. Natural language processing, recurrent neural networks, transformers, and generative models have both improved the capabilities of automated reading of medical images and moved AI to new domains, including the text analysis of electronic health records, image self-labeling, and self-reporting. The availability of open-source and free libraries, as well as powerful computing resources, has greatly facilitated the adoption of deep learning by researchers and clinicians. Key concerns surrounding AI in healthcare include the need for clinical trials to demonstrate efficacy, the perception of AI tools as 'black boxes' that require greater interpretability and explainability, and ethical issues related to ensuring fairness and trustworthiness in AI systems. Thanks to its versatility and impressive results, AI is one of the most promising resources for frontier research and applications in medicine, in particular for oncological applications.
Affiliation(s)
- Michele Avanzo
- Medical Physics Department, Centro di Riferimento Oncologico di Aviano (CRO) IRCCS, 33081 Aviano, Italy; (G.P.); (A.D.)
- Giovanni Pirrone
- Medical Physics Department, Centro di Riferimento Oncologico di Aviano (CRO) IRCCS, 33081 Aviano, Italy; (G.P.); (A.D.)
- Annalisa Drigo
- Medical Physics Department, Centro di Riferimento Oncologico di Aviano (CRO) IRCCS, 33081 Aviano, Italy; (G.P.); (A.D.)
- Alessandra Retico
- National Institute for Nuclear Physics (INFN), Pisa Division, 56127 Pisa, Italy
9. Jung HK, Kim K, Park JE, Kim N. Image-Based Generative Artificial Intelligence in Radiology: Comprehensive Updates. Korean J Radiol 2024; 25:959-981. [PMID: 39473088] [PMCID: PMC11524689] [DOI: 10.3348/kjr.2024.0392]
Abstract
Generative artificial intelligence (AI) has been applied to images for image quality enhancement, domain transfer, and augmentation of training data for AI modeling in various medical fields. Image-generative AI can produce large amounts of unannotated imaging data, which facilitates multiple downstream deep-learning tasks. However, their evaluation methods and clinical utility have not been thoroughly reviewed. This article summarizes commonly used generative adversarial networks and diffusion models. In addition, it summarizes their utility in clinical tasks in the field of radiology, such as direct image utilization, lesion detection, segmentation, and diagnosis. This article aims to guide readers regarding radiology practice and research using image-generative AI by 1) reviewing basic theories of image-generative AI, 2) discussing the methods used to evaluate the generated images, 3) outlining the clinical and research utility of generated images, and 4) discussing the issue of hallucinations.
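Among the methods for evaluating generated images that this article reviews, the Fréchet Inception Distance (FID) is one of the most common; a minimal sketch, assuming encoder feature arrays have already been extracted (toy 64-dimensional features stand in for Inception activations):

```python
# Hedged sketch of the Fréchet Inception Distance between real and generated
# image sets, computed from pre-extracted encoder features (assumed available).
import numpy as np
from scipy.linalg import sqrtm

def fid(feats_real: np.ndarray, feats_fake: np.ndarray) -> float:
    mu1, mu2 = feats_real.mean(0), feats_fake.mean(0)
    s1 = np.cov(feats_real, rowvar=False)
    s2 = np.cov(feats_fake, rowvar=False)
    covmean = sqrtm(s1 @ s2)
    if np.iscomplexobj(covmean):      # discard tiny imaginary parts from sqrtm
        covmean = covmean.real
    return float(((mu1 - mu2) ** 2).sum() + np.trace(s1 + s2 - 2 * covmean))

# Toy usage: random 64-D features standing in for Inception activations.
r = np.random.randn(256, 64)
f = r + 0.1 * np.random.randn(256, 64)
print(fid(r, f))
```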
Affiliation(s)
- Ha Kyung Jung
- Department of Radiology and Research Institute of Radiology, University of Ulsan College of Medicine, Asan Medical Center, Seoul, Republic of Korea
- Kiduk Kim
- Department of Convergence Medicine, University of Ulsan College of Medicine, Asan Medical Center, Seoul, Republic of Korea
- Ji Eun Park
- Department of Radiology and Research Institute of Radiology, University of Ulsan College of Medicine, Asan Medical Center, Seoul, Republic of Korea
- Namkug Kim
- Department of Radiology and Research Institute of Radiology, University of Ulsan College of Medicine, Asan Medical Center, Seoul, Republic of Korea
- Department of Convergence Medicine, University of Ulsan College of Medicine, Asan Medical Center, Seoul, Republic of Korea
10. Yang B, Liu Y, Wei R, Men K, Dai J. Deep learning method for predicting weekly anatomical changes in patients with nasopharyngeal carcinoma during radiotherapy. Med Phys 2024; 51:7998-8009. [PMID: 39225585] [DOI: 10.1002/mp.17381]
Abstract
BACKGROUND Patients may undergo anatomical changes during radiotherapy, leading to underdosing of the target or overdosing of the organs at risk (OARs). PURPOSE This study developed a deep-learning method to predict the tumor response of patients with nasopharyngeal carcinoma (NPC) during treatment, i.e., the anatomical changes a patient will undergo. METHODS The participants included 230 patients with NPC. The data included planning computed tomography (pCT) and routine cone-beam CT (CBCT) images. The CBCT image quality was improved to the CT level using an advanced method. A long short-term memory network-generative adversarial network (LSTM-GAN) is proposed, which harnesses the forecasting ability of the LSTM and the generation ability of the GAN. Four models were trained to predict the anatomical changes occurring in weeks 3-6, named LSTM-GAN-week 3 to LSTM-GAN-week 6. The pCT and CBCT were used as input, and the tumor target volumes (TVs) and OARs were delineated on the predicted and real images (ground truth). Finally, the models were evaluated using contours and dosimetry parameters. RESULTS The proposed method predicted the anatomical changes with a Dice similarity coefficient above 0.94 for the TVs and above 0.90 for the surrounding OARs. The dosimetry parameters were close between the prediction and ground truth. The deviations in the prescription, minimum, and maximum doses of the tumor targets were below 0.5 Gy. For serial organs (brain stem and spinal cord), the deviations in the maximum dose were below 0.6 Gy. For parallel organs (bilateral parotid glands), the deviations in the mean dose were below 0.8 Gy. CONCLUSION The proposed method can predict the tumor response to radiotherapy in the future such that adaptation can be scheduled on time. This study provides a proactive mechanism for planning adaptation, which can enable personalized treatment and save clinical time by anticipating and preparing for treatment strategy adjustments.
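The Dice similarity coefficient used for the contour comparison above has an exact, compact definition; a minimal sketch on toy binary masks:

```python
# Hedged sketch of the Dice similarity coefficient comparing a predicted
# structure mask against the ground-truth delineation. Masks are toy volumes.
import numpy as np

def dice(a: np.ndarray, b: np.ndarray) -> float:
    a, b = a.astype(bool), b.astype(bool)
    inter = np.logical_and(a, b).sum()
    return 2.0 * inter / (a.sum() + b.sum())

gt = np.zeros((64, 64, 64), bool); gt[20:40, 20:40, 20:40] = True
pred = np.zeros_like(gt);          pred[22:42, 20:40, 20:40] = True
print(f"{dice(gt, pred):.3f}")  # 0.900 for this 2-voxel shift
```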
Affiliation(s)
- Bining Yang
- National Cancer Center/National Clinical Research Center for Cancer/Cancer Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College, Beijing, China
- Yuxiang Liu
- National Cancer Center/National Clinical Research Center for Cancer/Cancer Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College, Beijing, China
- Ran Wei
- National Cancer Center/National Clinical Research Center for Cancer/Cancer Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College, Beijing, China
- Kuo Men
- National Cancer Center/National Clinical Research Center for Cancer/Cancer Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College, Beijing, China
- Jianrong Dai
- National Cancer Center/National Clinical Research Center for Cancer/Cancer Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College, Beijing, China
11. Weichert J, Scharf JL. Advancements in Artificial Intelligence for Fetal Neurosonography: A Comprehensive Review. J Clin Med 2024; 13:5626. [PMID: 39337113] [PMCID: PMC11432922] [DOI: 10.3390/jcm13185626]
Abstract
The detailed sonographic assessment of the fetal neuroanatomy plays a crucial role in prenatal diagnosis, providing valuable insights into timely, well-coordinated fetal brain development and detecting even subtle anomalies that may impact neurodevelopmental outcomes. With recent advancements in artificial intelligence (AI) in general and medical imaging in particular, there has been growing interest in leveraging AI techniques to enhance the accuracy, efficiency, and clinical utility of fetal neurosonography. The paramount objective of this focused review is to discuss the latest developments in AI applications in this field, covering image analysis, the automation of measurements, prediction models of neurodevelopmental outcomes, visualization techniques, and their integration into the clinical routine.
Affiliation(s)
- Jan Weichert
- Division of Prenatal Medicine, Department of Gynecology and Obstetrics, University Hospital of Schleswig-Holstein, Ratzeburger Allee 160, 23538 Luebeck, Germany;
- Elbe Center of Prenatal Medicine and Human Genetics, Willy-Brandt-Str. 1, 20457 Hamburg, Germany
- Jann Lennard Scharf
- Division of Prenatal Medicine, Department of Gynecology and Obstetrics, University Hospital of Schleswig-Holstein, Ratzeburger Allee 160, 23538 Luebeck, Germany
12. Pantanowitz J, Manko CD, Pantanowitz L, Rashidi HH. Synthetic Data and Its Utility in Pathology and Laboratory Medicine. Lab Invest 2024; 104:102095. [PMID: 38925488] [DOI: 10.1016/j.labinv.2024.102095]
Abstract
In our rapidly expanding landscape of artificial intelligence, synthetic data have become a topic of great promise and also some concern. This review aimed to provide pathologists and laboratory professionals with a primer on the general concept of synthetic data, its potential to transform our field, and how it may soon shape the landscape within it. Using synthetic data presents many advantages but also introduces a milieu of new obstacles and limitations. By leveraging synthetic data, we can help accelerate the development of various machine learning models and enhance our medical education and research/quality study needs. This review explored the methods for generating synthetic data, including rule-based, machine learning model-based, and hybrid approaches, as they apply to applications within pathology and laboratory medicine. We also discussed the limitations and challenges associated with such synthetic data, including data quality, malicious use, and ethical bias concerns. By understanding the potential benefits (i.e., medical education, training of artificial intelligence programs, proficiency testing, etc.) and limitations of this new data realm, we can not only harness its power to improve patient outcomes, advance research, and enhance the practice of pathology but also remain readily aware of its intrinsic limitations.
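Of the generation methods discussed, the rule-based approach is the simplest to illustrate: draw values from hand-specified reference distributions. The sketch below fabricates a toy complete-blood-count table; the ranges and field names are invented illustrations, not clinical rules:

```python
# Hedged sketch of rule-based synthetic data generation: laboratory values drawn
# from hand-specified distributions. All parameters are illustrative placeholders.
import csv
import random

RULES = {  # analyte: (mean, sd) for a notional "normal adult" population
    "hemoglobin_g_dl": (14.0, 1.2),
    "wbc_10e9_l": (7.0, 1.8),
    "platelets_10e9_l": (250.0, 50.0),
}

def synth_cbc(n: int, seed: int = 0) -> list[dict]:
    rng = random.Random(seed)
    return [{k: round(rng.gauss(mu, sd), 1) for k, (mu, sd) in RULES.items()}
            for _ in range(n)]

with open("synthetic_cbc.csv", "w", newline="") as fh:
    rows = synth_cbc(100)
    writer = csv.DictWriter(fh, fieldnames=rows[0].keys())
    writer.writeheader()
    writer.writerows(rows)
```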
Affiliation(s)
- Joshua Pantanowitz
- Computational Pathology and AI Center of Excellence (CPACE), University of Pittsburgh School of Medicine, Pittsburgh, Pennsylvania; Department of Pathology, University of Pittsburgh School of Medicine, Pittsburgh, Pennsylvania
- Christopher D Manko
- Guthrie Clinic Robert Packer Hospital; Geisinger Commonwealth School of Medicine, Guthrie, Pennsylvania
- Liron Pantanowitz
- Computational Pathology and AI Center of Excellence (CPACE), University of Pittsburgh School of Medicine, Pittsburgh, Pennsylvania; Department of Pathology, University of Pittsburgh School of Medicine, Pittsburgh, Pennsylvania
- Hooman H Rashidi
- Computational Pathology and AI Center of Excellence (CPACE), University of Pittsburgh School of Medicine, Pittsburgh, Pennsylvania; Department of Pathology, University of Pittsburgh School of Medicine, Pittsburgh, Pennsylvania
13. Xing X, Li L, Sun M, Yang J, Zhu X, Peng F, Du J, Feng Y. Deep-learning-based 3D super-resolution CT radiomics model: Predict the possibility of the micropapillary/solid component of lung adenocarcinoma. Heliyon 2024; 10:e34163. [PMID: 39071606] [PMCID: PMC11279278] [DOI: 10.1016/j.heliyon.2024.e34163]
Abstract
Objective Invasive lung adenocarcinoma (ILA) with micropapillary (MPP)/solid (SOL) components has a poor prognosis. Preoperative identification is essential for decision-making for subsequent treatment. This study aims to construct and evaluate a super-resolution (SR) enhanced radiomics model designed to predict the presence of MPP/SOL components preoperatively, to provide more accurate and individualized treatment planning. Methods Between March 2018 and November 2023, patients who underwent curative-intent ILA resection were included in the study. We implemented a deep transfer learning network on CT images to improve their resolution, resulting in the acquisition of preoperative super-resolution CT (SR-CT) images. Models were developed using radiomic features extracted from CT and SR-CT images. These models employed a range of classifiers, including Logistic Regression (LR), Support Vector Machines (SVM), k-Nearest Neighbors (KNN), Random Forest, Extra Trees, Extreme Gradient Boosting (XGBoost), Light Gradient Boosting Machine (LightGBM), and Multilayer Perceptron (MLP). The diagnostic performance of the models was assessed by measuring the area under the curve (AUC). Results A total of 245 patients were recruited, of whom 109 (44.5%) were diagnosed with ILA with MPP/SOL components. In the analysis of CT images, the SVM model exhibited outstanding effectiveness, recording AUC scores of 0.864 in the training group and 0.761 in the testing group. When this SVM approach was used to develop a radiomics model with SR-CT images, it recorded AUCs of 0.904 in the training and 0.819 in the test cohorts. The calibration curves indicated a high goodness of fit, while decision curve analysis (DCA) highlighted the model's clinical utility. Conclusion The study successfully constructed and evaluated a deep learning (DL)-enhanced SR-CT radiomics model. This model outperformed conventional CT radiomics models in predicting MPP/SOL patterns in ILA. Continued research and broader validation are necessary to fully harness and refine the clinical potential of radiomics combined with SR reconstruction technology.
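The classifier-comparison step described above follows a standard scikit-learn pattern; a minimal sketch with random stand-ins for the radiomic feature matrix (the cohort size loosely echoes the study, everything else is illustrative):

```python
# Hedged sketch of the SVM-on-radiomics step: scale features, fit an RBF SVM,
# report test AUC. The feature matrix and labels here are random placeholders.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC
from sklearn.metrics import roc_auc_score

X = np.random.randn(245, 107)        # 245 patients x radiomic features (toy)
y = np.random.randint(0, 2, 245)     # MPP/SOL present vs absent (toy)
X_tr, X_te, y_tr, y_te = train_test_split(
    X, y, test_size=0.3, stratify=y, random_state=0)

clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", probability=True))
clf.fit(X_tr, y_tr)
print("test AUC:", roc_auc_score(y_te, clf.predict_proba(X_te)[:, 1]))
```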
Affiliation(s)
- Xiaowei Xing
- Cancer Center, Department of Radiology, Zhejiang Provincial People's Hospital, (Affiliated People's Hospital), Hangzhou Medical College, Hangzhou, Zhejiang, China
- Liangping Li
- Department of Radiology, Zhejiang Hospital, Hangzhou, Zhejiang, China
- Mingxia Sun
- Department of Radiology, Zhejiang Hospital, Hangzhou, Zhejiang, China
- Jiahu Yang
- Department of Radiology, Zhejiang Hospital, Hangzhou, Zhejiang, China
- Xinhai Zhu
- Department of Thoracic Surgery, Zhejiang Hospital, Hangzhou, Zhejiang, China
- Fang Peng
- Department of Pathology, Zhejiang Hospital, Hangzhou, Zhejiang, China
- Jianzong Du
- Department of Respiratory Medicine, Zhejiang Hospital, Hangzhou, Zhejiang, China
- Yue Feng
- Cancer Center, Department of Radiology, Zhejiang Provincial People's Hospital, (Affiliated People's Hospital), Hangzhou Medical College, Hangzhou, Zhejiang, China
14. Walston SL, Tatekawa H, Takita H, Miki Y, Ueda D. Evaluating Biases and Quality Issues in Intermodality Image Translation Studies for Neuroradiology: A Systematic Review. AJNR Am J Neuroradiol 2024; 45:826-832. [PMID: 38663993] [PMCID: PMC11288590] [DOI: 10.3174/ajnr.a8211]
Abstract
BACKGROUND Intermodality image-to-image translation is an artificial intelligence technique for generating images of one modality from another. PURPOSE This review was designed to systematically identify and quantify biases and quality issues preventing validation and clinical application of artificial intelligence models for intermodality image-to-image translation of brain imaging. DATA SOURCES PubMed, Scopus, and IEEE Xplore were searched through August 2, 2023, for artificial intelligence-based image translation models of radiologic brain images. STUDY SELECTION This review collected 102 works published between April 2017 and August 2023. DATA ANALYSIS Eligible studies were evaluated for quality using the Checklist for Artificial Intelligence in Medical Imaging (CLAIM) and for bias using the Prediction model Risk Of Bias ASsessment Tool (PROBAST). Adherence of medically-focused articles was compared with that of engineering-focused articles, overall with the Mann-Whitney U test and for each criterion with the Fisher exact test. DATA SYNTHESIS Median adherence was 69% for the relevant CLAIM criteria and 38% for PROBAST questions. CLAIM adherence was lower for engineering-focused articles compared with medically-focused articles (65% versus 73%, P < .001). Engineering-focused studies had higher adherence for model description criteria, and medically-focused studies had higher adherence for dataset and evaluation descriptions. LIMITATIONS Our review is limited by study design and model heterogeneity. CONCLUSIONS Nearly all studies revealed critical issues preventing clinical application, with engineering-focused studies showing higher adherence for the technical model description but significantly lower overall adherence than medically-focused studies. The pursuit of clinical application requires collaboration from both fields to improve reporting.
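The two statistical comparisons reported here are one-liners in SciPy; a minimal sketch with invented placeholder data rather than the review's actual adherence counts:

```python
# Hedged sketch of the review's two comparisons: Mann-Whitney U on per-article
# adherence, and Fisher exact on a per-criterion 2x2 table. Data are invented.
from scipy.stats import mannwhitneyu, fisher_exact

med_adherence = [0.73, 0.70, 0.75, 0.68, 0.80]   # toy per-article CLAIM adherence
eng_adherence = [0.65, 0.60, 0.66, 0.70, 0.58]

u, p = mannwhitneyu(med_adherence, eng_adherence, alternative="two-sided")
print(f"Mann-Whitney U={u}, p={p:.3f}")

# one CLAIM criterion: [[medical yes, medical no], [engineering yes, engineering no]]
odds, p = fisher_exact([[30, 10], [45, 35]])
print(f"Fisher exact OR={odds:.2f}, p={p:.3f}")
```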
Affiliation(s)
- Shannon L Walston
- From the Department of Diagnostic and Interventional Radiology (S.L.W., H.Tatekawa, H.Takita, Y.M., D.U.), Graduate School of Medicine, Osaka Metropolitan University, Osaka, Japan
- Hiroyuki Tatekawa
- From the Department of Diagnostic and Interventional Radiology (S.L.W., H.Tatekawa, H.Takita, Y.M., D.U.), Graduate School of Medicine, Osaka Metropolitan University, Osaka, Japan
- Hirotaka Takita
- From the Department of Diagnostic and Interventional Radiology (S.L.W., H.Tatekawa, H.Takita, Y.M., D.U.), Graduate School of Medicine, Osaka Metropolitan University, Osaka, Japan
- Yukio Miki
- From the Department of Diagnostic and Interventional Radiology (S.L.W., H.Tatekawa, H.Takita, Y.M., D.U.), Graduate School of Medicine, Osaka Metropolitan University, Osaka, Japan
- Daiju Ueda
- From the Department of Diagnostic and Interventional Radiology (S.L.W., H.Tatekawa, H.Takita, Y.M., D.U.), Graduate School of Medicine, Osaka Metropolitan University, Osaka, Japan
- Smart Life Science Lab (D.U.), Center for Health Science Innovation, Osaka Metropolitan University, Osaka, Japan
15. Lee MD, Jain R. Harnessing generative AI for glioma diagnosis: A step forward in neuro-oncologic imaging. Neuro Oncol 2024; 26:1136-1137. [PMID: 38442275] [PMCID: PMC11145459] [DOI: 10.1093/neuonc/noae043]
Affiliation(s)
- Matthew D Lee
- Department of Radiology, NYU Grossman School of Medicine, New York, New York, USA
- Rajan Jain
- Department of Radiology, NYU Grossman School of Medicine, New York, New York, USA
- Department of Neurosurgery, NYU Grossman School of Medicine, New York, New York, USA
16. Lee S, Jung JY, Mahatthanatrakul A, Kim JS. Artificial Intelligence in Spinal Imaging and Patient Care: A Review of Recent Advances. Neurospine 2024; 21:474-486. [PMID: 38955525] [PMCID: PMC11224760] [DOI: 10.14245/ns.2448388.194]
Abstract
Artificial intelligence (AI) is transforming spinal imaging and patient care through automated analysis and enhanced decision-making. This review presents a clinical task-based evaluation, highlighting the specific impact of AI techniques on different aspects of spinal imaging and patient care. We first discuss how AI can potentially improve image quality through techniques like denoising or artifact reduction. We then explore how AI enables efficient quantification of anatomical measurements, spinal curvature parameters, vertebral segmentation, and disc grading. This facilitates objective, accurate interpretation and diagnosis. AI models now reliably detect key spinal pathologies, achieving expert-level performance in tasks like identifying fractures, stenosis, infections, and tumors. Beyond diagnosis, AI also assists surgical planning via synthetic computed tomography generation, augmented reality systems, and robotic guidance. Furthermore, AI image analysis combined with clinical data enables personalized predictions to guide treatment decisions, such as forecasting spine surgery outcomes. However, challenges still need to be addressed in implementing AI clinically, including model interpretability, generalizability, and data limitations. Multicenter collaboration using large, diverse datasets is critical to advance the field further. While adoption barriers persist, AI presents a transformative opportunity to revolutionize spinal imaging workflows, empowering clinicians to translate data into actionable insights for improved patient care.
Affiliation(s)
- Sungwon Lee
- Department of Radiology, Seoul St. Mary’s Hospital, College of Medicine, The Catholic University of Korea, Seoul, Korea
- Visual Analysis and Learning for Improved Diagnostics (VALID) Lab, Seoul St. Mary’s Hospital, College of Medicine, The Catholic University of Korea, Seoul, Korea
- Joon-Yong Jung
- Department of Radiology, Seoul St. Mary’s Hospital, College of Medicine, The Catholic University of Korea, Seoul, Korea
- Visual Analysis and Learning for Improved Diagnostics (VALID) Lab, Seoul St. Mary’s Hospital, College of Medicine, The Catholic University of Korea, Seoul, Korea
- Akaworn Mahatthanatrakul
- Department of Orthopaedics, Faculty of Medicine, Naresuan University Hospital, Phitsanulok, Thailand
- Jin-Sung Kim
- Spine Center, Department of Neurosurgery, Seoul St. Mary’s Hospital, College of Medicine, The Catholic University of Korea, Seoul, Korea
17. Khosravi B, Li F, Dapamede T, Rouzrokh P, Gamble CU, Trivedi HM, Wyles CC, Sellergren AB, Purkayastha S, Erickson BJ, Gichoya JW. Synthetically enhanced: unveiling synthetic data's potential in medical imaging research. EBioMedicine 2024; 104:105174. [PMID: 38821021] [PMCID: PMC11177083] [DOI: 10.1016/j.ebiom.2024.105174]
Abstract
BACKGROUND Chest X-rays (CXR) are essential for diagnosing a variety of conditions, but when used on new populations, model generalizability issues limit their efficacy. Generative AI, particularly denoising diffusion probabilistic models (DDPMs), offers a promising approach to generating synthetic images, enhancing dataset diversity. This study investigates the impact of synthetic data supplementation on the performance and generalizability of medical imaging research. METHODS The study employed DDPMs to create synthetic CXRs conditioned on demographic and pathological characteristics from the CheXpert dataset. These synthetic images were used to supplement training datasets for pathology classifiers, with the aim of improving their performance. The evaluation involved three datasets (CheXpert, MIMIC-CXR, and Emory Chest X-ray) and various experiments, including supplementing real data with synthetic data, training with purely synthetic data, and mixing synthetic data with external datasets. Performance was assessed using the area under the receiver operating curve (AUROC). FINDINGS Adding synthetic data to real datasets resulted in a notable increase in AUROC values (up to 0.02 in internal and external test sets with 1000% supplementation, p-value <0.01 in all instances). When classifiers were trained exclusively on synthetic data, they achieved performance levels comparable to those trained on real data with 200%-300% data supplementation. The combination of real and synthetic data from different sources demonstrated enhanced model generalizability, increasing model AUROC from 0.76 to 0.80 on the internal test set (p-value <0.01). INTERPRETATION Synthetic data supplementation significantly improves the performance and generalizability of pathology classifiers in medical imaging. FUNDING Dr. Gichoya is a 2022 Robert Wood Johnson Foundation Harold Amos Medical Faculty Development Program and declares support from RSNA Health Disparities grant (#EIHD2204), Lacuna Fund (#67), Gordon and Betty Moore Foundation, NIH (NIBIB) MIDRC grant under contracts 75N92020C00008 and 75N92020C00021, and NHLBI Award Number R01HL167811.
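The supplementation experiment design generalizes well beyond chest X-rays; a toy sketch of the idea, using a noisy resample as a stand-in for DDPM output (all data, model, and ratios are illustrative):

```python
# Hedged sketch of the supplementation comparison above: train the same
# classifier with and without extra "synthetic" samples and compare AUROC.
# Here the synthetic data is a noisy resample of the training set, purely as a toy.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.datasets import make_classification

X, y = make_classification(n_samples=600, n_features=30, random_state=0)
X_tr, y_tr, X_te, y_te = X[:200], y[:200], X[200:], y[200:]

rng = np.random.default_rng(0)
idx = rng.integers(0, len(X_tr), size=len(X_tr) * 3)      # "300%" supplementation
X_syn = X_tr[idx] + 0.3 * rng.standard_normal((len(idx), 30))
y_syn = y_tr[idx]

for name, (Xa, ya) in {
    "real only": (X_tr, y_tr),
    "real + synthetic": (np.vstack([X_tr, X_syn]), np.concatenate([y_tr, y_syn])),
}.items():
    model = LogisticRegression(max_iter=1000).fit(Xa, ya)
    auc = roc_auc_score(y_te, model.predict_proba(X_te)[:, 1])
    print(f"{name}: AUROC={auc:.3f}")
```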
Affiliation(s)
- Bardia Khosravi
- Department of Radiology, Mayo Clinic, Rochester, MN, USA; Department of Orthopedic Surgery, Mayo Clinic, Rochester, MN, USA
- Frank Li
- Department of Radiology, Emory University, Atlanta, GA, USA
- Theo Dapamede
- Department of Radiology, Emory University, Atlanta, GA, USA
- Pouria Rouzrokh
- Department of Radiology, Mayo Clinic, Rochester, MN, USA; Department of Orthopedic Surgery, Mayo Clinic, Rochester, MN, USA
- Hari M Trivedi
- Department of Radiology, Emory University, Atlanta, GA, USA
- Cody C Wyles
- Department of Orthopedic Surgery, Mayo Clinic, Rochester, MN, USA
- Saptarshi Purkayastha
- School of Informatics and Computing, Indiana University-Purdue University, Indianapolis, IN, USA
- Judy W Gichoya
- Department of Radiology, Emory University, Atlanta, GA, USA
18. Oeding JF, Yang L, Sanchez-Sotelo J, Camp CL, Karlsson J, Samuelsson K, Pearle AD, Ranawat AS, Kelly BT, Pareek A. A practical guide to the development and deployment of deep learning models for the orthopaedic surgeon: Part III, focus on registry creation, diagnosis, and data privacy. Knee Surg Sports Traumatol Arthrosc 2024; 32:518-528. [PMID: 38426614] [DOI: 10.1002/ksa.12085]
Abstract
Deep learning is a subset of artificial intelligence (AI) with enormous potential to transform orthopaedic surgery. As has already become evident with the deployment of Large Language Models (LLMs) like ChatGPT (OpenAI Inc.), deep learning can rapidly enter clinical and surgical practices. As such, it is imperative that orthopaedic surgeons acquire a deeper understanding of the technical terminology, capabilities and limitations associated with deep learning models. The focus of this series thus far has been providing surgeons with an overview of the steps needed to implement a deep learning-based pipeline, emphasizing some of the important technical details for surgeons to understand as they encounter, evaluate or lead deep learning projects. However, this series would be remiss without providing practical examples of how deep learning models have begun to be deployed and highlighting the areas where the authors feel deep learning may have the most profound potential. While computer vision applications of deep learning were the focus of Parts I and II, due to the enormous impact that natural language processing (NLP) has had in recent months, NLP-based deep learning models are also discussed in this final part of the series. In this review, three applications that the authors believe can be impacted the most by deep learning but with which many surgeons may not be familiar are discussed: (1) registry construction, (2) diagnostic AI and (3) data privacy. Deep learning-based registry construction will be essential for the development of more impactful clinical applications, with diagnostic AI being one of those applications likely to augment clinical decision-making in the near future. As the applications of deep learning continue to grow, the protection of patient information will become increasingly essential; as such, applications of deep learning to enhance data privacy are likely to become more important than ever before. Level of Evidence: Level IV.
Affiliation(s)
- Jacob F Oeding
- School of Medicine, Mayo Clinic Alix School of Medicine, Rochester, Minnesota, USA
- Department of Orthopaedics, Institute of Clinical Sciences, The Sahlgrenska Academy, University of Gothenburg, Gothenburg, Sweden
- Linjun Yang
- Orthopedic Surgery Artificial Intelligence Laboratory (OSAIL), Department of Orthopedic Surgery, Mayo Clinic, Rochester, Minnesota, USA
- Christopher L Camp
- Department of Orthopedic Surgery, Mayo Clinic, Rochester, Minnesota, USA
- Jón Karlsson
- Department of Orthopaedics, Sahlgrenska University Hospital, Sahlgrenska Academy, Gothenburg University, Gothenburg, Sweden
- Kristian Samuelsson
- Department of Orthopaedics, Sahlgrenska University Hospital, Sahlgrenska Academy, Gothenburg University, Gothenburg, Sweden
- Andrew D Pearle
- Sports Medicine and Shoulder Service, Hospital for Special Surgery, New York, New York, USA
- Anil S Ranawat
- Sports Medicine and Shoulder Service, Hospital for Special Surgery, New York, New York, USA
- Bryan T Kelly
- Sports Medicine and Shoulder Service, Hospital for Special Surgery, New York, New York, USA
- Ayoosh Pareek
- Sports Medicine and Shoulder Service, Hospital for Special Surgery, New York, New York, USA
19. Zhang M, Ye Z, Yuan E, Lv X, Zhang Y, Tan Y, Xia C, Tang J, Huang J, Li Z. Imaging-based deep learning in kidney diseases: recent progress and future prospects. Insights Imaging 2024; 15:50. [PMID: 38360904] [PMCID: PMC10869329] [DOI: 10.1186/s13244-024-01636-5]
Abstract
Kidney diseases result from various causes, which can generally be divided into neoplastic and non-neoplastic diseases. Deep learning based on medical imaging is an established methodology for further data mining and an evolving field of expertise, which provides the possibility for precise management of kidney diseases. Recently, imaging-based deep learning has been widely applied to many clinical scenarios of kidney diseases including organ segmentation, lesion detection, differential diagnosis, surgical planning, and prognosis prediction, which can provide support for disease diagnosis and management. In this review, we will introduce the basic methodology of imaging-based deep learning and its recent clinical applications in neoplastic and non-neoplastic kidney diseases. Additionally, we further discuss its current challenges and future prospects and conclude that achieving data balance, addressing heterogeneity, and managing data size remain challenges for imaging-based deep learning. Meanwhile, the interpretability of algorithms, ethical risks, and barriers of bias assessment are also issues that require consideration in future development. We hope to provide urologists, nephrologists, and radiologists with clear ideas about imaging-based deep learning and reveal its great potential in clinical practice.
Critical relevance statement: The wide clinical applications of imaging-based deep learning in kidney diseases can help doctors to diagnose, treat, and manage patients with neoplastic or non-neoplastic renal diseases.
Key points:
- Imaging-based deep learning is widely applied to neoplastic and non-neoplastic renal diseases.
- Imaging-based deep learning improves the accuracy of the delineation, diagnosis, and evaluation of kidney diseases.
- Small datasets, variable lesion sizes, and similar factors remain challenges for deep learning.
Collapse
Affiliation(s)
- Meng Zhang
- Department of Radiology, West China Hospital, Sichuan University, No. 37 Guoxue Alley, Chengdu, 610041, China
- Medical Equipment Innovation Research Center, West China Hospital, Sichuan University, No. 37 Guoxue Alley, Chengdu, 610041, China
- Med+X Center for Manufacturing, West China Hospital, Sichuan University, No. 37 Guoxue Alley, Chengdu, 610041, China
| | - Zheng Ye
- Department of Radiology, West China Hospital, Sichuan University, No. 37 Guoxue Alley, Chengdu, 610041, China
| | - Enyu Yuan
- Department of Radiology, West China Hospital, Sichuan University, No. 37 Guoxue Alley, Chengdu, 610041, China
| | - Xinyang Lv
- Department of Radiology, West China Hospital, Sichuan University, No. 37 Guoxue Alley, Chengdu, 610041, China
| | - Yiteng Zhang
- Department of Radiology, West China Hospital, Sichuan University, No. 37 Guoxue Alley, Chengdu, 610041, China
| | - Yuqi Tan
- Department of Radiology, West China Hospital, Sichuan University, No. 37 Guoxue Alley, Chengdu, 610041, China
| | - Chunchao Xia
- Department of Radiology, West China Hospital, Sichuan University, No. 37 Guoxue Alley, Chengdu, 610041, China
| | - Jing Tang
- Department of Radiology, West China Hospital, Sichuan University, No. 37 Guoxue Alley, Chengdu, 610041, China.
| | - Jin Huang
- Medical Equipment Innovation Research Center, West China Hospital, Sichuan University, No. 37 Guoxue Alley, Chengdu, 610041, China.
- Med+X Center for Manufacturing, West China Hospital, Sichuan University, No. 37 Guoxue Alley, Chengdu, 610041, China.
| | - Zhenlin Li
- Department of Radiology, West China Hospital, Sichuan University, No. 37 Guoxue Alley, Chengdu, 610041, China.
| |
Collapse
|
20
|
Vrettos K, Koltsakis E, Zibis AH, Karantanas AH, Klontzas ME. Generative adversarial networks for spine imaging: A critical review of current applications. Eur J Radiol 2024; 171:111313. [PMID: 38237518 DOI: 10.1016/j.ejrad.2024.111313] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/11/2023] [Revised: 12/18/2023] [Accepted: 01/09/2024] [Indexed: 02/10/2024]
Abstract
PURPOSE In recent years, the field of medical imaging has witnessed remarkable advancements, with innovative technologies that have revolutionized the visualization and analysis of the human spine. Among these developments, Generative Adversarial Networks (GANs) have emerged as a transformative tool, offering unprecedented possibilities for enhancing spinal imaging techniques and diagnostic outcomes. This review aims to provide a comprehensive overview of the use of GANs in spinal imaging and to emphasize their potential to improve the diagnosis and treatment of spine-related disorders. A review dedicated to GANs in spine imaging is needed because the unique challenges, applications, and advancements of this domain may not be fully addressed in broader reviews covering GANs in general medical imaging; such a review can offer insights into the tailored solutions and innovations that GANs bring to spinal imaging. METHODS An extensive literature search covering 2017 until July 2023 was conducted using the major search engines and identified studies that used GANs in spinal imaging. RESULTS The implementations include generating fat-suppressed T2-weighted (fsT2W) images from T1- and T2-weighted sequences to reduce scan time. The generated images had significantly better image quality than true fsT2W images and could improve diagnostic accuracy for certain pathologies. GANs were also utilized to generate virtual thin-slice images of intervertebral spaces, create digital twins of human vertebrae, and predict fracture response. Lastly, they can be applied to translate CT into MR-like images, with the potential to provide near-MR information without an MRI acquisition. CONCLUSIONS GANs have promising applications in personalized medicine, image augmentation, and improved diagnostic accuracy. However, limitations such as small databases and misalignment in CT-MRI pairs must be considered.
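Editor's note: to make the paired image-to-image translation setting described above concrete, here is a minimal pix2pix-style generator objective often used for tasks such as synthesizing fsT2W images from T1/T2 inputs. The conditional discriminator D, the lambda weight, and all tensor shapes are our illustrative assumptions, not details taken from the reviewed studies.

```python
import torch
import torch.nn as nn

bce = nn.BCEWithLogitsLoss()
l1 = nn.L1Loss()
lambda_l1 = 100.0  # assumed reconstruction weight (common pix2pix default)

def generator_loss(D, source, fake_target, real_target):
    """Adversarial term (fool a conditional discriminator) plus an L1
    reconstruction term that keeps the synthesis anatomically faithful."""
    # D scores the (source, candidate target) pair, concatenated on channels.
    pred_fake = D(torch.cat([source, fake_target], dim=1))
    adversarial = bce(pred_fake, torch.ones_like(pred_fake))
    reconstruction = l1(fake_target, real_target)
    return adversarial + lambda_l1 * reconstruction
```

The L1 term is what distinguishes paired synthesis from the purely adversarial setting: it penalizes any voxel-wise drift away from the ground-truth target sequence.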
Collapse
Affiliation(s)
- Konstantinos Vrettos
- Department of Radiology, School of Medicine, University of Crete, Voutes Campus, Heraklion, Greece
| | - Emmanouil Koltsakis
- Department of Radiology, Karolinska University Hospital, Solna, Stockholm, Sweden
| | - Aristeidis H Zibis
- Department of Anatomy, Medical School, University of Thessaly, Larissa, Greece
| | - Apostolos H Karantanas
- Department of Radiology, School of Medicine, University of Crete, Voutes Campus, Heraklion, Greece; Computational BioMedicine Laboratory, Institute of Computer Science, Foundation for Research and Technology (FORTH), Heraklion, Crete, Greece; Department of Medical Imaging, University Hospital of Heraklion, Heraklion, Crete, Greece
| | - Michail E Klontzas
- Department of Radiology, School of Medicine, University of Crete, Voutes Campus, Heraklion, Greece; Computational BioMedicine Laboratory, Institute of Computer Science, Foundation for Research and Technology (FORTH), Heraklion, Crete, Greece; Department of Medical Imaging, University Hospital of Heraklion, Heraklion, Crete, Greece.
| |
Collapse
|
21
|
Krishnan AR, Xu K, Li T, Gao C, Remedios LW, Kanakaraj P, Lee HH, Bao S, Sandler KL, Maldonado F, Išgum I, Landman BA. Inter-vendor harmonization of CT reconstruction kernels using unpaired image translation. PROCEEDINGS OF SPIE--THE INTERNATIONAL SOCIETY FOR OPTICAL ENGINEERING 2024; 12926:129261D. [PMID: 39268356 PMCID: PMC11392419 DOI: 10.1117/12.3006608] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 09/15/2024]
Abstract
The reconstruction kernel in computed tomography (CT) generation determines the texture of the image. Consistency in reconstruction kernels is important, as the underlying CT texture can impact measurements during quantitative image analysis. Harmonization (i.e., kernel conversion) minimizes differences in measurements due to inconsistent reconstruction kernels. Existing methods investigate harmonization of CT scans within a single manufacturer or across multiple manufacturers. However, these methods require paired scans of hard and soft reconstruction kernels that are spatially and anatomically aligned. Additionally, a large number of models need to be trained across different kernel pairs within manufacturers. In this study, we adopt an unpaired image translation approach to investigate harmonization between and across reconstruction kernels from different manufacturers by constructing a multipath cycle generative adversarial network (GAN). We use hard and soft reconstruction kernels from the Siemens and GE vendors from the National Lung Screening Trial dataset. We use 50 scans from each reconstruction kernel and train a multipath cycle GAN. To evaluate the effect of harmonization on the reconstruction kernels, we harmonize 50 scans each from the Siemens hard kernel, GE soft kernel, and GE hard kernel to a reference Siemens soft kernel (B30f) and evaluate percent emphysema. We fit a linear model considering age, smoking status, sex, and vendor, and perform an analysis of variance (ANOVA) on the emphysema scores. Our approach minimizes differences in emphysema measurement and highlights the impact of age, sex, smoking status, and vendor on emphysema quantification.
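Editor's note: percent emphysema, the endpoint used above to judge harmonization, is conventionally computed as the fraction of lung voxels below a low-attenuation threshold (commonly -950 HU, the LAA-950 definition). The abstract does not state the exact definition used, so the following is a sketch under that assumption.

```python
import numpy as np

def percent_emphysema(ct_hu: np.ndarray, lung_mask: np.ndarray,
                      threshold_hu: float = -950.0) -> float:
    """Low-attenuation area (LAA%) inside the lung mask, in percent.

    ct_hu        -- CT volume in Hounsfield units
    lung_mask    -- boolean array marking lung voxels
    threshold_hu -- assumed LAA-950 cutoff; the study's pipeline may differ
    """
    lung_voxels = ct_hu[lung_mask.astype(bool)]
    if lung_voxels.size == 0:
        raise ValueError("lung mask is empty")
    return 100.0 * float((lung_voxels < threshold_hu).mean())
```

Comparing this score per scan before and after kernel conversion, stratified by vendor, is what the study's ANOVA on emphysema scores operates on.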
Collapse
Affiliation(s)
- Aravind R Krishnan
- Department of Electrical and Computer Engineering, Vanderbilt University, TN, USA
| | - Kaiwen Xu
- Department of Computer Science, Vanderbilt University, Nashville, TN, USA
| | - Thomas Li
- Department of Biomedical Engineering, Vanderbilt University, Nashville, TN, USA
| | - Chenyu Gao
- Department of Electrical and Computer Engineering, Vanderbilt University, TN, USA
| | - Lucas W Remedios
- Department of Computer Science, Vanderbilt University, Nashville, TN, USA
| | | | - Ho Hin Lee
- Department of Computer Science, Vanderbilt University, Nashville, TN, USA
| | - Shunxing Bao
- Department of Electrical and Computer Engineering, Vanderbilt University, TN, USA
| | - Kim L Sandler
- Department of Radiology, Vanderbilt University Medical Center, Nashville, TN, USA
| | - Fabien Maldonado
- Department of Medicine, Vanderbilt University Medical Center, Nashville, TN, USA
- Department of Thoracic Surgery, Vanderbilt University Medical Center, Nashville, TN, USA
| | - Ivana Išgum
- Department of Biomedical Engineering and Physics, Amsterdam University Medical Center, University of Amsterdam, Amsterdam, Netherlands
- Department of Radiology and Nuclear Medicine, Amsterdam University Medical Center, University of Amsterdam, Amsterdam, Netherlands
- Informatics Institute, University of Amsterdam, Amsterdam, Netherlands
| | - Bennett A Landman
- Department of Electrical and Computer Engineering, Vanderbilt University, TN, USA
- Department of Computer Science, Vanderbilt University, Nashville, TN, USA
- Department of Biomedical Engineering, Vanderbilt University, Nashville, TN, USA
- Department of Radiology and Radiological Sciences, Vanderbilt University Medical Center, Nashville, TN, USA
- Vanderbilt University Institute of Imaging Science, Vanderbilt University Medical Center, Nashville, TN, USA
| |
Collapse
|
22
|
Kaba E, Vogl TJ. Can We Use Large Language Models for the Use of Contrast Media in Radiology? Acad Radiol 2024; 31:752. [PMID: 38092589 DOI: 10.1016/j.acra.2023.11.034] [Citation(s) in RCA: 5] [Impact Index Per Article: 5.0] [Reference Citation Analysis] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/14/2023] [Revised: 11/20/2023] [Accepted: 11/21/2023] [Indexed: 02/26/2024]
Affiliation(s)
- Esat Kaba
- Department of Radiology, Recep Tayyip Erdogan University, Rize, Turkey (E.K.).
| | - Thomas J Vogl
- Department of Diagnostic and Interventional Radiology, University Hospital Frankfurt, Frankfurt, Germany (T.J.V.)
| |
Collapse
|
23
|
Sorin V, Soffer S, Glicksberg BS, Barash Y, Konen E, Klang E. Adversarial attacks in radiology - A systematic review. Eur J Radiol 2023; 167:111085. [PMID: 37699278 DOI: 10.1016/j.ejrad.2023.111085] [Citation(s) in RCA: 8] [Impact Index Per Article: 4.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/12/2023] [Revised: 08/04/2023] [Accepted: 09/04/2023] [Indexed: 09/14/2023]
Abstract
PURPOSE The growing application of deep learning in radiology has raised concerns about cybersecurity, particularly in relation to adversarial attacks. This study aims to systematically review the literature on adversarial attacks in radiology. METHODS We searched for studies on adversarial attacks in radiology published up to April 2023, using MEDLINE and Google Scholar databases. RESULTS A total of 22 studies published between March 2018 and April 2023 were included, primarily focused on image classification algorithms. Fourteen studies evaluated white-box attacks, three assessed black-box attacks and five investigated both. Eleven of the 22 studies targeted chest X-ray classification algorithms, while others involved chest CT (6/22), brain MRI (4/22), mammography (2/22), abdominal CT (1/22), hepatic US (1/22), and thyroid US (1/22). Some attacks proved highly effective, reducing the AUC of algorithm performance to 0 and achieving success rates up to 100 %. CONCLUSIONS Adversarial attacks are a growing concern. Although currently the threats are more theoretical than practical, they still represent a potential risk. It is important to be alert to such attacks, reinforce cybersecurity measures, and influence the formulation of ethical and legal guidelines. This will ensure the safe use of deep learning technology in medicine.
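Editor's note: most white-box attacks of the kind reviewed here descend from gradient-based perturbations; the fast gradient sign method (FGSM) is the canonical example, sketched below in PyTorch. The epsilon budget and the [0, 1] intensity range are illustrative assumptions, and the review does not prescribe any particular implementation.

```python
import torch

def fgsm_attack(model, x, y, loss_fn, epsilon=0.01):
    """One-step white-box attack: nudge each input pixel in the direction
    that increases the model's loss, by a fixed budget epsilon."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss = loss_fn(model(x_adv), y)
    loss.backward()
    with torch.no_grad():
        x_adv = x_adv + epsilon * x_adv.grad.sign()
        x_adv = x_adv.clamp(0.0, 1.0)  # keep perturbed pixels in a valid range
    return x_adv.detach()
```

Black-box attacks pursue the same goal without gradient access, typically by querying the model or transferring perturbations from a surrogate network.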
Collapse
Affiliation(s)
- Vera Sorin
- Department of Diagnostic Imaging, Sheba Medical Center, Ramat-Gan, Israel; Faculty of Medicine, Tel-Aviv University, Tel-Aviv, Israel.
| | - Shelly Soffer
- Internal Medicine B, Assuta Medical Center, Ashdod, Israel; Ben-Gurion University of the Negev, Be'er Sheva, Israel
| | - Benjamin S Glicksberg
- Hasso Plattner Institute for Digital Health at Mount Sinai, Department of Genetics and Genomic Sciences, New York, NY, USA; Icahn School of Medicine at Mount Sinai, New York, NY, USA
| | - Yiftach Barash
- Department of Diagnostic Imaging, Sheba Medical Center, Ramat-Gan, Israel; Faculty of Medicine, Tel-Aviv University, Tel-Aviv, Israel
| | - Eli Konen
- Department of Diagnostic Imaging, Sheba Medical Center, Ramat-Gan, Israel; Faculty of Medicine, Tel-Aviv University, Tel-Aviv, Israel
| | - Eyal Klang
- Department of Diagnostic Imaging, Sheba Medical Center, Ramat-Gan, Israel; Faculty of Medicine, Tel-Aviv University, Tel-Aviv, Israel; Sami Sagol AI Hub, ARC, Sheba Medical Center, Ramat-Gan, Israel
| |
Collapse
|
24
|
Zhang F, Wang L, Zhao J, Zhang X. Medical applications of generative adversarial network: a visualization analysis. Acta Radiol 2023; 64:2757-2767. [PMID: 37603577 DOI: 10.1177/02841851231189035] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 08/23/2023]
Abstract
BACKGROUND Deep learning (DL) is one of the latest approaches to artificial intelligence. As an unsupervised DL method, a generative adversarial network (GAN) can be used to synthesize new data. PURPOSE To explore GAN applications in medicine, highlight their significance for clinical medical research, and provide a visual bibliometric analysis of GAN applications in the medical field using the scientometric software CiteSpace and statistical analysis methods. MATERIAL AND METHODS PubMed, MEDLINE, Web of Science, and Google Scholar were searched to identify studies of GAN in medical applications between 2017 and 2022. Eligibility criteria were full-text, peer-reviewed journal articles reporting the application of GANs in medicine, published in English between 1 January 2017 and 1 December 2022. This study was performed and reported according to the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) guidelines. CiteSpace was used to analyze the number of publications, authors, institutions, and keywords of articles related to GAN in medical applications. RESULTS The applications of GAN in medicine are not limited to medical image processing; they are also penetrating wider and more complex fields and may reach clinical medicine. CONCLUSION GAN has been widely applied in the medical field and will be used more deeply and broadly in clinical medicine, especially in privacy protection and medical diagnosis. However, clinical applications of GAN require consideration of ethical and legal issues, and GAN-based applications should be well validated by expert radiologists.
Collapse
Affiliation(s)
- Fan Zhang
- Radiology department, Huaihe Hospital of Henan University, Kaifeng, PR China
- Henan Key Laboratory of Big Data Analysis and Processing, Henan University, Kaifeng, PR China
| | - Luyao Wang
- School of Computer and Information Engineering, Henan University, Kaifeng, PR China
| | - Jiayin Zhao
- School of Software, Henan University, Kaifeng, PR China
| | - Xinhong Zhang
- School of Software, Henan University, Kaifeng, PR China
| |
Collapse
|
25
|
Ahmed TM, Kawamoto S, Hruban RH, Fishman EK, Soyer P, Chu LC. A primer on artificial intelligence in pancreatic imaging. Diagn Interv Imaging 2023; 104:435-447. [PMID: 36967355 DOI: 10.1016/j.diii.2023.03.002] [Citation(s) in RCA: 19] [Impact Index Per Article: 9.5] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/03/2023] [Accepted: 03/06/2023] [Indexed: 06/18/2023]
Abstract
Artificial Intelligence (AI) is set to transform medical imaging by leveraging the vast data contained in medical images. Deep learning and radiomics are the two main AI methods currently being applied within radiology. Deep learning uses a layered set of self-correcting algorithms to develop a mathematical model that best fits the data. Radiomics converts imaging data into mineable features such as signal intensity, shape, texture, and higher-order features. Both methods have the potential to improve disease detection, characterization, and prognostication. This article reviews the current status of artificial intelligence in pancreatic imaging and critically appraises the quality of existing evidence using the radiomics quality score.
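Editor's note: the radiomics arm described above amounts to computing hand-engineered, mineable features from a segmented region of interest. A minimal sketch of a few first-order features follows; the feature subset, bin count, and names are our illustrative choices, not a validated radiomics pipeline.

```python
import numpy as np

def first_order_features(image: np.ndarray, mask: np.ndarray) -> dict:
    """A small subset of first-order radiomics features over an ROI."""
    roi = image[mask.astype(bool)].astype(np.float64)
    if roi.size == 0:
        raise ValueError("ROI mask is empty")
    counts, _ = np.histogram(roi, bins=64)
    p = counts / counts.sum()
    p = p[p > 0]  # drop empty bins before taking logarithms
    std = roi.std()
    return {
        "mean": float(roi.mean()),
        "std": float(std),
        "skewness": float(((roi - roi.mean()) ** 3).mean() / (std ** 3 + 1e-12)),
        "entropy": float(-(p * np.log2(p)).sum()),  # histogram entropy
    }
```

Shape and texture (higher-order) features extend this idea to the ROI geometry and to gray-level co-occurrence statistics, which is where most published pancreatic radiomics signatures live.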
Collapse
Affiliation(s)
- Taha M Ahmed
- The Russell H. Morgan Department of Radiology and Radiological Science, Johns Hopkins Hospital, Johns Hopkins University School of Medicine, Baltimore, MD 21287, USA
| | - Satomi Kawamoto
- The Russell H. Morgan Department of Radiology and Radiological Science, Johns Hopkins Hospital, Johns Hopkins University School of Medicine, Baltimore, MD 21287, USA
| | - Ralph H Hruban
- Sol Goldman Pancreatic Research Center, Department of Pathology, Johns Hopkins Hospital, Johns Hopkins University School of Medicine, Baltimore, MD 21287, USA
| | - Elliot K Fishman
- The Russell H. Morgan Department of Radiology and Radiological Science, Johns Hopkins Hospital, Johns Hopkins University School of Medicine, Baltimore, MD 21287, USA
| | - Philippe Soyer
- Université Paris Cité, Faculté de Médecine, Department of Radiology, Hôpital Cochin-APHP, 75014 Paris, France
| | - Linda C Chu
- The Russell H. Morgan Department of Radiology and Radiological Science, Johns Hopkins Hospital, Johns Hopkins University School of Medicine, Baltimore, MD 21287, USA.
| |
Collapse
|
26
|
Ng CKC. Generative Adversarial Network (Generative Artificial Intelligence) in Pediatric Radiology: A Systematic Review. CHILDREN (BASEL, SWITZERLAND) 2023; 10:1372. [PMID: 37628371 PMCID: PMC10453402 DOI: 10.3390/children10081372] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Subscribe] [Scholar Register] [Received: 07/21/2023] [Revised: 08/07/2023] [Accepted: 08/09/2023] [Indexed: 08/27/2023]
Abstract
Generative artificial intelligence, especially with regard to the generative adversarial network (GAN), is an important research area in radiology as evidenced by a number of literature reviews on the role of GAN in radiology published in the last few years. However, no review article about GAN in pediatric radiology has been published yet. The purpose of this paper is to systematically review applications of GAN in pediatric radiology, their performances, and methods for their performance evaluation. Electronic databases were used for a literature search on 6 April 2023. Thirty-seven papers met the selection criteria and were included. This review reveals that the GAN can be applied to magnetic resonance imaging, X-ray, computed tomography, ultrasound and positron emission tomography for image translation, segmentation, reconstruction, quality assessment, synthesis and data augmentation, and disease diagnosis. About 80% of the included studies compared their GAN model performances with those of other approaches and indicated that their GAN models outperformed the others by 0.1-158.6%. However, these study findings should be used with caution because of a number of methodological weaknesses. For future GAN studies, more robust methods will be essential for addressing these issues. Otherwise, this would affect the clinical adoption of the GAN-based applications in pediatric radiology and the potential advantages of GAN could not be realized widely.
Collapse
Affiliation(s)
- Curtise K. C. Ng
- Curtin Medical School, Curtin University, GPO Box U1987, Perth, WA 6845, Australia; Tel.: +61-8-9266-7314; Fax: +61-8-9266-2377
- Curtin Health Innovation Research Institute (CHIRI), Faculty of Health Sciences, Curtin University, GPO Box U1987, Perth, WA 6845, Australia
| |
Collapse
|
27
|
Park SH. Use of Generative Artificial Intelligence, Including Large Language Models Such as ChatGPT, in Scientific Publications: Policies of KJR and Prominent Authorities. Korean J Radiol 2023; 24:715-718. [PMID: 37500572 PMCID: PMC10400373 DOI: 10.3348/kjr.2023.0643] [Citation(s) in RCA: 15] [Impact Index Per Article: 7.5] [Reference Citation Analysis] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/10/2023] [Accepted: 07/10/2023] [Indexed: 07/29/2023] Open
Affiliation(s)
- Seong Ho Park
- Department of Radiology and Research Institute of Radiology, Asan Medical Center, University of Ulsan College of Medicine, Seoul, Republic of Korea.
| |
Collapse
|
28
|
Branstetter BF. Ease of Acquisition: Deriving Critical Brain Perfusion Information from Conventional MRI Sequences. Radiology 2023; 308:e231631. [PMID: 37581497 DOI: 10.1148/radiol.231631] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 08/16/2023]
Affiliation(s)
- Barton F Branstetter
- From the Department of Radiology, University of Pittsburgh School of Medicine, 200 Lothrop St, Pittsburgh, PA 15213
| |
Collapse
|
29
|
Brock KK, Chen SR, Sheth RA, Siewerdsen JH. Imaging in Interventional Radiology: 2043 and Beyond. Radiology 2023; 308:e230146. [PMID: 37462500 PMCID: PMC10374939 DOI: 10.1148/radiol.230146] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/25/2023] [Revised: 04/27/2023] [Accepted: 04/28/2023] [Indexed: 07/21/2023]
Abstract
Since its inception in the early 20th century, interventional radiology (IR) has evolved tremendously and is now a distinct clinical discipline with its own training pathway. The arsenal of modalities at work in IR includes x-ray radiography and fluoroscopy, CT, MRI, US, and molecular and multimodality imaging within hybrid interventional environments. This article briefly reviews the major developments in imaging technology in IR over the past century, summarizes technologies now representative of the standard of care, and reflects on emerging advances in imaging technology that could shape the field in the century ahead. The role of emergent imaging technologies in enabling high-precision interventions is also briefly reviewed, including image-guided ablative therapies.
Collapse
Affiliation(s)
- Kristy K. Brock
- From the Departments of Imaging Physics (K.K.B., J.H.S.), Interventional Radiology (S.R.C., R.A.S.), Neurosurgery (J.H.S.), and Radiation Physics (J.H.S.), The University of Texas MD Anderson Cancer Center, 1400 Pressler St, FCT14.6050 Pickens Academic Tower, Houston, TX 77030-4000
| | - Stephen R. Chen
- From the Departments of Imaging Physics (K.K.B., J.H.S.), Interventional Radiology (S.R.C., R.A.S.), Neurosurgery (J.H.S.), and Radiation Physics (J.H.S.), The University of Texas MD Anderson Cancer Center, 1400 Pressler St, FCT14.6050 Pickens Academic Tower, Houston, TX 77030-4000
| | - Rahul A. Sheth
- From the Departments of Imaging Physics (K.K.B., J.H.S.), Interventional Radiology (S.R.C., R.A.S.), Neurosurgery (J.H.S.), and Radiation Physics (J.H.S.), The University of Texas MD Anderson Cancer Center, 1400 Pressler St, FCT14.6050 Pickens Academic Tower, Houston, TX 77030-4000
| | - Jeffrey H. Siewerdsen
- From the Departments of Imaging Physics (K.K.B., J.H.S.), Interventional Radiology (S.R.C., R.A.S.), Neurosurgery (J.H.S.), and Radiation Physics (J.H.S.), The University of Texas MD Anderson Cancer Center, 1400 Pressler St, FCT14.6050 Pickens Academic Tower, Houston, TX 77030-4000
| |
Collapse
|
30
|
Al Kuwaiti A, Nazer K, Al-Reedy A, Al-Shehri S, Al-Muhanna A, Subbarayalu AV, Al Muhanna D, Al-Muhanna FA. A Review of the Role of Artificial Intelligence in Healthcare. J Pers Med 2023; 13:951. [PMID: 37373940 PMCID: PMC10301994 DOI: 10.3390/jpm13060951] [Citation(s) in RCA: 130] [Impact Index Per Article: 65.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/17/2023] [Revised: 05/11/2023] [Accepted: 05/12/2023] [Indexed: 06/29/2023] Open
Abstract
Artificial intelligence (AI) applications have transformed healthcare. This study is based on a general literature review uncovering the role of AI in healthcare and focuses on the following key aspects: (i) medical imaging and diagnostics, (ii) virtual patient care, (iii) medical research and drug discovery, (iv) patient engagement and compliance, (v) rehabilitation, and (vi) other administrative applications. The impact of AI is observed in detecting clinical conditions in medical imaging and diagnostic services, controlling the coronavirus disease 2019 (COVID-19) outbreak through early diagnosis, providing virtual patient care using AI-powered tools, managing electronic health records, augmenting patient engagement and compliance with the treatment plan, reducing the administrative workload of healthcare professionals (HCPs), discovering new drugs and vaccines, spotting medical prescription errors, enabling extensive data storage and analysis, and supporting technology-assisted rehabilitation. Nevertheless, integrating AI into healthcare raises several technical, ethical, and social challenges, including privacy, safety, the right to decide and try, costs, information and consent, access, and efficacy. The governance of AI applications is crucial for patient safety and accountability, and for building HCPs' trust so that acceptance grows and meaningful health outcomes follow. Effective governance is a prerequisite for precisely addressing regulatory, ethical, and trust issues while advancing the acceptance and implementation of AI. Since COVID-19 hit the global health system, AI has driven a revolution in healthcare, and this momentum could be another step forward in meeting future healthcare needs.
Collapse
Affiliation(s)
- Ahmed Al Kuwaiti
- Department of Dental Education, College of Dentistry, Deanship of Quality and Academic Accreditation, Imam Abdulrahman Bin Faisal University, Dammam 31441, Saudi Arabia
| | - Khalid Nazer
- Department of Information and Technology, Imam Abdulrahman Bin Faisal University, Dammam 31441, Saudi Arabia
- Health Information Department, King Fahad hospital of the University, Al-Khobar 31952, Saudi Arabia
| | - Abdullah Al-Reedy
- Department of Information and Technology, Family and Community Medicine Department, Family and Community Medicine Centre, Imam Abdulrahman Bin Faisal University, Dammam 31441, Saudi Arabia
| | - Shaher Al-Shehri
- Faculty of Medicine, Family and Community Medicine Department, Family and Community Medicine Centre, Imam Abdulrahman Bin Faisal University, Dammam 31441, Saudi Arabia
| | - Afnan Al-Muhanna
- Breast Imaging Division, Department of Radiology, Imam Abdulrahman Bin Faisal University, Dammam 31441, Saudi Arabia
- Radiology Department, King Fahad hospital of the University, Al-Khobar 31952, Saudi Arabia
| | - Arun Vijay Subbarayalu
- Quality Studies and Research Unit, Vice Deanship of Quality, Deanship of Quality and Academic Accreditation, Imam Abdulrahman Bin Faisal University, Dammam 31441, Saudi Arabia
| | - Dhoha Al Muhanna
- Directorate of Quality and Patient Safety, Family and Community Medicine Center, Imam Abdulrahman Bin Faisal University, Dammam 31441, Saudi Arabia
| | - Fahad A. Al-Muhanna
- Nephrology Division, Department of Internal Medicine, Faculty of Medicine, Imam Abdulrahman Bin Faisal University, Dammam 31441, Saudi Arabia
- Medicine Department, King Fahad hospital of the University, Al-Khobar 31952, Saudi Arabia
| |
Collapse
|
31
|
Yoo SJ, Kim H, Witanto JN, Inui S, Yoon JH, Lee KD, Choi YW, Goo JM, Yoon SH. Generative adversarial network for automatic quantification of Coronavirus disease 2019 pneumonia on chest radiographs. Eur J Radiol 2023; 164:110858. [PMID: 37209462 DOI: 10.1016/j.ejrad.2023.110858] [Citation(s) in RCA: 2] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/29/2022] [Revised: 04/10/2023] [Accepted: 04/29/2023] [Indexed: 05/22/2023]
Abstract
PURPOSE To develop a generative adversarial network (GAN) to quantify COVID-19 pneumonia on chest radiographs automatically. MATERIALS AND METHODS This retrospective study included 50,000 consecutive non-COVID-19 chest CT scans in 2015-2017 for training. Anteroposterior virtual chest, lung, and pneumonia radiographs were generated from whole, segmented lung, and pneumonia pixels from each CT scan. Two GANs were sequentially trained to generate lung images from radiographs and to generate pneumonia images from lung images. GAN-driven pneumonia extent (pneumonia area/lung area) was expressed from 0% to 100%. We examined the correlation of GAN-driven pneumonia extent with the semi-quantitative Brixia X-ray severity score (one dataset, n = 4707) and quantitative CT-driven pneumonia extent (four datasets, n = 54-375), along with analyzing the measurement difference between the GAN and CT extents. Three datasets (n = 243-1481), where unfavorable outcomes (respiratory failure, intensive care unit admission, and death) occurred in 10%, 38%, and 78%, respectively, were used to examine the predictive power of GAN-driven pneumonia extent. RESULTS GAN-driven radiographic pneumonia extent correlated with the severity score (0.611) and the CT-driven extent (0.640). The 95% limits of agreement between the GAN- and CT-driven extents were -27.1% to 17.4%. GAN-driven pneumonia extent provided odds ratios of 1.05-1.18 per percent for unfavorable outcomes in the three datasets, with areas under the receiver operating characteristic curve (AUCs) of 0.614-0.842. When combined with demographic information only and with both demographic and laboratory information, the prediction models yielded AUCs of 0.643-0.841 and 0.688-0.877, respectively. CONCLUSION The generative adversarial network automatically quantified COVID-19 pneumonia on chest radiographs and identified patients with unfavorable outcomes.
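Editor's note: the severity index above is simply a mask ratio. A sketch of that computation on segmented radiograph pixels follows; the boolean-array conventions are our assumption (the study derived the masks themselves with the sequentially trained GANs).

```python
import numpy as np

def pneumonia_extent(pneumonia_mask: np.ndarray, lung_mask: np.ndarray) -> float:
    """Pneumonia extent = pneumonia area / lung area, expressed 0-100%."""
    lung = lung_mask.astype(bool)
    pneumonia = pneumonia_mask.astype(bool) & lung  # count only pixels inside the lungs
    if lung.sum() == 0:
        raise ValueError("lung mask is empty")
    return 100.0 * float(pneumonia.sum()) / float(lung.sum())
```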
Collapse
Affiliation(s)
- Seung-Jin Yoo
- Department of Radiology, Hanyang University Medical Center, Hanyang University College of Medicine, Seoul, Republic of Korea
| | - Hyungjin Kim
- Department of Radiology, Seoul National University Hospital, Seoul National College of Medicine, Seoul, Korea
| | | | - Shohei Inui
- Department of Radiology, Graduate School of Medicine, The University of Tokyo, Tokyo, Japan; Department of Radiology, Japan Self-Defense Forces Central Hospital, Tokyo, Japan
| | - Jeong-Hwa Yoon
- Institute of Health Policy and Management, Medical Research Center, Seoul National University, Seoul, South Korea
| | - Ki-Deok Lee
- Division of Infectious diseases, Department of Internal Medicine, Myongji Hospital, Goyang, Korea
| | - Yo Won Choi
- Department of Radiology, Hanyang University Medical Center, Hanyang University College of Medicine, Seoul, Republic of Korea
| | - Jin Mo Goo
- Department of Radiology, Seoul National University Hospital, Seoul National College of Medicine, Seoul, Korea; Institute of Radiation Medicine, Seoul National University Medical Research Center, Seoul, Republic of Korea
| | - Soon Ho Yoon
- Department of Radiology, Seoul National University Hospital, Seoul National College of Medicine, Seoul, Korea; MEDICALIP Co. Ltd., Seoul, Korea
| |
Collapse
|
32
|
Skandarani Y, Jodoin PM, Lalande A. GANs for Medical Image Synthesis: An Empirical Study. J Imaging 2023; 9:69. [PMID: 36976120 PMCID: PMC10055771 DOI: 10.3390/jimaging9030069] [Citation(s) in RCA: 29] [Impact Index Per Article: 14.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/23/2022] [Revised: 03/11/2023] [Accepted: 03/14/2023] [Indexed: 03/19/2023] Open
Abstract
Generative adversarial networks (GANs) have become increasingly powerful, generating mind-blowing photorealistic images that mimic the content of the datasets they were trained to replicate. One recurrent theme in medical imaging is whether GANs can be as effective at generating workable medical data as they are at generating realistic RGB images. In this paper, we perform a multi-GAN and multi-application study to gauge the benefits of GANs in medical imaging. We tested various GAN architectures, from the basic DCGAN to more sophisticated style-based GANs, on three medical imaging modalities and organs, namely cardiac cine-MRI, liver CT, and RGB retina images. GANs were trained on well-known and widely utilized datasets, from which their FID scores were computed to measure the visual fidelity of their generated images. We further tested their usefulness by measuring the segmentation accuracy of a U-Net trained on these generated images and on the original data. The results reveal that GANs are far from equal: some are ill-suited for medical imaging applications, while others perform much better. The top-performing GANs are capable of generating realistic-looking medical images by FID standards that can fool trained experts in a visual Turing test and comply with some metrics. However, the segmentation results suggest that no GAN is capable of reproducing the full richness of medical datasets.
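Editor's note: FID, the headline metric above, is the Fréchet distance between Gaussians fitted to deep features (typically Inception-v3 pooling activations) of the real and generated sets. A sketch of the closed form follows, assuming the feature extraction and the mean/covariance estimation happen upstream.

```python
import numpy as np
from scipy import linalg

def frechet_distance(mu1, sigma1, mu2, sigma2):
    """FID between two feature Gaussians:
    ||mu1 - mu2||^2 + Tr(S1 + S2 - 2 (S1 S2)^(1/2))."""
    diff = mu1 - mu2
    covmean, _ = linalg.sqrtm(sigma1 @ sigma2, disp=False)
    if np.iscomplexobj(covmean):
        covmean = covmean.real  # discard numerical-noise imaginary parts
    return float(diff @ diff + np.trace(sigma1 + sigma2 - 2.0 * covmean))
```

Here mu1/sigma1 and mu2/sigma2 are the sample mean vectors and covariance matrices of the feature embeddings of the real and generated image sets, respectively; lower is better.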
Collapse
Affiliation(s)
- Youssef Skandarani
- ImViA Laboratory, University of Bourgogne Franche-Comte, 21000 Dijon, France
- CASIS Inc., 21800 Quetigny, France
| | - Pierre-Marc Jodoin
- Department of Computer Science, University of Sherbrooke, Sherbrooke, QC J1K 2R1, Canada
| | - Alain Lalande
- ImViA Laboratory, University of Bourgogne Franche-Comte, 21000 Dijon, France
- Department of Medical Imaging, University Hospital of Dijon, 21079 Dijon, France
| |
Collapse
|
33
|
A practical guide to the development and deployment of deep learning models for the orthopedic surgeon: part II. Knee Surg Sports Traumatol Arthrosc 2023; 31:1635-1643. [PMID: 36773057 DOI: 10.1007/s00167-023-07338-7] [Citation(s) in RCA: 11] [Impact Index Per Article: 5.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 12/20/2022] [Accepted: 01/30/2023] [Indexed: 02/12/2023]
Abstract
Deep learning has the potential to be one of the most transformative technologies to impact orthopedic surgery. Substantial innovation in this area has occurred over the past 5 years, but clinically meaningful advancements remain limited by a disconnect between clinical and technical experts. That is, it is likely that few orthopedic surgeons possess both the clinical knowledge necessary to identify orthopedic problems, and the technical knowledge needed to implement deep learning-based solutions. To maximize the utilization of rapidly advancing technologies derived from deep learning models, orthopedic surgeons should understand the steps needed to design, organize, implement, and evaluate a deep learning project and its workflow. Equipping surgeons with this knowledge is the objective of this three-part editorial review. Part I described the processes involved in defining the problem, team building, data acquisition, curation, labeling, and establishing the ground truth. Building on that, this review (Part II) provides guidance on pre-processing and augmenting the data, making use of open-source libraries/toolkits, and selecting the required hardware to implement the pipeline. Special considerations regarding model training and evaluation unique to deep learning models relative to "shallow" machine learning models are also reviewed. Finally, guidance pertaining to the clinical deployment of deep learning models in the real world is provided. As in Part I, the focus is on applications of deep learning for computer vision and imaging.
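Editor's note: as a concrete instance of the pre-processing and augmentation step this guide covers, here is a torchvision-style training pipeline for grayscale radiographs. Every transform and parameter is an illustrative assumption of ours rather than a recommendation from the editorial, and each choice should be sanity-checked against the clinical task.

```python
import torchvision.transforms as T

# Illustrative training-time augmentation for single-channel radiographs.
train_transforms = T.Compose([
    T.RandomRotation(degrees=10),               # small rotations keep anatomy plausible
    T.RandomResizedCrop(224, scale=(0.9, 1.0)), # mild scale jitter
    T.RandomHorizontalFlip(p=0.5),              # beware: flips laterality markers
    T.ToTensor(),
    T.Normalize(mean=[0.5], std=[0.5]),         # single-channel normalization
])
```

The horizontal-flip caveat is the kind of domain check the guide emphasizes: an augmentation that is harmless for natural images can silently corrupt clinically meaningful structure.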
Collapse
|
34
|
Tejani AS, Elhalawani H, Moy L, Kohli M, Kahn CE. Artificial Intelligence and Radiology Education. Radiol Artif Intell 2023; 5:e220084. [PMID: 36721409 PMCID: PMC9885376 DOI: 10.1148/ryai.220084] [Citation(s) in RCA: 8] [Impact Index Per Article: 4.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/27/2022] [Revised: 09/18/2022] [Accepted: 11/02/2022] [Indexed: 06/18/2023]
Abstract
Implementation of artificial intelligence (AI) applications into clinical practice requires AI-savvy radiologists to ensure the safe, ethical, and effective use of these systems for patient care. Increasing demand for AI education reflects recognition of the translation of AI applications from research to clinical practice, with positive trainee attitudes regarding the influence of AI on radiology. However, barriers to AI education, such as limited access to resources, predispose to insufficient preparation for the effective use of AI in practice. In response, national organizations have sponsored formal and self-directed learning courses to provide introductory content on imaging informatics and AI. Foundational courses, such as the National Imaging Informatics Course - Radiology and the Radiological Society of North America Imaging AI Certificate, lay a framework for trainees to explore the creation, deployment, and critical evaluation of AI applications. This report includes additional resources for formal programming courses, video series from leading organizations, and blogs from AI and informatics communities. Furthermore, the scope of "AI and radiology education" includes AI-augmented radiology education, with emphasis on the potential for "precision education" that creates personalized experiences for trainees by accounting for varying learning styles and inconsistent, possibly deficient, clinical case volume. © RSNA, 2022 Keywords: Use of AI in Education, Impact of AI on Education, Artificial Intelligence, Medical Education, Imaging Informatics, Natural Language Processing, Precision Education.
Collapse
|
35
|
Avberšek LK, Repovš G. Deep learning in neuroimaging data analysis: Applications, challenges, and solutions. FRONTIERS IN NEUROIMAGING 2022; 1:981642. [PMID: 37555142 PMCID: PMC10406264 DOI: 10.3389/fnimg.2022.981642] [Citation(s) in RCA: 9] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Figures] [Subscribe] [Scholar Register] [Received: 06/29/2022] [Accepted: 10/10/2022] [Indexed: 08/10/2023]
Abstract
Methods for the analysis of neuroimaging data have advanced significantly since the beginning of neuroscience as a scientific discipline. Today, sophisticated statistical procedures allow us to examine complex multivariate patterns; however, most of them are still constrained by the assumption that neural processes are inherently linear. Here, we discuss a group of machine learning methods, called deep learning, which have drawn much attention in and outside the field of neuroscience in recent years and hold the potential to surpass these limitations. First, we describe and explain the essential concepts in deep learning: the structure and the computational operations that allow deep models to learn. After that, we move to the most common applications of deep learning in neuroimaging data analysis: prediction of outcome, interpretation of internal representations, generation of synthetic data, and segmentation. In the next section, we present issues that deep learning poses, concerning the multidimensionality and multimodality of data, overfitting, and computational cost, and we propose possible solutions. Lastly, we discuss the current reach of DL usage in all the common applications in neuroimaging data analysis, where we consider the promise of multimodality, the capability of processing raw data, and advanced visualization strategies. We identify research gaps, such as the focus on a limited number of criterion variables and the lack of a well-defined strategy for choosing architecture and hyperparameters. Furthermore, we discuss the possibility of conducting research with constructs that have been ignored so far and/or moving toward frameworks such as RDoC, as well as the potential of transfer learning and synthetic data generation.
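Editor's note: the "structure and computational operations" the review walks through reduce, in code, to stacked convolutional blocks trained by backpropagation. A toy classifier sketch follows; the architecture, sizes, and the two-class patient-vs-control task are placeholders of ours, not anything prescribed by the review.

```python
import torch
import torch.nn as nn

# Toy two-block CNN over single-channel images (e.g., one brain slice).
model = nn.Sequential(
    nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
    nn.MaxPool2d(2),
    nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(),
    nn.Linear(32, 2),  # placeholder: patient vs. control
)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

def train_step(x: torch.Tensor, y: torch.Tensor) -> float:
    """One optimization step: forward pass, loss, backpropagation, update."""
    optimizer.zero_grad()
    loss = loss_fn(model(x), y)
    loss.backward()
    optimizer.step()
    return loss.item()
```

The nonlinear stacking of convolutions is precisely what lets such models escape the linearity assumption the review criticizes in classical multivariate analyses.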
Collapse
Affiliation(s)
- Lev Kiar Avberšek
- Department of Psychology, Faculty of Arts, University of Ljubljana, Ljubljana, Slovenia
| | | |
Collapse
|
36
|
Ng CKC. Artificial Intelligence for Radiation Dose Optimization in Pediatric Radiology: A Systematic Review. CHILDREN 2022; 9:children9071044. [PMID: 35884028 PMCID: PMC9320231 DOI: 10.3390/children9071044] [Citation(s) in RCA: 18] [Impact Index Per Article: 6.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 06/29/2022] [Revised: 07/11/2022] [Accepted: 07/11/2022] [Indexed: 01/19/2023]
Abstract
Radiation dose optimization is particularly important in pediatric radiology, as children are more susceptible to the potentially harmful effects of ionizing radiation. However, only one narrative review about artificial intelligence (AI) for dose optimization in pediatric computed tomography (CT) has been published to date. The purpose of this systematic review is to answer the question "What are the AI techniques and architectures introduced in pediatric radiology for dose optimization, their specific application areas, and performances?" A literature search using electronic databases was conducted on 3 June 2022. Sixteen articles that met the selection criteria were included. The included studies showed that the deep convolutional neural network (CNN) was the most common AI technique and architecture used for dose optimization in pediatric radiology. All but three included studies evaluated AI performance in dose optimization of abdomen, chest, head, neck, and pelvis CT; CT angiography; and dual-energy CT through deep learning image reconstruction. Most studies demonstrated that AI could reduce radiation dose by 36–70% without losing diagnostic information. Despite the dominance of commercially available AI models based on deep CNNs with promising outcomes, homegrown models could provide comparable performances. Future exploration of the value of AI for dose optimization in pediatric radiology is necessary, given the small sample sizes and narrow scopes (only three modalities, CT, positron emission tomography/magnetic resonance imaging and mobile radiography, and not all examination types covered) of existing studies.
Collapse
Affiliation(s)
- Curtise K. C. Ng
- Curtin Medical School, Curtin University, GPO Box U1987, Perth, WA 6845, Australia; Tel.: +61-8-9266-7314; Fax: +61-8-9266-2377
- Curtin Health Innovation Research Institute (CHIRI), Faculty of Health Sciences, Curtin University, GPO Box U1987, Perth, WA 6845, Australia
| |
Collapse
|
37
|
Artificial Intelligence (Enhanced Super-Resolution Generative Adversarial Network) for Calcium Deblooming in Coronary Computed Tomography Angiography: A Feasibility Study. Diagnostics (Basel) 2022; 12:diagnostics12040991. [PMID: 35454039 PMCID: PMC9027004 DOI: 10.3390/diagnostics12040991] [Citation(s) in RCA: 12] [Impact Index Per Article: 4.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/25/2022] [Revised: 04/08/2022] [Accepted: 04/13/2022] [Indexed: 12/22/2022] Open
Abstract
Background: The presence of heavy calcification in the coronary artery always presents a challenge for coronary computed tomography angiography (CCTA) in assessing the degree of coronary stenosis due to blooming artifacts associated with calcified plaques. Our study purpose was to use an advanced artificial intelligence (enhanced super-resolution generative adversarial network [ESRGAN]) model to suppress the blooming artifact in CCTA and determine its effect on improving the diagnostic performance of CCTA in calcified plaques. Methods: A total of 184 calcified plaques from 50 patients who underwent both CCTA and invasive coronary angiography (ICA) were analysed with measurements of coronary lumen on the original CCTA, and three sets of ESRGAN-processed images including ESRGAN-high-resolution (ESRGAN-HR), ESRGAN-average and ESRGAN-median with ICA as the reference method for determining sensitivity, specificity, positive predictive value (PPV) and negative predictive value (NPV). Results: ESRGAN-processed images improved the specificity and PPV at all three coronary arteries (LAD-left anterior descending, LCx-left circumflex and RCA-right coronary artery) compared to original CCTA with ESRGAN-median resulting in the highest values being 41.0% (95% confidence interval [CI]: 30%, 52.7%) and 26.9% (95% CI: 22.9%, 31.4%) at LAD; 41.7% (95% CI: 22.1%, 63.4%) and 36.4% (95% CI: 28.9%, 44.5%) at LCx; 55% (95% CI: 38.5%, 70.7%) and 47.1% (95% CI: 38.7%, 55.6%) at RCA; while corresponding values for original CCTA were 21.8% (95% CI: 13.2%, 32.6%) and 22.8% (95% CI: 20.8%, 24.9%); 12.5% (95% CI: 2.6%, 32.4%) and 27.6% (95% CI: 24.7%, 30.7%); 17.5% (95% CI: 7.3%, 32.8%) and 32.7% (95% CI: 29.6%, 35.9%) at LAD, LCx and RCA, respectively. There was no significant effect on sensitivity and NPV between the original CCTA and ESRGAN-processed images at all three coronary arteries. The area under the receiver operating characteristic curve was the highest with ESRGAN-median images at the RCA level with values being 0.76 (95% CI: 0.64, 0.89), 0.81 (95% CI: 0.69, 0.93), 0.82 (95% CI: 0.71, 0.94) and 0.86 (95% CI: 0.76, 0.96) corresponding to original CCTA and ESRGAN-HR, average and median images, respectively. Conclusions: This feasibility study shows the potential value of ESRGAN-processed images in improving the diagnostic value of CCTA for patients with calcified plaques.
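Editor's note: sensitivity, specificity, PPV, and NPV, reported above against the ICA reference standard, all derive from a single 2x2 contingency table. A sketch of the arithmetic follows; the counts in the usage comment are illustrative inputs, not data from the study.

```python
def diagnostic_metrics(tp: int, fp: int, tn: int, fn: int) -> dict:
    """Standard accuracy metrics from a 2x2 table against a reference standard."""
    return {
        "sensitivity": tp / (tp + fn),  # true positive rate
        "specificity": tn / (tn + fp),  # true negative rate
        "ppv": tp / (tp + fp),          # positive predictive value
        "npv": tn / (tn + fn),          # negative predictive value
    }

# Example with hypothetical counts (not from the study):
# diagnostic_metrics(tp=40, fp=60, tn=50, fn=34)
```

Because blooming artifacts inflate apparent stenosis, suppressing them mainly converts false positives into true negatives, which is why specificity and PPV improve while sensitivity is largely unchanged.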
Collapse
|
38
|
Zhu G, Chen H, Jiang B, Chen F, Xie Y, Wintermark M. Application of Deep Learning to Ischemic and Hemorrhagic Stroke Computed Tomography and Magnetic Resonance Imaging. Semin Ultrasound CT MR 2022; 43:147-152. [PMID: 35339255 DOI: 10.1053/j.sult.2022.02.004] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/11/2022]
Abstract
Deep learning (DL) algorithms hold great potential in the field of stroke imaging. They have been applied not only on the "downstream" side, such as lesion detection, treatment decision making, and outcome prediction, but also on the "upstream" side for the generation and enhancement of stroke imaging. This paper aims to provide a comprehensive overview of the common applications of DL in stroke imaging. In the future, more standardized imaging datasets and more extensive studies are needed to establish and validate the role of DL in stroke imaging.
Collapse
Affiliation(s)
- Guangming Zhu
- Department of Radiology, Neuroradiology Section, Stanford University School of Medicine, Stanford, CA
| | - Hui Chen
- Department of Radiology, Neuroradiology Section, Stanford University School of Medicine, Stanford, CA
| | - Bin Jiang
- Department of Radiology, Neuroradiology Section, Stanford University School of Medicine, Stanford, CA
| | - Fei Chen
- Department of Neurology, Xuanwu Hospital, Capital Medical University, Beijing, China
| | - Yuan Xie
- Subtle Medical Inc, Menlo Park, CA
| | - Max Wintermark
- Department of Radiology, Neuroradiology Section, Stanford University School of Medicine, Stanford, CA.
| |
Collapse
|
39
|
San José Estépar R. Artificial intelligence in functional imaging of the lung. Br J Radiol 2022; 95:20210527. [PMID: 34890215 PMCID: PMC9153712 DOI: 10.1259/bjr.20210527] [Citation(s) in RCA: 7] [Impact Index Per Article: 2.3] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/28/2021] [Revised: 07/11/2021] [Accepted: 07/28/2021] [Indexed: 12/16/2022] Open
Abstract
Artificial intelligence (AI) is transforming the way we perform advanced imaging. From high-resolution image reconstruction to predicting functional response from clinically acquired data, AI is promising to revolutionize clinical evaluation of lung performance, pushing the boundary in pulmonary functional imaging for patients suffering from respiratory conditions. In this review, we overview the current developments and expound on some of the encouraging new frontiers. We focus on the recent advances in machine learning and deep learning that enable reconstructing images, quantitating, and predicting functional responses of the lung. Finally, we shed light on the potential opportunities and challenges ahead in adopting AI for functional lung imaging in clinical settings.
Collapse
Affiliation(s)
- Raúl San José Estépar
- Applied Chest Imaging Laboratory, Department of Radiology, Brigham and Women's Hospital, Harvard Medical School, Boston, United States
| |
Collapse
|
40
|
Generative Adversarial Networks in Brain Imaging: A Narrative Review. J Imaging 2022; 8:jimaging8040083. [PMID: 35448210 PMCID: PMC9028488 DOI: 10.3390/jimaging8040083] [Citation(s) in RCA: 4] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/21/2021] [Revised: 03/08/2022] [Accepted: 03/15/2022] [Indexed: 02/04/2023] Open
Abstract
Artificial intelligence (AI) is expected to have a major effect on radiology, as it has demonstrated remarkable progress in many clinical tasks, mostly regarding the detection, segmentation, classification, monitoring, and prediction of diseases. Generative Adversarial Networks (GANs) have been proposed as one of the most exciting applications of deep learning in radiology. GANs are a new approach to deep learning that leverages adversarial learning to tackle a wide array of computer vision challenges. Brain radiology was one of the first fields where GANs found their application. In neuroradiology, indeed, GANs open unexplored scenarios, allowing new processes such as image-to-image and cross-modality synthesis, image reconstruction, image segmentation, image synthesis, data augmentation, disease progression models, and brain decoding. In this narrative review, we provide an introduction to GANs in brain imaging, discussing the clinical potential of GANs, future clinical applications, as well as pitfalls that radiologists should be aware of.
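Editor's note: to ground the adversarial-learning idea this review introduces, below is a minimal two-player training loop in PyTorch. The tiny fully connected networks over flattened 28x28 images, the optimizer settings, and the noise dimension are toy assumptions purely for illustration.

```python
import torch
import torch.nn as nn

# Toy generator/discriminator over flattened 28x28 images scaled to [-1, 1].
G = nn.Sequential(nn.Linear(64, 256), nn.ReLU(), nn.Linear(256, 28 * 28), nn.Tanh())
D = nn.Sequential(nn.Linear(28 * 28, 256), nn.LeakyReLU(0.2), nn.Linear(256, 1))
opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
bce = nn.BCEWithLogitsLoss()

def gan_step(real: torch.Tensor):
    b = real.size(0)
    fake = G(torch.randn(b, 64))
    # Discriminator step: score real images toward 1, generated ones toward 0.
    opt_d.zero_grad()
    d_loss = bce(D(real), torch.ones(b, 1)) + bce(D(fake.detach()), torch.zeros(b, 1))
    d_loss.backward()
    opt_d.step()
    # Generator step: update G so that D scores its outputs as real.
    opt_g.zero_grad()
    g_loss = bce(D(fake), torch.ones(b, 1))
    g_loss.backward()
    opt_g.step()
    return d_loss.item(), g_loss.item()
```

Every GAN application the review lists, from cross-modality synthesis to data augmentation, is a variation on this adversarial game, usually with convolutional networks and task-specific losses added.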
Collapse
|
41
|
You A, Kim JK, Ryu IH, Yoo TK. Application of generative adversarial networks (GAN) for ophthalmology image domains: a survey. EYE AND VISION (LONDON, ENGLAND) 2022; 9:6. [PMID: 35109930 PMCID: PMC8808986 DOI: 10.1186/s40662-022-00277-3] [Citation(s) in RCA: 57] [Impact Index Per Article: 19.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 07/15/2021] [Accepted: 01/11/2022] [Indexed: 12/12/2022]
Abstract
BACKGROUND Recent advances in deep learning techniques have led to improved diagnostic abilities in ophthalmology. A generative adversarial network (GAN), which consists of two competing types of deep neural networks, a generator and a discriminator, has demonstrated remarkable performance in image synthesis and image-to-image translation. The adoption of GAN for medical imaging is increasing for image generation and translation, but it is not familiar to researchers in the field of ophthalmology. In this work, we present a literature review on the application of GAN in ophthalmology image domains to discuss important contributions and to identify potential future research directions. METHODS We surveyed studies using GAN published before June 2021 and introduce various applications of GAN in ophthalmology image domains. The search identified 48 peer-reviewed papers for the final review. The type of GAN used, the task, the imaging domain, and the outcome were collected to verify the usefulness of each GAN. RESULTS In ophthalmology image domains, GAN can perform segmentation, data augmentation, denoising, domain transfer, super-resolution, post-intervention prediction, and feature extraction. GAN techniques have established an extension of datasets and modalities in ophthalmology. GAN has several limitations, such as mode collapse, spatial deformities, unintended changes, and the generation of high-frequency noise and checkerboard artifacts. CONCLUSIONS The use of GAN has benefited various tasks in ophthalmology image domains. Based on our observations, the adoption of GAN in ophthalmology is still at a very early stage of clinical validation compared with deep learning classification techniques, because several problems need to be overcome for practical use. However, proper selection of the GAN technique and statistical modeling of ocular imaging will greatly improve the performance of each image analysis. Finally, this survey should enable researchers to access the appropriate GAN technique to maximize the potential of ophthalmology datasets for deep learning research.
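Editor's note: one failure mode listed above, checkerboard artifacts, is commonly attributed to strided transposed convolutions whose kernels overlap unevenly; a frequently used remedy is to upsample first and then convolve. Both blocks below double spatial resolution; the channel counts and kernel sizes are arbitrary illustrations, not settings from the surveyed papers.

```python
import torch.nn as nn

# Transposed convolution: prone to checkerboard artifacts when the kernel
# size is not divisible by the stride.
up_transposed = nn.ConvTranspose2d(64, 32, kernel_size=4, stride=2, padding=1)

# Resize-then-convolve alternative: uniform kernel overlap, fewer artifacts.
up_resize_conv = nn.Sequential(
    nn.Upsample(scale_factor=2, mode="nearest"),
    nn.Conv2d(64, 32, kernel_size=3, padding=1),
)
```

Swapping the first block for the second in a GAN generator is a cheap, commonly reported way to suppress the high-frequency grid patterns the survey warns about.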
Collapse
Affiliation(s)
- Aram You
- School of Architecture, Kumoh National Institute of Technology, Gumi, Gyeongbuk, South Korea
| | - Jin Kuk Kim
- B&VIIT Eye Center, Seoul, South Korea
- VISUWORKS, Seoul, South Korea
| | - Ik Hee Ryu
- B&VIIT Eye Center, Seoul, South Korea
- VISUWORKS, Seoul, South Korea
| | - Tae Keun Yoo
- B&VIIT Eye Center, Seoul, South Korea.
- Department of Ophthalmology, Aerospace Medical Center, Republic of Korea Air Force, 635 Danjae-ro, Namil-myeon, Cheongwon-gun, Cheongju, Chungcheongbuk-do, 363-849, South Korea.
| |
Collapse
|