1. Kawata N, Iwao Y, Matsuura Y, Higashide T, Okamoto T, Sekiguchi Y, Nagayoshi M, Takiguchi Y, Suzuki T, Haneishi H. Generation of short-term follow-up chest CT images using a latent diffusion model in COVID-19. Jpn J Radiol 2025; 43:622-633. PMID: 39585556; PMCID: PMC11953082; DOI: 10.1007/s11604-024-01699-w.
Abstract
PURPOSE Despite a global decrease in the number of COVID-19 patients, early prediction of the clinical course for optimal patient care remains challenging. Recently, the usefulness of image generation for medical images has been investigated. This study aimed to generate short-term follow-up chest CT images using a latent diffusion model in patients with COVID-19. MATERIALS AND METHODS We retrospectively enrolled 505 patients with COVID-19 for whom clinical parameters upon admission (patient background, clinical symptoms, and blood test results) were available and chest CT imaging was performed. Of the 505 subject datasets, 403 were allocated for training and the remaining 102 were reserved for evaluation. The images were encoded with a variational autoencoder (VAE), yielding latent vectors. Initial clinical parameters and radiomic features were encoded as tabular data. The initial and follow-up latent vectors and the encoded initial tabular data were used to train the diffusion model. The evaluation data were then used to generate prognostic images, and the similarity of the prognostic images (generated images) to the follow-up images (real images) was evaluated by zero-mean normalized cross-correlation (ZNCC), peak signal-to-noise ratio (PSNR), and structural similarity (SSIM). Visual assessment was also performed using a numerical rating scale. RESULTS Prognostic chest CT images were generated using the diffusion model. Image similarity showed reasonable values of 0.973 ± 0.028 for ZNCC, 24.48 ± 3.46 for PSNR, and 0.844 ± 0.075 for SSIM. Visual evaluation of the images by two pulmonologists and one radiologist yielded a reasonable mean score. CONCLUSIONS The similarity and validity of predictive images generated with a diffusion model for the course of COVID-19-associated pneumonia were reasonable. The generation of prognostic images may have utility for early prediction of the clinical course in COVID-19-associated pneumonia and other respiratory diseases.
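The three similarity metrics reported above are standard image-comparison measures. A minimal sketch of how they can be computed between a generated slice and its real follow-up counterpart is shown below; this is illustrative code using scikit-image, not the authors' implementation, and assumes both slices are 2D arrays on a common intensity scale.

```python
# Minimal sketch (not the authors' code) of the ZNCC, PSNR, and SSIM metrics
# reported above, computed between a generated and a real follow-up CT slice.
import numpy as np
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

def zncc(a: np.ndarray, b: np.ndarray) -> float:
    """Zero-mean normalized cross-correlation between two images."""
    a = a - a.mean()
    b = b - b.mean()
    return float((a * b).sum() / (np.linalg.norm(a) * np.linalg.norm(b)))

def similarity_report(generated: np.ndarray, real: np.ndarray, data_range: float) -> dict:
    """Compare a generated slice against its real counterpart."""
    return {
        "ZNCC": zncc(generated, real),
        "PSNR": peak_signal_noise_ratio(real, generated, data_range=data_range),
        "SSIM": structural_similarity(real, generated, data_range=data_range),
    }
```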
Affiliation(s)
- Naoko Kawata
- Department of Respirology, Graduate School of Medicine, Chiba University, 1-8-1, Inohana, Chuo-Ku, Chiba-Shi, Chiba, 260-8677, Japan
- Graduate School of Science and Engineering, Chiba University, Chiba, 263-8522, Japan
- Yuma Iwao
- Center for Frontier Medical Engineering, Chiba University, 1-33, Yayoi-Cho, Inage-Ku, Chiba-Shi, Chiba, 263-8522, Japan
- Institute for Quantum Medical Science, National Institutes for Quantum Science and Technology, 4-9-1, Anagawa, Inage-Ku, Chiba-Shi, Chiba, 263-8555, Japan
- Yukiko Matsuura
- Department of Respiratory Medicine, Chiba Aoba Municipal Hospital, 1273-2 Aoba-Cho, Chuo-Ku, Chiba-Shi, Chiba, 260-0852, Japan
- Takashi Higashide
- Department of Radiology, Chiba University Hospital, 1-8-1, Inohana, Chuo-Ku, Chiba-Shi, Chiba, 260-8677, Japan
- Department of Radiology, Japanese Red Cross Narita Hospital, 90-1, Iida-Cho, Narita-Shi, Chiba, 286-8523, Japan
- Takayuki Okamoto
- Center for Frontier Medical Engineering, Chiba University, 1-33, Yayoi-Cho, Inage-Ku, Chiba-Shi, Chiba, 263-8522, Japan
- Yuki Sekiguchi
- Graduate School of Science and Engineering, Chiba University, Chiba, 263-8522, Japan
- Masaru Nagayoshi
- Department of Respiratory Medicine, Chiba Aoba Municipal Hospital, 1273-2 Aoba-Cho, Chuo-Ku, Chiba-Shi, Chiba, 260-0852, Japan
- Yasuo Takiguchi
- Department of Respiratory Medicine, Chiba Aoba Municipal Hospital, 1273-2 Aoba-Cho, Chuo-Ku, Chiba-Shi, Chiba, 260-0852, Japan
- Takuji Suzuki
- Department of Respirology, Graduate School of Medicine, Chiba University, 1-8-1, Inohana, Chuo-Ku, Chiba-Shi, Chiba, 260-8677, Japan
- Hideaki Haneishi
- Center for Frontier Medical Engineering, Chiba University, 1-33, Yayoi-Cho, Inage-Ku, Chiba-Shi, Chiba, 263-8522, Japan
2. Liu Y, Du D, Liu Y, Tu S, Yang W, Han X, Suo S, Liu Q. Subtraction-free artifact-aware digital subtraction angiography image generation for head and neck vessels from motion data. Comput Med Imaging Graph 2025; 121:102512. PMID: 39983664; DOI: 10.1016/j.compmedimag.2025.102512.
Abstract
Digital subtraction angiography (DSA) is an essential diagnostic tool for analyzing and diagnosing vascular diseases. However, subtraction-based DSA imaging is prone to artifacts from misalignment between mask and contrast images caused by inevitable patient movement, hindering accurate vessel identification and surgical treatment. While various registration-based algorithms aim to correct these misalignments, they often fall short in efficiency and effectiveness. Recent deep learning (DL)-based studies instead generate synthetic DSA images directly from contrast images, free of subtraction. However, these methods typically require clean, motion-free training data, which are challenging to acquire in clinical settings; existing DSA images often contain motion artifacts, complicating the development of models for generating artifact-free images. In this work, we propose an Artifact-aware DSA image generation method (AaDSA) that uses only motion data to produce artifact-free DSA images without subtraction. Our method employs a Gradient Field Transformation (GFT)-based technique to create an artifact mask that identifies artifact regions in DSA images with minimal manual annotation. This artifact mask guides the training of the AaDSA model, allowing it to bypass the adverse effects of artifact regions during training. At inference, the AaDSA model automatically generates artifact-free DSA images from single contrast images without any human intervention. Experimental results on a real head-and-neck DSA dataset show that our approach significantly outperforms state-of-the-art methods, highlighting its potential for clinical use.
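The artifact-mask-guided training described above amounts to excluding flagged pixels from the reconstruction loss. A hedged sketch of that idea in PyTorch follows; the loss form and mask convention are assumptions for illustration, not the authors' implementation.

```python
# Hedged sketch of mask-guided training: pixels flagged by the artifact mask
# are excluded from the reconstruction loss, so the generator is never
# penalized against motion-corrupted target regions. Illustration only.
import torch

def artifact_masked_l1(pred: torch.Tensor,
                       target: torch.Tensor,
                       artifact_mask: torch.Tensor) -> torch.Tensor:
    """L1 loss over artifact-free pixels only (mask == 1 marks artifacts)."""
    valid = 1.0 - artifact_mask
    return (valid * (pred - target).abs()).sum() / valid.sum().clamp(min=1.0)
```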
Affiliation(s)
- Yunbi Liu
- School of Computer Science, Nanjing University of Posts and Telecommunications, Nanjing 210023, China
- Dong Du
- School of Mathematics and Statistics, Nanjing University of Science and Technology, Nanjing, China
- Yun Liu
- College of Computer Science, Nankai University, Tianjin 300350, China
- Shengxian Tu
- Biomedical Instrument Institute, School of Biomedical Engineering, Shanghai Jiao Tong University, Shanghai 200240, China
- Wei Yang
- School of Biomedical Engineering, Southern Medical University, Guangzhou 510515, China
- Xiaoguang Han
- School of Science and Engineering (SSE), The Chinese University of Hong Kong, Shenzhen 518172, China
- Shiteng Suo
- Department of Radiology, Renji Hospital, School of Medicine, Shanghai Jiao Tong University, Shanghai 200127, China; Biomedical Instrument Institute, School of Biomedical Engineering, Shanghai Jiao Tong University, Shanghai 200240, China
- Qingshan Liu
- School of Computer Science, Nanjing University of Posts and Telecommunications, Nanjing 210023, China
3. Ueda D, Walston SL, Fujita S, Fushimi Y, Tsuboyama T, Kamagata K, Yamada A, Yanagawa M, Ito R, Fujima N, Kawamura M, Nakaura T, Matsui Y, Tatsugami F, Fujioka T, Nozaki T, Hirata K, Naganawa S. Climate change and artificial intelligence in healthcare: Review and recommendations towards a sustainable future. Diagn Interv Imaging 2024; 105:453-459. PMID: 38918123; DOI: 10.1016/j.diii.2024.06.002.
Abstract
The rapid advancement of artificial intelligence (AI) in healthcare has revolutionized the industry, offering significant improvements in diagnostic accuracy, efficiency, and patient outcomes. However, the increasing adoption of AI systems also raises concerns about their environmental impact, particularly in the context of climate change. This review explores the intersection of climate change and AI in healthcare, examining the challenges posed by the energy consumption and carbon footprint of AI systems, as well as the potential solutions to mitigate their environmental impact. The review highlights the energy-intensive nature of AI model training and deployment, the contribution of data centers to greenhouse gas emissions, and the generation of electronic waste. To address these challenges, the development of energy-efficient AI models, the adoption of green computing practices, and the integration of renewable energy sources are discussed as potential solutions. The review also emphasizes the role of AI in optimizing healthcare workflows, reducing resource waste, and facilitating sustainable practices such as telemedicine. Furthermore, the importance of policy and governance frameworks, global initiatives, and collaborative efforts in promoting sustainable AI practices in healthcare is explored. The review concludes by outlining best practices for sustainable AI deployment, including eco-design, lifecycle assessment, responsible data management, and continuous monitoring and improvement. As the healthcare industry continues to embrace AI technologies, prioritizing sustainability and environmental responsibility is crucial to ensure that the benefits of AI are realized while actively contributing to the preservation of our planet.
Affiliation(s)
- Daiju Ueda
- Department of Diagnostic and Interventional Radiology, Graduate School of Medicine, Osaka Metropolitan University, Abeno-ku, Osaka 545-8585, Japan; Department of Artificial Intelligence, Graduate School of Medicine, Osaka Metropolitan University, Abeno-ku, Osaka 545-8585, Japan
- Shannon L Walston
- Department of Diagnostic and Interventional Radiology, Graduate School of Medicine, Osaka Metropolitan University, Abeno-ku, Osaka 545-8585, Japan
- Shohei Fujita
- Department of Radiology, University of Tokyo, Bunkyo-ku, Tokyo 113-8655, Japan
- Yasutaka Fushimi
- Department of Diagnostic Imaging and Nuclear Medicine, Kyoto University Graduate School of Medicine, Sakyoku, Kyoto 606-8507, Japan
- Takahiro Tsuboyama
- Department of Radiology, Kobe University Graduate School of Medicine, Kobe, Hyogo 650-0017, Japan
- Koji Kamagata
- Department of Radiology, Juntendo University Graduate School of Medicine, Bunkyo-ku, Tokyo 113-8421, Japan
- Akira Yamada
- Medical Data Science Course, Shinshu University School of Medicine, Matsumoto, Nagano 390-8621, Japan
- Masahiro Yanagawa
- Department of Radiology, Graduate School of Medicine, Osaka University, Suita-city, Osaka 565-0871, Japan
- Rintaro Ito
- Department of Radiology, Nagoya University Graduate School of Medicine, Nagoya, Aichi 466-8550, Japan
- Noriyuki Fujima
- Department of Diagnostic and Interventional Radiology, Hokkaido University Hospital, Sapporo, Hokkaido 060-8648, Japan
- Mariko Kawamura
- Department of Radiology, Nagoya University Graduate School of Medicine, Nagoya, Aichi 466-8550, Japan
- Takeshi Nakaura
- Department of Diagnostic Radiology, Kumamoto University Graduate School of Medicine, Chuo-ku, Kumamoto 860-8556, Japan
- Yusuke Matsui
- Department of Radiology, Faculty of Medicine, Dentistry and Pharmaceutical Sciences, Okayama University, Kita-ku, Okayama 700-8558, Japan
- Fuminari Tatsugami
- Department of Diagnostic Radiology, Hiroshima University, Minami-ku, Hiroshima City, Hiroshima 734-8551, Japan
- Tomoyuki Fujioka
- Department of Diagnostic Radiology, Tokyo Medical and Dental University, Bunkyo-ku, Tokyo 113-8510, Japan
- Taiki Nozaki
- Department of Radiology, Keio University School of Medicine, Shinjuku-ku, Tokyo 160-8582, Japan
- Kenji Hirata
- Department of Diagnostic Imaging, Faculty of Medicine, Hokkaido University, Sapporo, Hokkaido 060-8638, Japan
- Shinji Naganawa
- Department of Radiology, Nagoya University Graduate School of Medicine, Nagoya, Aichi 466-8550, Japan
4. Walston SL, Seki H, Takita H, Mitsuyama Y, Sato S, Hagiwara A, Ito R, Hanaoka S, Miki Y, Ueda D. Data set terminology of deep learning in medicine: a historical review and recommendation. Jpn J Radiol 2024; 42:1100-1109. PMID: 38856878; DOI: 10.1007/s11604-024-01608-1.
Abstract
Medicine and deep learning-based artificial intelligence (AI) engineering represent two distinct fields, each with decades of published history. The current rapid convergence of deep learning and medicine has led to significant advancements, yet it has also introduced ambiguity regarding data set terms common to both fields, potentially leading to miscommunication and methodological discrepancies. This narrative review aims to give historical context for these terms, accentuate the importance of clarity when they are used in medical deep learning contexts, and offer solutions to mitigate misunderstandings by readers from either field. Through an examination of historical documents, including articles, writing guidelines, and textbooks, this review traces the divergent evolution of terms for data sets and their impact. Initially, the discordant interpretations of the word 'validation' in medical and AI contexts are explored. We then show that in the medical field as well, terms traditionally used in the deep learning domain are becoming more common, with the data for creating models referred to as the 'training set', the data for tuning of parameters referred to as the 'validation (or tuning) set', and the data for the evaluation of models as the 'test set'. Additionally, the test sets used for model evaluation are classified into internal (random splitting, cross-validation, and leave-one-out) sets and external (temporal and geographic) sets. This review then identifies often-misunderstood terms and proposes pragmatic solutions to mitigate terminological confusion in the field of deep learning in medicine. We support the accurate and standardized description of these data sets and the explicit definition of data set splitting terminologies in each publication. These are crucial methods for demonstrating the robustness and generalizability of deep learning applications in medicine. This review aspires to enhance the precision of communication, thereby fostering more effective and transparent research methodologies in this interdisciplinary field.
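A minimal sketch of the recommended three-way terminology follows: the 'training set' fits the model, the 'validation (tuning) set' selects hyperparameters, and the held-out 'test set' gives the final evaluation. The 70/15/15 proportions are illustrative, not a recommendation from the review.

```python
# Minimal sketch of the data-set naming the review recommends.
# Proportions (70/15/15) are illustrative placeholders.
from sklearn.model_selection import train_test_split

def split_dataset(cases, seed=42):
    """Split cases into training, validation (tuning), and test sets."""
    train, rest = train_test_split(cases, test_size=0.30, random_state=seed)
    validation, test = train_test_split(rest, test_size=0.50, random_state=seed)
    return {"training": train, "validation": validation, "test": test}
```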
Affiliation(s)
- Shannon L Walston
- Department of Diagnostic and Interventional Radiology, Graduate School of Medicine, Osaka Metropolitan University, Osaka, Japan
- Hiroshi Seki
- Department of Diagnostic and Interventional Radiology, Graduate School of Medicine, Osaka Metropolitan University, Osaka, Japan
- Hirotaka Takita
- Department of Diagnostic and Interventional Radiology, Graduate School of Medicine, Osaka Metropolitan University, Osaka, Japan
- Yasuhito Mitsuyama
- Department of Diagnostic and Interventional Radiology, Graduate School of Medicine, Osaka Metropolitan University, Osaka, Japan
- Shingo Sato
- Sidney Kimmel Cancer Center, Thomas Jefferson University, Philadelphia, PA, USA
- Akifumi Hagiwara
- Department of Radiology, Juntendo University School of Medicine, Tokyo, Japan
- Rintaro Ito
- Department of Radiology, Nagoya University, Nagoya, Japan
- Shouhei Hanaoka
- Department of Radiology, University of Tokyo Hospital, Tokyo, Japan
- Yukio Miki
- Department of Diagnostic and Interventional Radiology, Graduate School of Medicine, Osaka Metropolitan University, Osaka, Japan
- Daiju Ueda
- Department of Diagnostic and Interventional Radiology, Graduate School of Medicine, Osaka Metropolitan University, Osaka, Japan
- Department of Artificial Intelligence, Graduate School of Medicine, Osaka Metropolitan University, Osaka, Japan
- Center for Health Science Innovation, Osaka Metropolitan University, Osaka, Japan
5. Duan L, Eulig E, Knaup M, Adamus R, Lell M, Kachelrieß M. Training of a deep learning based digital subtraction angiography method using synthetic data. Med Phys 2024; 51:4793-4810. PMID: 38353632; DOI: 10.1002/mp.16973.
Abstract
BACKGROUND Digital subtraction angiography (DSA) is a fluoroscopy method primarily used for the diagnosis of cardiovascular diseases (CVDs). Deep learning-based DSA (DDSA) is developed to extract DSA-like images directly from fluoroscopic images, which helps in saving dose while improving image quality. It can also be applied where C-arm or patient motion is present and conventional DSA cannot be applied. However, due to the lack of clinical training data and unavoidable artifacts in DSA targets, current DDSA models still cannot satisfactorily display specific structures, nor can they predict noise-free images. PURPOSE In this study, we propose a strategy for producing abundant synthetic DSA image pairs in which the synthetic DSA targets are free of the typical artifacts and noise found in conventional DSA targets, for use in DDSA model training. METHODS More than 7,000 forward-projected computed tomography (CT) images and more than 25,000 synthetic vascular projection images were employed to create contrast-enhanced fluoroscopic images and corresponding DSA images, which were utilized as DSA image pairs for training the DDSA networks. The CT projection images and vascular projection images were generated from eight whole-body CT scans and 1,584 3D vascular skeletons, respectively. All vessel skeletons were generated with stochastic Lindenmayer systems. We trained DDSA models on this synthetic dataset and compared them to models trained on a clinical DSA dataset, which contains nearly 4,000 fluoroscopic x-ray images obtained from different models of C-arms. RESULTS We evaluated DDSA models on clinical fluoroscopic data of different anatomies, including the leg, abdomen, and heart. The results on leg data showed that, across different methods, training on synthetic data performed similarly to, and sometimes outperformed, training on clinical data. The results on abdomen and cardiac data demonstrated that models trained on synthetic data were able to extract clearer DSA-like images than conventional DSA and models trained on clinical data. The models trained on synthetic data consistently outperformed their clinical data counterparts, achieving higher scores in the quantitative evaluation of peak signal-to-noise ratio (PSNR) and structural similarity index measure (SSIM) metrics for DDSA images, as well as accuracy, precision, and Dice scores for segmentation of the DDSA images. CONCLUSIONS We proposed an approach to train DDSA networks with synthetic DSA image pairs and extract DSA-like images from contrast-enhanced x-ray images directly. This is a potential tool to aid in diagnosis.
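The stochastic Lindenmayer systems mentioned above grow branching structures by repeatedly rewriting symbols with probabilistic rules. The sketch below shows the string-rewriting core of such a system; the rules, probabilities, and axiom are illustrative placeholders rather than the authors' parameters, and a 3D turtle interpreter would still be needed to turn the string into a vessel skeleton.

```python
# Hedged sketch of a stochastic Lindenmayer system of the kind used to grow
# synthetic vessel skeletons. Rules and probabilities are illustrative only.
import random

RULES = {"F": [("F[+F]F", 0.5), ("F[-F][+F]", 0.3), ("FF", 0.2)]}

def expand(axiom: str, iterations: int, seed: int = 0) -> str:
    """Repeatedly rewrite the axiom using the weighted stochastic rules."""
    rng = random.Random(seed)
    s = axiom
    for _ in range(iterations):
        out = []
        for ch in s:
            if ch in RULES:
                variants, weights = zip(*RULES[ch])
                out.append(rng.choices(variants, weights=weights, k=1)[0])
            else:
                out.append(ch)
        s = "".join(out)
    return s  # interpret F/+/-/[/] with a 3D turtle to obtain a skeleton
```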
Affiliation(s)
- Lizhen Duan
- Division of X-Ray Imaging and Computed Tomography, German Cancer Research Center (DKFZ), Heidelberg, Germany
- School of Electronic, Electrical and Communication Engineering, University of Chinese Academy of Sciences (UCAS), Beijing, China
- Key Laboratory of Optical Engineering, Institute of Optics and Electronics, Chinese Academy of Sciences, Chengdu, China
- Elias Eulig
- Division of X-Ray Imaging and Computed Tomography, German Cancer Research Center (DKFZ), Heidelberg, Germany
- Faculty of Physics and Astronomy, Heidelberg University, Heidelberg, Germany
- Michael Knaup
- Division of X-Ray Imaging and Computed Tomography, German Cancer Research Center (DKFZ), Heidelberg, Germany
- Ralf Adamus
- Department of Radiology, Neuroradiology and Nuclear Medicine, Klinikum Nürnberg, Paracelsus Medical University, Nürnberg, Germany
- Michael Lell
- Department of Radiology, Neuroradiology and Nuclear Medicine, Klinikum Nürnberg, Paracelsus Medical University, Nürnberg, Germany
- Marc Kachelrieß
- Division of X-Ray Imaging and Computed Tomography, German Cancer Research Center (DKFZ), Heidelberg, Germany
- Medical Faculty Heidelberg, Heidelberg University, Heidelberg, Germany
6. Walston SL, Tatekawa H, Takita H, Miki Y, Ueda D. Evaluating Biases and Quality Issues in Intermodality Image Translation Studies for Neuroradiology: A Systematic Review. AJNR Am J Neuroradiol 2024; 45:826-832. PMID: 38663993; PMCID: PMC11288590; DOI: 10.3174/ajnr.a8211.
Abstract
BACKGROUND Intermodality image-to-image translation is an artificial intelligence technique for generating images of one modality from another. PURPOSE This review was designed to systematically identify and quantify biases and quality issues preventing validation and clinical application of artificial intelligence models for intermodality image-to-image translation of brain imaging. DATA SOURCES PubMed, Scopus, and IEEE Xplore were searched through August 2, 2023, for artificial intelligence-based image translation models of radiologic brain images. STUDY SELECTION This review collected 102 works published between April 2017 and August 2023. DATA ANALYSIS Eligible studies were evaluated for quality using the Checklist for Artificial Intelligence in Medical Imaging (CLAIM) and for bias using the Prediction model Risk Of Bias ASsessment Tool (PROBAST). Medically-focused article adherence was compared with that of engineering-focused articles overall with the Mann-Whitney U test and for each criterion using the Fisher exact test. DATA SYNTHESIS Median adherence was 69% for the relevant CLAIM criteria and 38% for PROBAST questions. CLAIM adherence was lower for engineering-focused articles compared with medically-focused articles (65% versus 73%, P < .001). Engineering-focused studies had higher adherence for model description criteria, and medically-focused studies had higher adherence for data set and evaluation descriptions. LIMITATIONS Our review is limited by the study design and model heterogeneity. CONCLUSIONS Nearly all studies revealed critical issues preventing clinical application, with engineering-focused studies showing higher adherence for the technical model description but significantly lower overall adherence than medically-focused studies. The pursuit of clinical application requires collaboration from both fields to improve reporting.
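For readers unfamiliar with the two tests used above, the sketch below shows how such comparisons are typically run with SciPy; the adherence values and 2x2 counts are made-up placeholders, not data from the review.

```python
# Minimal sketch of the two statistical comparisons described above,
# using illustrative placeholder data rather than the review's counts.
from scipy.stats import mannwhitneyu, fisher_exact

# Per-article CLAIM adherence (%) for the two article types (made-up values).
engineering = [60, 65, 62, 70, 58]
medical = [72, 75, 70, 78, 71]
u_stat, p_overall = mannwhitneyu(engineering, medical, alternative="two-sided")

# Per-criterion comparison: a 2x2 table of (adherent, non-adherent) counts.
table = [[30, 20],   # engineering-focused articles
         [45, 10]]   # medically-focused articles
odds_ratio, p_criterion = fisher_exact(table)

print(f"Mann-Whitney U p={p_overall:.4f}, Fisher exact p={p_criterion:.4f}")
```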
Affiliation(s)
- Shannon L Walston
- From the Department of Diagnostic and Interventional Radiology (S.L.W., H.Tatekawa, H.Takita, Y.M., D.U.), Graduate School of Medicine, Osaka Metropolitan University, Osaka, Japan
- Hiroyuki Tatekawa
- From the Department of Diagnostic and Interventional Radiology (S.L.W., H.Tatekawa, H.Takita, Y.M., D.U.), Graduate School of Medicine, Osaka Metropolitan University, Osaka, Japan
- Hirotaka Takita
- From the Department of Diagnostic and Interventional Radiology (S.L.W., H.Tatekawa, H.Takita, Y.M., D.U.), Graduate School of Medicine, Osaka Metropolitan University, Osaka, Japan
- Yukio Miki
- From the Department of Diagnostic and Interventional Radiology (S.L.W., H.Tatekawa, H.Takita, Y.M., D.U.), Graduate School of Medicine, Osaka Metropolitan University, Osaka, Japan
- Daiju Ueda
- From the Department of Diagnostic and Interventional Radiology (S.L.W., H.Tatekawa, H.Takita, Y.M., D.U.), Graduate School of Medicine, Osaka Metropolitan University, Osaka, Japan
- Smart Life Science Lab (D.U.), Center for Health Science Innovation, Osaka Metropolitan University, Osaka, Japan
7. Tatekawa H, Ueda D, Takita H, Matsumoto T, Walston SL, Mitsuyama Y, Horiuchi D, Matsushita S, Oura T, Tomita Y, Tsukamoto T, Shimono T, Miki Y. Deep learning-based diffusion tensor image generation model: a proof-of-concept study. Sci Rep 2024; 14:2911. PMID: 38316892; PMCID: PMC10844503; DOI: 10.1038/s41598-024-53278-8.
Abstract
This study created an image-to-image translation model that synthesizes diffusion tensor images (DTI) from conventional diffusion-weighted images (DWI), and validated the similarity between the original and synthetic DTI. Thirty-two healthy volunteers were prospectively recruited. DTI and DWI were obtained with six and three directions of the motion-probing gradient (MPG), respectively. Identical imaging planes were paired for the image-to-image translation model, which synthesized one MPG direction from DWI; this process was repeated six times, once for each MPG direction. Regions of interest (ROIs) in the lentiform nucleus, thalamus, posterior limb of the internal capsule, posterior thalamic radiation, and splenium of the corpus callosum were created and applied to maps derived from the original and synthetic DTI. The mean values and signal-to-noise ratio (SNR) of the original and synthetic maps for each ROI were compared, and Bland-Altman plots between the original and synthetic data were evaluated. Although the synthetic data in the test dataset showed a larger standard deviation of all values and a lower SNR than the original data, the Bland-Altman plots showed the paired values falling within similar distributions. Synthetic DTI could thus be generated from conventional DWI with an image-to-image translation model.
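The Bland-Altman analysis used above plots, for each paired measurement, the mean of the two values against their difference, together with the bias and 95% limits of agreement. A minimal sketch follows; it is illustrative and not the authors' analysis code.

```python
# Minimal sketch of a Bland-Altman comparison between paired ROI values
# from original and synthetic DTI maps. Illustrative only.
import numpy as np
import matplotlib.pyplot as plt

def bland_altman(original: np.ndarray, synthetic: np.ndarray) -> None:
    """Plot mean vs difference with bias and 95% limits of agreement."""
    mean = (original + synthetic) / 2.0
    diff = original - synthetic
    bias = diff.mean()
    loa = 1.96 * diff.std(ddof=1)
    plt.scatter(mean, diff, s=10)
    for y in (bias, bias - loa, bias + loa):
        plt.axhline(y, linestyle="--")
    plt.xlabel("Mean of original and synthetic")
    plt.ylabel("Difference (original - synthetic)")
    plt.show()
```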
Affiliation(s)
- Hiroyuki Tatekawa
- Department of Diagnostic and Interventional Radiology, Graduate School of Medicine, Osaka Metropolitan University, 1-4-3, Asahi-Machi, Abeno-Ku, Osaka, 545-8585, Japan
- Daiju Ueda
- Department of Diagnostic and Interventional Radiology, Graduate School of Medicine, Osaka Metropolitan University, 1-4-3, Asahi-Machi, Abeno-Ku, Osaka, 545-8585, Japan
- Hirotaka Takita
- Department of Diagnostic and Interventional Radiology, Graduate School of Medicine, Osaka Metropolitan University, 1-4-3, Asahi-Machi, Abeno-Ku, Osaka, 545-8585, Japan
- Toshimasa Matsumoto
- Department of Diagnostic and Interventional Radiology, Graduate School of Medicine, Osaka Metropolitan University, 1-4-3, Asahi-Machi, Abeno-Ku, Osaka, 545-8585, Japan
- Shannon L Walston
- Department of Diagnostic and Interventional Radiology, Graduate School of Medicine, Osaka Metropolitan University, 1-4-3, Asahi-Machi, Abeno-Ku, Osaka, 545-8585, Japan
- Yasuhito Mitsuyama
- Department of Diagnostic and Interventional Radiology, Graduate School of Medicine, Osaka Metropolitan University, 1-4-3, Asahi-Machi, Abeno-Ku, Osaka, 545-8585, Japan
- Daisuke Horiuchi
- Department of Diagnostic and Interventional Radiology, Graduate School of Medicine, Osaka Metropolitan University, 1-4-3, Asahi-Machi, Abeno-Ku, Osaka, 545-8585, Japan
- Shu Matsushita
- Department of Diagnostic and Interventional Radiology, Graduate School of Medicine, Osaka Metropolitan University, 1-4-3, Asahi-Machi, Abeno-Ku, Osaka, 545-8585, Japan
- Tatsushi Oura
- Department of Diagnostic and Interventional Radiology, Graduate School of Medicine, Osaka Metropolitan University, 1-4-3, Asahi-Machi, Abeno-Ku, Osaka, 545-8585, Japan
- Yuichiro Tomita
- Department of Diagnostic and Interventional Radiology, Graduate School of Medicine, Osaka Metropolitan University, 1-4-3, Asahi-Machi, Abeno-Ku, Osaka, 545-8585, Japan
- Taro Tsukamoto
- Department of Diagnostic and Interventional Radiology, Graduate School of Medicine, Osaka Metropolitan University, 1-4-3, Asahi-Machi, Abeno-Ku, Osaka, 545-8585, Japan
- Taro Shimono
- Department of Diagnostic and Interventional Radiology, Graduate School of Medicine, Osaka Metropolitan University, 1-4-3, Asahi-Machi, Abeno-Ku, Osaka, 545-8585, Japan
- Yukio Miki
- Department of Diagnostic and Interventional Radiology, Graduate School of Medicine, Osaka Metropolitan University, 1-4-3, Asahi-Machi, Abeno-Ku, Osaka, 545-8585, Japan
8. Cantrell DR, Cho L, Zhou C, Faruqui SHA, Potts MB, Jahromi BS, Abdalla R, Shaibani A, Ansari SA. Background Subtraction Angiography with Deep Learning Using Multi-frame Spatiotemporal Angiographic Input. J Imaging Inform Med 2024; 37:134-144. PMID: 38343209; PMCID: PMC10980661; DOI: 10.1007/s10278-023-00921-x.
Abstract
Catheter digital subtraction angiography (DSA) is markedly degraded by any voluntary, respiratory, or cardiac motion artifact that occurs during exam acquisition. Prior efforts to improve DSA images with machine learning have focused on extracting vessels from individual, isolated 2D angiographic frames. In this work, we introduce improved 2D + t deep learning models that leverage the rich temporal information in angiographic time series. A total of 516 cerebral angiograms were collected, comprising 8,784 individual series. We utilized feature-based computer vision algorithms to separate the database into "motionless" and "motion-degraded" subsets. Motion measured from the "motion-degraded" subset was then used to create a realistic, but synthetic, motion-augmented dataset suitable for training 2D U-Net, 3D U-Net, SegResNet, and UNETR models. Quantitative results on a hold-out test set demonstrate that the 3D U-Net outperforms competing 2D U-Net architectures, with substantially reduced motion artifacts when compared to DSA. In comparison to the single-frame 2D U-Net, the 3D U-Net utilizing 16 input frames achieves a reduced RMSE (35.77 ± 15.02 vs 23.14 ± 9.56, p < 0.0001; mean ± std dev) and an improved multi-scale SSIM (0.86 ± 0.08 vs 0.93 ± 0.05, p < 0.0001). The 3D U-Net also performs favorably in comparison to alternative convolutional and transformer-based architectures (U-Net RMSE 23.20 ± 7.55 vs SegResNet 23.99 ± 7.81, p < 0.0001, and UNETR 25.42 ± 7.79, p < 0.0001; mean ± std dev). These results demonstrate that multi-frame temporal information can boost the performance of motion-resistant background-subtraction deep learning algorithms, and we present a neuroangiography domain-specific synthetic affine motion augmentation pipeline that can generate suitable datasets for supervised training of 3D (2D + t) architectures.
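The synthetic affine motion augmentation described above can be pictured as applying a small random rotation and translation to each frame of a motionless series, yielding realistic motion-degraded inputs with known motion-free targets. The sketch below illustrates the idea; the perturbation ranges and interpolation settings are assumptions, not the paper's pipeline.

```python
# Hedged sketch of per-frame synthetic affine motion augmentation for a
# motionless angiographic series. Parameter ranges are illustrative only.
import numpy as np
from scipy.ndimage import affine_transform

def augment_series(frames: np.ndarray, max_shift=4.0, max_rot_deg=1.5, seed=0):
    """frames: (T, H, W) array; returns a motion-degraded copy."""
    rng = np.random.default_rng(seed)
    out = np.empty_like(frames)
    for t, frame in enumerate(frames):
        theta = np.deg2rad(rng.uniform(-max_rot_deg, max_rot_deg))
        rot = np.array([[np.cos(theta), -np.sin(theta)],
                        [np.sin(theta),  np.cos(theta)]])
        shift = rng.uniform(-max_shift, max_shift, size=2)
        out[t] = affine_transform(frame, rot, offset=shift, order=1)
    return out
```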
Affiliation(s)
- Donald R Cantrell
- Department of Radiology, Northwestern University Feinberg School of Medicine, 737 N Michigan Ave, Suite 1600, Chicago, IL, 60611, USA
- Department of Neurology, Northwestern University Feinberg School of Medicine, Chicago, IL, USA
- Department of Radiology, Ann and Robert H. Lurie Children's Hospital, Chicago, IL, USA
- Leon Cho
- Department of Radiology, Northwestern University Feinberg School of Medicine, 737 N Michigan Ave, Suite 1600, Chicago, IL, 60611, USA
- Chaochao Zhou
- Department of Radiology, Northwestern University Feinberg School of Medicine, 737 N Michigan Ave, Suite 1600, Chicago, IL, 60611, USA
- Syed H A Faruqui
- Department of Radiology, Northwestern University Feinberg School of Medicine, 737 N Michigan Ave, Suite 1600, Chicago, IL, 60611, USA
- Matthew B Potts
- Department of Radiology, Northwestern University Feinberg School of Medicine, 737 N Michigan Ave, Suite 1600, Chicago, IL, 60611, USA
- Department of Neurological Surgery, Northwestern University Feinberg School of Medicine, Chicago, IL, USA
- Babak S Jahromi
- Department of Radiology, Northwestern University Feinberg School of Medicine, 737 N Michigan Ave, Suite 1600, Chicago, IL, 60611, USA
- Department of Neurological Surgery, Northwestern University Feinberg School of Medicine, Chicago, IL, USA
- Ramez Abdalla
- Department of Radiology, Northwestern University Feinberg School of Medicine, 737 N Michigan Ave, Suite 1600, Chicago, IL, 60611, USA
- Ali Shaibani
- Department of Radiology, Northwestern University Feinberg School of Medicine, 737 N Michigan Ave, Suite 1600, Chicago, IL, 60611, USA
- Department of Neurology, Northwestern University Feinberg School of Medicine, Chicago, IL, USA
- Department of Neurological Surgery, Northwestern University Feinberg School of Medicine, Chicago, IL, USA
- Department of Radiology, Ann and Robert H. Lurie Children's Hospital, Chicago, IL, USA
- Sameer A Ansari
- Department of Radiology, Northwestern University Feinberg School of Medicine, 737 N Michigan Ave, Suite 1600, Chicago, IL, 60611, USA
- Department of Neurology, Northwestern University Feinberg School of Medicine, Chicago, IL, USA
- Department of Neurological Surgery, Northwestern University Feinberg School of Medicine, Chicago, IL, USA
9. Gaddum O, Chapiro J. An Interventional Radiologist's Primer of Critical Appraisal of Artificial Intelligence Research. J Vasc Interv Radiol 2024; 35:7-14. PMID: 37769940; DOI: 10.1016/j.jvir.2023.09.020.
Abstract
Recent advances in artificial intelligence (AI) are expected to cause a significant paradigm shift in all digital data-driven aspects of information gain, processing, and decision making in both clinical healthcare and medical research. The field of interventional radiology (IR) will be enmeshed in this innovation, yet the collective IR expertise in the field of AI remains rudimentary because of a lack of training. This primer provides the clinical interventional radiologist with a simple guide for critically appraising AI research and products by identifying 12 fundamental items that should be considered: (a) need for AI technology to address the clinical problem, (b) type of applied AI algorithm, (c) data quality and degree of annotation, (d) reporting of accuracy, (e) applicability of standardized reporting, (f) reproducibility of methodology and data transparency, (g) algorithm validation, (h) interpretability, (i) concrete impact on IR, (j) pathway toward translation to clinical practice, (k) clinical benefit and cost-effectiveness, and (l) regulatory framework.
Affiliation(s)
- Olivia Gaddum
- Division of Interventional Radiology, Department of Radiology and Biomedical Imaging, Yale University School of Medicine, New Haven, Connecticut
- Julius Chapiro
- Division of Interventional Radiology, Department of Radiology and Biomedical Imaging, Yale University School of Medicine, New Haven, Connecticut
10. Crabb BT, Hamrick F, Richards T, Eiswirth P, Noo F, Hsiao A, Fine GC. Deep Learning Subtraction Angiography: Improved Generalizability with Transfer Learning. J Vasc Interv Radiol 2023; 34:409-419.e2. PMID: 36529442; DOI: 10.1016/j.jvir.2022.12.008.
Abstract
PURPOSE To investigate the utility and generalizability of deep learning subtraction angiography (DLSA) for generating synthetic digital subtraction angiography (DSA) images without misalignment artifacts. MATERIALS AND METHODS DSA images and native digital angiograms of the cerebral, hepatic, and splenic vasculature, both with and without motion artifacts, were retrospectively collected. Images were divided into a motion-free training set (n = 66 patients, 9,161 images) and a motion artifact-containing test set (n = 22 patients, 3,322 images). Using the motion-free set, the deep neural network pix2pix was trained to produce synthetic DSA images without misalignment artifacts directly from native digital angiograms. After training, the algorithm was tested on digital angiograms of hepatic and splenic vasculature with substantial motion. Four board-certified radiologists evaluated performance via visual assessment using a 5-grade Likert scale. Subgroup analyses were performed to analyze the impact of transfer learning and generalizability to novel vasculature. RESULTS Compared with the traditional DSA method, the proposed approach was found to generate synthetic DSA images with significantly fewer background artifacts (a mean rating of 1.9 [95% CI, 1.1-2.6] vs 3.5 [3.5-4.4]; P = .01) without a significant difference in foreground vascular detail (mean rating of 3.1 [2.6-3.5] vs 3.3 [2.8-3.8], P = .19) in both the hepatic and splenic vasculature. Transfer learning significantly improved the quality of generated images (P < .001). CONCLUSIONS DLSA successfully generates synthetic angiograms without misalignment artifacts, is improved through transfer learning, and generalizes reliably to novel vasculature that was not included in the training data.
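The transfer-learning step evaluated above corresponds to taking a generator pretrained on one vascular territory and fine-tuning it on angiograms from another. A hedged sketch of such a fine-tuning loop in PyTorch follows; the `Generator` module, the data loader, and the plain L1 objective are placeholders for illustration (pix2pix training also includes an adversarial term), not the authors' code.

```python
# Hedged sketch of transfer learning for a pix2pix-style generator: start from
# weights pretrained on one vascular territory, fine-tune on a small set of
# angiograms from a new territory. Generator and loader are assumed placeholders.
import torch
import torch.nn as nn

def fine_tune(generator: nn.Module, loader, device="cuda", epochs=5, lr=2e-4):
    """Fine-tune a pretrained generator with a simple L1 reconstruction loss."""
    generator.to(device).train()
    opt = torch.optim.Adam(generator.parameters(), lr=lr, betas=(0.5, 0.999))
    l1 = nn.L1Loss()
    for _ in range(epochs):
        for native, dsa_target in loader:  # native angiogram -> DSA target
            native, dsa_target = native.to(device), dsa_target.to(device)
            loss = l1(generator(native), dsa_target)
            opt.zero_grad()
            loss.backward()
            opt.step()
    return generator
```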
Affiliation(s)
- Brendan T Crabb
- Department of Radiology and Imaging Sciences, University of Utah School of Medicine, Salt Lake City, Utah
- Forrest Hamrick
- Department of Radiology and Imaging Sciences, University of Utah School of Medicine, Salt Lake City, Utah
- Tyler Richards
- Department of Radiology and Imaging Sciences, University of Utah School of Medicine, Salt Lake City, Utah
- Preston Eiswirth
- Department of Radiology and Imaging Sciences, University of Utah School of Medicine, Salt Lake City, Utah
- Frederic Noo
- Department of Radiology and Imaging Sciences, University of Utah School of Medicine, Salt Lake City, Utah
- Albert Hsiao
- Department of Radiology, University of California San Diego, San Diego, California
- Gabriel C Fine
- Department of Radiology and Imaging Sciences, University of Utah School of Medicine, Salt Lake City, Utah