1
Momin E, Cook T, Gershon G, Barr J, De Cecco CN, van Assen M. Systematic review on the impact of deep learning-driven worklist triage on radiology workflow and clinical outcomes. Eur Radiol 2025. PMID: 40397031. DOI: 10.1007/s00330-025-11674-2.
Abstract
OBJECTIVES: To perform a systematic review of the impact of deep learning (DL)-based triage on reducing diagnostic delays and improving patient outcomes, covering peer-reviewed and preprint publications.
MATERIALS AND METHODS: Multiple databases were searched for primary research studies on DL-based worklist optimization for diagnostic imaging triage published from January 2018 to July 2024. Extracted data included study design, dataset characteristics, workflow metrics (report turnaround time and time-to-treatment), and differences in patient outcomes. Differences between clinical settings and integration modalities were examined using nonparametric statistics. Risk of bias was assessed with the Risk of Bias in Non-randomized Studies of Interventions (ROBINS-I) checklist.
RESULTS: A total of 38 studies from 20 publications, involving 138,423 images, were analyzed. Workflow interventions addressed pulmonary embolism (n = 8), stroke (n = 3), intracranial hemorrhage (n = 12), and chest conditions (n = 15). Patients in the post-DL-triage group had shorter median report turnaround times: a mean difference of 12.3 min (IQR: -25.7, -7.6) for pulmonary embolism, 20.5 min (IQR: -32.1, -9.3) for stroke, 4.3 min (IQR: -8.6, 1.3) for intracranial hemorrhage, and 29.7 min (IQR: -2947.7, -18.3) for chest diseases. Subgroup analysis showed that reductions varied by clinical environment and relative prevalence but were largest when algorithms actively stratified and reordered the radiological worklist, with a -43.7% reduction in report turnaround time compared with -7.6% for widget-based systems (p < 0.01).
CONCLUSION: DL-based triage systems yielded comparable improvements in report turnaround time, especially in outpatient and high-prevalence settings, suggesting that AI-based triage holds promise for alleviating radiology workloads.
KEY POINTS: Question: Can DL-based triage address lengthening imaging report turnaround times and improve patient outcomes across distinct clinical environments? Findings: DL-based triage improved report turnaround time across disease groups, with larger reductions reported in high-prevalence or lower-acuity settings. Clinical relevance: DL-based workflow prioritization is a reliable tool for reducing diagnostic imaging delays for time-sensitive diseases across clinical settings; however, further research and reliable metrics are needed to provide specific recommendations regarding false-negative examinations and multi-condition prioritization.
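As a rough illustration of the kind of nonparametric before-and-after comparison this review describes, the short Python sketch below contrasts synthetic pre- and post-triage report turnaround times with a Mann-Whitney U test; the distributions, sample sizes, and variable names are hypothetical and are not drawn from the reviewed studies.

```python
# Hedged sketch: comparing report turnaround times before vs. after DL triage
# with a nonparametric test. All data here are simulated.
import numpy as np
from scipy.stats import mannwhitneyu

rng = np.random.default_rng(0)
# Hypothetical turnaround times in minutes; such times are skewed, hence log-normal.
pre_triage = rng.lognormal(mean=4.0, sigma=0.6, size=500)    # before DL deployment
post_triage = rng.lognormal(mean=3.7, sigma=0.6, size=500)   # after DL deployment

median_diff = np.median(post_triage) - np.median(pre_triage)
stat, p = mannwhitneyu(post_triage, pre_triage, alternative="two-sided")
print(f"median pre  = {np.median(pre_triage):.1f} min")
print(f"median post = {np.median(post_triage):.1f} min")
print(f"median difference = {median_diff:.1f} min, Mann-Whitney U p = {p:.3g}")
```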
Affiliation(s)
- Eshan Momin: Department of Radiology and Imaging Sciences, Emory University Hospital, Atlanta, GA, USA
- Tessa Cook: Department of Radiology, University of Pennsylvania, Philadelphia, PA, USA
- Gabrielle Gershon: Department of Radiology and Imaging Sciences, Emory University Hospital, Atlanta, GA, USA
- Jaret Barr: Department of Radiology and Imaging Sciences, Emory University Hospital, Atlanta, GA, USA
- Carlo N De Cecco: Department of Radiology and Imaging Sciences, Emory University Hospital, Atlanta, GA, USA
- Marly van Assen: Department of Radiology and Imaging Sciences, Emory University Hospital, Atlanta, GA, USA
2
Bercea CI, Wiestler B, Rueckert D, Schnabel JA. Evaluating normative representation learning in generative AI for robust anomaly detection in brain imaging. Nat Commun 2025;16:1624. PMID: 39948337. PMCID: PMC11825664. DOI: 10.1038/s41467-025-56321-y.
Abstract
Normative representation learning focuses on understanding the typical anatomical distributions from large datasets of medical scans of healthy individuals. Generative artificial intelligence (AI) leverages this property to synthesize images that accurately reflect these normative patterns. This capability enables such models to detect and correct anomalies in new, unseen pathological data without the need for expert labeling. Traditional evaluations of anomaly detection methods focus on detection performance alone, overlooking the crucial role of normative learning. In our analysis, we introduce novel metrics specifically designed to evaluate this facet of AI models. We apply these metrics across various generative AI frameworks, including advanced diffusion models, and rigorously test them against complex and diverse brain pathologies. In addition, we conduct a large multi-reader study to compare these metrics with experts' evaluations. Our analysis demonstrates that models proficient in normative learning exhibit exceptional versatility, adeptly detecting a wide range of unseen medical conditions. Our code is available at https://github.com/compai-lab/2024-ncomms-bercea.git.
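The reconstruction idea behind normative anomaly detection can be sketched in a few lines: a model of "healthy" appearance is fit, and pathology is flagged where a test image cannot be reconstructed well. The example below is a deliberately simplified, linear (PCA-based) stand-in for the generative models evaluated in the paper, run on synthetic data; it is not the authors' method or code.

```python
# Minimal sketch of reconstruction-based anomaly detection with a linear
# "normative" model fit on healthy data only. All data are synthetic.
import numpy as np

rng = np.random.default_rng(1)
H = W = 32
normals = rng.normal(0.0, 0.1, size=(500, H * W))   # stand-in "healthy" scans

# Fit a low-rank normative basis from healthy data only.
mean = normals.mean(axis=0)
_, _, vt = np.linalg.svd(normals - mean, full_matrices=False)
basis = vt[:20]                                      # top 20 principal directions

def residual_map(image):
    """Reconstruct via the normative basis and return the absolute residual."""
    centered = image.ravel() - mean
    recon = basis.T @ (basis @ centered) + mean
    return np.abs(image.ravel() - recon).reshape(H, W)

# A "pathological" test image: healthy-looking background plus a synthetic lesion.
test = rng.normal(0.0, 0.1, size=(H, W))
test[10:16, 10:16] += 1.0
res = residual_map(test)
lesion = np.zeros((H, W), dtype=bool)
lesion[10:16, 10:16] = True
print("mean residual inside lesion :", round(float(res[lesion].mean()), 3))
print("mean residual outside lesion:", round(float(res[~lesion].mean()), 3))
```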
Affiliation(s)
- Cosmin I Bercea: Chair of Computational Imaging and AI in Medicine, Technical University of Munich (TUM), Munich, Germany; Helmholtz AI and Helmholtz Center Munich, Munich, Germany
- Benedikt Wiestler: Chair of AI for Image-Guided Diagnosis and Therapy, TUM School of Medicine and Health, Munich, Germany; Munich Center for Machine Learning (MCML), Munich, Germany
- Daniel Rueckert: Munich Center for Machine Learning (MCML), Munich, Germany; Chair of AI in Healthcare and Medicine, Technical University of Munich (TUM) and TUM University Hospital, Munich, Germany; Department of Computing, Imperial College London, London, UK
- Julia A Schnabel: Chair of Computational Imaging and AI in Medicine, Technical University of Munich (TUM), Munich, Germany; Helmholtz AI and Helmholtz Center Munich, Munich, Germany; Munich Center for Machine Learning (MCML), Munich, Germany; School of Biomedical Engineering and Imaging Sciences, King's College London, London, UK
3
Jung HK, Kim K, Park JE, Kim N. Image-Based Generative Artificial Intelligence in Radiology: Comprehensive Updates. Korean J Radiol 2024;25:959-981. PMID: 39473088. PMCID: PMC11524689. DOI: 10.3348/kjr.2024.0392.
Abstract
Generative artificial intelligence (AI) has been applied to images for image quality enhancement, domain transfer, and augmentation of training data for AI modeling in various medical fields. Image-generative AI can produce large amounts of unannotated imaging data, which facilitates multiple downstream deep-learning tasks. However, the evaluation methods and clinical utility of these models have not been thoroughly reviewed. This article summarizes commonly used generative adversarial networks and diffusion models. In addition, it summarizes their utility in clinical tasks in the field of radiology, such as direct image utilization, lesion detection, segmentation, and diagnosis. This article aims to guide readers in radiology practice and research using image-generative AI by 1) reviewing basic theories of image-generative AI, 2) discussing the methods used to evaluate the generated images, 3) outlining the clinical and research utility of generated images, and 4) discussing the issue of hallucinations.
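One widely used way to evaluate generated images, among the evaluation methods mentioned above, is the Fréchet distance between feature statistics of real and synthetic samples, the basis of the FID score. The sketch below computes that distance from stand-in random feature vectors; in practice the features would come from a pretrained network such as Inception, which is omitted here for brevity.

```python
# Hedged illustration of a Frechet-distance (FID-style) evaluation of
# generated images, using random vectors in place of network embeddings.
import numpy as np
from scipy.linalg import sqrtm

def frechet_distance(feats_real, feats_fake):
    """Frechet distance between Gaussians fitted to two feature sets."""
    mu_r, mu_f = feats_real.mean(axis=0), feats_fake.mean(axis=0)
    cov_r = np.cov(feats_real, rowvar=False)
    cov_f = np.cov(feats_fake, rowvar=False)
    covmean = sqrtm(cov_r @ cov_f).real          # matrix square root
    return float(np.sum((mu_r - mu_f) ** 2) + np.trace(cov_r + cov_f - 2 * covmean))

rng = np.random.default_rng(0)
real = rng.normal(0.0, 1.0, size=(1000, 64))     # stand-in "real" embeddings
fake = rng.normal(0.2, 1.1, size=(1000, 64))     # stand-in "generated" embeddings
print(f"Frechet distance: {frechet_distance(real, fake):.2f}")
```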
Affiliation(s)
- Ha Kyung Jung: Department of Radiology and Research Institute of Radiology, University of Ulsan College of Medicine, Asan Medical Center, Seoul, Republic of Korea
- Kiduk Kim: Department of Convergence Medicine, University of Ulsan College of Medicine, Asan Medical Center, Seoul, Republic of Korea
- Ji Eun Park: Department of Radiology and Research Institute of Radiology, University of Ulsan College of Medicine, Asan Medical Center, Seoul, Republic of Korea
- Namkug Kim: Department of Radiology and Research Institute of Radiology, University of Ulsan College of Medicine, Asan Medical Center, Seoul, Republic of Korea; Department of Convergence Medicine, University of Ulsan College of Medicine, Asan Medical Center, Seoul, Republic of Korea
4
Yoon JT, Lee KM, Oh JH, Kim HG, Jeong JW. Insights and Considerations in Development and Performance Evaluation of Generative Adversarial Networks (GANs): What Radiologists Need to Know. Diagnostics (Basel) 2024;14:1756. PMID: 39202244. PMCID: PMC11353572. DOI: 10.3390/diagnostics14161756.
Abstract
The rapid development of deep learning in medical imaging has significantly enhanced the capabilities of artificial intelligence while simultaneously introducing challenges, including the need for vast amounts of training data and the labor-intensive tasks of labeling and segmentation. Generative adversarial networks (GANs) have emerged as a solution, offering synthetic image generation for data augmentation and streamlining medical image processing tasks through models such as cGAN, CycleGAN, and StyleGAN. These innovations not only improve the efficiency of image augmentation, reconstruction, and segmentation, but also pave the way for unsupervised anomaly detection, markedly reducing the reliance on labeled datasets. Our investigation into GANs in medical imaging addresses their varied architectures, the considerations for selecting appropriate GAN models, and the nuances of model training and performance evaluation. This paper aims to provide radiologists who are new to GAN technology with a thorough understanding, guiding them through the practical application and evaluation of GANs in brain imaging with two illustrative examples using CycleGAN and pixel2style2pixel (pSp)-combined StyleGAN. It offers a comprehensive exploration of the transformative potential of GANs in medical imaging research. Ultimately, this paper strives to equip radiologists with the knowledge to effectively utilize GANs, encouraging further research and application within the field.
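To make the adversarial objective concrete for readers new to GANs, here is a minimal, generic training loop on toy one-dimensional data: the discriminator learns to separate real from generated samples while the generator learns to fool it. This is illustrative only and does not reproduce cGAN, CycleGAN, StyleGAN, or the paper's worked examples.

```python
# Deliberately tiny GAN sketch on toy 1-D data (assumed setup, not the paper's code).
import torch
from torch import nn

torch.manual_seed(0)
G = nn.Sequential(nn.Linear(8, 32), nn.ReLU(), nn.Linear(32, 1))                # generator
D = nn.Sequential(nn.Linear(1, 32), nn.ReLU(), nn.Linear(32, 1), nn.Sigmoid())  # discriminator
opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
bce = nn.BCELoss()

for step in range(2000):
    real = torch.randn(64, 1) * 0.5 + 2.0          # "real" data drawn from N(2, 0.5)
    noise = torch.randn(64, 8)
    fake = G(noise)

    # Discriminator update: push real toward 1 and generated samples toward 0.
    d_loss = bce(D(real), torch.ones(64, 1)) + bce(D(fake.detach()), torch.zeros(64, 1))
    opt_d.zero_grad()
    d_loss.backward()
    opt_d.step()

    # Generator update: make the discriminator label generated samples as real.
    g_loss = bce(D(G(noise)), torch.ones(64, 1))
    opt_g.zero_grad()
    g_loss.backward()
    opt_g.step()

with torch.no_grad():
    samples = G(torch.randn(1000, 8))
print(f"generated mean ~ {samples.mean():.2f}, std ~ {samples.std():.2f} (target: 2.0, 0.5)")
```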
Affiliation(s)
- Jeong Taek Yoon: Department of Radiology, Kyung Hee University Hospital, Kyung Hee University College of Medicine, 23 Kyungheedae-ro, Dongdaemun-gu, Seoul 02447, Republic of Korea
- Kyung Mi Lee: Department of Radiology, Kyung Hee University Hospital, Kyung Hee University College of Medicine, 23 Kyungheedae-ro, Dongdaemun-gu, Seoul 02447, Republic of Korea
- Jang-Hoon Oh: Department of Radiology, Kyung Hee University Hospital, Kyung Hee University College of Medicine, 23 Kyungheedae-ro, Dongdaemun-gu, Seoul 02447, Republic of Korea
- Hyug-Gi Kim: Department of Radiology, Kyung Hee University Hospital, Kyung Hee University College of Medicine, 23 Kyungheedae-ro, Dongdaemun-gu, Seoul 02447, Republic of Korea
- Ji Won Jeong: Department of Medicine, Graduate School, Kyung Hee University, 23 Kyungheedae-ro, Dongdaemun-gu, Seoul 02447, Republic of Korea
5
Kim K, Cho K, Jang R, Kyung S, Lee S, Ham S, Choi E, Hong GS, Kim N. Updated Primer on Generative Artificial Intelligence and Large Language Models in Medical Imaging for Medical Professionals. Korean J Radiol 2024;25:224-242. PMID: 38413108. PMCID: PMC10912493. DOI: 10.3348/kjr.2023.0818.
Abstract
The emergence of Chat Generative Pre-trained Transformer (ChatGPT), a chatbot developed by OpenAI, has garnered interest in the application of generative artificial intelligence (AI) models in the medical field. This review summarizes different generative AI models and their potential applications in medicine and explores the evolving landscape of generative adversarial networks and diffusion models since the introduction of generative AI models. These models have made valuable contributions to the field of radiology. Furthermore, this review explores the significance of synthetic data in addressing privacy concerns and augmenting data diversity and quality within the medical domain, and emphasizes the role of inversion in the investigation of generative models, outlining an approach to replicate this process. We provide an overview of large language models, such as generative pre-trained transformers (GPTs) and bidirectional encoder representations from transformers (BERT), focusing on prominent representatives, and discuss recent initiatives involving language-vision models in radiology, including the Large Language and Vision Assistant for Biomedicine (LLaVA-Med), to illustrate their practical application. This comprehensive review offers insights into the wide-ranging applications of generative AI models in clinical research and emphasizes their transformative potential.
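The inversion process mentioned above can be illustrated with a short, hedged sketch: a latent code is optimized by gradient descent so that a frozen generator reproduces a target image. The generator below is an untrained stand-in network, and all sizes and names are hypothetical; this is not the primer's procedure.

```python
# Hedged sketch of latent inversion against a frozen (untrained, stand-in) generator.
import torch
from torch import nn

torch.manual_seed(0)
generator = nn.Sequential(nn.Linear(16, 64), nn.Tanh(), nn.Linear(64, 28 * 28))
for p in generator.parameters():
    p.requires_grad_(False)                      # freeze the generator

target = generator(torch.randn(1, 16)).detach()  # a target known to lie in its range
z = torch.zeros(1, 16, requires_grad=True)       # latent code to be recovered
opt = torch.optim.Adam([z], lr=0.05)

for step in range(500):
    loss = nn.functional.mse_loss(generator(z), target)
    opt.zero_grad()
    loss.backward()
    opt.step()

print(f"final reconstruction MSE: {loss.item():.6f}")
```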
Affiliation(s)
- Kiduk Kim: Department of Convergence Medicine, University of Ulsan College of Medicine, Asan Medical Center, Seoul, Republic of Korea
- Kyungjin Cho: Department of Biomedical Engineering, Asan Medical Institute of Convergence Science and Technology, Asan Medical Center, University of Ulsan College of Medicine, Seoul, Republic of Korea
- Sunggu Kyung: Department of Biomedical Engineering, Asan Medical Institute of Convergence Science and Technology, Asan Medical Center, University of Ulsan College of Medicine, Seoul, Republic of Korea
- Soyoung Lee: Department of Biomedical Engineering, Asan Medical Institute of Convergence Science and Technology, Asan Medical Center, University of Ulsan College of Medicine, Seoul, Republic of Korea
- Sungwon Ham: Healthcare Readiness Institute for Unified Korea, Korea University Ansan Hospital, Korea University College of Medicine, Ansan, Republic of Korea
- Edward Choi: Korea Advanced Institute of Science and Technology, Daejeon, Republic of Korea
- Gil-Sun Hong: Department of Radiology and Research Institute of Radiology, University of Ulsan College of Medicine, Asan Medical Center, Seoul, Republic of Korea
- Namkug Kim: Department of Convergence Medicine, University of Ulsan College of Medicine, Asan Medical Center, Seoul, Republic of Korea; Department of Radiology and Research Institute of Radiology, University of Ulsan College of Medicine, Asan Medical Center, Seoul, Republic of Korea
6
Benger M, Wood DA, Kafiabadi S, Al Busaidi A, Guilhem E, Lynch J, Townend M, Montvila A, Siddiqui J, Gadapa N, Barker G, Ourselin S, Cole JH, Booth TC. Factors affecting the labelling accuracy of brain MRI studies relevant for deep learning abnormality detection. Front Radiol 2023;3:1251825. PMID: 38089643. PMCID: PMC10711054. DOI: 10.3389/fradi.2023.1251825.
Abstract
Unlocking the vast potential of deep learning-based computer vision classification systems requires large datasets for model training. Natural language processing (NLP), which can automate dataset labelling, represents a potential avenue to achieve this. However, many aspects of NLP for dataset labelling remain unvalidated. Expert radiologists manually labelled over 5,000 MRI head reports to develop a deep learning-based neuroradiology NLP report classifier. Our results demonstrate that binary labels (normal vs. abnormal) showed high rates of accuracy, even when only two MRI sequences (T2-weighted and those based on diffusion-weighted imaging) were employed, as opposed to all sequences in an examination. Meanwhile, the accuracy of more specific labelling for multiple disease categories was variable and dependent on the category. Finally, resultant model performance was shown to depend on the expertise of the original labeller, with worse performance seen with non-expert than with expert labellers.
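As a toy counterpart to the report-labelling task (not the authors' deep learning classifier), the sketch below derives binary normal/abnormal labels from free-text reports with a simple bag-of-words model; the example reports and labels are invented.

```python
# Minimal NLP report-labelling sketch: TF-IDF features plus logistic regression.
# Reports and labels below are invented for illustration only.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

reports = [
    "No acute intracranial abnormality. Normal appearances for age.",
    "Unremarkable study. No restricted diffusion.",
    "Large right MCA territory infarct with restricted diffusion.",
    "Enhancing mass in the left frontal lobe with surrounding oedema.",
    "No mass, haemorrhage or infarct identified.",
    "Acute subdural haematoma with midline shift.",
]
labels = ["normal", "normal", "abnormal", "abnormal", "normal", "abnormal"]

clf = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
clf.fit(reports, labels)
print(clf.predict(["Normal intracranial appearances, no acute infarct."]))
```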
Affiliation(s)
- Matthew Benger: Department of Neuroradiology, Kings College Hospital, London, United Kingdom
- David A. Wood: School of Biomedical Engineering & Imaging Sciences, Kings College London, London, United Kingdom
- Sina Kafiabadi: Department of Neuroradiology, Kings College Hospital, London, United Kingdom
- Aisha Al Busaidi: Department of Neuroradiology, Kings College Hospital, London, United Kingdom
- Emily Guilhem: Department of Neuroradiology, Kings College Hospital, London, United Kingdom
- Jeremy Lynch: Department of Neuroradiology, Kings College Hospital, London, United Kingdom
- Matthew Townend: School of Biomedical Engineering & Imaging Sciences, Kings College London, London, United Kingdom
- Antanas Montvila: School of Biomedical Engineering & Imaging Sciences, Kings College London, London, United Kingdom
- Juveria Siddiqui: Department of Neuroradiology, Kings College Hospital, London, United Kingdom
- Naveen Gadapa: Department of Neuroradiology, Kings College Hospital, London, United Kingdom
- Gareth Barker: Institute of Psychiatry, Psychology & Neuroscience, Kings College London, London, United Kingdom
- Sebastian Ourselin: School of Biomedical Engineering & Imaging Sciences, Kings College London, London, United Kingdom
- James H. Cole: Institute of Psychiatry, Psychology & Neuroscience, Kings College London, London, United Kingdom; Centre for Medical Image Computing, Dementia Research, University College London, London, United Kingdom
- Thomas C. Booth: Department of Neuroradiology, Kings College Hospital, London, United Kingdom; School of Biomedical Engineering & Imaging Sciences, Kings College London, London, United Kingdom
7
Hong GS, Jang M, Kyung S, Cho K, Jeong J, Lee GY, Shin K, Kim KD, Ryu SM, Seo JB, Lee SM, Kim N. Overcoming the Challenges in the Development and Implementation of Artificial Intelligence in Radiology: A Comprehensive Review of Solutions Beyond Supervised Learning. Korean J Radiol 2023;24:1061-1080. PMID: 37724586. PMCID: PMC10613849. DOI: 10.3348/kjr.2023.0393.
Abstract
Artificial intelligence (AI) in radiology is a rapidly developing field with several prospective clinical studies demonstrating its benefits in clinical practice. In 2022, the Korean Society of Radiology held a forum to discuss the challenges and drawbacks in AI development and implementation. Various barriers hinder the successful application and widespread adoption of AI in radiology, such as limited annotated data, data privacy and security, data heterogeneity, imbalanced data, model interpretability, overfitting, and integration with clinical workflows. In this review, some of the various possible solutions to these challenges are presented and discussed; these include training with longitudinal and multimodal datasets, dense training with multitask learning and multimodal learning, self-supervised contrastive learning, various image modifications and syntheses using generative models, explainable AI, causal learning, federated learning with large data models, and digital twins.
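One of the solutions listed above, self-supervised contrastive learning, can be summarized by its loss function. The sketch below implements a generic NT-Xent (InfoNCE-style) contrastive loss on random embeddings; it is a teaching example under assumed inputs, not code from the review.

```python
# Generic NT-Xent contrastive loss on paired "views" of the same samples.
import torch
import torch.nn.functional as F

def nt_xent(z1, z2, temperature=0.5):
    """NT-Xent loss for two batches of embeddings of matched augmented views."""
    z = F.normalize(torch.cat([z1, z2], dim=0), dim=1)     # (2N, d), unit norm
    sim = z @ z.t() / temperature                          # scaled cosine similarities
    n = z1.shape[0]
    sim.fill_diagonal_(float("-inf"))                      # exclude self-similarity
    # For row i < n the positive is i + n, and vice versa.
    targets = torch.cat([torch.arange(n, 2 * n), torch.arange(0, n)])
    return F.cross_entropy(sim, targets)

torch.manual_seed(0)
view1 = torch.randn(8, 128)
view2 = view1 + 0.05 * torch.randn(8, 128)   # a second, lightly "augmented" view
print("loss (matched views):", nt_xent(view1, view2).item())
print("loss (random views) :", nt_xent(view1, torch.randn(8, 128)).item())
```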
Affiliation(s)
- Gil-Sun Hong: Department of Radiology and Research Institute of Radiology, Asan Medical Center, University of Ulsan College of Medicine, Seoul, Republic of Korea
- Miso Jang: Department of Convergence Medicine, Asan Medical Center, University of Ulsan College of Medicine, Seoul, Republic of Korea
- Sunggu Kyung: Department of Biomedical Engineering, Asan Medical Institute of Convergence Science and Technology, Asan Medical Center, University of Ulsan College of Medicine, Seoul, Republic of Korea
- Kyungjin Cho: Department of Convergence Medicine, Asan Medical Center, University of Ulsan College of Medicine, Seoul, Republic of Korea; Department of Biomedical Engineering, Asan Medical Institute of Convergence Science and Technology, Asan Medical Center, University of Ulsan College of Medicine, Seoul, Republic of Korea
- Jiheon Jeong: Department of Convergence Medicine, Asan Medical Center, University of Ulsan College of Medicine, Seoul, Republic of Korea
- Grace Yoojin Lee: Department of Convergence Medicine, Asan Medical Center, University of Ulsan College of Medicine, Seoul, Republic of Korea
- Keewon Shin: Laboratory for Biosignal Analysis and Perioperative Outcome Research, Biomedical Engineering Center, Asan Institute of Lifesciences, Asan Medical Center, Seoul, Republic of Korea
- Ki Duk Kim: Department of Convergence Medicine, Asan Medical Center, University of Ulsan College of Medicine, Seoul, Republic of Korea
- Seung Min Ryu: Department of Orthopedic Surgery, Asan Medical Center, University of Ulsan College of Medicine, Seoul, Republic of Korea
- Joon Beom Seo: Department of Radiology and Research Institute of Radiology, Asan Medical Center, University of Ulsan College of Medicine, Seoul, Republic of Korea
- Sang Min Lee: Department of Radiology and Research Institute of Radiology, Asan Medical Center, University of Ulsan College of Medicine, Seoul, Republic of Korea
- Namkug Kim: Department of Radiology and Research Institute of Radiology, Asan Medical Center, University of Ulsan College of Medicine, Seoul, Republic of Korea; Department of Convergence Medicine, Asan Medical Center, University of Ulsan College of Medicine, Seoul, Republic of Korea
8
Yun J, Ahn Y, Cho K, Oh SY, Lee SM, Kim N, Seo JB. Deep Learning for Automated Triaging of Stable Chest Radiographs in a Follow-up Setting. Radiology 2023;309:e230606. PMID: 37874243. DOI: 10.1148/radiol.230606.
Abstract
Background: Most artificial intelligence algorithms that interpret chest radiographs are restricted to an image from a single time point. However, in clinical practice, multiple radiographs are used for longitudinal follow-up, especially in intensive care units (ICUs).
Purpose: To develop and validate a deep learning algorithm using thoracic cage registration and subtraction to triage pairs of chest radiographs showing no change by using longitudinal follow-up data.
Materials and Methods: A deep learning algorithm was retrospectively developed using baseline and follow-up chest radiographs in adults from January 2011 to December 2018 at a tertiary referral hospital. Two thoracic radiologists reviewed randomly selected pairs of "change" and "no change" images to establish the ground truth, including normal or abnormal status. Algorithm performance was evaluated using area under the receiver operating characteristic curve (AUC) analysis in a validation set and in temporally separated internal test sets (January 2019 to August 2021) from the emergency department (ED) and ICU. Threshold calibration for the test sets was conducted, and performance with 40% and 60% triage thresholds was assessed.
Results: This study included 3,304,996 chest radiographs in 329,036 patients (mean age, 59 years ± 14 [SD]; 170,433 male patients). The training set included 550,779 pairs of radiographs. The validation set included 1,620 pairs (810 no change, 810 change). The test sets included 533 pairs (ED; 265 no change, 268 change) and 600 pairs (ICU; 310 no change, 290 change). The algorithm had AUCs of 0.77 (validation), 0.80 (ED), and 0.80 (ICU). With a 40% triage threshold, specificity was 88.4% (237 of 268 pairs) and 90.0% (261 of 290 pairs) in the ED and ICU, respectively. With a 60% triage threshold, specificity was 79.9% (214 of 268 pairs) and 79.3% (230 of 290 pairs) in the ED and ICU, respectively. For urgent findings (consolidation, pleural effusion, pneumothorax), specificity was 78.6%-100% (ED) and 85.5%-93.9% (ICU) with a 40% triage threshold.
Conclusion: The deep learning algorithm could triage pairs of chest radiographs showing no change while detecting urgent interval changes during longitudinal follow-up. © RSNA, 2023. Supplemental material is available for this article. See also the editorial by Czum in this issue.
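The triage-threshold idea can be sketched schematically: model scores for interval change are thresholded so that a chosen fraction of radiograph pairs is triaged as "no change", and a specificity-style metric is then measured against the ground truth. The simulation below uses invented scores and labels and is not the study's algorithm; the metric defined in the comment is one reasonable reading, not the study's exact definition.

```python
# Schematic triage-threshold calibration on simulated scores (not the study's method).
import numpy as np

rng = np.random.default_rng(0)
n_pairs = 600
truth_change = rng.random(n_pairs) < 0.5                  # ground truth: interval change?
scores = np.where(truth_change,                           # higher score = more likely change
                  rng.beta(5, 2, n_pairs), rng.beta(2, 5, n_pairs))

def triage(scores, truth_change, triage_fraction):
    """Triage the lowest-scoring fraction of pairs as 'no change'."""
    threshold = np.quantile(scores, triage_fraction)
    triaged_no_change = scores <= threshold
    # Metric here (an assumption): fraction of truly changed pairs NOT triaged away.
    specificity = 1.0 - float(np.mean(triaged_no_change[truth_change]))
    return threshold, specificity

for frac in (0.4, 0.6):
    thr, spec = triage(scores, truth_change, frac)
    print(f"triage fraction {frac:.0%}: threshold {thr:.2f}, specificity {spec:.1%}")
```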
Affiliation(s)
- Jihye Yun, Yura Ahn, Kyungjin Cho, Sang Young Oh, Sang Min Lee, Namkug Kim, Joon Beom Seo: Department of Radiology and Research Institute of Radiology (J.Y., Y.A., S.Y.O., S.M.L., J.B.S.) and Department of Convergence Medicine (K.C., N.K.), University of Ulsan College of Medicine, Asan Medical Center, 88 Olympic-ro 43-gil, Songpa-gu, Seoul 138-736, Korea
9
Gong C, Jing C, Chen X, Pun CM, Huang G, Saha A, Nieuwoudt M, Li HX, Hu Y, Wang S. Generative AI for brain image computing and brain network computing: a review. Front Neurosci 2023;17:1203104. PMID: 37383107. PMCID: PMC10293625. DOI: 10.3389/fnins.2023.1203104.
Abstract
Recent years have witnessed significant advances in brain imaging techniques that offer a non-invasive approach to mapping the structure and function of the brain. Concurrently, generative artificial intelligence (AI) has experienced substantial growth, using existing data to create new content with underlying patterns similar to real-world data. The integration of these two domains, generative AI in neuroimaging, presents a promising avenue for exploring various fields of brain imaging and brain network computing, particularly the extraction of spatiotemporal brain features and the reconstruction of the topological connectivity of brain networks. Therefore, this study reviews the advanced models, tasks, challenges, and prospects of brain imaging and brain network computing techniques, with the aim of providing a comprehensive picture of current generative AI techniques in brain imaging. The review focuses on novel methodological approaches and applications of related new methods. It discusses the fundamental theories and algorithms of four classic generative models and provides a systematic survey and categorization of tasks, including co-registration, super-resolution, enhancement, classification, segmentation, cross-modality, brain network analysis, and brain decoding. The paper also highlights the challenges and future directions of the latest work, with the expectation that it will benefit future research.
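To give one concrete example of the classic generative models such reviews survey, here is a hedged sketch of a variational autoencoder objective (reconstruction plus KL term via the reparameterization trick) on toy vectors; it is generic teaching code under assumed sizes, not a method from the reviewed literature.

```python
# Generic VAE objective on toy data: negative ELBO = reconstruction + KL term.
import torch
from torch import nn

torch.manual_seed(0)
enc = nn.Linear(64, 2 * 8)     # outputs mean and log-variance of an 8-dim latent
dec = nn.Linear(8, 64)
opt = torch.optim.Adam(list(enc.parameters()) + list(dec.parameters()), lr=1e-3)

for step in range(1000):
    x = torch.randn(128, 64)                                  # stand-in image vectors
    mu, logvar = enc(x).chunk(2, dim=1)
    z = mu + torch.exp(0.5 * logvar) * torch.randn_like(mu)   # reparameterization trick
    recon = dec(z)
    recon_loss = ((recon - x) ** 2).sum(dim=1).mean()
    kl = 0.5 * (torch.exp(logvar) + mu ** 2 - 1.0 - logvar).sum(dim=1).mean()
    loss = recon_loss + kl                                    # negative ELBO (up to constants)
    opt.zero_grad()
    loss.backward()
    opt.step()

print(f"final negative ELBO on toy data: {loss.item():.2f}")
```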
Affiliation(s)
- Changwei Gong: Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen, China; Department of Computer Science, University of Chinese Academy of Sciences, Beijing, China
- Changhong Jing: Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen, China; Department of Computer Science, University of Chinese Academy of Sciences, Beijing, China
- Xuhang Chen: Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen, China; Department of Computer and Information Science, University of Macau, Macau, China
- Chi Man Pun: Department of Computer and Information Science, University of Macau, Macau, China
- Guoli Huang: Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen, China
- Ashirbani Saha: Department of Oncology and School of Biomedical Engineering, McMaster University, Hamilton, ON, Canada
- Martin Nieuwoudt: Institute for Biomedical Engineering, Stellenbosch University, Stellenbosch, South Africa
- Han-Xiong Li: Department of Systems Engineering, City University of Hong Kong, Hong Kong, China
- Yong Hu: Department of Orthopaedics and Traumatology, The University of Hong Kong, Hong Kong, China
- Shuqiang Wang: Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen, China; Department of Computer Science, University of Chinese Academy of Sciences, Beijing, China
10
Ramasubramanian B, Reddy VS, Chellappan V, Ramakrishna S. Emerging Materials, Wearables, and Diagnostic Advancements in Therapeutic Treatment of Brain Diseases. Biosensors (Basel) 2022;12:1176. PMID: 36551143. PMCID: PMC9775999. DOI: 10.3390/bios12121176.
Abstract
Among the most critical health issues, brain illnesses such as neurodegenerative conditions and tumors lower quality of life and have a significant economic impact. Implantable technology and nano-drug carriers hold enormous promise for sensing brain activity and for regulated therapeutic application in the treatment and detection of brain illnesses. Flexible materials are chosen for implantable devices because they help reduce the biomechanical mismatch between the implanted device and brain tissue. Additionally, implanted biodegradable devices may lessen autoimmune adverse effects, and biodegradability further obviates the onerous follow-up operation needed to remove the implanted device. This review expands on current developments in diagnostic technologies such as magnetic resonance imaging, computed tomography, mass spectroscopy, infrared spectroscopy, angiography, and electroencephalography while providing an overview of prevalent brain diseases. To the best of our knowledge, no single review article has addressed all of the prevalent brain illnesses. The review also examines future prospects and offers suggestions for the direction of future developments in the treatment of brain diseases.
Affiliation(s)
- Brindha Ramasubramanian: Department of Mechanical Engineering, Center for Nanofibers & Nanotechnology, National University of Singapore, Singapore 117574, Singapore; Institute of Materials Research and Engineering (IMRE), Agency for Science, Technology and Research (A*STAR), #08-03, 2 Fusionopolis Way, Innovis, Singapore 138634, Singapore
- Vundrala Sumedha Reddy: Department of Mechanical Engineering, Center for Nanofibers & Nanotechnology, National University of Singapore, Singapore 117574, Singapore
- Vijila Chellappan: Institute of Materials Research and Engineering (IMRE), Agency for Science, Technology and Research (A*STAR), #08-03, 2 Fusionopolis Way, Innovis, Singapore 138634, Singapore
- Seeram Ramakrishna: Department of Mechanical Engineering, Center for Nanofibers & Nanotechnology, National University of Singapore, Singapore 117574, Singapore