1. Dietrich N, Bradbury NC, Loh C. Prompt Engineering for Large Language Models in Interventional Radiology. AJR Am J Roentgenol 2025. PMID: 40334089; DOI: 10.2214/ajr.25.32956.
Abstract
Prompt engineering plays a crucial role in optimizing artificial intelligence (AI) and large language model (LLM) outputs by refining input structure, a key factor in medical applications where precision and reliability are paramount. This Clinical Perspective provides an overview of prompt engineering techniques and their relevance to interventional radiology (IR). It explores key strategies, including zero-shot, one- or few-shot, chain-of-thought, tree-of-thought, self-consistency, and directional stimulus prompting, demonstrating their application in IR-specific contexts. Practical examples illustrate how these techniques can be effectively structured for workplace and clinical use. Additionally, the article discusses best practices for designing effective prompts and addresses challenges in the clinical use of generative AI, including data privacy and regulatory concerns. It concludes with an outlook on the future of generative AI in IR, highlighting advances including retrieval-augmented generation, domain-specific LLMs, and multimodal models.
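The prompting strategies named in this abstract differ mainly in how the input text is structured. A minimal, hypothetical sketch (the wording, helper names, and example Q&A pair are illustrative assumptions, not taken from the article):

```python
# Hypothetical prompt templates illustrating zero-shot, few-shot, and
# chain-of-thought prompting for an interventional radiology (IR) question.
# No specific LLM API is assumed; only the prompt strings are built.

QUESTION = "List the main contraindications to percutaneous liver biopsy."

def zero_shot(question):
    # Zero-shot: the task alone, with no worked examples.
    return f"You are an IR assistant. {question}"

def few_shot(question, examples):
    # Few-shot: prepend worked question/answer pairs to steer format and style.
    shots = "\n".join(f"Q: {q}\nA: {a}" for q, a in examples)
    return f"{shots}\nQ: {question}\nA:"

def chain_of_thought(question):
    # Chain-of-thought: explicitly ask for step-by-step reasoning before the answer.
    return (f"{question}\n"
            "Think step by step: first list candidate factors, "
            "then justify each, then give the final answer.")

if __name__ == "__main__":
    ex = [("What INR threshold is commonly cited before IR procedures?",
           "An INR below roughly 1.5-1.8 is a commonly cited target.")]
    print(zero_shot(QUESTION))
    print(few_shot(QUESTION, ex))
    print(chain_of_thought(QUESTION))
```

The same skeleton extends to the other strategies in the abstract (e.g., self-consistency is typically several chain-of-thought samples followed by a majority vote over the final answers).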
Affiliation(s)
- Nicholas Dietrich: Temerty Faculty of Medicine, University of Toronto, 1 King's College Cir, Toronto, Ontario, Canada M5S 1A8
- Nicholas C Bradbury: University of North Dakota School of Medicine and Health Sciences, 1301 N Columbia Rd, Grand Forks, ND, USA 58203
- Christopher Loh: University of North Dakota School of Medicine and Health Sciences, 1301 N Columbia Rd, Grand Forks, ND, USA 58203
2. Kurt-Bayrakdar S, Bayrakdar İŞ, Kuran A, Çelik Ö, Orhan K, Jagtap R. Advancing periodontal diagnosis: harnessing advanced artificial intelligence for patterns of periodontal bone loss in cone-beam computed tomography. Dentomaxillofac Radiol 2025; 54:268-278. PMID: 39908459; PMCID: PMC12038236; DOI: 10.1093/dmfr/twaf011.
Abstract
OBJECTIVES The current study aimed to automatically detect tooth presence, tooth numbering, and types of periodontal bone defects from cone-beam CT (CBCT) images using a segmentation method with an advanced artificial intelligence (AI) algorithm. METHODS This study utilized a dataset of CBCT volumes collected from 502 individual subjects. Initially, 250 CBCT volumes were used for automatic tooth segmentation and numbering. Subsequently, CBCT volumes from 251 patients diagnosed with periodontal disease were employed to train an AI system to identify various periodontal bone defects using a segmentation method in web-based labelling software. In the third stage, CBCT images from 251 periodontally healthy subjects were combined with images from 251 periodontally diseased subjects to develop an AI model capable of automatically classifying patients as either periodontally healthy or periodontally diseased. Statistical evaluation included receiver operating characteristic (ROC) curve analysis and confusion matrices. RESULTS The area under the ROC curve (AUC) values for the models developed to segment teeth, total alveolar bone loss, supra-bony defects, infra-bony defects, perio-endo lesions, buccal defects, and furcation defects were 0.9594, 0.8499, 0.5052, 0.5613 (with cropping, AUC: 0.7488), 0.8893, 0.6780 (with cropping, AUC: 0.7592), and 0.6332 (with cropping, AUC: 0.8087), respectively. Additionally, the classification CNN model achieved an accuracy of 80% for healthy individuals and 76% for unhealthy individuals. CONCLUSIONS This study employed AI models on CBCT images to automatically detect tooth presence, numbering, and various periodontal bone defects, achieving high accuracy and demonstrating potential for enhancing dental diagnostics and patient care.
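The AUC values above come from standard ROC analysis. As an illustrative sketch with invented toy scores (not the study's data), AUC can be computed via the Mann-Whitney pairwise formulation, AUC = P(score_pos > score_neg) + 0.5·P(tie):

```python
# Minimal ROC AUC via the Mann-Whitney U equivalence: the fraction of
# positive/negative pairs in which the positive case scores higher,
# counting ties as half. Toy data only, not the study's measurements.

def roc_auc(scores, labels):
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

if __name__ == "__main__":
    scores = [0.9, 0.8, 0.7, 0.6, 0.4, 0.3]
    labels = [1,   1,   0,   1,   0,   0]
    print(roc_auc(scores, labels))  # 8/9, since 8 of the 9 pos/neg pairs are ordered correctly
```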
Affiliation(s)
- Sevda Kurt-Bayrakdar: Department of Periodontology, Faculty of Dentistry, Eskişehir Osmangazi University, Eskisehir, 26240, Turkey; Division of Oral and Maxillofacial Radiology, Department of Care Planning and Restorative Sciences, University of Mississippi Medical Center School of Dentistry, Jackson, MS, 39216, United States
- İbrahim Şevki Bayrakdar: Division of Oral and Maxillofacial Radiology, Department of Care Planning and Restorative Sciences, University of Mississippi Medical Center School of Dentistry, Jackson, MS, 39216, United States; Department of Oral and Maxillofacial Radiology, Faculty of Dentistry, Eskisehir Osmangazi University, Eskisehir, 26240, Turkey
- Alican Kuran: Department of Oral and Maxillofacial Radiology, Faculty of Dentistry, Kocaeli University, Kocaeli, 41190, Turkey
- Özer Çelik: Department of Mathematics and Computer Science, Faculty of Science, Eskisehir Osmangazi University, Eskisehir, 26240, Turkey
- Kaan Orhan: Department of Oral and Maxillofacial Radiology, Faculty of Dentistry, Ankara University, Ankara, 06560, Turkey
- Rohan Jagtap: Division of Oral and Maxillofacial Radiology, Department of Care Planning and Restorative Sciences, University of Mississippi Medical Center School of Dentistry, Jackson, MS, 39216, United States
3. Liu YK, Cisneros J, Nair G, Stevens C, Castillo R, Vinogradskiy Y, Castillo E. Perfusion estimation from dynamic non-contrast computed tomography using self-supervised learning and a physics-inspired U-net transformer architecture. Int J Comput Assist Radiol Surg 2025; 20:959-970. PMID: 39832070; PMCID: PMC12055896; DOI: 10.1007/s11548-025-03323-2.
Abstract
PURPOSE Pulmonary perfusion imaging is a key lung health indicator with clinical utility as a diagnostic and treatment planning tool. However, current nuclear medicine modalities face challenges such as low spatial resolution and long acquisition times, which limit clinical utility to non-emergency settings and often place an extra financial burden on the patient. This study introduces a novel deep learning approach to predict perfusion imaging from non-contrast inhale and exhale computed tomography scans (IE-CT). METHODS We developed a U-Net Transformer architecture modified for Siamese IE-CT inputs, integrating insights from physical models and utilizing a self-supervised learning strategy tailored for lung function prediction. We aggregated 523 IE-CT images from nine different 4DCT imaging datasets for self-supervised training, aiming to learn a low-dimensional IE-CT feature space by reconstructing image volumes from random data augmentations. Supervised training for perfusion prediction used this feature space and transfer learning on a cohort of 44 patients who had both IE-CT and single-photon emission CT (SPECT/CT) perfusion scans. RESULTS Testing with random bootstrapping, we estimated the mean and standard deviation of the spatial Spearman correlation between our predictions and the ground truth (SPECT perfusion) to be 0.742 ± 0.037, with a mean median correlation of 0.792 ± 0.036. These results represent a new state-of-the-art accuracy for predicting perfusion imaging from non-contrast CT. CONCLUSION Our approach combines low-dimensional feature representations of both inhale and exhale images into a deep learning model, aligning with previous physical modeling methods for characterizing perfusion from IE-CT. This likely contributes to the high spatial correlation with the ground truth. With further development, our method could provide faster and more accurate lung function imaging, potentially expanding its clinical applications beyond what is currently possible with nuclear medicine.
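The bootstrapped Spearman statistic reported in RESULTS can be sketched as follows; the data vectors, helper functions, and bootstrap size are illustrative assumptions, not the study's implementation:

```python
import random

# Spearman correlation computed as the Pearson correlation of ranks
# (with average ranks for ties), plus a simple bootstrap estimate of
# its mean and standard deviation. Toy vectors only.

def _ranks(v):
    order = sorted(range(len(v)), key=lambda i: v[i])
    r = [0.0] * len(v)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and v[order[j + 1]] == v[order[i]]:
            j += 1                       # extend over a run of tied values
        avg = (i + j) / 2 + 1            # average rank, 1-based
        for k in range(i, j + 1):
            r[order[k]] = avg
        i = j + 1
    return r

def spearman(x, y):
    rx, ry = _ranks(x), _ranks(y)
    n = len(x)
    mx, my = sum(rx) / n, sum(ry) / n
    num = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    den = (sum((a - mx) ** 2 for a in rx) * sum((b - my) ** 2 for b in ry)) ** 0.5
    return num / den

def bootstrap_spearman(x, y, n_boot=200, seed=0):
    rng = random.Random(seed)
    n, stats = len(x), []
    for _ in range(n_boot):
        idx = [rng.randrange(n) for _ in range(n)]   # resample with replacement
        xs, ys = [x[i] for i in idx], [y[i] for i in idx]
        if len(set(xs)) > 1 and len(set(ys)) > 1:    # correlation undefined otherwise
            stats.append(spearman(xs, ys))
    mean = sum(stats) / len(stats)
    std = (sum((s - mean) ** 2 for s in stats) / len(stats)) ** 0.5
    return mean, std
```

In the study the correlation is spatial (computed voxel-wise between predicted and SPECT perfusion maps); the same rank-correlation machinery applies once the volumes are flattened to vectors.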
Affiliation(s)
- Yi-Kuan Liu: Department of Biomedical Engineering, The University of Texas at Austin, Austin, TX, USA
- Jorge Cisneros: Department of Biomedical Engineering, The University of Texas at Austin, Austin, TX, USA
- Girish Nair: Division of Pulmonary and Critical Care, Beaumont Health, Royal Oak, MI, USA
- Craig Stevens: Division of Radiation Oncology, Beaumont Health, Royal Oak, MI, USA
- Richard Castillo: Division of Radiation Oncology, Emory University, Atlanta, GA, USA
- Edward Castillo: Department of Biomedical Engineering, The University of Texas at Austin, Austin, TX, USA
4. Yasaka K, Kawamura M, Sonoda Y, Kubo T, Kiryu S, Abe O. Large multimodality model fine-tuned for detecting breast and esophageal carcinomas on CT: a preliminary study. Jpn J Radiol 2025; 43:779-786. PMID: 39668277; PMCID: PMC12052878; DOI: 10.1007/s11604-024-01718-w.
Abstract
PURPOSE This study aimed to develop a large multimodality model (LMM) that can detect breast and esophageal carcinomas on chest contrast-enhanced CT. MATERIALS AND METHODS In this retrospective study, CT images of 401 (age, 62.9 ± 12.9 years; 169 males), 51 (age, 65.5 ± 11.6 years; 23 males), and 120 (age, 64.6 ± 14.2 years; 60 males) patients were used in the training, validation, and test phases, respectively. The numbers of CT images with breast carcinoma, esophageal carcinoma, and no lesion were 927, 2180, and 2087; 80, 233, and 270; and 184, 246, and 6919 for the training, validation, and test datasets, respectively. The LMM was fine-tuned using CT images as input and text data ("suspicious of breast carcinoma"/"suspicious of esophageal carcinoma"/"no lesion") as reference data on a desktop computer equipped with a single graphics processing unit. Because of the random nature of the training process, supervised learning was performed 10 times. The model that performed best on the validation dataset was further tested using the time-independent test dataset. Detection performance was evaluated by calculating the area under the receiver operating characteristic curve (AUC). RESULTS The sensitivities of the fine-tuned LMM for detecting breast and esophageal carcinomas in the test dataset were 0.929 and 0.951, respectively. The diagnostic performance of the fine-tuned LMM for detecting breast and esophageal carcinomas was high, with AUCs of 0.890 (95% CI 0.871-0.909) and 0.880 (95% CI 0.865-0.894), respectively. CONCLUSIONS The fine-tuned LMM could detect both breast and esophageal carcinomas on chest contrast-enhanced CT with high diagnostic performance. The usefulness of large multimodality models in chest cancer imaging had not previously been assessed; here, the fine-tuned model detected both carcinoma types with AUCs of 0.890 and 0.880, respectively.
Affiliation(s)
- Koichiro Yasaka: Department of Radiology, Graduate School of Medicine, The University of Tokyo, 7-3-1 Hongo, Bunkyo-Ku, Tokyo, 113-8655, Japan
- Motohide Kawamura: Department of Radiology, Graduate School of Medicine, The University of Tokyo, 7-3-1 Hongo, Bunkyo-Ku, Tokyo, 113-8655, Japan
- Yuki Sonoda: Department of Radiology, Graduate School of Medicine, The University of Tokyo, 7-3-1 Hongo, Bunkyo-Ku, Tokyo, 113-8655, Japan
- Takatoshi Kubo: Department of Radiology, Graduate School of Medicine, The University of Tokyo, 7-3-1 Hongo, Bunkyo-Ku, Tokyo, 113-8655, Japan
- Shigeru Kiryu: Department of Radiology, International University of Health and Welfare Narita Hospital, 852 Hatakeda, Narita, Chiba, 286-0124, Japan
- Osamu Abe: Department of Radiology, Graduate School of Medicine, The University of Tokyo, 7-3-1 Hongo, Bunkyo-Ku, Tokyo, 113-8655, Japan
5. Chong JJR, Kirpalani A, Moreland R, Colak E. Artificial Intelligence in Gastrointestinal Imaging: Advances and Applications. Radiol Clin North Am 2025; 63:477-490. PMID: 40221188; DOI: 10.1016/j.rcl.2024.11.005.
Abstract
While artificial intelligence (AI) has shown considerable progress in many areas of medical imaging, applications in abdominal imaging, particularly for the gastrointestinal (GI) system, have notably lagged behind advancements in other body regions. This article reviews foundational concepts in AI and highlights examples of AI applications in GI tract imaging, including acute and emergent GI imaging, inflammatory bowel disease, oncology, and other miscellaneous applications. It concludes with a discussion of important considerations for implementing AI tools in clinical practice and of steps that can accelerate future developments in the field.
Affiliation(s)
- Jaron J R Chong: Department of Medical Imaging, Schulich School of Medicine and Dentistry, Western University, 800 Commissioners Road East, London, Ontario N6A 5W9, Canada
- Anish Kirpalani: Department of Medical Imaging, University of Toronto, Toronto, Ontario, Canada; Department of Medical Imaging, St. Michael's Hospital, Unity Health Toronto, 30 Bond Street, Toronto, Ontario M5B 1C9, Canada; Li Ka Shing Knowledge Institute, Unity Health Toronto, Toronto, Ontario, Canada
- Robert Moreland: Department of Medical Imaging, University of Toronto, Toronto, Ontario, Canada; Department of Medical Imaging, St. Michael's Hospital, Unity Health Toronto, 30 Bond Street, Toronto, Ontario M5B 1C9, Canada
- Errol Colak: Department of Medical Imaging, University of Toronto, Toronto, Ontario, Canada; Department of Medical Imaging, St. Michael's Hospital, Unity Health Toronto, 30 Bond Street, Toronto, Ontario M5B 1C9, Canada; Li Ka Shing Knowledge Institute, Unity Health Toronto, Toronto, Ontario, Canada
6. Vojtíšek R, Baxa J, Hošek P, Kovářová P, Vítovec M, Sukovská E, Kosťun J, Vlasák P, Presl J, Ferda J, Fínek J. Association of long-term treatment outcomes with changes in PET/MRI characteristics and the type of early treatment response during concurrent radiochemotherapy in patients with locally advanced cervical cancer. Strahlenther Onkol 2025; 201:546-560. PMID: 40163089; PMCID: PMC12014719; DOI: 10.1007/s00066-025-02389-w.
Abstract
PURPOSE We aimed to identify predictive tumour characteristics detected by interim positron-emission tomography/magnetic resonance imaging (PET/MRI) in cervical cancer patients, and to investigate the type of interim response. Furthermore, we compared the investigated parameters with disease-free survival (DFS) and overall survival (OS) outcomes. METHODS We evaluated 108 patients treated between August 2015 and January 2023 with external-beam radiotherapy (EBRT) and image-guided adaptive brachytherapy (IGABT) who had undergone pretreatment staging, mid-treatment evaluation after completing EBRT, and definitive restaging 3 months after completing the whole treatment, all using PET/MRI. Patients were divided into two groups based on the RECIST and PERCIST criteria: responders (achieving complete metabolic response, CMR) and non-responders (non-CMR). These two groups were compared using selected parameters obtained at pre-PET/MRI and mid-PET/MRI. The early response to treatment as evaluated by mid-PET/MRI was categorized into three types: interim complete metabolic response, interim nodal response, and interim nodal persistence. RESULTS The mid-TLG-S parameter (the sum of total lesion glycolysis for the primary tumour plus pelvic and para-aortic lymph nodes) showed the best discriminatory ability for predicting non-CMR. The second factor with significant discriminatory ability was mid-MTV-S (the sum of the metabolic tumour volume of the primary tumour plus pelvic and para-aortic lymph nodes). The strongest factor, mid-TLG-S, showed a sensitivity of 40% and a specificity of 90% at a threshold value of 70. We found a statistically significant association of DFS and OS with the following parameters: number of chemotherapy cycles, early response type, and CMR vs. non-CMR. CONCLUSION We identified thresholds for selected parameters that can be used to identify patients who are more likely to have worse DFS and OS. The type of early response during concurrent chemoradiotherapy (CCRT) was also significantly associated with DFS and OS. These aspects represent an important contribution to the possible stratification of patients for subsequent individualised adjuvant treatment.
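A sensitivity/specificity pair at a fixed cutoff, as reported for mid-TLG-S, can be illustrated with a short sketch; the marker values and labels below are invented toy data, not the study's measurements:

```python
# Sensitivity and specificity of a single cutoff applied to a continuous
# marker (standing in for mid-TLG-S). A case is called "positive" when the
# marker is at or above the threshold; label 1 = event (non-CMR), 0 = no event.
# All values are invented toy numbers chosen so the arithmetic is easy to follow.

def sens_spec(values, labels, threshold):
    tp = sum(1 for v, y in zip(values, labels) if y == 1 and v >= threshold)
    fn = sum(1 for v, y in zip(values, labels) if y == 1 and v < threshold)
    tn = sum(1 for v, y in zip(values, labels) if y == 0 and v < threshold)
    fp = sum(1 for v, y in zip(values, labels) if y == 0 and v >= threshold)
    return tp / (tp + fn), tn / (tn + fp)

if __name__ == "__main__":
    tlg = [30, 50, 65, 80, 120, 40, 55, 60, 75, 90]
    y   = [0,  0,  0,  0,  0,   1,  1,  1,  1,  1]
    sens, spec = sens_spec(tlg, y, threshold=70)
    print(sens, spec)  # 0.4 0.6 for this toy data
```

Sweeping the threshold over all observed values and recording each (sensitivity, 1 - specificity) pair is exactly how the ROC curve behind the reported operating point is built.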
Affiliation(s)
- Radovan Vojtíšek: Faculty of Medicine, Charles University, Pilsen, Czech Republic; Department of Oncology and Radiotherapy, University Hospital in Pilsen, alej Svobody 80, 304 60, Pilsen, Czech Republic
- Jan Baxa: Department of Imaging Methods, University Hospital in Pilsen, alej Svobody 80, 304 60, Pilsen, Czech Republic
- Petr Hošek: Biomedical Center, Faculty of Medicine in Pilsen, Charles University, alej Svobody 76, 323 00, Pilsen, Czech Republic
- Petra Kovářová: Department of Oncology and Radiotherapy, University Hospital in Pilsen, alej Svobody 80, 304 60, Pilsen, Czech Republic
- Martin Vítovec: Department of Imaging Methods, University Hospital in Pilsen, alej Svobody 80, 304 60, Pilsen, Czech Republic
- Emília Sukovská: Department of Oncology and Radiotherapy, University Hospital in Pilsen, alej Svobody 80, 304 60, Pilsen, Czech Republic
- Jan Kosťun: Department of Gynecology and Obstetrics, University Hospital in Pilsen, alej Svobody 80, 304 60, Pilsen, Czech Republic
- Pavel Vlasák: Department of Gynecology and Obstetrics, University Hospital in Pilsen, alej Svobody 80, 304 60, Pilsen, Czech Republic
- Jiří Presl: Department of Gynecology and Obstetrics, University Hospital in Pilsen, alej Svobody 80, 304 60, Pilsen, Czech Republic
- Jiří Ferda: Department of Imaging Methods, University Hospital in Pilsen, alej Svobody 80, 304 60, Pilsen, Czech Republic
- Jindřich Fínek: Department of Oncology and Radiotherapy, University Hospital in Pilsen, alej Svobody 80, 304 60, Pilsen, Czech Republic
7. Wang X, Zhu MX, Wang JF, Liu P, Zhang LY, Zhou Y, Lin XX, Du YD, He KL. Multivariable prognostic models for post-hepatectomy liver failure: An updated systematic review. World J Hepatol 2025; 17:103330. PMID: 40308827; PMCID: PMC12038414; DOI: 10.4254/wjh.v17.i4.103330.
Abstract
BACKGROUND Partial hepatectomy continues to be the primary treatment approach for liver tumors, and post-hepatectomy liver failure (PHLF) remains the most critical life-threatening complication following surgery. AIM To comprehensively review the PHLF prognostic models developed in recent years and objectively assess the risk of bias in these models. METHODS This review followed the Checklist for Critical Appraisal and Data Extraction for Systematic Reviews of Prediction Modelling Studies and the Preferred Reporting Items for Systematic Reviews and Meta-Analyses guideline. Three databases were searched from November 2019 to December 2022, and references as well as cited literature in all included studies were manually screened in March 2023. Based on the defined inclusion criteria, articles on PHLF prognostic models were selected, and data from all included articles were extracted by two independent reviewers. PROBAST was used to evaluate the quality of each included article. RESULTS A total of thirty-four studies met the eligibility criteria and were included in the analysis. Nearly all of the models (32/34, 94.1%) were developed and validated exclusively using private data sources. Predictive variables were categorized into five distinct types, with the majority of studies (32/34, 94.1%) utilizing multiple types of data. The area under the curve for the included training models ranged from 0.697 to 0.956. Analytical issues resulted in a high risk of bias across all included studies. CONCLUSION The validation performance of the existing models was substantially lower than that of the development models. All included studies were evaluated as having a high risk of bias, primarily due to issues within the analytical domain. The progression of modeling technology, particularly artificial intelligence modeling, necessitates the use of suitable quality assessment tools.
Affiliation(s)
- Xiao Wang: Department of Hepatobiliary Surgery, Chinese PLA 970 Hospital, Yantai 264001, Shandong Province, China; Medical Big Data Research Center, Chinese PLA General Hospital, Beijing 100853, China
- Ming-Xiang Zhu: Medical Big Data Research Center, Chinese PLA General Hospital, Beijing 100853, China; Medical School of Chinese PLA, Chinese PLA General Hospital, Beijing 100853, China
- Jun-Feng Wang: Division of Pharmacoepidemiology and Clinical Pharmacology, Utrecht Institute for Pharmaceutical Sciences, Utrecht University, Utrecht 358 4CG, Netherlands
- Pan Liu: Medical Big Data Research Center, Chinese PLA General Hospital, Beijing 100853, China
- Li-Yuan Zhang: China National Clinical Research Center for Neurological Diseases, Beijing 100853, China
- You Zhou: Medical Big Data Research Center, Chinese PLA General Hospital, Beijing 100853, China; School of Medicine, Nankai University, Tianjin 300071, China
- Xi-Xiang Lin: Medical Big Data Research Center, Chinese PLA General Hospital, Beijing 100853, China
- Ying-Dong Du: Department of Hepatobiliary Surgery, Chinese PLA 970 Hospital, Yantai 264001, Shandong Province, China
- Kun-Lun He: Medical Big Data Research Center, Chinese PLA General Hospital, Beijing 100853, China
8. Yasaka K, Asari Y, Morita Y, Kurokawa M, Tajima T, Akai H, Yoshioka N, Akahane M, Ohtomo K, Abe O, Kiryu S. Super-resolution deep learning reconstruction to evaluate lumbar spinal stenosis status on magnetic resonance myelography. Jpn J Radiol 2025. PMID: 40266548; DOI: 10.1007/s11604-025-01787-5.
Abstract
PURPOSE To investigate whether super-resolution deep learning reconstruction (SR-DLR) of MR myelography aids evaluation of lumbar spinal stenosis. MATERIALS AND METHODS In this retrospective study, lumbar MR myelograms of 40 patients (16 males and 24 females; mean age, 59.4 ± 31.8 years) were analyzed. Using the MR imaging data, MR myelography was separately reconstructed via SR-DLR, deep learning reconstruction (DLR), and conventional zero-filling interpolation (ZIP). Three radiologists, blinded to patient background data and MR reconstruction information, independently evaluated the image sets in terms of the following items: the number of levels affected by lumbar spinal stenosis; and cauda equina depiction, sharpness, noise, artifacts, and overall image quality. RESULTS The median interobserver agreement for the number of lumbar spinal stenosis levels was 0.819, 0.735, and 0.729 for SR-DLR, DLR, and ZIP images, respectively. Cauda equina depiction, image sharpness, noise, and overall image quality were rated significantly better on SR-DLR images than on DLR and ZIP images by all readers (p < 0.001, Wilcoxon signed-rank test). No significant differences in artifacts were observed between SR-DLR and either DLR or ZIP. CONCLUSIONS SR-DLR improved the image quality of lumbar MR myelograms compared with DLR and ZIP, and was associated with better interobserver agreement in the assessment of lumbar spinal stenosis status.
Affiliation(s)
- Koichiro Yasaka: Department of Radiology, Graduate School of Medicine, The University of Tokyo, 7-3-1 Hongo, Bunkyo-Ku, Tokyo, 113-8655, Japan; Department of Radiology, International University of Health and Welfare Narita Hospital, 852 Hatakeda, Narita, Chiba, 286-0124, Japan
- Yusuke Asari: Department of Radiology, Graduate School of Medicine, The University of Tokyo, 7-3-1 Hongo, Bunkyo-Ku, Tokyo, 113-8655, Japan
- Yuichi Morita: Department of Radiology, Graduate School of Medicine, The University of Tokyo, 7-3-1 Hongo, Bunkyo-Ku, Tokyo, 113-8655, Japan
- Mariko Kurokawa: Department of Radiology, Graduate School of Medicine, The University of Tokyo, 7-3-1 Hongo, Bunkyo-Ku, Tokyo, 113-8655, Japan
- Taku Tajima: Department of Radiology, International University of Health and Welfare Mita Hospital, 1-4-3 Mita, Minato-Ku, Tokyo, 108-8329, Japan
- Hiroyuki Akai: Department of Radiology, International University of Health and Welfare Narita Hospital, 852 Hatakeda, Narita, Chiba, 286-0124, Japan; Department of Radiology, The Institute of Medical Science, The University of Tokyo, 4-6-1 Shirokanedai, Minato-Ku, Tokyo, 108-8639, Japan
- Naoki Yoshioka: Department of Radiology, International University of Health and Welfare Narita Hospital, 852 Hatakeda, Narita, Chiba, 286-0124, Japan
- Masaaki Akahane: Department of Radiology, International University of Health and Welfare Narita Hospital, 852 Hatakeda, Narita, Chiba, 286-0124, Japan
- Kuni Ohtomo: International University of Health and Welfare, 2600-1 Kitakanemaru, Ohtawara, Tochigi, 324-8501, Japan
- Osamu Abe: Department of Radiology, Graduate School of Medicine, The University of Tokyo, 7-3-1 Hongo, Bunkyo-Ku, Tokyo, 113-8655, Japan
- Shigeru Kiryu: Department of Radiology, International University of Health and Welfare Narita Hospital, 852 Hatakeda, Narita, Chiba, 286-0124, Japan
9. Liu M, Gao W, Song D, Dong Y, Hong S, Cui C, Shi S, Wu K, Chen J, Xu J, Dong F. A deep learning-based calculation system for plaque stenosis severity on common carotid artery of ultrasound images. Vascular 2025; 33:349-356. PMID: 38656244; DOI: 10.1177/17085381241246312.
Abstract
OBJECTIVES Assessment of plaque stenosis severity allows better management of carotid sources of stroke. Our objective was to create a deep learning (DL) model to segment carotid intima-media thickness (IMT) and plaque, and to further automatically calculate plaque stenosis severity, on common carotid artery (CCA) transverse-section ultrasound images. METHODS Three hundred and ninety images from 376 individuals were used to train (235/390, 60%), validate (39/390, 10%), and test (116/390, 30%) the newly proposed CANet model. We also evaluated the model on an external test set of 115 individuals with 122 images acquired from another hospital. Comparative studies were conducted between our CANet model, four state-of-the-art DL models, and two experienced sonographers to evaluate the present model's performance. RESULTS On the internal test set, our CANet model outperformed the four comparative models with Dice values of 95.22% versus 90.15%, 87.48%, 90.22%, and 91.56% on lumen-intima (LI) borders and 96.27% versus 91.40%, 88.94%, 91.19%, and 92.88% on media-adventitia (MA) borders. On the external test set, our model still produced excellent results, with a Dice value of 92.41%. Good consistency of stenosis severity calculation was observed between the CANet model and experienced sonographers, with intraclass correlation coefficients (ICC) of 0.927 and 0.702 and Pearson's correlation coefficients of 0.928 and 0.704 on the internal and external test sets, respectively. CONCLUSIONS Our CANet model achieved excellent performance in the segmentation of carotid IMT and plaques as well as automated calculation of stenosis severity.
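The Dice value used to score segmentation here is the standard overlap metric; a minimal sketch on toy binary masks (not the study's images):

```python
# Dice similarity coefficient between two binary masks:
# Dice = 2*|A ∩ B| / (|A| + |B|). Toy 1-D masks for clarity; the same
# formula applies per-pixel to flattened 2-D/3-D segmentation masks.

def dice(pred, truth):
    inter = sum(p * t for p, t in zip(pred, truth))
    total = sum(pred) + sum(truth)
    return 2.0 * inter / total if total else 1.0  # convention: two empty masks agree

if __name__ == "__main__":
    pred  = [0, 1, 1, 1, 0, 0]
    truth = [0, 1, 1, 0, 1, 0]
    print(dice(pred, truth))  # 2*2 / (3+3) = 2/3
```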
Affiliation(s)
- Mengmeng Liu: Department of Ultrasound, First Affiliated Hospital of Southern University of Science and Technology, Second Clinical College of Jinan University, Shenzhen People's Hospital, Shenzhen, PR China
- Wenjing Gao: Department of Ultrasound, First Affiliated Hospital of Southern University of Science and Technology, Second Clinical College of Jinan University, Shenzhen People's Hospital, Shenzhen, PR China
- Di Song: Department of Ultrasound, First Affiliated Hospital of Southern University of Science and Technology, Second Clinical College of Jinan University, Shenzhen People's Hospital, Shenzhen, PR China
- Yinghui Dong: Department of Ultrasound, First Affiliated Hospital of Southern University of Science and Technology, Second Clinical College of Jinan University, Shenzhen People's Hospital, Shenzhen, PR China
- Shaofu Hong: Department of Ultrasound, First Affiliated Hospital of Southern University of Science and Technology, Second Clinical College of Jinan University, Shenzhen People's Hospital, Shenzhen, PR China
- Chen Cui: Illuminate, LLC, Shenzhen, China; Microport Prophecy, Shanghai, China
- Siyuan Shi: Illuminate, LLC, Shenzhen, China; Microport Prophecy, Shanghai, China
- Kai Wu: Illuminate, LLC, Shenzhen, China; Microport Prophecy, Shanghai, China
- Jiayi Chen: Illuminate, LLC, Shenzhen, China; Microport Prophecy, Shanghai, China
- Jinfeng Xu: Department of Ultrasound, First Affiliated Hospital of Southern University of Science and Technology, Second Clinical College of Jinan University, Shenzhen People's Hospital, Shenzhen, PR China
- Fajin Dong: Department of Ultrasound, First Affiliated Hospital of Southern University of Science and Technology, Second Clinical College of Jinan University, Shenzhen People's Hospital, Shenzhen, PR China
10. Sankar H, Alagarsamy R, Lal B, Rana SS, Roychoudhury A, Barathi A, Ankush A. Role of artificial intelligence in magnetic resonance imaging-based detection of temporomandibular joint disorder: a systematic review. Br J Oral Maxillofac Surg 2025; 63:174-181. PMID: 40087072; DOI: 10.1016/j.bjoms.2024.12.004.
Abstract
This systematic review aimed to evaluate the application of artificial intelligence (AI) in the identification of temporomandibular joint (TMJ) disc position in normal individuals or those with temporomandibular joint disorder (TMD) using magnetic resonance imaging (MRI). A database search was performed in PubMed, Google Scholar, Semantic Scholar, and Cochrane for studies on AI applications for detecting TMJ disc position on MRI up to September 2023, adhering to PRISMA guidelines. Data extraction included the number of patients, number of TMJs/MRIs, AI algorithm, and performance metrics. Risk of bias was assessed with a modified PROBAST tool. Seven studies were included (deep learning = 6, machine learning = 1). Sensitivity values (n = 7) ranged from 0.735 to 1, while specificity values (n = 4) ranged from 0.68 to 0.961. The AI models achieved accuracy levels exceeding 83%. MobileNetV2 and ResNet showed the best performance metrics, while the machine learning model demonstrated the lowest accuracy (74.2%). Risk of bias was low in six studies and high in one. Deep learning models showed reliable performance metrics for AI-based detection of TMJ disc position on MRI. Future research is warranted, with better standardisation of design and consistent reporting.
Affiliation(s)
- Hariram Sankar
- Department of Dentistry, All India Institute of Medical Sciences, Bathinda, India
| | - Ragavi Alagarsamy
- Department of Burns, Plastic and Maxillofacial Surgery, VMMC and Safdarjung hospital, New Delhi, India
| | - Babu Lal
- Department of Trauma and Emergency Medicine, All India Institute of Medical Sciences, Bhopal, Madhya Pradesh, India.
| | | | - Ajoy Roychoudhury
- Department of Oral & Maxillofacial surgery, All India Institute of Medical Sciences, New Delhi, India
| | | | - Ankush Ankush
- Department of Radiodiagnosis and Imaging, LNMC & JK Hospital, Bhopal, India
11
Xiao Y, Yang F, Deng Q, Ming Y, Tang L, Yue S, Li Z, Zhang B, Liang H, Huang J, Sun J. Comparison of conventional diffusion-weighted imaging and multiplexed sensitivity-encoding combined with deep learning-based reconstruction in breast magnetic resonance imaging. Magn Reson Imaging 2025; 117:110316. [PMID: 39716684 DOI: 10.1016/j.mri.2024.110316] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/05/2024] [Revised: 12/17/2024] [Accepted: 12/18/2024] [Indexed: 12/25/2024]
Abstract
PURPOSE To evaluate the feasibility of multiplexed sensitivity-encoding (MUSE) with deep learning-based reconstruction (DLR) for breast imaging, in comparison with conventional diffusion-weighted imaging (DWI) and MUSE alone. METHODS This study used conventional single-shot DWI and MUSE data from female participants who underwent breast magnetic resonance imaging (MRI) from June to December 2023. The k-space data in MUSE were reconstructed using both conventional reconstruction and DLR. Two experienced radiologists conducted quantitative analyses of the DWI, MUSE, and MUSE-DLR images, measuring the signal-to-noise ratio (SNR) and contrast-to-noise ratio (CNR) of lesions and normal tissue, and qualitative analyses using a 5-point Likert scale to assess image quality. Inter-reader agreement was assessed using the intraclass correlation coefficient (ICC). Image scores, SNR, CNR, and apparent diffusion coefficient (ADC) measurements among the three sequences were compared using the Friedman test, with significance defined at P < 0.05. RESULTS In evaluations of the images of 51 female participants across the three sequences, the two radiologists exhibited good agreement (ICC = 0.540-1.000, P < 0.05). MUSE-DLR showed significantly better SNR than MUSE (P < 0.001), while the ADC values within lesions and tissues did not differ significantly among the three sequences (P = 0.924 and P = 0.636, respectively). In the subjective assessments, MUSE and MUSE-DLR scored significantly higher than conventional DWI for overall image quality, geometric distortion, and depiction of axillary lymph nodes (P < 0.001). CONCLUSION Compared with conventional DWI, MUSE-DLR yielded improved image quality at only a slightly longer acquisition time.
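The SNR and CNR compared above are ROI-based measurements; one common definition divides the mean ROI signal (or the lesion-tissue signal difference) by the standard deviation of a background noise ROI. A sketch with hypothetical pixel values, not data from the study:

```python
import statistics

def snr(roi: list[float], noise: list[float]) -> float:
    # One common definition: mean ROI signal over the SD of a background ROI.
    return statistics.mean(roi) / statistics.stdev(noise)

def cnr(lesion: list[float], tissue: list[float], noise: list[float]) -> float:
    # Lesion-to-tissue contrast normalized by the same background noise SD.
    return abs(statistics.mean(lesion) - statistics.mean(tissue)) / statistics.stdev(noise)

# Hypothetical pixel samples from a lesion ROI, normal-tissue ROI, and noise ROI.
lesion = [220.0, 230.0, 225.0, 235.0]
tissue = [120.0, 130.0, 125.0, 135.0]
noise = [10.0, 12.0, 8.0, 10.0]
```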
Affiliation(s)
- Yitian Xiao
- Department of Radiology, West China Hospital of Sichuan University, Chengdu, China
| | - Fan Yang
- Department of Radiology, West China Hospital of Sichuan University, Chengdu, China
| | - Qiao Deng
- Department of Radiology, West China Hospital of Sichuan University, Chengdu, China
| | - Yue Ming
- West China School of Medicine, West China Hospital, Sichuan University, Chengdu, China
| | - Lu Tang
- Department of Radiology, West China Hospital of Sichuan University, Chengdu, China
| | - Shuting Yue
- West China School of Medicine, West China Hospital, Sichuan University, Chengdu, China
| | - Zheng Li
- West China School of Medicine, West China Hospital, Sichuan University, Chengdu, China
| | - Bo Zhang
- GE HealthCare MR Research, Beijing, China
| | | | - Juan Huang
- Department of Radiology, West China Hospital of Sichuan University, Chengdu, China.
| | - Jiayu Sun
- Department of Radiology, West China Hospital of Sichuan University, Chengdu, China.
12
Lokaj B, Durand de Gevigney V, Djema DA, Zaghir J, Goldman JP, Bjelogrlic M, Turbé H, Kinkel K, Lovis C, Schmid J. Multimodal deep learning fusion of ultrafast-DCE MRI and clinical information for breast lesion classification. Comput Biol Med 2025; 188:109721. [PMID: 39978091 DOI: 10.1016/j.compbiomed.2025.109721] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/01/2024] [Revised: 01/17/2025] [Accepted: 01/17/2025] [Indexed: 02/22/2025]
Abstract
BACKGROUND Breast cancer is the most common cancer worldwide, and magnetic resonance imaging (MRI) is a very sensitive technique for invasive cancer detection. When reviewing breast MRI examinations, clinical radiologists rely on multimodal information: imaging data, but also information not present in the images, such as clinical data. Most machine learning (ML) approaches are not well suited to multimodal data; however, attention-based architectures such as Transformers are flexible and therefore good candidates for integrating it. PURPOSE The aim of this study was to develop and evaluate a novel multimodal deep learning (DL) model combining ultrafast dynamic contrast-enhanced (UF-DCE) MRI images, lesion characteristics, and clinical information for breast lesion classification. MATERIALS AND METHODS From 2019 to 2023, UF-DCE breast images and radiology reports of 240 patients were retrospectively collected from a single clinical center and annotated. The imaging data consisted of volumes of interest (VOI) extracted around segmented lesions. The non-imaging data consisted of both clinical (categorical) and geometrical (scalar) data; clinical data were extracted from the annotated reports and associated with their corresponding lesions. We compared the diagnostic performance of traditional ML methods for non-imaging data, an image-only DL model, and a novel Transformer-based architecture, the Multimodal Sieve Transformer with Vision Transformer encoder (MMST-V). RESULTS The final dataset included 987 lesions (280 benign lesions, 121 malignant lesions, and 586 benign lymph nodes) and 1081 reports. For classification with non-imaging data, scalar data had a greater influence on lesion classification performance (area under the receiver operating characteristic curve (AUROC) = 0.875 ± 0.042) than categorical data (AUROC = 0.680 ± 0.060). MMST-V achieved better performance (AUROC = 0.928 ± 0.027) than classification based on non-imaging data only (AUROC = 0.900 ± 0.045) or imaging data only (AUROC = 0.863 ± 0.025). CONCLUSION The proposed MMST-V is an adaptive approach that can handle the redundant information present in multimodal data, and it outperformed the unimodal methods. The results highlight that combining clinical patient data and detailed lesion information as additional clinical knowledge enhances the diagnostic performance of UF-DCE breast MRI.
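The AUROC values reported above can be read as the probability that a randomly chosen malignant lesion receives a higher score than a randomly chosen benign one. A minimal pure-Python sketch of that rank-based (Mann-Whitney) formulation, using hypothetical scores rather than the study's outputs:

```python
def auroc(scores_pos: list[float], scores_neg: list[float]) -> float:
    """AUROC as the probability that a positive case outscores a negative
    one (Mann-Whitney formulation); ties count half."""
    wins = 0.0
    for p in scores_pos:
        for n in scores_neg:
            if p > n:
                wins += 1.0
            elif p == n:
                wins += 0.5
    return wins / (len(scores_pos) * len(scores_neg))

# Hypothetical classifier scores for malignant and benign lesions.
malignant = [0.9, 0.8, 0.7, 0.4]
benign = [0.5, 0.3, 0.2, 0.1]
area = auroc(malignant, benign)  # 15 of 16 pairs ranked correctly -> 0.9375
```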
Affiliation(s)
- Belinda Lokaj
- Geneva School of Health Sciences, HES-SO University of Applied Sciences and Arts Western Switzerland, Delémont, Switzerland; Department of Radiology and Medical Informatics, University of Geneva, Geneva, Switzerland.
| | - Valentin Durand de Gevigney
- Geneva School of Health Sciences, HES-SO University of Applied Sciences and Arts Western Switzerland, Delémont, Switzerland
| | | | - Jamil Zaghir
- Department of Radiology and Medical Informatics, University of Geneva, Geneva, Switzerland; Division of Medical Information Sciences, Geneva University Hospitals, Geneva, Switzerland
| | - Jean-Philippe Goldman
- Department of Radiology and Medical Informatics, University of Geneva, Geneva, Switzerland; Division of Medical Information Sciences, Geneva University Hospitals, Geneva, Switzerland
| | - Mina Bjelogrlic
- Department of Radiology and Medical Informatics, University of Geneva, Geneva, Switzerland; Division of Medical Information Sciences, Geneva University Hospitals, Geneva, Switzerland
| | - Hugues Turbé
- Department of Radiology and Medical Informatics, University of Geneva, Geneva, Switzerland; Division of Medical Information Sciences, Geneva University Hospitals, Geneva, Switzerland
| | - Karen Kinkel
- Réseau Hospitalier Neuchâtelois, Neuchâtel, Switzerland
| | - Christian Lovis
- Department of Radiology and Medical Informatics, University of Geneva, Geneva, Switzerland; Division of Medical Information Sciences, Geneva University Hospitals, Geneva, Switzerland
| | - Jérôme Schmid
- Geneva School of Health Sciences, HES-SO University of Applied Sciences and Arts Western Switzerland, Delémont, Switzerland
13
Gong B, Khalvati F, Ertl-Wagner BB, Patlas MN. Artificial intelligence in emergency neuroradiology: Current applications and perspectives. Diagn Interv Imaging 2025; 106:135-142. [PMID: 39672753 DOI: 10.1016/j.diii.2024.11.002] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/14/2024] [Revised: 11/19/2024] [Accepted: 11/19/2024] [Indexed: 12/15/2024]
Abstract
Emergency neuroradiology supports rapid diagnostic decision-making and management guidance for a wide range of acute conditions involving the brain, head and neck, and spine. This narrative review provides an up-to-date discussion of the state of the art in applications of artificial intelligence in emergency neuroradiology, which have expanded substantially in depth and scope in the past few years. A detailed analysis of machine learning and deep learning algorithms for tasks related to acute ischemic stroke across various imaging modalities, including a description of existing commercial products, is provided. The applications of artificial intelligence in acute intracranial hemorrhage and other vascular pathologies, such as intracranial aneurysm and arteriovenous malformation, are discussed. Other areas of emergency neuroradiology, including infection, fracture, cord compression, and pediatric imaging, are discussed in turn. Based on these discussions, this article offers insight into practical considerations regarding the applications of artificial intelligence in emergency neuroradiology, calling for more development driven by clinical needs, greater attention to pediatric neuroimaging, and analysis of real-world performance.
Affiliation(s)
- Bo Gong
- Department of Medical Imaging, University of Toronto, Toronto, Ontario, M5T 1W7, Canada; Department of Computer Science. University of Toronto, Toronto, Ontario, M5S 2E4, Canada.
| | - Farzad Khalvati
- Department of Medical Imaging, University of Toronto, Toronto, Ontario, M5T 1W7, Canada; Department of Diagnostic & Interventional Radiology, the Hospital for Sick Children, Toronto, Ontario, M5 G 1E8, Canada; Neurosciences and Mental Health, SickKids Research Institute, Toronto, Ontario, M5 G 0A4, Canada
| | - Birgit B Ertl-Wagner
- Department of Medical Imaging, University of Toronto, Toronto, Ontario, M5T 1W7, Canada; Neurosciences and Mental Health, SickKids Research Institute, Toronto, Ontario, M5 G 0A4, Canada; Division of Neuroradiology, Department of Diagnostic & Interventional Radiology, The Hospital for Sick Children, Toronto, Ontario, M5 G 1E8, Canada
| | - Michael N Patlas
- Department of Medical Imaging, University of Toronto, Toronto, Ontario, M5T 1W7, Canada
14
Dreizin D, Cheng CT, Liao CH, Jindal A, Colak E. Artificial intelligence for abdominopelvic trauma imaging: trends, gaps, and future directions. Abdom Radiol (NY) 2025:10.1007/s00261-025-04816-z. [PMID: 40116889 DOI: 10.1007/s00261-025-04816-z] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/20/2024] [Revised: 01/13/2025] [Accepted: 01/17/2025] [Indexed: 03/23/2025]
Abstract
Abdominopelvic trauma is a major cause of morbidity and mortality, typically resulting from high-energy mechanisms such as motor vehicle collisions and penetrating injuries. Admission abdominopelvic trauma CT, performed either selectively or as part of a whole-body CT protocol, has become the workhorse screening and surgical planning modality due to improvements in speed and image quality. Radiography remains an essential element of the secondary trauma survey, and Focused Assessment with Sonography for Trauma (FAST) scanning has added value for quick assessment of non-compressible hemorrhage in hemodynamically unstable patients. Complex and severe polytrauma cases often delay radiology report turnaround times, which can potentially impede urgent clinical decision-making. Artificial intelligence (AI) computer-aided detection and diagnosis (CAD) offers promising solutions for enhanced diagnostic efficiency and accuracy in abdominopelvic trauma imaging. Although commercial AI tools for abdominopelvic trauma are currently available for only a few use cases, the literature reveals robust research and development (R&D) of prototype tools. Multiscale convolutional neural networks (CNNs) and transformer-based models are capable of detecting and quantifying solid organ injuries, fractures, and hemorrhage with a high degree of precision. Further, generalist foundation models such as multimodal vision-language models (VLMs) can be adapted and fine-tuned using imaging, clinical, and text data for a range of tasks, including detection, quantitative visualization, prognostication, and report auto-generation. Despite their promise, for most use cases in abdominopelvic trauma, AI CAD tools remain in the pilot stages of technology readiness, with persistent challenges related to data availability; the need for open-access PACS-compatible software pipelines for pre-clinical shadow-testing; lack of well-designed multi-institutional validation studies; and regulatory hurdles. This narrative review provides a snapshot of the current state of AI in abdominopelvic trauma, examining existing commercial tools; research and development throughout the technology readiness pipeline; and future directions in this domain.
Affiliation(s)
- David Dreizin
- University of Maryland Trauma Radiology AI Laboratory (TRAIL), University of Maryland School of Medicine, Baltimore, USA
- Department of Diagnostic Radiology and Nuclear Medicine, R Adams Cowley Shock Trauma Center, University of Maryland School of Medicine, Baltimore, USA
| | - Chi-Tung Cheng
- Department of Trauma and Emergency Surgery, Chang Gung Memorial Hospital Linkou, Linkou, Taoyuan, Taiwan
| | - Chien-Hung Liao
- Department of Trauma and Emergency Surgery, Chang Gung Memorial Hospital Linkou, Linkou, Taoyuan, Taiwan
| | - Ankush Jindal
- University of Maryland Trauma Radiology AI Laboratory (TRAIL), University of Maryland School of Medicine, Baltimore, USA
| | - Errol Colak
- Department of Medical Imaging, University of Toronto, Toronto, Canada.
- Department of Medical Imaging, St. Michael's Hospital, Unity Health Toronto, Toronto, Canada.
15
Gou X, Feng A, Feng C, Cheng J, Hong N. Imaging genomics of cancer: a bibliometric analysis and review. Cancer Imaging 2025; 25:24. [PMID: 40038813 DOI: 10.1186/s40644-025-00841-9] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/09/2024] [Accepted: 02/13/2025] [Indexed: 03/06/2025] Open
Abstract
BACKGROUND Imaging genomics is a burgeoning field that seeks connections between medical imaging and genomic features. It has been widely applied to explore heterogeneity and to predict treatment response and disease progression in cancer. This review aims to assess current applications and advancements of imaging genomics in cancer. METHODS Literature on imaging genomics in cancer was retrieved and selected from PubMed, Web of Science, and Embase before July 2024. Detailed information about the articles, such as organ systems and imaging features, was extracted and analyzed, and citation information was extracted from Web of Science and Scopus. Additionally, a bibliometric analysis of the included studies was conducted using the Bibliometrix R package and VOSviewer. RESULTS A total of 370 articles were included. The annual growth rate of articles on imaging genomics in cancer is 24.88%. China (133 articles) and the USA (107) were the most productive countries, and the top two Keywords Plus terms were "survival" and "classification". Current research focuses mainly on the central nervous system (121 articles) and the genitourinary system (110, including 44 breast cancer articles). Although different systems utilize different imaging modalities, more than half of the studies in each system employed radiomics features. CONCLUSIONS Publication databases provide data support for imaging genomics research. The development of artificial intelligence algorithms, especially for feature extraction and model construction, has significantly advanced this field and is conducive to enhancing model interpretability. Nonetheless, challenges such as sample size and the standardization of feature extraction and model construction must be overcome. The research trends revealed in this study will guide the future development of imaging genomics and contribute to more accurate cancer diagnosis and treatment in the clinic.
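The annual growth rate reported above follows the compound-growth definition used by bibliometric tools such as Bibliometrix: the constant yearly percentage increase that takes the first year's output to the last year's. A sketch with hypothetical publication counts (not the review's data):

```python
def annual_growth_rate(first_year_count: int, last_year_count: int, n_years: int) -> float:
    # Compound annual growth rate (percent) across n_years of publications:
    # the constant yearly rate linking the first and last annual counts.
    return ((last_year_count / first_year_count) ** (1.0 / (n_years - 1)) - 1.0) * 100.0

# Hypothetical field: 4 papers in the first year, 40 in the eleventh.
rate = annual_growth_rate(4, 40, 11)  # ~25.89% per year
```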
Affiliation(s)
- Xinyi Gou
- Department of Radiology, Peking University People's Hospital, Beijing, China
| | - Aobo Feng
- College of Computer and Information, Inner Mongolia Medical University, Inner Mongolia, China
| | - Caizhen Feng
- Department of Radiology, Peking University People's Hospital, Beijing, China
| | - Jin Cheng
- Department of Radiology, Peking University People's Hospital, Beijing, China.
| | - Nan Hong
- Department of Radiology, Peking University People's Hospital, Beijing, China
16
Castellaccio A, Almeida Arostegui N, Palomo Jiménez M, Quiñones Tapia D, Bret Zurita M, Vañó Galván E. Artificial intelligence in cardiovascular magnetic resonance imaging. RADIOLOGIA 2025; 67:239-247. [PMID: 40187819 DOI: 10.1016/j.rxeng.2025.03.001] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/30/2023] [Accepted: 02/07/2024] [Indexed: 04/07/2025]
Abstract
Artificial intelligence is evolving rapidly, and its possibilities are vast. Its primary applications in cardiac magnetic resonance imaging have focused on: image acquisition (acceleration and quality improvement); segmentation (time savings and reproducibility); tissue characterisation (including radiomic techniques and the non-contrast assessment of myocardial fibrosis); automatic diagnosis; and prognostic stratification. The aim of this article is to provide an overview of the current situation, in preparation for the significant changes currently underway or imminent in the near future.
Affiliation(s)
- A Castellaccio
- Servicio de Resonancia Magnética y TC, Hospital Universitario Nuestra Señora del Rosario, Madrid, Spain.
| | - N Almeida Arostegui
- Servicio de Resonancia Magnética y TC, Hospital Universitario Nuestra Señora del Rosario, Madrid, Spain
| | - M Palomo Jiménez
- Servicio de Resonancia Magnética y TC, Hospital Universitario Nuestra Señora del Rosario, Madrid, Spain
| | - D Quiñones Tapia
- Servicio de Resonancia Magnética y TC, Hospital Universitario Nuestra Señora del Rosario, Madrid, Spain
| | - M Bret Zurita
- Servicio de Resonancia Magnética y TC, Hospital Universitario Nuestra Señora del Rosario, Madrid, Spain
| | - E Vañó Galván
- Servicio de Resonancia Magnética y TC, Hospital Universitario Nuestra Señora del Rosario, Madrid, Spain
17
D N S, Pai RM, Bhat SN, Pai M M M. Assessment of perceived realism in AI-generated synthetic spine fracture CT images. Technol Health Care 2025; 33:931-944. [PMID: 40105176 DOI: 10.1177/09287329241291368] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 03/20/2025]
Abstract
BACKGROUND Deep learning-based decision support systems require synthetic images generated by adversarial networks, and these images require clinical evaluation to ensure their quality. OBJECTIVE The study evaluates the perceived realism of high-dimension synthetic spine fracture CT images generated by Progressive Growing Generative Adversarial Networks (PGGANs). METHODS The study used 2820 spine fracture CT images from 456 patients to train a PGGAN model. The model synthesized images up to 512 × 512 pixels, and the realism of the generated images was assessed using Visual Turing Tests (VTT) and a Fracture Identification Test (FIT). Three spine surgeons evaluated the images, and the clinical evaluation results were statistically analysed. RESULTS The spine surgeons had an average prediction accuracy of nearly 50% during clinical evaluation, indicating difficulty in distinguishing real from generated images. Accuracy varied with image dimension, with the synthetic images appearing most realistic at 512 × 512 pixels. During the FIT, 13-15 of the 16 generated images of each fracture type were correctly identified, indicating that the 512 × 512 images are realistic and clearly depict fracture lines. CONCLUSION The study shows that PGGANs can generate realistic synthetic spine fracture CT images up to 512 × 512 pixels that are difficult to distinguish from real images, supporting their use in improving automatic spine fracture type detection systems.
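Near-chance accuracy in a Visual Turing Test means readers cannot beat the 50% guessing baseline, which can be checked with an exact binomial test. A sketch with a hypothetical reader's tally, not the study's data:

```python
from math import comb

def vtt_accuracy(correct: int, total: int) -> float:
    """Fraction of real-vs-synthetic calls the reader got right."""
    return correct / total

def binom_two_sided_p(correct: int, total: int, p: float = 0.5) -> float:
    # Exact two-sided binomial test against chance (p = 0.5): sum the
    # probabilities of all outcomes no more likely than the observed one.
    pmf = [comb(total, k) * p**k * (1 - p)**(total - k) for k in range(total + 1)]
    observed = pmf[correct]
    return min(1.0, sum(q for q in pmf if q <= observed + 1e-12))

# Hypothetical reader: 26 of 50 real-vs-synthetic calls correct (52%).
acc = vtt_accuracy(26, 50)
pval = binom_two_sided_p(26, 50)  # large p-value: indistinguishable from guessing
```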
Affiliation(s)
- Sindhura D N
- Department of Data Science and Computer Applications, Manipal Institute of Technology, Manipal, Manipal Academy of Higher Education, Manipal, India
| | - Radhika M Pai
- Department of Data Science and Computer Applications, Manipal Institute of Technology, Manipal, Manipal Academy of Higher Education, Manipal, India
| | - Shyamasunder N Bhat
- Department of Orthopaedics, Kasturba Medical College, Manipal, Manipal Academy of Higher Education, Manipal, India
| | - Manohara Pai M M
- Department of Information and Communication Technology, Manipal Institute of Technology, Manipal, Manipal Academy of Higher Education, Manipal, India
18
AlGhaihab A, Moretti AJ, Reside J, Tuzova L, Huang YS, Tyndall DA. Automatic Detection of Radiographic Alveolar Bone Loss in Bitewing and Periapical Intraoral Radiographs Using Deep Learning Technology: A Preliminary Evaluation. Diagnostics (Basel) 2025; 15:576. [PMID: 40075823 PMCID: PMC11899607 DOI: 10.3390/diagnostics15050576] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/06/2025] [Revised: 02/23/2025] [Accepted: 02/24/2025] [Indexed: 03/14/2025] Open
Abstract
Background/Objective: Periodontal disease is a prevalent inflammatory condition affecting the supporting structures of teeth, with radiographic bone loss (RBL) being a critical diagnostic marker. The accurate and consistent evaluation of RBL is essential for the staging and grading of periodontitis, as outlined by the 2017 AAP/EFP Classification. Advanced tools such as deep learning (DL) technology, including Denti.AI, an FDA-cleared software utilizing convolutional neural networks (CNNs), offer the potential for enhancing diagnostic accuracy. This study evaluated the diagnostic accuracy of Denti.AI for detecting RBL in intraoral radiographs. Methods: A dataset of 39 intraoral radiographs (22 periapical and 17 bitewing), covering 316 tooth surfaces (123 periapical and 193 bitewing), was selected from a de-identified pool of 500 radiographs provided by Denti.AI. RBL was assessed using the 2017 AAP/EFP Classification. A consensus panel of three board-certified dental specialists served as the reference standard. Performance metrics, including sensitivity, specificity, positive predictive value (PPV), negative predictive value (NPV), accuracy, and mean absolute error (MAE), were calculated. Results: For periapical radiographs, Denti.AI achieved a sensitivity of 76%, specificity of 86%, PPV of 83%, NPV of 80%, and accuracy of 81%, with an MAE of 0.046%. For bitewing radiographs, sensitivity was 65%, specificity was 90%, PPV was 88%, NPV was 70%, and accuracy was 76%, with an MAE of 0.499 mm. Conclusions: Denti.AI demonstrated clinically acceptable performance in detecting RBL and shows potential as an adjunctive diagnostic tool, supporting clinical decision-making. While performance was robust for periapical radiographs, further optimization may enhance its accuracy for bitewing radiographs.
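The sensitivity, specificity, PPV, and NPV reported above are linked through disease prevalence via Bayes' rule: PPV and NPV shift with the class balance of the test set even when sensitivity and specificity are fixed. A sketch using the periapical sensitivity and specificity above, with an assumed (hypothetical) 50% prevalence rather than the study's actual class balance:

```python
def ppv(sens: float, spec: float, prev: float) -> float:
    # Bayes' rule: P(disease | positive test).
    tp = sens * prev
    fp = (1.0 - spec) * (1.0 - prev)
    return tp / (tp + fp)

def npv(sens: float, spec: float, prev: float) -> float:
    # Bayes' rule: P(no disease | negative test).
    tn = spec * (1.0 - prev)
    fn = (1.0 - sens) * prev
    return tn / (tn + fn)

# Periapical figures from the study, hypothetical 50% prevalence of RBL.
p = ppv(0.76, 0.86, 0.5)  # ~0.84 at this assumed prevalence
n = npv(0.76, 0.86, 0.5)  # ~0.78 at this assumed prevalence
```

Any gap between these values and the study's reported PPV/NPV would reflect the actual proportion of surfaces with bone loss in its test set.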
Affiliation(s)
- Amjad AlGhaihab
- Department of Maxillofacial Surgery and Diagnostic Sciences, College of Dentistry, King Saud bin Abdulaziz University for Health Sciences, Riyadh 11481, Saudi Arabia
- Department of Diagnostic Sciences, Oral and Maxillofacial Radiology, Adams School of Dentistry, University of North Carolina at Chapel Hill, Chapel Hill, NC 27599, USA; (Y.-S.H.); (D.A.T.)
- King Abdullah International Medical Research Center, Ministry of National Guard Health Affairs, Riyadh 11481, Saudi Arabia
| | - Antonio J. Moretti
- Department of Periodontology, Endodontics and Dental Hygiene Adams School of Dentistry, University of North Carolina at Chapel Hill, Chapel Hill, NC 27599, USA; (A.J.M.); (J.R.)
| | - Jonathan Reside
- Department of Periodontology, Endodontics and Dental Hygiene Adams School of Dentistry, University of North Carolina at Chapel Hill, Chapel Hill, NC 27599, USA; (A.J.M.); (J.R.)
| | | | - Yiing-Shiuan Huang
- Department of Diagnostic Sciences, Oral and Maxillofacial Radiology, Adams School of Dentistry, University of North Carolina at Chapel Hill, Chapel Hill, NC 27599, USA; (Y.-S.H.); (D.A.T.)
| | - Donald A. Tyndall
- Department of Diagnostic Sciences, Oral and Maxillofacial Radiology, Adams School of Dentistry, University of North Carolina at Chapel Hill, Chapel Hill, NC 27599, USA; (Y.-S.H.); (D.A.T.)
19
AlGhaihab A, Moretti AJ, Reside J, Tuzova L, Tyndall DA. An Assessment of Deep Learning's Impact on General Dentists' Ability to Detect Alveolar Bone Loss in 2D Intraoral Radiographs. Diagnostics (Basel) 2025; 15:467. [PMID: 40002618 PMCID: PMC11854650 DOI: 10.3390/diagnostics15040467] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/21/2025] [Revised: 02/05/2025] [Accepted: 02/10/2025] [Indexed: 02/27/2025] Open
Abstract
Background/Objective: Deep learning (DL) technology has shown potential in enhancing diagnostic accuracy in dentomaxillofacial radiology, particularly for detecting carious lesions, apical lesions, and periodontal bone loss. However, its effect on general dentists' ability to detect radiographic bone loss (RBL) in clinical practice remains unclear. This study investigates the impact of the Denti.AI DL technology on general dentists' ability to identify bone loss in intraoral radiographs, addressing this gap in the literature. Methods: Ten dentists from the university's dental clinics independently assessed 26 intraoral radiographs (periapical and bitewing) for bone loss using a Likert scale probability index with and without DL assistance. The participants viewed images on identical monitors with controlled lighting. This study generated 3940 data points for analysis. The statistical analyses included receiver operating characteristic (ROC) curves, area under the curve (AUC), and ANOVA tests. Results: Most dentists showed minor improvement in detecting bone loss on periapical radiographs when using DL. For bitewing radiographs, only a few dentists showed minor improvement. Overall, the difference in diagnostic accuracy between evaluations with and without DL was minimal (0.008). The differences in AUC for periapical and bitewing radiographs were 0.031 and -0.009, respectively, and were not statistically significant. Conclusions: This study found no statistically significant improvement in experienced dentists' diagnostic accuracy for detecting bone loss in intraoral radiographs when using Denti.AI deep learning technology.
Affiliation(s)
- Amjad AlGhaihab
- Department of Maxillofacial Surgery and Diagnostic Sciences, College of Dentistry, King Saud bin Abdulaziz University for Health Sciences, Riyadh 11481, Saudi Arabia
- King Abdullah International Medical Research Center, Ministry of National Guard Health Affairs, Riyadh 11481, Saudi Arabia
- Department of Diagnostic Sciences, Oral and Maxillofacial Radiology, Adams School of Dentistry, University of North Carolina at Chapel Hill, Chapel Hill, NC 27599, USA
| | - Antonio J. Moretti
- Department of Periodontology, Endodontics and Dental Hygiene, Adams School of Dentistry, University of North Carolina at Chapel Hill, Chapel Hill, NC 27599, USA
| | - Jonathan Reside
- Department of Periodontology, Endodontics and Dental Hygiene, Adams School of Dentistry, University of North Carolina at Chapel Hill, Chapel Hill, NC 27599, USA
| | | | - Donald A. Tyndall
- Department of Diagnostic Sciences, Oral and Maxillofacial Radiology, Adams School of Dentistry, University of North Carolina at Chapel Hill, Chapel Hill, NC 27599, USA
20
de Camargo TFO, Ribeiro GAS, da Silva MCB, da Silva LO, Torres PPTES, Rodrigues DDSDS, de Santos MON, Filho WS, Rosa MEE, Novaes MDA, Massarutto TA, Junior OL, Yanata E, Reis MRDC, Szarf G, Netto PVS, de Paiva JPQ. Clinical validation of an artificial intelligence algorithm for classifying tuberculosis and pulmonary findings in chest radiographs. Front Artif Intell 2025; 8:1512910. [PMID: 39991462 PMCID: PMC11843218 DOI: 10.3389/frai.2025.1512910] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/17/2024] [Accepted: 01/16/2025] [Indexed: 02/25/2025] Open
Abstract
Background Chest X-ray (CXR) interpretation is critical in diagnosing various lung diseases. However, physicians, not specialists, are often the first ones to read them, frequently facing challenges in accurate interpretation. Artificial Intelligence (AI) algorithms could be of great help, but using real-world data is crucial to ensure their effectiveness in diverse healthcare settings. This study evaluates a deep learning algorithm designed for CXR interpretation, focusing on its utility for non-specialists in thoracic radiology physicians. Purpose To assess the performance of a Convolutional Neural Networks (CNNs)-based AI algorithm in interpreting CXRs and compare it with a team of physicians, including thoracic radiologists, who served as the gold-standard. Methods A retrospective study from January 2021 to July 2023 evaluated an algorithm with three independent models for Lung Abnormality, Radiological Findings, and Tuberculosis. The algorithm's performance was measured using accuracy, sensitivity, and specificity. Two groups of physicians validated the model: one with varying specialties and experience levels in interpreting chest radiographs (Group A) and another of board-certified thoracic radiologists (Group B). The study also assessed the agreement between the two groups on the algorithm's heatmap and its influence on their decisions. Results In the internal validation, the Lung Abnormality and Tuberculosis models achieved an AUC of 0.94, while the Radiological Findings model yielded a mean AUC of 0.84. During the external validation, utilizing the ground truth generated by board-certified thoracic radiologists, the algorithm achieved better sensitivity in 6 out of 11 classes than physicians with varying experience levels. Furthermore, Group A physicians demonstrated higher agreement with the algorithm in identifying markings in specific lung regions than Group B (37.56% Group A vs. 21.75% Group B). 
Additionally, physicians declared that the algorithm did not influence their decisions in 93% of the cases. Conclusion This retrospective clinical validation study assesses an AI algorithm's effectiveness in interpreting Chest X-rays (CXR). The results show the algorithm's performance is comparable to Group A physicians, using gold-standard analysis (Group B) as the reference. Notably, both Groups reported minimal influence of the algorithm on their decisions in most cases.
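The accuracy, sensitivity, and specificity used throughout these validation studies reduce to four confusion-matrix counts. A minimal sketch of that computation (illustrative only; the function and sample labels are hypothetical, not from the study):

```python
def binary_metrics(y_true, y_pred):
    """Accuracy, sensitivity, and specificity from binary labels (1 = abnormal)."""
    pairs = list(zip(y_true, y_pred))
    tp = sum(1 for t, p in pairs if t == 1 and p == 1)  # true positives
    tn = sum(1 for t, p in pairs if t == 0 and p == 0)  # true negatives
    fp = sum(1 for t, p in pairs if t == 0 and p == 1)  # false alarms
    fn = sum(1 for t, p in pairs if t == 1 and p == 0)  # missed findings
    return {
        "accuracy": (tp + tn) / len(pairs),
        "sensitivity": tp / (tp + fn) if tp + fn else 0.0,
        "specificity": tn / (tn + fp) if tn + fp else 0.0,
    }

m = binary_metrics([1, 1, 0, 0, 1, 0], [1, 0, 0, 1, 1, 0])
```

Per-finding results (such as the 11 classes above) repeat this computation once per class against that class's reference labels.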
Collapse
Affiliation(s)
- Thiago Fellipe Ortiz de Camargo
- Image Research Center, Hospital Israelita Albert Einstein, São Paulo, Brazil
- Electrical, Mechanical and Computer Engineering School, Federal University of Goias, Goias, Brazil
| | - Guilherme Alberto Sousa Ribeiro
- Image Research Center, Hospital Israelita Albert Einstein, São Paulo, Brazil
- Electrical, Mechanical and Computer Engineering School, Federal University of Goias, Goias, Brazil
| | | | | | | | | | | | | | | | | | | | | | - Elaine Yanata
- Image Research Center, Hospital Israelita Albert Einstein, São Paulo, Brazil
| | | | - Gilberto Szarf
- Image Research Center, Hospital Israelita Albert Einstein, São Paulo, Brazil
| | | | | |
Collapse
|
21
|
De Rosa S, Bignami E, Bellini V, Battaglini D. The Future of Artificial Intelligence Using Images and Clinical Assessment for Difficult Airway Management. Anesth Analg 2025; 140:317-325. [PMID: 38557728 PMCID: PMC11687942 DOI: 10.1213/ane.0000000000006969] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Accepted: 02/13/2024] [Indexed: 04/04/2024]
Abstract
Artificial intelligence (AI) algorithms, particularly deep learning, are automatic and sophisticated methods that recognize complex patterns in imaging data, providing high-quality assessments. Several machine-learning and deep-learning models using imaging techniques have recently been developed and validated to predict difficult airways, although these advances in AI modeling have yet to be fully integrated into clinical practice. In this review article, we describe the advantages of using AI models and explore how these methods could impact clinical practice. Finally, we discuss predictive modeling for difficult laryngoscopy using machine learning and the future approach with intelligent intubation devices.
Collapse
Affiliation(s)
- Silvia De Rosa
- From the Centre for Medical Sciences – CISMed, University of Trento, Trento, Italy
- Anesthesia and Intensive Care, Santa Chiara Regional Hospital, APSS Trento, Trento, Italy
| | - Elena Bignami
- Anesthesiology, Critical Care and Pain Medicine Division, Department of Medicine and Surgery, University of Parma, Parma, Italy
| | - Valentina Bellini
- Anesthesiology, Critical Care and Pain Medicine Division, Department of Medicine and Surgery, University of Parma, Parma, Italy
| | - Denise Battaglini
- Anesthesia and Intensive Care, IRCCS Ospedale Policlinico San Martino, Genova, Italy
| |
Collapse
|
22
|
Yasaka K, Kanzawa J, Kanemaru N, Koshino S, Abe O. Fine-Tuned Large Language Model for Extracting Patients on Pretreatment for Lung Cancer from a Picture Archiving and Communication System Based on Radiological Reports. JOURNAL OF IMAGING INFORMATICS IN MEDICINE 2025; 38:327-334. [PMID: 38955964 PMCID: PMC11811339 DOI: 10.1007/s10278-024-01186-8] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Received: 03/20/2024] [Revised: 06/17/2024] [Accepted: 06/19/2024] [Indexed: 07/04/2024]
Abstract
This study aimed to investigate the performance of a fine-tuned large language model (LLM) in extracting patients on pretreatment for lung cancer from picture archiving and communication systems (PACS) and to compare it with that of radiologists. Patients whose radiological reports contained the term lung cancer (3111 for training, 124 for validation, and 288 for test) were included in this retrospective study. Based on the clinical indication and diagnosis sections of the radiological report (used as input data), they were classified into four groups (used as reference data): group 0 (no lung cancer), group 1 (pretreatment lung cancer present), group 2 (after treatment for lung cancer), and group 3 (planning radiation therapy). Using the training and validation datasets, fine-tuning of the pretrained LLM was conducted ten times. Due to group imbalance, group 2 data were undersampled in the training. The performance of the best-performing model in the validation dataset was assessed in the independent test dataset. For testing purposes, two other radiologists (readers 1 and 2) were also involved in classifying radiological reports. The overall accuracy of the fine-tuned LLM, reader 1, and reader 2 was 0.983, 0.969, and 0.969, respectively. The sensitivity for differentiating groups 0/1/2/3 by the LLM, reader 1, and reader 2 was 1.000/0.948/0.991/1.000, 0.750/0.879/0.996/1.000, and 1.000/0.931/0.978/1.000, respectively. The time required for classification by the LLM, reader 1, and reader 2 was 46 s, 2539 s, and 1538 s, respectively. The fine-tuned LLM effectively extracted patients on pretreatment for lung cancer from the PACS with performance comparable to radiologists in a shorter time.
Collapse
Affiliation(s)
- Koichiro Yasaka
- Department of Radiology, Graduate School of Medicine, The University of Tokyo, 7-3-1 Hongo, Bunkyo-ku, Tokyo, 113-8655, Japan.
| | - Jun Kanzawa
- Department of Radiology, Graduate School of Medicine, The University of Tokyo, 7-3-1 Hongo, Bunkyo-ku, Tokyo, 113-8655, Japan
| | - Noriko Kanemaru
- Department of Radiology, Graduate School of Medicine, The University of Tokyo, 7-3-1 Hongo, Bunkyo-ku, Tokyo, 113-8655, Japan
| | - Saori Koshino
- Department of Radiology, Graduate School of Medicine, The University of Tokyo, 7-3-1 Hongo, Bunkyo-ku, Tokyo, 113-8655, Japan
| | - Osamu Abe
- Department of Radiology, Graduate School of Medicine, The University of Tokyo, 7-3-1 Hongo, Bunkyo-ku, Tokyo, 113-8655, Japan
| |
Collapse
|
23
|
Jiang K, Xie Y, Zhang X, Zhang X, Zhou B, Li M, Chen Y, Hu J, Zhang Z, Chen S, Yu K, Qiu C, Zhang X. Fully and Weakly Supervised Deep Learning for Meniscal Injury Classification, and Location Based on MRI. JOURNAL OF IMAGING INFORMATICS IN MEDICINE 2025; 38:191-202. [PMID: 39020156 PMCID: PMC11811310 DOI: 10.1007/s10278-024-01198-4] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Received: 02/17/2024] [Revised: 06/14/2024] [Accepted: 07/08/2024] [Indexed: 07/19/2024]
Abstract
Meniscal injury is a common cause of knee joint pain and a precursor to knee osteoarthritis (KOA). The purpose of this study is to develop an automatic pipeline for meniscal injury classification and localization using fully and weakly supervised networks based on MRI images. In this retrospective study, data were from the Osteoarthritis Initiative (OAI). The MR images were reconstructed using a sagittal intermediate-weighted fat-suppressed turbo spin-echo sequence. (1) We used 130 knees from the OAI to develop the LGSA-UNet model, which fuses the features of adjacent slices and adjusts the Siamese blocks to enable the central slice to obtain rich contextual information. (2) A total of 1756 knees from the OAI were included to establish segmentation and classification models. The segmentation model achieved a DICE coefficient ranging from 0.84 to 0.93. The AUC values ranged from 0.85 to 0.95 in the binary models. The accuracy for the three types of menisci (normal, tear, and maceration) ranged from 0.60 to 0.88. Furthermore, 206 knees from the orthopedic hospital were used as an external validation data set to evaluate the performance of the model. The segmentation and classification models still performed well on the external validation set. To compare the diagnostic performances of the deep learning (DL) models and radiologists, the external validation sets were sent to two radiologists. The binary classification model outperformed the diagnostic performance of the junior radiologist (0.82-0.87 versus 0.74-0.88). This study highlights the potential of DL in knee meniscus segmentation and injury classification, which can help improve diagnostic efficiency.
Collapse
Affiliation(s)
- Kexin Jiang
- Department of Medical Imaging, The Third Affiliated Hospital, Southern Medical University (Academy of Orthopedics Guangdong Province), 183 Zhongshan Ave W, Guangzhou, 510630, China
| | - Yuhan Xie
- School of Electronics and Communication Engineering, Sun Yat-sen University, Guangzhou, China
| | - Xintao Zhang
- Department of Medical Imaging, The Third Affiliated Hospital, Southern Medical University (Academy of Orthopedics Guangdong Province), 183 Zhongshan Ave W, Guangzhou, 510630, China
| | - Xinru Zhang
- Department of Medical Imaging, The Third Affiliated Hospital, Southern Medical University (Academy of Orthopedics Guangdong Province), 183 Zhongshan Ave W, Guangzhou, 510630, China
| | - Beibei Zhou
- Department of Medical Imaging, The Third Affiliated Hospital, Southern Medical University (Academy of Orthopedics Guangdong Province), 183 Zhongshan Ave W, Guangzhou, 510630, China
| | - Mianwen Li
- Department of Medical Imaging, The Third Affiliated Hospital, Southern Medical University (Academy of Orthopedics Guangdong Province), 183 Zhongshan Ave W, Guangzhou, 510630, China
| | - Yanjun Chen
- Department of Medical Imaging, The Third Affiliated Hospital, Southern Medical University (Academy of Orthopedics Guangdong Province), 183 Zhongshan Ave W, Guangzhou, 510630, China
| | - Jiaping Hu
- Department of Medical Imaging, The Third Affiliated Hospital, Southern Medical University (Academy of Orthopedics Guangdong Province), 183 Zhongshan Ave W, Guangzhou, 510630, China
| | - Zhiyong Zhang
- School of Electronics and Communication Engineering, Sun Yat-sen University, Guangzhou, China
| | - Shaolong Chen
- School of Electronics and Communication Engineering, Sun Yat-sen University, Guangzhou, China
| | - Keyan Yu
- Department of Medical Imaging, The Third Affiliated Hospital, Southern Medical University (Academy of Orthopedics Guangdong Province), 183 Zhongshan Ave W, Guangzhou, 510630, China
| | - Changzhen Qiu
- School of Electronics and Communication Engineering, Sun Yat-sen University, Guangzhou, China.
| | - Xiaodong Zhang
- Department of Medical Imaging, The Third Affiliated Hospital, Southern Medical University (Academy of Orthopedics Guangdong Province), 183 Zhongshan Ave W, Guangzhou, 510630, China.
| |
Collapse
|
24
|
Ma J, Yang H, Chou Y, Yoon J, Allison T, Komandur R, McDunn J, Tasneem A, Do RK, Schwartz LH, Zhao B. Generalizability of lesion detection and segmentation when ScaleNAS is trained on a large multi-organ dataset and validated in the liver. Med Phys 2025; 52:1005-1018. [PMID: 39576046 DOI: 10.1002/mp.17504] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/30/2024] [Revised: 09/25/2024] [Accepted: 10/05/2024] [Indexed: 02/04/2025] Open
Abstract
BACKGROUND Tumor assessment through imaging is crucial for diagnosing and treating cancer. Lesions in the liver, a common site for metastatic disease, are particularly challenging to accurately detect and segment. This labor-intensive task is subject to individual variation, which drives interest in automation using artificial intelligence (AI). PURPOSE Evaluate AI for lesion detection and lesion segmentation using CT in the context of human performance on the same task. Use internal testing to determine how an AI-developed model (ScaleNAS) trained on lesions in multiple organs performs when tested specifically on liver lesions in a dataset integrating real-world and clinical trial data. Use external testing to evaluate whether ScaleNAS's performance generalizes to publicly available colorectal liver metastases (CRLM) from The Cancer Imaging Archive (TCIA). METHODS The CUPA study dataset included patients whose CT scan of chest, abdomen, or pelvis at Columbia University between 2010-2020 indicated solid tumors (CUIMC, n = 5011) and from two clinical trials in metastatic colorectal cancer, PRIME (n = 1183) and Amgen (n = 463). Inclusion required ≥1 measurable lesion; exclusion criteria eliminated 1566 patients. Data were divided at the patient level into training (n = 3996), validation (n = 570), and testing (n = 1529) sets. To create the reference standard for training and validation, each case was annotated by one of six radiologists, randomly assigned, who marked the CUPA lesions without access to any previous annotations. For internal testing we refined the CUPA test set to contain only patients who had liver lesions (n = 525) and formed an enhanced reference standard through expert consensus reviewing prior annotations. For external testing, TCIA-CRLM (n = 197) formed the test set. The reference standard for TCIA-CRLM was formed by consensus review of the original annotation and contours by two new radiologists. 
Metrics for lesion detection were sensitivity and false positives. Lesion segmentation was assessed with median Dice coefficient, under-segmentation ratio (USR), and over-segmentation ratio (OSR). Subgroup analysis examined the influence of lesion size ≥ 10 mm (measurable by RECIST1.1) versus all lesions (important for early identification of disease progression). RESULTS ScaleNAS trained on all lesions achieved sensitivity of 71.4% and Dice of 70.2% for liver lesions in the CUPA internal test set (3,495 lesions) and sensitivity of 68.2% and Dice 64.2% in the TCIA-CRLM external test set (638 lesions). Human radiologists had mean sensitivity of 53.5% and Dice of 73.9% in CUPA and sensitivity of 84.1% and Dice of 88.4% in TCIA-CRLM. Performance improved for ScaleNAS and radiologists in the subgroup of lesions that excluded sub-centimeter lesions. CONCLUSIONS Our study presents the first evaluation of ScaleNAS in medical imaging, demonstrating its liver lesion detection and segmentation performance across diverse datasets. Using consensus reference standards from multiple radiologists, we addressed inter-observer variability and contributed to consistency in lesion annotation. While ScaleNAS does not surpass radiologists in performance, it offers fast and reliable results with potential utility in providing initial contours for radiologists. Future work will extend this model to lung and lymph node lesions, ultimately aiming to enhance clinical applications by generalizing detection and segmentation across tissue types.
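The segmentation metrics above (Dice coefficient, under-segmentation ratio, over-segmentation ratio) can be sketched for binary masks as follows. Note the USR/OSR definitions used here (missed and spurious voxels relative to the reference volume) are one common convention and may differ from the paper's exact formulas:

```python
def overlap_metrics(ref, pred):
    """Dice plus under-/over-segmentation ratios for flat binary masks (0/1)."""
    inter = sum(r & p for r, p in zip(ref, pred))  # shared foreground voxels
    n_ref, n_pred = sum(ref), sum(pred)
    dice = 2 * inter / (n_ref + n_pred) if n_ref + n_pred else 1.0
    usr = (n_ref - inter) / n_ref if n_ref else 0.0   # reference voxels missed
    osr = (n_pred - inter) / n_ref if n_ref else 0.0  # spurious predicted voxels
    return dice, usr, osr

dice, usr, osr = overlap_metrics([1, 1, 1, 1, 0, 0], [1, 1, 1, 0, 1, 0])
```

In practice the masks are 3D volumes flattened per lesion; the arithmetic is unchanged.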
Collapse
Affiliation(s)
- Jingchen Ma
- Department of Radiology, Columbia University Irving Medical Center, New York, New York, USA
- Department of Radiology, Memorial Sloan Kettering Cancer Center, New York, New York, USA
| | - Hao Yang
- Department of Radiology, Columbia University Irving Medical Center, New York, New York, USA
- Department of Radiology, Memorial Sloan Kettering Cancer Center, New York, New York, USA
| | - Yen Chou
- Department of Radiology, Columbia University Irving Medical Center, New York, New York, USA
- Fu Jen Catholic University Hospital, Department of Medical Imaging and Fu Jen Catholic University, School of Medicine, New Taipei City, Taiwan
| | - Jin Yoon
- Department of Radiology, Columbia University Irving Medical Center, New York, New York, USA
| | - Tavis Allison
- Department of Radiology, Columbia University Irving Medical Center, New York, New York, USA
- Department of Radiology, Memorial Sloan Kettering Cancer Center, New York, New York, USA
| | | | - Jon McDunn
- Project Data Sphere, Cary, North Carolina, USA
| | | | - Richard K Do
- Department of Radiology, Memorial Sloan Kettering Cancer Center, New York, New York, USA
| | - Lawrence H Schwartz
- Department of Radiology, Columbia University Irving Medical Center, New York, New York, USA
- Department of Radiology, Memorial Sloan Kettering Cancer Center, New York, New York, USA
| | - Binsheng Zhao
- Department of Radiology, Columbia University Irving Medical Center, New York, New York, USA
- Department of Radiology, Memorial Sloan Kettering Cancer Center, New York, New York, USA
| |
Collapse
|
25
|
Wei W, Jia Y, Li M, Yu N, Dang S, Geng J, Han D, Yu Y, Zheng Y, Fan L. Combining Low-energy Images in Dual-energy Spectral CT With Deep Learning Image Reconstruction Algorithm to Improve Inferior Vena Cava Image Quality. J Comput Assist Tomogr 2025:00004728-990000000-00411. [PMID: 39876519 DOI: 10.1097/rct.0000000000001713] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/13/2024] [Accepted: 11/11/2024] [Indexed: 01/30/2025]
Abstract
OBJECTIVE To explore the application of low-energy images in dual-energy spectral CT (DEsCT) combined with deep learning image reconstruction (DLIR) to improve inferior vena cava imaging. MATERIALS AND METHODS Thirty patients with inferior vena cava syndrome underwent contrast-enhanced upper abdominal CT with routine dose, and the 40, 50, 60, 70, and 80 keV images in the delayed phase were first reconstructed with the ASiR-V40% algorithm. Image quality was evaluated both quantitatively [CT value, SD, signal-to-noise ratio (SNR), and contrast-to-noise ratio (CNR) for the inferior vena cava] and qualitatively to select the energy level with the best image quality. Then, the optimal-energy images were reconstructed again using the medium-strength (DLIR-M) and high-strength (DLIR-H) deep learning image reconstruction algorithms and compared with ASiR-V40%. RESULTS The CT value, SD, SNR, and CNR increased as the energy level decreased, with statistically significant differences (all P < 0.05). The 40 keV images had the highest CT values, SNR, and CNR and good diagnostic acceptability, so 40 keV was selected as the optimal energy level. Compared with ASiR-V40% and DLIR-M, DLIR-H had the lowest SD and the highest SNR, CNR, and subjective score (all P < 0.001), with good consistency between the two physicians (all κ ≥ 0.75). The 40 keV images with DLIR-H had the highest overall image quality, showing sharper edges of the inferior vena cava vessels and clearer lumen in patients with Budd-Chiari syndrome. CONCLUSIONS Compared with the ASiR-V algorithm, DLIR-H significantly reduces image noise and provides the highest CNR and the best diagnostic image quality for 40 keV DEsCT images of the inferior vena cava.
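The SNR and CNR figures in studies like this follow the usual ROI-based arithmetic (SNR = mean attenuation / noise SD; CNR = attenuation difference / noise SD). A brief sketch with hypothetical HU samples; which ROI supplies the noise term varies between studies:

```python
from statistics import mean, pstdev

def roi_snr_cnr(vessel_hu, background_hu):
    """SNR and CNR from ROI attenuation samples in Hounsfield units."""
    snr = mean(vessel_hu) / pstdev(vessel_hu)  # signal relative to its own noise
    cnr = (mean(vessel_hu) - mean(background_hu)) / pstdev(background_hu)
    return snr, cnr

snr, cnr = roi_snr_cnr([210.0, 200.0, 190.0], [60.0, 50.0, 40.0])
```

This illustrates only the arithmetic; real measurements average many voxels per ROI and repeat over several slices.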
Collapse
Affiliation(s)
- Wei Wei
- Department of Radiology, Affiliated Hospital of Shaanxi University of Chinese Medicine
| | - Yongjun Jia
- Department of Radiology, Affiliated Hospital of Shaanxi University of Chinese Medicine
| | - Ming Li
- Department of Radiology, Affiliated Hospital of Shaanxi University of Chinese Medicine
| | - Nan Yu
- School of Medical Technology, Shaanxi University of Chinese Medicine, Xianyang, Shaanxi, China
| | - Shan Dang
- Department of Radiology, Affiliated Hospital of Shaanxi University of Chinese Medicine
| | - Jian Geng
- Department of Radiology, Affiliated Hospital of Shaanxi University of Chinese Medicine
| | - Dong Han
- Department of Radiology, Affiliated Hospital of Shaanxi University of Chinese Medicine
| | - Yong Yu
- Department of Radiology, Affiliated Hospital of Shaanxi University of Chinese Medicine
- School of Medical Technology, Shaanxi University of Chinese Medicine, Xianyang, Shaanxi, China
| | - Yunsong Zheng
- Department of Radiology, Affiliated Hospital of Shaanxi University of Chinese Medicine
- School of Medical Technology, Shaanxi University of Chinese Medicine, Xianyang, Shaanxi, China
| | - Lihua Fan
- Department of Radiology, Affiliated Hospital of Shaanxi University of Chinese Medicine
| |
Collapse
|
26
|
Lanza C, Ascenti V, Amato GV, Pellegrino G, Triggiani S, Tintori J, Intrieri C, Angileri SA, Biondetti P, Carriero S, Torcia P, Ierardi AM, Carrafiello G. All You Need to Know About TACE: A Comprehensive Review of Indications, Techniques, Efficacy, Limits, and Technical Advancement. J Clin Med 2025; 14:314. [PMID: 39860320 PMCID: PMC11766109 DOI: 10.3390/jcm14020314] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/10/2024] [Revised: 12/17/2024] [Accepted: 12/28/2024] [Indexed: 01/27/2025] Open
Abstract
Transcatheter arterial chemoembolization (TACE) is a proven and widely accepted treatment option for hepatocellular carcinoma (HCC) and is recommended as first-line non-curative therapy for BCLC B/intermediate HCC (preserved liver function, multifocal, no cancer-related symptoms) in patients without vascular involvement. Several TACE variants are available, including TAE, c-TACE, DEB-TACE, and DSM-TACE, but there is currently insufficient evidence to recommend one technique over another, and the choice is left to the operator. This review therefore aims to provide a comprehensive overview of the current literature on the indications, types of procedures, safety, and efficacy of the different TACE treatments.
Collapse
Affiliation(s)
- Carolina Lanza
- Department of Diagnostic and Interventional Radiology, Foundation IRCCS Cà Granda—Ospedale Maggiore Policlinico, Via Francesco Sforza 35, 20122 Milan, Italy; (C.L.); (P.B.); (S.C.); (P.T.); (A.M.I.); (G.C.)
| | - Velio Ascenti
- Postgraduate School in Radiodiagnostics, Università degli Studi di Milano, 20122 Milan, Italy; (V.A.); (G.V.A.); (G.P.); (S.T.); (J.T.)
| | - Gaetano Valerio Amato
- Postgraduate School in Radiodiagnostics, Università degli Studi di Milano, 20122 Milan, Italy; (V.A.); (G.V.A.); (G.P.); (S.T.); (J.T.)
| | - Giuseppe Pellegrino
- Postgraduate School in Radiodiagnostics, Università degli Studi di Milano, 20122 Milan, Italy; (V.A.); (G.V.A.); (G.P.); (S.T.); (J.T.)
| | - Sonia Triggiani
- Postgraduate School in Radiodiagnostics, Università degli Studi di Milano, 20122 Milan, Italy; (V.A.); (G.V.A.); (G.P.); (S.T.); (J.T.)
| | - Jacopo Tintori
- Postgraduate School in Radiodiagnostics, Università degli Studi di Milano, 20122 Milan, Italy; (V.A.); (G.V.A.); (G.P.); (S.T.); (J.T.)
| | - Cristina Intrieri
- Postgraduate School in Diagnostic Imaging, Università degli Studi di Siena, 20122 Milan, Italy;
| | - Salvatore Alessio Angileri
- Department of Diagnostic and Interventional Radiology, Foundation IRCCS Cà Granda—Ospedale Maggiore Policlinico, Via Francesco Sforza 35, 20122 Milan, Italy; (C.L.); (P.B.); (S.C.); (P.T.); (A.M.I.); (G.C.)
| | - Pierpaolo Biondetti
- Department of Diagnostic and Interventional Radiology, Foundation IRCCS Cà Granda—Ospedale Maggiore Policlinico, Via Francesco Sforza 35, 20122 Milan, Italy; (C.L.); (P.B.); (S.C.); (P.T.); (A.M.I.); (G.C.)
| | - Serena Carriero
- Department of Diagnostic and Interventional Radiology, Foundation IRCCS Cà Granda—Ospedale Maggiore Policlinico, Via Francesco Sforza 35, 20122 Milan, Italy; (C.L.); (P.B.); (S.C.); (P.T.); (A.M.I.); (G.C.)
| | - Pierluca Torcia
- Department of Diagnostic and Interventional Radiology, Foundation IRCCS Cà Granda—Ospedale Maggiore Policlinico, Via Francesco Sforza 35, 20122 Milan, Italy; (C.L.); (P.B.); (S.C.); (P.T.); (A.M.I.); (G.C.)
| | - Anna Maria Ierardi
- Department of Diagnostic and Interventional Radiology, Foundation IRCCS Cà Granda—Ospedale Maggiore Policlinico, Via Francesco Sforza 35, 20122 Milan, Italy; (C.L.); (P.B.); (S.C.); (P.T.); (A.M.I.); (G.C.)
| | - Gianpaolo Carrafiello
- Department of Diagnostic and Interventional Radiology, Foundation IRCCS Cà Granda—Ospedale Maggiore Policlinico, Via Francesco Sforza 35, 20122 Milan, Italy; (C.L.); (P.B.); (S.C.); (P.T.); (A.M.I.); (G.C.)
- Faculty of Health Science, Università degli Studi di Milano, Via Festa del Perdono 7, 20122 Milan, Italy
| |
Collapse
|
27
|
Arkoudis NA, Papadakos SP. Machine learning applications in healthcare clinical practice and research. World J Clin Cases 2025; 13:99744. [PMID: 39764535 PMCID: PMC11577516 DOI: 10.12998/wjcc.v13.i1.99744] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 08/06/2024] [Revised: 09/25/2024] [Accepted: 10/15/2024] [Indexed: 11/07/2024] Open
Abstract
Machine learning (ML) is a type of artificial intelligence that assists computers in the acquisition of knowledge through data analysis, thus creating machines that can complete tasks otherwise requiring human intelligence. Among its various applications, it has proven groundbreaking in healthcare as well, both in clinical practice and research. In this editorial, we succinctly introduce ML applications and present a study, featured in the latest issue of the World Journal of Clinical Cases. The authors of this study conducted an analysis using both multiple linear regression (MLR) and ML methods to investigate the significant factors that may impact the estimated glomerular filtration rate in healthy women with and without non-alcoholic fatty liver disease (NAFLD). Their results implicated age as the most important determining factor in both groups, followed by lactic dehydrogenase, uric acid, forced expiratory volume in one second, and albumin. In addition, for the NAFLD- group, the 5th and 6th most important impact factors were thyroid-stimulating hormone and systolic blood pressure, as compared to plasma calcium and body fat for the NAFLD+ group. However, the study's distinctive contribution lies in its adoption of ML methodologies, showcasing their superiority over traditional statistical approaches (herein MLR), thereby highlighting the potential of ML to represent an invaluable advanced adjunct tool in clinical practice and research.
Collapse
Affiliation(s)
- Nikolaos-Achilleas Arkoudis
- Research Unit of Radiology and Medical Imaging, School of Medicine, National and Kapodistrian University of Athens, Athens 11528, Greece
- 2nd Department of Radiology, “Attikon” General University Hospital, Medical School, National and Kapodistrian University of Athens, Chaidari 12462, Greece
| | - Stavros P Papadakos
- Department of Gastroenterology, Laiko General Hospital, National and Kapodistrian University of Athens, Athens 11527, Greece
| |
Collapse
|
28
|
Duyan Yüksel H, Orhan K, Evlice B, Kaya Ö. Evaluation of temporomandibular joint disc displacement with MRI-based radiomics analysis. Dentomaxillofac Radiol 2025; 54:19-27. [PMID: 39602602 DOI: 10.1093/dmfr/twae066] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/08/2024] [Revised: 10/16/2024] [Accepted: 11/16/2024] [Indexed: 11/29/2024] Open
Abstract
OBJECTIVES The purpose of this study was to propose a machine learning model and assess its ability to classify temporomandibular joint (TMJ) disc displacements on MR T1-weighted and proton density-weighted images. METHODS This retrospective cohort study included 180 TMJs from 90 patients with TMJ signs and symptoms. A radiomics platform was used to extract imaging features of disc displacements. Thereafter, different machine learning algorithms and logistic regression were implemented on the radiomics features for feature selection, classification, and prediction. The radiomics features included first-order statistics, size- and shape-based features, and texture features. Six classifiers (logistic regression, random forest, decision tree, k-nearest neighbours (KNN), XGBoost, and support vector machine) were used to build models predicting TMJ disc displacements. The performance of the models was evaluated by sensitivity, specificity, and ROC curves. RESULTS The KNN classifier was found to be the optimal machine learning model for prediction of TMJ disc displacements. For the training set, the AUC, sensitivity, and specificity were 0.944, 0.771, and 0.918 for normal, anterior disc displacement with reduction (ADDwR), and anterior disc displacement without reduction (ADDwoR), respectively; for the test set, the corresponding values were 0.913, 0.716, and 1 for normal, ADDwR, and ADDwoR. For TMJ disc displacements, skewness, root mean squared, kurtosis, minimum, large area low grey level emphasis, grey level non-uniformity, and long-run high grey level emphasis were selected as optimal features. CONCLUSIONS This study proposed a KNN-based machine learning model for TMJ MR images, which can be used to classify TMJ disc displacements.
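Several of the selected features (minimum, root mean squared, skewness, kurtosis) are first-order statistics of the ROI intensity histogram. A sketch using the standard moment-based formulas; the exact definitions in any given radiomics platform may differ (e.g. excess vs. non-excess kurtosis), so this is illustrative only:

```python
def first_order_features(values):
    """First-order radiomics features from a flat list of ROI voxel intensities."""
    n = len(values)
    mu = sum(values) / n
    m2 = sum((v - mu) ** 2 for v in values) / n  # second central moment (variance)
    m3 = sum((v - mu) ** 3 for v in values) / n  # third central moment
    m4 = sum((v - mu) ** 4 for v in values) / n  # fourth central moment
    return {
        "minimum": min(values),
        "root_mean_squared": (sum(v * v for v in values) / n) ** 0.5,
        "skewness": m3 / m2 ** 1.5 if m2 else 0.0,
        "kurtosis": m4 / m2 ** 2 if m2 else 0.0,  # non-excess form
    }

feats = first_order_features([1.0, 2.0, 2.0, 3.0])
```

The texture features named above (grey level non-uniformity, run emphasis) additionally require grey-level co-occurrence or run-length matrices and are not shown here.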
Collapse
Affiliation(s)
- Hazal Duyan Yüksel
- Department of Oral Diagnosis and Maxillofacial Radiology, Çukurova University Faculty of Dentistry, Adana, 01380, Türkiye
| | - Kaan Orhan
- Department of Oral Diagnosis and Maxillofacial Radiology, Ankara University Faculty of Dentistry, Ankara, 06500, Türkiye
- Medical Design Application and Research Center (MEDITAM), Ankara University, Ankara, 06800, Türkiye
| | - Burcu Evlice
- Department of Oral Diagnosis and Maxillofacial Radiology, Çukurova University Faculty of Dentistry, Adana, 01380, Türkiye
| | - Ömer Kaya
- Department of Radiology, Cukurova University Faculty of Medicine, Adana, 01380, Türkiye
| |
Collapse
|
29
|
Maletz S, Balagurunathan Y, Murphy K, Folio L, Chima R, Zaheer A, Vadvala H. AI-powered innovations in pancreatitis imaging: a comprehensive literature synthesis. Abdom Radiol (NY) 2025; 50:438-452. [PMID: 39133362 DOI: 10.1007/s00261-024-04512-4] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/04/2024] [Revised: 07/16/2024] [Accepted: 07/29/2024] [Indexed: 08/13/2024]
Abstract
Early identification of pancreatitis remains a significant clinical diagnostic challenge that impacts patient outcomes. The evolution of quantitative imaging followed by deep learning models has shown great promise in the non-invasive diagnosis of pancreatitis and its complications. We provide an overview of advancements in diagnostic imaging and quantitative imaging methods along with the evolution of artificial intelligence (AI). In this article, we review the current and future states of methodology and limitations of AI in improving clinical support in the context of early detection and management of pancreatitis.
Collapse
Affiliation(s)
- Sebastian Maletz
- University of South Florida Morsani College of Medicine, Tampa, USA
| | | | - Kade Murphy
- University of South Florida Morsani College of Medicine, Tampa, USA
| | - Les Folio
- University of South Florida Morsani College of Medicine, Tampa, USA
- Moffitt Cancer Center, Tampa, USA
| | - Ranjit Chima
- University of South Florida Morsani College of Medicine, Tampa, USA
- Moffitt Cancer Center, Tampa, USA
| | | | - Harshna Vadvala
- University of South Florida Morsani College of Medicine, Tampa, USA.
- Moffitt Cancer Center, Tampa, USA.
| |
Collapse
|
30
|
Jundaeng J, Chamchong R, Nithikathkul C. Periodontitis diagnosis: A review of current and future trends in artificial intelligence. Technol Health Care 2025; 33:473-484. [PMID: 39302402 DOI: 10.3233/thc-241169] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 09/22/2024]
Abstract
BACKGROUND Artificial intelligence (AI) represents the state of the art in periodontitis diagnosis in dentistry. Current diagnostic challenges include errors due to a lack of experienced dentists, limited time for radiograph analysis, and mandatory reporting, impacting care quality, cost, and efficiency. OBJECTIVE This review aims to evaluate current and future trends in AI for diagnosing periodontitis. METHODS A thorough literature review was conducted following PRISMA guidelines. We searched databases including PubMed, Scopus, Wiley Online Library, and ScienceDirect for studies published between January 2018 and December 2023. Keywords used in the search included "artificial intelligence," "panoramic radiograph," "periodontitis," "periodontal disease," and "diagnosis." RESULTS The review included 12 studies from an initial 211 records. These studies used advanced models, particularly convolutional neural networks (CNNs), demonstrating accuracy rates for periodontal bone loss detection ranging from 0.76 to 0.98. Methodologies included deep learning hybrid methods, automated identification systems, and machine learning classifiers, enhancing diagnostic precision and efficiency. CONCLUSIONS Integrating AI innovations in periodontitis diagnosis enhances diagnostic accuracy and efficiency, providing a robust alternative to conventional methods. These technologies offer quicker, less labor-intensive, and more precise alternatives to classical approaches. Future research should focus on improving AI model reliability and generalizability to ensure widespread clinical adoption.
Affiliation(s)
- Jarupat Jundaeng: Health Science Program, Faculty of Medicine, Mahasarakham University, Mahasarakham, Thailand; Tropical Health Innovation Research Unit, Faculty of Medicine, Mahasarakham University, Mahasarakham, Thailand; Dental Department, Fang Hospital, Chiangmai, Thailand
- Rapeeporn Chamchong: Department of Computer Science, Faculty of Informatics, Mahasarakham University, Mahasarakham, Thailand
- Choosak Nithikathkul: Health Science Program, Faculty of Medicine, Mahasarakham University, Mahasarakham, Thailand; Tropical Health Innovation Research Unit, Faculty of Medicine, Mahasarakham University, Mahasarakham, Thailand
31
Yasaka K, Nomura T, Kamohara J, Hirakawa H, Kubo T, Kiryu S, Abe O. Classification of Interventional Radiology Reports into Technique Categories with a Fine-Tuned Large Language Model. J Imaging Inform Med 2024. PMID: 39673010. DOI: 10.1007/s10278-024-01370-w.
Abstract
The aim of this study is to develop a fine-tuned large language model that classifies interventional radiology reports into technique categories and to compare its performance with human readers. This retrospective study included 3198 patients (1758 males and 1440 females; age, 62.8 ± 16.8 years) who underwent interventional radiology procedures from January 2018 to July 2024. The training, validation, and test datasets comprised 2292, 250, and 656 patients, respectively. Input data comprised the text of the clinical indication, imaging diagnosis, and image-finding sections of interventional radiology reports. Manually classified technique categories (15 in total) were utilized as reference data. Fine-tuning of a Bidirectional Encoder Representations from Transformers (BERT) model was performed using the training and validation datasets. This process was repeated 15 times owing to the randomness of the learning process. The best-performing model, which showed the highest accuracy among the 15 trials, was selected for further evaluation on the independent test dataset. Report classification involved one radiologist (reader 1) and two radiology residents (readers 2 and 3). The accuracy and macrosensitivity (average of each category's sensitivity) of the best-performing model in the validation dataset were 0.996 and 0.994, respectively. For the test dataset, the accuracy/macrosensitivity were 0.988/0.980, 0.986/0.977, 0.989/0.979, and 0.988/0.980 for the best model, reader 1, reader 2, and reader 3, respectively. The model required 0.178 s per patient for classification, which was 17.5-19.9 times faster than the readers. In conclusion, the fine-tuned large language model classified interventional radiology reports into technique categories with accuracy similar to that of readers, within a remarkably shorter time.
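The task described above — report text in, one of 15 technique categories out — can be illustrated loosely with a classical bag-of-words classifier standing in for the fine-tuned BERT model; the category names and report snippets below are invented for the sketch, not the study's data or pipeline:

```python
# Simplified stand-in for report-to-technique-category classification:
# TF-IDF features plus logistic regression instead of a fine-tuned BERT.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Invented mini-corpus of report-like snippets, one technique label each.
reports = [
    "pigtail catheter placed for abscess drainage under ct guidance",
    "percutaneous drainage of fluid collection with catheter",
    "coil embolization of hepatic artery pseudoaneurysm",
    "transcatheter arterial embolization for active bleeding",
    "core needle biopsy of lung nodule under ct guidance",
    "percutaneous needle biopsy of liver lesion",
]
labels = ["drainage", "drainage", "embolization",
          "embolization", "biopsy", "biopsy"]

clf = make_pipeline(TfidfVectorizer(), LogisticRegression(max_iter=1000))
clf.fit(reports, labels)

# Classify a new (invented) report snippet.
pred = clf.predict(["ct guided drainage catheter for pelvic abscess"])[0]
```

A fine-tuned transformer replaces the TF-IDF/logistic-regression pair with learned contextual embeddings, but the surrounding workflow — labeled reports, train/validation split, accuracy and per-category sensitivity — is the same shape.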
Affiliation(s)
- Koichiro Yasaka, Takuto Nomura, Jun Kamohara, Hiroshi Hirakawa, Takatoshi Kubo, Osamu Abe: Department of Radiology, The University of Tokyo Hospital, 7-3-1 Hongo, Bunkyo-ku, Tokyo, 113-8655, Japan
- Shigeru Kiryu: Department of Radiology, International University of Health and Welfare Narita Hospital, 852 Hatakeda, Narita, Chiba, 286-0124, Japan
32
Zhu L, Shi B, Ding B, Xia Y, Wang K, Feng W, Dai J, Xu T, Wang B, Yuan F, Shen H, Dong H, Zhang H. Accelerated T2W Imaging with Deep Learning Reconstruction in Staging Rectal Cancer: A Preliminary Study. J Imaging Inform Med 2024. PMID: 39663320. DOI: 10.1007/s10278-024-01345-x.
Abstract
Deep learning reconstruction (DLR) has shown potential for saving scan time, but there is limited research evaluating accelerated acquisition with DLR in staging rectal cancer. Our first objective was to identify, through phantom experiments, the DLR level that best preserves image quality while saving time. With resolution and number of excitations (NEX) adjusted to produce different scan times, the image quality of conventionally reconstructed T2W images was measured and compared with that of images reconstructed at different DLR levels. The second objective was to explore the feasibility of accelerated T2W imaging with DLR, in terms of image quality and diagnostic performance, for rectal cancer patients. Fifty-two patients were prospectively enrolled to undergo accelerated acquisition reconstructed with highly denoised DLR (DLR_H40sec) and conventional reconstruction (ConR2min). Image quality and diagnostic performance were evaluated by observers with varying experience and compared between protocols using κ statistics and the area under the receiver operating characteristic curve (AUC). The phantom experiments demonstrated that DLR_H achieved superior signal-to-noise ratio (SNR), detail conspicuity, and sharpness, with less distortion, within the least scan time. The DLR_H40sec images exhibited higher sharpness and SNR than ConR2min. Agreement with pathological TN stages improved with DLR_H40sec images compared to ConR2min (T: 0.846 vs. 0.771, 0.825 vs. 0.700, and 0.697 vs. 0.512; N: 0.527 vs. 0.521, 0.421 vs. 0.348, and 0.517 vs. 0.363 for junior, intermediate, and senior observers, respectively). Comparable AUCs for identifying T3-4 and N1-2 tumors were achieved with DLR_H40sec and ConR2min images (P > 0.05). Consequently, with a two-thirds reduction in scan time, DLR_H40sec images showed improved image quality and comparable TN-staging performance relative to conventional T2W imaging for rectal cancer patients.
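The κ statistics used above to quantify agreement between image-based staging and the pathological reference can be computed as in this minimal scikit-learn sketch; the stage labels are invented, and the study may have used a weighted κ variant:

```python
# Cohen's kappa: chance-corrected agreement between an observer's T-staging
# and the pathological reference standard. Labels below are invented.
from sklearn.metrics import cohen_kappa_score

pathology = ["T2", "T3", "T3", "T4", "T2", "T3", "T4", "T2"]
observer  = ["T2", "T3", "T2", "T4", "T2", "T3", "T4", "T3"]

# 1.0 = perfect agreement, 0 = agreement expected by chance alone.
kappa = cohen_kappa_score(pathology, observer)
```

Here 6 of 8 stages match (observed agreement 0.75), but κ is lower (about 0.62) because some agreement would occur by chance given the label frequencies.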
Affiliation(s)
- Lan Zhu, Bowen Shi, Bei Ding, Yihan Xia, Kangning Wang, Weiming Feng, Haipeng Dong, Huan Zhang: Department of Radiology, Ruijin Hospital, Shanghai Jiao Tong University School of Medicine, No. 197 Ruijin Er Road, Shanghai, 200025, China
- Jiankun Dai, Tianyong Xu: Department of MR, GE Healthcare, Beijing, China
- Baisong Wang: Department of Biomedical Statistics, Shanghai Jiao Tong University School of Medicine, Shanghai, China
- Fei Yuan: Department of Pathology, Ruijin Hospital, Shanghai Jiao Tong University School of Medicine, No. 197 Ruijin Er Road, Shanghai, 200025, China
- Hailin Shen: Department of Radiology, Suzhou Kowloon Hospital, Shanghai Jiao Tong University School of Medicine, No. 118 Wansheng Street, Suzhou Industrial Park, Suzhou, 215028, Jiangsu Province, China
33
Yasaka K, Kanzawa J, Nakaya M, Kurokawa R, Tajima T, Akai H, Yoshioka N, Akahane M, Ohtomo K, Abe O, Kiryu S. Super-resolution Deep Learning Reconstruction for 3D Brain MR Imaging: Improvement of Cranial Nerve Depiction and Interobserver Agreement in Evaluations of Neurovascular Conflict. Acad Radiol 2024;31:5118-5127. PMID: 38897913. DOI: 10.1016/j.acra.2024.06.010.
Abstract
RATIONALE AND OBJECTIVES To determine if super-resolution deep learning reconstruction (SR-DLR) improves the depiction of cranial nerves and interobserver agreement when assessing neurovascular conflict in 3D fast asymmetric spin echo (3D FASE) brain MR images, as compared to deep learning reconstruction (DLR). MATERIALS AND METHODS This retrospective study involved reconstructing 3D FASE MR images of the brain for 37 patients using SR-DLR and DLR. Three blinded readers conducted qualitative image analyses, evaluating the degree of neurovascular conflict, structure depiction, sharpness, noise, and diagnostic acceptability. Quantitative analyses included measuring edge rise distance (ERD), edge rise slope (ERS), and full width at half maximum (FWHM) using the signal intensity profile along a linear region of interest across the center of the basilar artery. RESULTS Interobserver agreement on the degree of neurovascular conflict of the facial nerve was generally higher with SR-DLR (0.429-0.923) compared to DLR (0.175-0.689). SR-DLR exhibited increased subjective image noise compared to DLR (p ≥ 0.008). However, all three readers found SR-DLR significantly superior in terms of sharpness (p < 0.001); cranial nerve depiction, particularly of facial and acoustic nerves, as well as the osseous spiral lamina (p < 0.001); and diagnostic acceptability (p ≤ 0.002). The FWHM (mm)/ERD (mm)/ERS (mm-1) for SR-DLR and DLR was 3.1-4.3/0.9-1.1/8795.5-10,703.5 and 3.3-4.8/1.4-2.1/5157.9-7705.8, respectively, with SR-DLR's image sharpness being significantly superior (p ≤ 0.001). CONCLUSION SR-DLR enhances image sharpness, leading to improved cranial nerve depiction and a tendency for greater interobserver agreement regarding facial nerve neurovascular conflict.
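The sharpness metrics above are measured on a 1-D signal-intensity profile across the vessel; a minimal numpy sketch of the FWHM measurement, on a synthetic Gaussian profile rather than the study's data:

```python
# Full width at half maximum (FWHM) of a 1-D signal-intensity profile, as
# used here to quantify sharpness across the basilar artery. The profile
# below is a synthetic Gaussian, not study data.
import numpy as np

x = np.linspace(-10.0, 10.0, 2001)        # position along the line ROI (mm)
sigma = 2.0
profile = np.exp(-x ** 2 / (2 * sigma ** 2))

half_max = profile.max() / 2.0
above = np.where(profile >= half_max)[0]  # samples at or above half maximum
fwhm = x[above[-1]] - x[above[0]]         # width between outermost crossings

# For a Gaussian, the analytic value is 2*sqrt(2*ln 2)*sigma (~4.71 here).
```

A smaller FWHM across a structure of fixed size indicates a sharper profile, which is why SR-DLR's lower FWHM values above correspond to superior sharpness; edge rise distance and slope are measured analogously from the 10%-90% portion of an edge profile.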
Affiliation(s)
- Koichiro Yasaka: Department of Radiology, Graduate School of Medicine, The University of Tokyo, 7-3-1 Hongo, Bunkyo-ku, Tokyo 113-8655, Japan; Department of Radiology, International University of Health and Welfare Narita Hospital, 852 Hatakeda, Narita, Chiba 286-0124, Japan
- Jun Kanzawa, Moto Nakaya, Ryo Kurokawa, Osamu Abe: Department of Radiology, Graduate School of Medicine, The University of Tokyo, 7-3-1 Hongo, Bunkyo-ku, Tokyo 113-8655, Japan
- Taku Tajima: Department of Radiology, International University of Health and Welfare Mita Hospital, 1-4-3 Mita, Minato-ku, Tokyo 108-8329, Japan
- Hiroyuki Akai: Department of Radiology, International University of Health and Welfare Narita Hospital, 852 Hatakeda, Narita, Chiba 286-0124, Japan; Department of Radiology, The Institute of Medical Science, The University of Tokyo, 4-6-1 Shirokanedai, Minato-ku, Tokyo 108-8639, Japan
- Naoki Yoshioka, Masaaki Akahane, Shigeru Kiryu: Department of Radiology, International University of Health and Welfare Narita Hospital, 852 Hatakeda, Narita, Chiba 286-0124, Japan
- Kuni Ohtomo: International University of Health and Welfare, 2600-1 Kitakanemaru, Ohtawara, Tochigi 324-8501, Japan
34
Yasaka K, Akai H, Kato S, Tajima T, Yoshioka N, Furuta T, Kageyama H, Toda Y, Akahane M, Ohtomo K, Abe O, Kiryu S. Iterative Motion Correction Technique with Deep Learning Reconstruction for Brain MRI: A Volunteer and Patient Study. J Imaging Inform Med 2024;37:3070-3076. PMID: 38942939. PMCID: PMC11612051. DOI: 10.1007/s10278-024-01184-w.
Abstract
The aim of this study was to investigate the effect of iterative motion correction (IMC) on reducing artifacts in brain magnetic resonance imaging (MRI) with deep learning reconstruction (DLR). The study included 10 volunteers (between September 2023 and December 2023) and 30 patients (between June 2022 and July 2022) for quantitative and qualitative analyses, respectively. Volunteers were instructed to remain still during the first MRI with fluid-attenuated inversion recovery sequence (FLAIR) and to move during the second scan. IMCoff DLR images were reconstructed from the raw data of the former acquisition; IMCon and IMCoff DLR images were reconstructed from the latter acquisition. After registration of the motion images, the structural similarity index measure (SSIM) was calculated using motionless images as reference. For qualitative analyses, IMCon and IMCoff FLAIR DLR images of the patients were reconstructed and evaluated by three blinded readers in terms of motion artifacts, noise, and overall quality. SSIM for IMCon images was 0.952, higher than that for IMCoff images (0.949) (p < 0.001). In qualitative analyses, although noise in IMCon images was rated as increased by two of the three readers (both p < 0.001), all readers agreed that motion artifacts and overall quality were significantly better in IMCon images than in IMCoff images (all p < 0.001). In conclusion, IMC reduced motion artifacts in brain FLAIR DLR images while maintaining similarity to motionless images.
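The SSIM comparison described above can be sketched with the standard global SSIM formula in numpy; the study's implementation, and libraries such as scikit-image, use a sliding-window version, and the images below are synthetic:

```python
# Global structural similarity (SSIM) between two images via the standard
# formula; real pipelines use a sliding-window implementation instead.
import numpy as np

def global_ssim(a, b, data_range=1.0):
    c1 = (0.01 * data_range) ** 2  # stabilising constants from the SSIM paper
    c2 = (0.03 * data_range) ** 2
    mu_a, mu_b = a.mean(), b.mean()
    var_a, var_b = a.var(), b.var()
    cov = ((a - mu_a) * (b - mu_b)).mean()
    return ((2 * mu_a * mu_b + c1) * (2 * cov + c2)) / (
        (mu_a ** 2 + mu_b ** 2 + c1) * (var_a + var_b + c2))

rng = np.random.default_rng(0)
img = rng.random((64, 64))                                    # reference image
noisy = np.clip(img + rng.normal(0.0, 0.1, img.shape), 0, 1)  # degraded copy

s_identical = global_ssim(img, img)   # identical images score 1
s_noisy = global_ssim(img, noisy)     # degradation lowers the score
```

In the study's setup, the motionless acquisition plays the role of the reference image, so an SSIM closer to 1 for IMCon than for IMCoff indicates that motion correction brings the reconstruction closer to the motion-free ground truth.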
Affiliation(s)
- Koichiro Yasaka: Department of Radiology, Graduate School of Medicine, The University of Tokyo, 7-3-1 Hongo, Bunkyo-ku, Tokyo, 113-8655, Japan; Department of Radiology, International University of Health and Welfare Narita Hospital, 852 Hatakeda, Narita, Chiba, 286-0124, Japan
- Hiroyuki Akai: Department of Radiology, International University of Health and Welfare Narita Hospital, 852 Hatakeda, Narita, Chiba, 286-0124, Japan; Department of Radiology, The Institute of Medical Science, The University of Tokyo, 4-6-1 Shirokanedai, Minato-ku, Tokyo, 108-8639, Japan
- Shimpei Kato, Toshihiro Furuta, Hajime Kageyama: Department of Radiology, The Institute of Medical Science, The University of Tokyo, 4-6-1 Shirokanedai, Minato-ku, Tokyo, 108-8639, Japan
- Taku Tajima: Department of Radiology, International University of Health and Welfare Mita Hospital, 1-4-3 Mita, Minato-ku, Tokyo, 108-8329, Japan
- Naoki Yoshioka, Yui Toda, Masaaki Akahane, Shigeru Kiryu: Department of Radiology, International University of Health and Welfare Narita Hospital, 852 Hatakeda, Narita, Chiba, 286-0124, Japan
- Kuni Ohtomo: International University of Health and Welfare, 2600-1 Kitakanemaru, Ohtawara, Tochigi, 324-8501, Japan
- Osamu Abe: Department of Radiology, Graduate School of Medicine, The University of Tokyo, 7-3-1 Hongo, Bunkyo-ku, Tokyo, 113-8655, Japan
35
Le EPV, Wong MYZ, Rundo L, Tarkin JM, Evans NR, Weir-McCall JR, Chowdhury MM, Coughlin PA, Pavey H, Zaccagna F, Wall C, Sriranjan R, Corovic A, Huang Y, Warburton EA, Sala E, Roberts M, Schönlieb CB, Rudd JHF. Using machine learning to predict carotid artery symptoms from CT angiography: A radiomics and deep learning approach. Eur J Radiol Open 2024;13:100594. PMID: 39280120. PMCID: PMC11402422. DOI: 10.1016/j.ejro.2024.100594.
Abstract
Purpose To assess radiomics and deep learning (DL) methods for identifying symptomatic carotid artery disease (CAD) from carotid CT angiography (CTA) images, and to compare the performance of these novel methods with the conventional calcium score. Methods Carotid CTA images from symptomatic patients (ischaemic stroke/transient ischaemic attack within the last 3 months) and asymptomatic patients were analysed. Carotid arteries were classified as culprit, non-culprit, or asymptomatic. The calcium score was assessed using the Agatston method. Ninety-three radiomic features were extracted from regions of interest drawn on 14 consecutive CTA slices. For DL, convolutional neural networks (CNNs) with and without transfer learning were trained directly on CTA slices. Predictive performance was assessed using 5-fold cross-validated AUC scores. SHAP and Grad-CAM algorithms were used for explainability. Results In total, 132 carotid arteries were analysed (41 culprit, 41 non-culprit, and 50 asymptomatic). For asymptomatic vs symptomatic arteries, radiomics attained a mean AUC of 0.96 (± 0.02), followed by DL at 0.86 (± 0.06) and calcium at 0.79 (± 0.08). For culprit vs non-culprit arteries, radiomics achieved a mean AUC of 0.75 (± 0.09), followed by DL at 0.67 (± 0.10) and calcium at 0.60 (± 0.02). For multi-class classification, the mean AUCs were 0.95 (± 0.07), 0.79 (± 0.05), and 0.71 (± 0.07) for radiomics, DL, and calcium, respectively. Explainability analysis revealed consistent patterns in the most important radiomic features. Conclusions Our study highlights the potential of novel image analysis techniques to extract quantitative information beyond calcification for the identification of CAD. Though further work is required, the transition of these novel techniques into clinical practice may eventually facilitate better stroke risk stratification.
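The 5-fold cross-validated AUC evaluation used above can be sketched with scikit-learn; the synthetic features stand in for the 93 radiomic features, and logistic regression stands in for whatever classifier the study actually used:

```python
# 5-fold cross-validated AUC for a binary classifier, with synthetic
# features standing in for the 93 radiomic features (132 arteries).
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

# Synthetic stand-in dataset: 132 samples, 93 features, binary outcome.
X, y = make_classification(n_samples=132, n_features=93,
                           n_informative=10, random_state=0)

# Each fold trains on 4/5 of the arteries and scores AUC on the held-out 1/5.
aucs = cross_val_score(LogisticRegression(max_iter=2000), X, y,
                       cv=5, scoring="roc_auc")
mean_auc, sd_auc = aucs.mean(), aucs.std()
```

Reporting the mean and spread over folds, as the abstract does with its "0.96 (± 0.02)"-style values, guards against an optimistic estimate from any single train/test split.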
Affiliation(s)
- Mark Y Z Wong, Jason M Tarkin, Chris Wall, Andrej Corovic: Department of Medicine, University of Cambridge, United Kingdom
- Leonardo Rundo: Department of Radiology, University of Cambridge, United Kingdom; Cancer Research UK Cambridge Centre, University of Cambridge, United Kingdom; Department of Information and Electrical Engineering and Applied Mathematics (DIEM), University of Salerno, Italy
- Nicholas R Evans: Department of Clinical Neurosciences, University of Cambridge, United Kingdom
- Jonathan R Weir-McCall: Department of Radiology, University of Cambridge, United Kingdom; Department of Radiology, Royal Papworth Hospital, Cambridge, United Kingdom
- Mohammed M Chowdhury: Division of Vascular Surgery, Department of Surgery, University of Cambridge, United Kingdom
- Holly Pavey: Division of Experimental Medicine and Immunotherapeutics, University of Cambridge, United Kingdom
- Fulvio Zaccagna: Department of Radiology, University of Cambridge, United Kingdom; Department of Imaging, Cambridge University Hospitals NHS Foundation Trust, Cambridge Biomedical Campus, Cambridge, United Kingdom; Investigative Medicine Division, Radcliffe Department of Medicine, University of Oxford, Oxford, United Kingdom
- Yuan Huang: Department of Medicine, University of Cambridge, United Kingdom; Department of Radiology, University of Cambridge, United Kingdom; EPSRC Centre for Mathematical Imaging in Healthcare, University of Cambridge, United Kingdom
- Evis Sala: Dipartimento di Scienze Radiologiche ed Ematologiche, Università Cattolica del Sacro Cuore, Rome, Italy; Dipartimento Diagnostica per Immagini, Radioterapia Oncologica ed Ematologia, Policlinico Universitario A. Gemelli IRCCS, Rome, Italy
- Michael Roberts: Department of Medicine, University of Cambridge, United Kingdom; EPSRC Centre for Mathematical Imaging in Healthcare, University of Cambridge, United Kingdom; Department of Applied Mathematics and Theoretical Physics, University of Cambridge, United Kingdom
- James H F Rudd: Department of Medicine, University of Cambridge, United Kingdom; EPSRC Centre for Mathematical Imaging in Healthcare, University of Cambridge, United Kingdom
36
Durmuş MA, Kömeç S, Gülmez A. Artificial intelligence applications for immunology laboratory: image analysis and classification study of IIF photos. Immunol Res 2024;72:1277-1287. PMID: 39107556. DOI: 10.1007/s12026-024-09527-z.
Abstract
Artificial intelligence (AI) is increasingly being used in medicine to enhance the speed and accuracy of disease diagnosis and treatment. AI-based image analysis is expected to play a crucial role in future healthcare facilities and laboratories, offering improved precision and cost-effectiveness. As technology advances, the specialized software knowledge required to use AI applications is diminishing. Our study examines the advantages and challenges of employing AI-based image analysis in the field of immunology and investigates whether physicians without software expertise can use the MS Azure Portal for ANA IIF test classification and image analysis. This is the first study to perform HEp-2 image analysis using the MS Azure Portal. We also assess the potential for AI applications to aid physicians in interpreting ANA IIF results in immunology laboratories. The study was designed in four stages by two specialists. Stage 1: creation of an image library; Stage 2: selection of an artificial intelligence application; Stage 3: uploading images and training the artificial intelligence; Stage 4: performance analysis of the artificial intelligence application. In the first training, the average pattern identification accuracy for 72 test images was 81.94%. After the second training, this accuracy increased to 87.5%, and pattern precision improved from 71.42% to 79.96%. Thus, both the number of correctly identified patterns and their accuracy increased with the second round of training. AI-based image analysis shows promising potential and is expected to become essential in healthcare facility laboratories, offering higher accuracy rates and lower costs.
Affiliation(s)
- Mehmet Akif Durmuş, Selda Kömeç: Medical Microbiology Laboratory, Çam and Sakura City Hospital, Istanbul, Türkiye
- Abdurrahman Gülmez: Medical Microbiology Laboratory, Aydın Atatürk State Hospital, Aydın, Türkiye
37
Ren L, Chen DB, Yan X, She S, Yang Y, Zhang X, Liao W, Chen H. Bridging the Gap Between Imaging and Molecular Characterization: Current Understanding of Radiomics and Radiogenomics in Hepatocellular Carcinoma. J Hepatocell Carcinoma 2024;11:2359-2372. PMID: 39619602. PMCID: PMC11608547. DOI: 10.2147/jhc.s423549.
Abstract
Hepatocellular carcinoma (HCC) is the sixth most common malignancy worldwide and the third leading cause of cancer-related deaths. Imaging plays a crucial role in the screening, diagnosis, and monitoring of HCC; however, the potential mechanism regarding phenotypes or molecular subtyping remains underexplored. Radiomics significantly expands the selection of features available by extracting quantitative features from imaging data. Radiogenomics bridges the gap between imaging and genetic/transcriptomic information by associating imaging features with critical genes and pathways, thereby providing biological annotations to these features. Despite challenges in interpreting these connections, assessing their universality, and considering the diversity in HCC etiology and genetic information across different populations, radiomics and radiogenomics offer new perspectives for precision treatment in HCC. This article provides an up-to-date summary of the advancements in radiomics and radiogenomics throughout the HCC care continuum, focusing on the clinical applications, advantages, and limitations of current techniques and offering prospects. Future research should aim to overcome these challenges to improve the prognosis of HCC patients and leverage imaging information for patient benefit.
Affiliation(s)
- Liying Ren, Dong Bo Chen, Shaoping She, Yao Yang, Xue Zhang, Hongsong Chen: Peking University People’s Hospital, Peking University Hepatology Institute, Infectious Disease and Hepatology Center of Peking University People’s Hospital, Beijing Key Laboratory of Hepatitis C and Immunotherapy for Liver Diseases, Beijing International Cooperation Base for Science and Technology on NAFLD Diagnosis, Beijing, 100044, People’s Republic of China
- Xuanzhi Yan, Weijia Liao: Laboratory of Hepatobiliary and Pancreatic Surgery, Affiliated Hospital of Guilin Medical University, Guilin, Guangxi, 541001, People’s Republic of China
38
Wang H, Ying J, Liu J, Yu T, Huang D. Leveraging 3D Convolutional Neural Networks for Accurate Recognition and Localization of Ankle Fractures. Ther Clin Risk Manag 2024;20:761-773. PMID: 39584042. PMCID: PMC11585985. DOI: 10.2147/tcrm.s483907.
Abstract
Background Ankle fractures are common injuries with substantial implications for patient mobility and quality of life. Traditional imaging methods, while standard, are limited in detecting subtle fractures and distinguishing them from complex bone structures. The advent of 3D convolutional neural networks (3D-CNNs) offers a promising avenue for enhancing the accuracy and reliability of ankle fracture diagnosis. Methods In this study, we acquired 1453 high-resolution CT scans and processed them through three distinct 3D-CNN models: 3D-Mobilenet, 3D-Resnet101, and 3D-EfficientNetB7. Our approach involved meticulous preprocessing of images, including normalization and resampling, followed by a systematic comparative evaluation of the models based on accuracy, area under the curve (AUC), and recall metrics. Additionally, the integration of Gradient-weighted Class Activation Mapping (Grad-CAM) provided visual interpretability of the models' predictive focus points. Results The 3D-EfficientNetB7 model outperformed the other models, achieving an accuracy of 0.91 and an AUC of 0.94 after 20 training epochs. It was particularly effective in the accurate detection and localization of subtle and complex fractures. Grad-CAM visualizations confirmed the model's focus on clinically relevant areas, aligning with expert assessments and enhancing trust in automated diagnostics. Spatial localization techniques were pivotal in improving interpretability, offering clear visual guidance for pinpointing fracture sites. Conclusion Our findings highlight the effectiveness of the 3D-EfficientNetB7 model in diagnosing ankle fractures, supported by robust performance metrics and enhanced visualization tools.
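Grad-CAM, used above for interpretability, weights each convolutional feature map by the global-average-pooled gradient of the class score, sums the weighted maps, and keeps the positive part. The combination step can be sketched in numpy on synthetic 3D activations and gradients; a real implementation would take both tensors from the trained network:

```python
# Grad-CAM combination step: weight feature maps by pooled gradients,
# sum, then ReLU. Activations and gradients here are synthetic stand-ins
# for what a trained 3D-CNN would provide.
import numpy as np

rng = np.random.default_rng(0)
feature_maps = rng.random((8, 16, 16, 16))    # K maps from a 3D conv layer
grads = rng.normal(size=feature_maps.shape)   # dScore/dA_k, same shape

weights = grads.mean(axis=(1, 2, 3))          # alpha_k: global average pool
cam = np.tensordot(weights, feature_maps, axes=1)  # sum_k alpha_k * A_k
cam = np.maximum(cam, 0.0)                    # ReLU keeps positive evidence
cam /= cam.max() + 1e-8                       # normalise to [0, 1] for overlay
```

Upsampled to the input resolution and overlaid on the CT volume, the resulting map highlights the regions that most increased the fracture score, which is how the visualizations above localize fracture sites.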
Collapse
Affiliation(s)
- Hua Wang
- Department of Medical Imaging, Ningbo No.6 Hospital, Ningbo, People’s Republic of China
| | - Jichong Ying
- Department of Orthopedics, Ningbo No.6 Hospital, Ningbo, People’s Republic of China
| | - Jianlei Liu
- Department of Orthopedics, Ningbo No.6 Hospital, Ningbo, People’s Republic of China
| | - Tianming Yu
- Department of Orthopedics, Ningbo No.6 Hospital, Ningbo, People’s Republic of China
| | - Dichao Huang
- Department of Orthopedics, Ningbo No.6 Hospital, Ningbo, People’s Republic of China
| |
|
39
|
Martin CJ, Kortesniemi MK, Sutton DG, Applegate K, Vassileva J. A strategy for achieving optimisation of radiological protection in digital radiology proposed by ICRP. JOURNAL OF RADIOLOGICAL PROTECTION : OFFICIAL JOURNAL OF THE SOCIETY FOR RADIOLOGICAL PROTECTION 2024; 44:041511. [PMID: 39555658 DOI: 10.1088/1361-6498/ad60d1] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 09/05/2023] [Accepted: 07/09/2024] [Indexed: 11/19/2024]
Abstract
Radiology is now predominantly a digital medium and this has extended the flexibility, efficiency and application of medical imaging. Achieving the full benefit of digital radiology requires images to be of sufficient quality to make a reliable diagnosis for each patient, while minimising risks from radiation exposure, and so involves a careful balance between competing objectives. When an optimisation programme is undertaken, a knowledge of patient doses from surveys can be valuable in identifying areas needing attention. However, any dose reduction measures must not degrade image quality to the extent that it is inadequate for the clinical purpose. The move to digital imaging has enabled versatile image acquisition and presentation, including multi-modality display and quantitative assessment, with post-processing options that adjust for optimal viewing. This means that the appearance of an image is unlikely to give any indication when the dose is higher than necessary. Moreover, options to improve performance of imaging equipment add to its complexity, so operators require extensive training to be able to achieve this. Optimisation is a continuous rather than single stage process that requires regular monitoring, review, and analysis of performance feeding into improvement and development of imaging protocols. The ICRP is in the process of publishing two reports about optimisation in digital radiology. The first report sets out components needed to ensure that a radiology service can carry optimisation through. It describes how imaging professionals should work together as a team and explains the benefits of having appropriate methodologies to monitor performance, together with the knowledge and expertise required to use them effectively. It emphasises the need for development of organisational processes that ensure tasks are carried out. 
The second ICRP report deals with practical requirements for optimisation of different digital radiology modalities, and builds on information provided in earlier modality specific ICRP publications.
Affiliation(s)
- Colin J Martin
- Department of Clinical Physics and Bio-engineering, University of Glasgow, Glasgow, United Kingdom
| | | | - David G Sutton
- Medical Physics, University of Dundee, Dundee, United Kingdom
| | | | - Jenia Vassileva
- International Atomic Energy Agency, Vienna International Centre, 1400 Vienna, Austria
| |
|
40
|
Fussell DA, Tang CC, Sternhagen J, Marrey VV, Roman KM, Johnson J, Head MJ, Troutt HR, Li CH, Chang PD, Joseph J, Chow DS. Artificial Intelligence Efficacy as a Function of Trainee Interpreter Proficiency: Lessons from a Randomized Controlled Trial. AJNR Am J Neuroradiol 2024; 45:1647-1654. [PMID: 38906673 PMCID: PMC11543080 DOI: 10.3174/ajnr.a8387] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/29/2024] [Accepted: 06/13/2024] [Indexed: 06/23/2024]
Abstract
BACKGROUND AND PURPOSE Recently, artificial intelligence tools have been deployed with increasing speed in educational and clinical settings. However, the use of artificial intelligence by trainees across different levels of experience has not been well-studied. This study investigates the impact of artificial intelligence assistance on the diagnostic accuracy for intracranial hemorrhage and large-vessel occlusion by medical students and resident trainees. MATERIALS AND METHODS This prospective study was conducted between March 2023 and October 2023. Medical students and resident trainees were asked to identify intracranial hemorrhage and large-vessel occlusion in 100 noncontrast head CTs and 100 head CTAs, respectively. One group received diagnostic aid simulating artificial intelligence for intracranial hemorrhage only (n = 26); the other, for large-vessel occlusion only (n = 28). Primary outcomes included accuracy, sensitivity, and specificity for intracranial hemorrhage/large-vessel occlusion detection without and with aid. Study interpretation time was a secondary outcome. Individual responses were pooled and analyzed with the t test; differences in continuous variables were assessed with ANOVA. RESULTS Forty-eight participants completed the study, generating 10,779 intracranial hemorrhage or large-vessel occlusion interpretations. With diagnostic aid, medical student accuracy improved 11.0 points (P < .001) and resident trainee accuracy showed no significant change. Intracranial hemorrhage interpretation time increased with diagnostic aid for both groups (P < .001), while large-vessel occlusion interpretation time decreased for medical students (P < .001). Despite worse performance in the detection of the smallest-versus-largest hemorrhages at baseline, medical students were not more likely to accept a true-positive artificial intelligence result for these more difficult tasks. 
Both groups were considerably less accurate when disagreeing with the artificial intelligence or when supplied with an incorrect artificial intelligence result. CONCLUSIONS This study demonstrated greater improvement in diagnostic accuracy with artificial intelligence for medical students compared with resident trainees. However, medical students were less likely than resident trainees to overrule incorrect artificial intelligence interpretations and were less accurate, even with diagnostic aid, than the artificial intelligence was by itself.
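The pooled accuracy comparison described in the methods can be sketched as a paired t statistic over per-reader accuracies without and with AI aid; the numbers below are invented for illustration, not the trial's data:

```python
import numpy as np

# Hypothetical per-reader accuracies on the same case set,
# without and then with the simulated AI aid.
acc_without = np.array([0.58, 0.62, 0.55, 0.60, 0.57, 0.63])
acc_with    = np.array([0.70, 0.71, 0.66, 0.72, 0.69, 0.74])

# Paired t statistic: mean of the per-reader differences over its
# standard error (df = n - 1 = 5).
diff = acc_with - acc_without
t_stat = diff.mean() / (diff.std(ddof=1) / np.sqrt(len(diff)))
print("mean improvement:", round(float(diff.mean()), 3),
      "t =", round(float(t_stat), 2))
```

With real data one would pass the paired samples to a library routine (e.g. `scipy.stats.ttest_rel`) to obtain the p-value; the statistic itself is just this ratio.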
Affiliation(s)
- David A Fussell
- From the Department of Radiological Sciences (D.A.F., C.C.T., J.S., V.V.M., J.J., H.R.T., C.H.L., P.D.C., D.S.C.), University of California, Irvine, Irvine, California
| | - Cynthia C Tang
- From the Department of Radiological Sciences (D.A.F., C.C.T., J.S., V.V.M., J.J., H.R.T., C.H.L., P.D.C., D.S.C.), University of California, Irvine, Irvine, California
| | - Jake Sternhagen
- From the Department of Radiological Sciences (D.A.F., C.C.T., J.S., V.V.M., J.J., H.R.T., C.H.L., P.D.C., D.S.C.), University of California, Irvine, Irvine, California
| | - Varun V Marrey
- From the Department of Radiological Sciences (D.A.F., C.C.T., J.S., V.V.M., J.J., H.R.T., C.H.L., P.D.C., D.S.C.), University of California, Irvine, Irvine, California
| | - Kelsey M Roman
- School of Medicine (K.M.R., M.J.H.), University of California, Irvine, Irvine, California
| | - Jeremy Johnson
- From the Department of Radiological Sciences (D.A.F., C.C.T., J.S., V.V.M., J.J., H.R.T., C.H.L., P.D.C., D.S.C.), University of California, Irvine, Irvine, California
| | - Michael J Head
- School of Medicine (K.M.R., M.J.H.), University of California, Irvine, Irvine, California
| | - Hayden R Troutt
- From the Department of Radiological Sciences (D.A.F., C.C.T., J.S., V.V.M., J.J., H.R.T., C.H.L., P.D.C., D.S.C.), University of California, Irvine, Irvine, California
| | - Charles H Li
- From the Department of Radiological Sciences (D.A.F., C.C.T., J.S., V.V.M., J.J., H.R.T., C.H.L., P.D.C., D.S.C.), University of California, Irvine, Irvine, California
| | - Peter D Chang
- From the Department of Radiological Sciences (D.A.F., C.C.T., J.S., V.V.M., J.J., H.R.T., C.H.L., P.D.C., D.S.C.), University of California, Irvine, Irvine, California
| | - John Joseph
- Paul Merage School of Business (J.J.), University of California, Irvine, Irvine, California
| | - Daniel S Chow
- From the Department of Radiological Sciences (D.A.F., C.C.T., J.S., V.V.M., J.J., H.R.T., C.H.L., P.D.C., D.S.C.), University of California, Irvine, Irvine, California
| |
|
41
|
Finkelstein M, Ludwig K, Kamath A, Halton KP, Mendelson DS. The Impact of an Artificial Intelligence Certificate Program on Radiology Resident Education. Acad Radiol 2024; 31:4709-4714. [PMID: 38906781 DOI: 10.1016/j.acra.2024.05.041] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/30/2023] [Revised: 04/26/2024] [Accepted: 05/07/2024] [Indexed: 06/23/2024]
Abstract
RATIONALE AND OBJECTIVES The objective of this study was to evaluate the effectiveness of a pilot artificial intelligence (AI) certificate program in aiding radiology trainees to develop an understanding of the evolving role and application of artificial intelligence in radiology. A secondary objective was to determine the background of residents that would most benefit from such training. MATERIALS AND METHODS This was a prospective pilot study involving 42 radiology residents at two separate residency programs who participated in the Radiological Society of North America Imaging AI Foundational Certificate course over a four-month period. The course consisted of 6 online modules that contained didactic lectures followed by end-of-module quizzes to assess knowledge gained from these lectures. Pre- and post-course assessments were conducted to evaluate the residents' knowledge and skills in AI. Additionally, a post-course survey was performed to assess participants' overall satisfaction with the course. RESULTS All participating residents completed the certificate program. The mean pre-course assessment score was 37 %, which increased to 73 % after completing the modules (p < 0.001). 74 % (31/42) endorsed the belief that the course improved their familiarity with artificial intelligence in radiology. Residency program, residency year, and reported prior familiarity with AI were not found to influence pre-course score, post-course score, or score improvement. 57 % (24/42) endorsed interest in pursuing further certification in AI. CONCLUSION Our pilot study suggests that a certificate course can effectively enhance the knowledge and skills of radiology residents in the application of AI in radiology. These benefits were observed regardless of program, residency year, and residents' self-reported prior understanding of AI in radiology.
Affiliation(s)
- Mark Finkelstein
- Icahn School of Medicine at Mount Sinai, New York, NY (M.F., A.K., K.P.H., D.S.M.); New York University Langone Medical Center (M.F.).
| | | | - Amita Kamath
- Icahn School of Medicine at Mount Sinai, New York, NY (M.F., A.K., K.P.H., D.S.M.)
| | - Kathleen P Halton
- Icahn School of Medicine at Mount Sinai, New York, NY (M.F., A.K., K.P.H., D.S.M.)
| | - David S Mendelson
- Icahn School of Medicine at Mount Sinai, New York, NY (M.F., A.K., K.P.H., D.S.M.)
| |
|
42
|
Takahashi M, Goto A, Hisaeda K, Inoue Y, Inaba T. Deep-learning classification of teat-end conditions in Holstein cattle. Res Vet Sci 2024; 180:105434. [PMID: 39401476 DOI: 10.1016/j.rvsc.2024.105434] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/23/2024] [Revised: 10/06/2024] [Accepted: 10/08/2024] [Indexed: 11/17/2024]
Abstract
Deep learning for classifying teat-end conditions in dairy cows, a potential aid in mastitis prevention, has not yet been optimized. Using 1426 digital images of dairy cow udders, the extent of teat-end hyperkeratosis was assessed on a four-point scale. Several deep-learning networks based on the transfer learning approach were used to evaluate the conditions of the teat ends displayed in the digital images. The images of the teat ends were partitioned into training (70 %) and validation (15 %) datasets; each network was then evaluated on the remaining test dataset (15 %). The results demonstrated that eight different ImageNet models consistently achieved high accuracy (80.3-86.6 %). The areas under the receiver operating characteristic curves for the normal, smooth, rough, and very rough classification scores in the test data set ranged from 0.825 to 0.999. Thus, further improving the accuracy of image-based classification of teat tissue conditions in dairy cattle using deep learning will require more training images. This method could help farmers reduce the risks of intramammary infections, decrease the use of antimicrobials, and better manage costs associated with mastitis detection and treatment.
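The 70/15/15 partition described above can be sketched with a reproducible shuffle; the index list stands in for the 1426 images, and the seed is an arbitrary choice:

```python
import random

def split_indices(n, train=0.70, val=0.15, seed=42):
    """Shuffle indices 0..n-1 and cut into train/validation/test."""
    idx = list(range(n))
    random.Random(seed).shuffle(idx)          # reproducible shuffle
    n_train = int(n * train)
    n_val = int(n * val)
    return (idx[:n_train],
            idx[n_train:n_train + n_val],
            idx[n_train + n_val:])            # remainder becomes the test set

train_idx, val_idx, test_idx = split_indices(1426)
print(len(train_idx), len(val_idx), len(test_idx))
```

In practice the split would usually be stratified by the four-point hyperkeratosis score so each class appears in all three subsets; the sketch above omits that for brevity.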
Affiliation(s)
- Miho Takahashi
- Department of Veterinary Medicine, Faculty of Veterinary Medicine, Okayama University of Science, Ehime 794-0085, Japan
| | - Akira Goto
- Department of Veterinary Medicine, Faculty of Veterinary Medicine, Okayama University of Science, Ehime 794-0085, Japan
| | - Keiichi Hisaeda
- Department of Veterinary Medicine, Faculty of Veterinary Medicine, Okayama University of Science, Ehime 794-0085, Japan
| | - Yoichi Inoue
- Department of Veterinary Medicine, Faculty of Veterinary Medicine, Okayama University of Science, Ehime 794-0085, Japan
| | - Toshio Inaba
- Department of Veterinary Medicine, Faculty of Veterinary Medicine, Okayama University of Science, Ehime 794-0085, Japan.
| |
|
43
|
Lindner C. Contributing to the prediction of prognosis for treated hepatocellular carcinoma: Imaging aspects that sculpt the future. World J Gastrointest Surg 2024; 16:3377-3380. [PMID: 39575286 PMCID: PMC11577411 DOI: 10.4240/wjgs.v16.i10.3377] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 07/27/2024] [Revised: 08/19/2024] [Accepted: 08/28/2024] [Indexed: 09/27/2024] Open
Abstract
A novel nomogram model to predict the prognosis of hepatocellular carcinoma (HCC) treated with radiofrequency ablation and transarterial chemoembolization was recently published in the World Journal of Gastrointestinal Surgery. This model includes clinical and laboratory factors, but emerging imaging aspects, particularly from magnetic resonance imaging (MRI) and radiomics, could enhance the predictive accuracy thereof. Multiparametric MRI and deep learning radiomics models significantly improve prognostic predictions for the treatment of HCC. Incorporating advanced imaging features, such as peritumoral hypointensity and radiomics scores, alongside clinical factors, can refine prognostic models, aiding in personalized treatment and better predicting outcomes. This letter underscores the importance of integrating novel imaging techniques into prognostic tools to better manage and treat HCC.
Affiliation(s)
- Cristian Lindner
- Department of Radiology, Faculty of Medicine, University of Concepcion, Concepcion 4030000, Biobío, Chile
- Department of Radiology, Hospital Regional Guillermo Grant Benavente, Concepcion 4030000, Biobío, Chile
| |
|
44
|
Chen PT, Yeh CY, Chang YC, Chen P, Lee CW, Shieh CC, Lin CY, Liu KL. Application of deep learning reconstruction in abdominal magnetic resonance cholangiopancreatography for image quality improvement and acquisition time reduction. J Formos Med Assoc 2024:S0929-6646(24)00493-5. [PMID: 39455401 DOI: 10.1016/j.jfma.2024.10.017] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/30/2024] [Revised: 08/25/2024] [Accepted: 10/16/2024] [Indexed: 10/28/2024] Open
Abstract
PURPOSE To compare deep learning (DL)-based and conventional reconstruction through subjective and objective analysis and ascertain whether DL-based reconstruction improves the quality and acquisition speed of clinical abdominal magnetic resonance imaging (MRI). METHODS The 124 patients who underwent abdominal MRI between January and July 2021 were retrospectively studied. For each patient, two-dimensional axial T2-weighted single-shot fast spin-echo MRI images with or without fat saturation were reconstructed using DL-based and conventional methods. The subjective image quality scores and objective metrics, including signal-to-noise ratios (SNRs) and contrast-to-noise ratios (CNRs), of the images were analysed. An exploratory analysis was performed to compare 20 patients' MRI images with site routine settings, high-resolution settings and high-speed settings. Paired t tests and Wilcoxon signed-rank tests were used for subjective and objective comparisons. RESULTS A total of 144 patients were evaluated (mean age, 62.2 ± 14.1 years; 83 men). The MRI images reconstructed using DL-based methods had higher SNRs and CNRs than did those reconstructed using conventional methods (all p < 0.01). The subjective scores of the images reconstructed using DL-based methods were higher than those of the images reconstructed using conventional methods (p < 0.01), with significantly lower variation (p < 0.01). Exploratory analysis revealed that the DL-based reconstructions with thin slice thickness and higher temporal resolution had the highest image quality and were associated with the shortest scan times. CONCLUSIONS DL-based reconstruction methods can improve the quality and stability of abdominal MRI while accelerating acquisition.
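The SNR and CNR figures used to compare reconstructions are commonly computed from region-of-interest (ROI) statistics. Exact definitions and ROI placement vary between studies, so the NumPy sketch below is one plausible reading, on synthetic image data:

```python
import numpy as np

rng = np.random.default_rng(0)
# Synthetic 2D image: top half is tissue A, bottom half is brighter tissue B.
image = rng.normal(loc=100.0, scale=5.0, size=(64, 64))
image[32:, :] += 40.0

roi_a = image[:32, :]             # e.g. a liver ROI
roi_b = image[32:, :]             # e.g. a spleen ROI
noise_sd = roi_a.std(ddof=1)      # noise estimated from ROI A's variation

# One common convention: SNR = mean signal / noise SD,
# CNR = absolute mean difference between two tissues / noise SD.
snr = roi_a.mean() / noise_sd
cnr = abs(roi_b.mean() - roi_a.mean()) / noise_sd
print(round(float(snr), 1), round(float(cnr), 1))
```

Other papers estimate noise from a background (air) ROI or from a difference of repeated acquisitions; the arithmetic is the same once the noise estimate is chosen.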
Affiliation(s)
- Po-Ting Chen
- Department of Medical Imaging, National Taiwan University Hospital and National Taiwan University College of Medicine, Taipei, Taiwan; Department of Medical Imaging, National Taiwan University Cancer Center and National Taiwan University College of Medicine, Taipei, Taiwan; Department of Medical Imaging, National Taiwan University Hospital Hsinchu Branch, Hsinchu, Taiwan
| | - Chen-Ya Yeh
- Department of Medical Imaging, National Taiwan University Cancer Center and National Taiwan University College of Medicine, Taipei, Taiwan
| | - Yu-Chien Chang
- Department of Medical Imaging, National Taiwan University Hospital and National Taiwan University College of Medicine, Taipei, Taiwan; Department of Medical Imaging, National Taiwan University Cancer Center and National Taiwan University College of Medicine, Taipei, Taiwan; Department of Medical Imaging, National Taiwan University Hospital Hsinchu Branch, Hsinchu, Taiwan
| | - Pohua Chen
- Internal Medicine, Chicago Medical School Internal Medicine Residency Program at Northwestern McHenry Hospital, McHenry, USA
| | | | | | | | - Kao-Lang Liu
- Department of Medical Imaging, National Taiwan University Hospital and National Taiwan University College of Medicine, Taipei, Taiwan; Department of Medical Imaging, National Taiwan University Cancer Center and National Taiwan University College of Medicine, Taipei, Taiwan.
| |
|
45
|
Kuang Q, Feng B, Xu K, Chen Y, Chen X, Duan X, Lei X, Chen X, Li K, Long W. Multimodal deep learning radiomics model for predicting postoperative progression in solid stage I non-small cell lung cancer. Cancer Imaging 2024; 24:140. [PMID: 39420411 PMCID: PMC11487701 DOI: 10.1186/s40644-024-00783-8] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/15/2024] [Accepted: 09/30/2024] [Indexed: 10/19/2024] Open
Abstract
PURPOSE To explore the application value of a multimodal deep learning radiomics (MDLR) model in predicting the risk status of postoperative progression in solid stage I non-small cell lung cancer (NSCLC). MATERIALS AND METHODS A total of 459 patients with histologically confirmed solid stage I NSCLC who underwent surgical resection in our institution from January 2014 to September 2019 were reviewed retrospectively. At another medical center, 104 patients were reviewed as an external validation cohort according to the same criteria. A univariate analysis was conducted on the clinicopathological characteristics and subjective CT findings of the progression and non-progression groups. The clinicopathological characteristics and subjective CT findings that exhibited significant differences were used as input variables for the extreme learning machine (ELM) classifier to construct the clinical model. We used the transfer learning strategy to train the ResNet18 model, used the model to extract deep learning features from all CT images, and then used the ELM classifier to classify the deep learning features to obtain the deep learning signature (DLS). A MDLR model incorporating clinicopathological characteristics, subjective CT findings and DLS was constructed. The diagnostic efficiencies of the clinical model, DLS model and MDLR model were evaluated by the area under the curve (AUC). RESULTS Univariate analysis indicated that size (p = 0.004), neuron-specific enolase (NSE) (p = 0.03), carbohydrate antigen 19-9 (CA199) (p = 0.003), and pathological stage (p = 0.027) were significantly associated with the progression of solid stage I NSCLC after surgery. Therefore, these clinical characteristics were incorporated into the clinical model to predict the risk of postoperative progression in solid stage I NSCLC patients. A total of 294 deep learning features with nonzero coefficients were selected.
The DLS in the progressive group (0.721 ± 0.371) was higher than that in the nonprogressive group (0.113 ± 0.350) (p < 0.001). The combination of size, NSE, CA199, pathological stage, and DLS demonstrated superior performance in differentiating postoperative progression status. The AUC of the MDLR model was 0.885 (95% confidence interval [CI]: 0.842-0.927), higher than that of the clinical model (0.675 (95% CI: 0.599-0.752)) and the DLS model (0.882 (95% CI: 0.835-0.929)). The DeLong test and decision curve analysis revealed that the MDLR model was the most predictive and clinically useful model. CONCLUSION The MDLR model is effective in predicting the risk of postoperative progression of solid stage I NSCLC, and it is helpful for the treatment and follow-up of solid stage I NSCLC patients.
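The extreme learning machine (ELM) classifier the authors apply to the deep features can be sketched compactly: a fixed random hidden layer, with only the output weights solved by least squares. Everything below is synthetic and illustrative; the hidden-layer size is an arbitrary choice, and 294 merely echoes the number of selected features:

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic "deep features": two classes separable along a random direction.
n, d, hidden = 200, 294, 256
X = rng.normal(size=(n, d))
w_true = rng.normal(size=d)
y = (X @ w_true > 0).astype(float)   # 0 = non-progression, 1 = progression

# ELM: the input-to-hidden weights are random and never trained.
W = rng.normal(size=(d, hidden)) / np.sqrt(d)   # scaled to avoid tanh saturation
b = rng.normal(size=hidden)
H = np.tanh(X @ W + b)                          # hidden-layer activations

# Only the readout is fitted, by ordinary least squares.
beta = np.linalg.lstsq(H, y, rcond=None)[0]

pred = (H @ beta > 0.5).astype(float)
acc = float((pred == y).mean())
print("training accuracy:", acc)
```

Because training reduces to a single least-squares solve, ELMs are fast to fit; the trade-off is that the random hidden features are not adapted to the task, so generalization should be checked on held-out data, as the authors do with their external cohort.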
Affiliation(s)
- Qionglian Kuang
- Department of Radiology, Hainan General Hospital, 19#, Xiuhua Road, Xiuying District, Haikou, Hainan Province, 570311, PR China
| | - Bao Feng
- Laboratory of Artificial Intelligence of Biomedicine, Guilin University of Aerospace Technology, Guilin City, Guangxi Province, 541004, China
| | - Kuncai Xu
- Laboratory of Artificial Intelligence of Biomedicine, Guilin University of Aerospace Technology, Guilin City, Guangxi Province, 541004, China
| | - Yehang Chen
- Laboratory of Artificial Intelligence of Biomedicine, Guilin University of Aerospace Technology, Guilin City, Guangxi Province, 541004, China
| | - Xiaojuan Chen
- Department of Radiology, Jiangmen Central Hospital, 23#, North Road, Pengjiang Zone, Jiangmen, Guangdong Province, 529030, PR China
| | - Xiaobei Duan
- Department of Nuclear Medicine, Jiangmen Central Hospital, Jiangmen, Guangdong Province, 529030, PR China
| | - Xiaoyan Lei
- Department of Radiology, Hainan General Hospital, 19#, Xiuhua Road, Xiuying District, Haikou, Hainan Province, 570311, PR China
| | - Xiangmeng Chen
- Department of Radiology, Jiangmen Central Hospital, 23#, North Road, Pengjiang Zone, Jiangmen, Guangdong Province, 529030, PR China.
| | - Kunwei Li
- Department of Radiology, The Fifth Affiliated Hospital of Sun Yat-sen University, Zhuhai, Guangdong Province, 519000, PR China.
| | - Wansheng Long
- Department of Radiology, Jiangmen Central Hospital, 23#, North Road, Pengjiang Zone, Jiangmen, Guangdong Province, 529030, PR China.
| |
|
46
|
Yao J, Wei L, Hao P, Liu Z, Wang P. Application of artificial intelligence model in pathological staging and prognosis of clear cell renal cell carcinoma. Discov Oncol 2024; 15:545. [PMID: 39390246 PMCID: PMC11467134 DOI: 10.1007/s12672-024-01437-8] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 08/20/2024] [Accepted: 10/07/2024] [Indexed: 10/12/2024] Open
Abstract
This study aims to develop a deep learning (DL) model based on whole-slide images (WSIs) to predict the pathological stage of clear cell renal cell carcinoma (ccRCC). The histopathological images of 513 ccRCC patients were downloaded from The Cancer Genome Atlas (TCGA) database and randomly divided into a training set and a validation set in an 8:2 ratio. The CLAM algorithm was used to establish the DL model, and the stability of the model was evaluated in the external validation set. DL features were extracted from the model to construct a prognostic risk model, which was validated in an external dataset. The DL model showed excellent predictive ability, with an area under the curve (AUC) of 0.875 and an average accuracy score of 0.809, indicating that the model could reliably distinguish ccRCC patients at different stages from histopathological images. In addition, the prognostic risk model constructed from DL features showed that the overall survival rate of patients in the high-risk group was significantly lower than that in the low-risk group (P = 0.003), and AUC values for predicting 1-, 3- and 5-year overall survival rates were 0.68, 0.69 and 0.69, respectively, indicating that the prediction model had high sensitivity and specificity. The results in the validation set were consistent with these findings. Therefore, the DL model can accurately predict the pathological stage and prognosis of ccRCC patients and provide a useful reference for clinical diagnosis.
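The high-risk versus low-risk grouping in prognostic models of this kind is typically a median split on the model-derived risk score. A sketch with synthetic scores and outcomes (not the study's data, where survival times and censoring would also enter):

```python
import numpy as np

rng = np.random.default_rng(7)
risk = rng.uniform(size=400)          # hypothetical DL-derived risk scores
high = risk > np.median(risk)         # median split: high- vs low-risk group

# Let mortality loosely track the score, purely for illustration.
dead = rng.uniform(size=400) < (0.1 + 0.7 * risk)
surv_high = 1.0 - dead[high].mean()
surv_low = 1.0 - dead[~high].mean()
print(round(float(surv_high), 2), round(float(surv_low), 2))
```

With real follow-up data the two groups would be compared with Kaplan-Meier curves and a log-rank test (the source of the P = 0.003 reported above) rather than raw death fractions.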
Affiliation(s)
- Jing Yao
- Department of Radiology, Tongji Hospital of Tongji University, Shanghai, 200065, China
- Institute of Medical Imaging Artificial Intelligence, Tongji University School of Medicine, Shanghai, 200065, China
| | - Lai Wei
- Department of Radiology, Tongji Hospital of Tongji University, Shanghai, 200065, China
- Institute of Medical Imaging Artificial Intelligence, Tongji University School of Medicine, Shanghai, 200065, China
| | - Peipei Hao
- Department of Radiology, Tongji Hospital of Tongji University, Shanghai, 200065, China
- Institute of Medical Imaging Artificial Intelligence, Tongji University School of Medicine, Shanghai, 200065, China
| | - Zhongliu Liu
- Department of Radiology, Tongji Hospital of Tongji University, Shanghai, 200065, China
- Institute of Medical Imaging Artificial Intelligence, Tongji University School of Medicine, Shanghai, 200065, China
| | - Peijun Wang
- Department of Radiology, Tongji Hospital of Tongji University, Shanghai, 200065, China.
- Institute of Medical Imaging Artificial Intelligence, Tongji University School of Medicine, Shanghai, 200065, China.
| |
|
47
|
Muhaimil A, Pendem S, Sampathilla N, P S P, Nayak K, Chadaga K, Goswami A, M OC, Shirlal A. Role of Artificial intelligence model in prediction of low back pain using T2 weighted MRI of Lumbar spine. F1000Res 2024; 13:1035. [PMID: 39483709 PMCID: PMC11525099 DOI: 10.12688/f1000research.154680.2] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Figures] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Accepted: 10/08/2024] [Indexed: 11/03/2024] Open
Abstract
Background Low back pain (LBP) is the most common musculoskeletal disorder globally and the primary cause of disability. Magnetic resonance imaging (MRI) studies are inconclusive and less sensitive for identifying and classifying patients with LBP. Hence, this study aimed to investigate the role of artificial intelligence (AI) models in the prediction of LBP using T2 weighted MRI images of the lumbar spine. Methods This was a prospective case-control study. A total of 200 MRI patients (100 cases and 100 controls) referred for lumbar spine and whole spine screening were included. The scans were performed using a 3.0 Tesla MRI scanner (United Imaging Healthcare). T2 weighted images of the lumbar spine were segmented to extract radiomic features. Machine learning (ML) models, such as random forest, decision tree, logistic regression, K-nearest neighbors, and AdaBoost, and deep learning (DL) methods, such as ResNet and GoogleNet, were used, and performance measures were calculated. Results Our study showed that random forest and AdaBoost are the most reliable ML models for predicting LBP. Random forest showed high performance, with area under the curve (AUC) values from 0.83 to 0.88 across all lumbar vertebrae and the L2-L3, L3-L4, and L4-L5 intervertebral discs (IVDs), and the highest AUC (0.92) at the L5-S1 IVD. AdaBoost demonstrated high performance at the L2-L5 vertebrae with AUC values of 0.82 to 0.90, with the highest AUC (0.97) at the L5-S1 IVD. Among the DL models, GoogleNet outperformed the other models at 30 epochs with an accuracy of 0.85, followed by ResNet 18 (30 epochs) with an accuracy of 0.84. Conclusion The study demonstrated that ML and DL models can effectively predict LBP from T2 weighted MRI images of the lumbar spine. ML and DL models could also enhance the diagnostic accuracy of LBP, potentially leading to better patient management and outcomes.
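One of the classical ML models listed, K-nearest neighbors, is compact enough to sketch from scratch. The two synthetic feature clusters below stand in for radiomic features of cases and controls; in the study the features came from segmented T2 weighted lumbar images:

```python
import numpy as np

def knn_predict(X_train, y_train, X_test, k=3):
    """Classify each test row by majority vote of its k nearest neighbours."""
    preds = []
    for x in X_test:
        dist = np.linalg.norm(X_train - x, axis=1)   # Euclidean distances
        nearest = y_train[np.argsort(dist)[:k]]      # labels of k closest
        preds.append(int(nearest.mean() + 0.5))      # majority vote (k odd, binary)
    return np.array(preds)

rng = np.random.default_rng(3)
# Two well-separated clusters standing in for LBP cases vs controls.
X_train = np.vstack([rng.normal(0, 1, (30, 5)), rng.normal(3, 1, (30, 5))])
y_train = np.array([0] * 30 + [1] * 30)
X_test = np.vstack([rng.normal(0, 1, (10, 5)), rng.normal(3, 1, (10, 5))])
y_test = np.array([0] * 10 + [1] * 10)

acc = float((knn_predict(X_train, y_train, X_test) == y_test).mean())
print("test accuracy:", acc)
```

Real radiomic features would first be standardized, since Euclidean distance is scale-sensitive; the sketch skips that because both synthetic dimensions share the same scale.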
Affiliation(s)
- Ali Muhaimil
- Department of Medical Imaging Technology, Manipal College of Health Professions, Manipal Academy of Higher Education, Karnataka, Manipal, 576104, India
| | - Saikiran Pendem
- Department of Medical Imaging Technology, Manipal College of Health Professions, Manipal Academy of Higher Education, Karnataka, Manipal, 576104, India
| | - Niranjana Sampathilla
- Department of Biomedical Engineering, Manipal Institute of Technology, Manipal Academy of Higher Education, Manipal, 576104, India
| | - Priya P S
- Department of Radio Diagnosis and Imaging, Kasturba Medical College, Manipal Academy of Higher Education, Karnataka, Manipal, 576104, India
| | - Kaushik Nayak
- Department of Medical Imaging Technology, Manipal College of Health Professions, Manipal Academy of Higher Education, Karnataka, Manipal, 576104, India
| | - Krishnaraj Chadaga
- Department of Computer Science and Engineering, Manipal Institute of Technology, Manipal Academy of Higher Education, Manipal, 576104, India
| | - Anushree Goswami
- Department of Biomedical Engineering, Manipal Institute of Technology, Manipal Academy of Higher Education, Manipal, 576104, India
| | - Obhuli Chandran M
- Department of Medical Imaging Technology, Manipal College of Health Professions, Manipal Academy of Higher Education, Karnataka, Manipal, 576104, India
| | - Abhijit Shirlal
- Department of Medical Imaging Technology, Manipal College of Health Professions, Manipal Academy of Higher Education, Karnataka, Manipal, 576104, India
| |
|
48
|
Archer H, Xia S, Salzlechner C, Götz C, Chhabra A. Artificial Intelligence in Musculoskeletal Radiographs: Scoliosis, Hip, Limb Length, and Lower Extremity Alignment Measurements. Semin Roentgenol 2024; 59:510-517. [PMID: 39490043 DOI: 10.1053/j.ro.2024.06.001] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/27/2024] [Revised: 05/01/2024] [Accepted: 06/03/2024] [Indexed: 11/05/2024]
Affiliation(s)
- Holden Archer
- UT Southwestern Medical Center, Department of Orthopaedic Surgery, 5323 Harry Hines Blvd, Dallas, TX 75390
| | - Shuda Xia
- UT Southwestern Medical Center, Department of Radiology, 5323 Harry Hines Blvd, Dallas, TX 75390
| | | | - Christoph Götz
- ImageBiopsy Lab, Inc., Zehetnergasse 6/2/2, 1140 Vienna, Austria
| | - Avneesh Chhabra
- UT Southwestern Medical Center, Department of Orthopaedic Surgery, 5323 Harry Hines Blvd, Dallas, TX 75390; UT Southwestern Medical Center, Department of Radiology, 5323 Harry Hines Blvd, Dallas, TX 75390; Adjunct Faculty Johns Hopkins University, Department of Radiology, Maryland, USA; Department of Radiology, Walton Center of Neurosciences, Liverpool, UK.
| |
Collapse
|
49
|
Shen Y, Zhu C, Chu B, Song J, Geng Y, Li J, Liu B, Wu X. Evaluation of the clinical application value of artificial intelligence in diagnosing head and neck aneurysms. BMC Med Imaging 2024; 24:261. [PMID: 39354383 PMCID: PMC11446065 DOI: 10.1186/s12880-024-01436-9] [Received: 09/21/2023] [Accepted: 09/18/2024] [Indexed: 10/03/2024]
Abstract
OBJECTIVE To evaluate the performance of a semi-automated artificial intelligence (AI) software program (CerebralDoc® system) in aneurysm detection and morphological measurement. METHODS In this study, 354 computed tomographic angiography (CTA) cases were retrospectively collected at our hospital. Among them, 280 cases were diagnosed with aneurysms, either by both digital subtraction angiography (DSA) and CTA (DSA group, n = 102) or by CTA only (non-DSA group, n = 178). The presence or absence of aneurysms, together with their location and related morphological features as determined by AI, was evaluated against DSA and radiologist findings. In addition, post-processing image quality from the AI system and from radiologists was rated and compared. RESULTS In the DSA group, with DSA results as the gold standard, AI achieved a sensitivity of 88.24% and an accuracy of 81.97%, whereas radiologists achieved a sensitivity of 95.10% and an accuracy of 84.43%. In the non-DSA group, AI achieved 81.46% sensitivity and 76.29% accuracy against the radiologists' findings. Position consistency was better under loose criteria than under strict criteria. For morphological characteristics, both the DSA and non-DSA groups agreed well with the diagnostic results for neck width and maximum diameter, with excellent ICC reliability exceeding 0.80. The AI-generated images were of higher quality than those from the standard post-processing software and required significantly less processing time. CONCLUSIONS The AI-based aneurysm detection rate is commendable, and the extracted morphological parameters are highly consistent with radiologist assessments, showing significant potential for clinical application.
Collapse
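The sensitivity and accuracy figures quoted in this abstract follow the standard confusion-matrix definitions. A minimal sketch, using hypothetical counts: the abstract reports only percentages, and the 90-of-102 split below is chosen solely because it reproduces the stated DSA-group AI sensitivity of 88.24%, not because it appears in the study.

```python
# Confusion-matrix metrics as used in the abstract (DSA results as gold standard).
def sensitivity(tp, fn):
    """Fraction of gold-standard-positive cases that were detected."""
    return tp / (tp + fn)

def accuracy(tp, tn, fp, fn):
    """Fraction of all cases classified correctly."""
    return (tp + tn) / (tp + tn + fp + fn)

# Hypothetical counts: 90 detected out of 102 DSA-confirmed aneurysm cases
# reproduces the reported AI sensitivity.
print(f"{sensitivity(90, 12):.2%}")  # → 88.24%
```

Accuracy additionally depends on the negative cases (tn, fp), which is why the reported accuracies differ from the sensitivities computed over positives alone.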
Affiliation(s)
- Yi Shen
- Department of Radiology, The First Affiliated Hospital of Anhui Medical University, Hefei, Anhui Province, 230022, China
| | - Chao Zhu
- Department of Radiology, The First Affiliated Hospital of Anhui Medical University, Hefei, Anhui Province, 230022, China
- Department of Radiology, The First Affiliated Hospital of Wannan Medical College, Wuhu, Anhui Province, 241000, China
| | - Bingqian Chu
- Department of Radiology, The First Affiliated Hospital of Anhui Medical University, Hefei, Anhui Province, 230022, China
| | - Jian Song
- Department of Radiology, The First Affiliated Hospital of Anhui Medical University, Hefei, Anhui Province, 230022, China
| | - Yayuan Geng
- Shukun (Beijing) Network Technology Co, Ltd, Jinhui Building, Qiyang Road, Beijing, 100102, China
| | - Jianying Li
- CT Research Center, GE Healthcare China, Shanghai, 210000, China
| | - Bin Liu
- Department of Radiology, The First Affiliated Hospital of Anhui Medical University, Hefei, Anhui Province, 230022, China.
| | - Xingwang Wu
- Department of Radiology, The First Affiliated Hospital of Anhui Medical University, Hefei, Anhui Province, 230022, China.
| |
Collapse
|
50
|
Reis EP, Blankemeier L, Zambrano Chaves JM, Jensen MEK, Yao S, Truyts CAM, Willis MH, Adams S, Amaro E, Boutin RD, Chaudhari AS. Automated abdominal CT contrast phase detection using an interpretable and open-source artificial intelligence algorithm. Eur Radiol 2024; 34:6680-6687. [PMID: 38683384 PMCID: PMC11456344 DOI: 10.1007/s00330-024-10769-6] [Received: 10/24/2023] [Revised: 03/11/2024] [Accepted: 03/20/2024] [Indexed: 05/01/2024]
Abstract
OBJECTIVES To develop and validate an open-source artificial intelligence (AI) algorithm to accurately detect contrast phases in abdominal CT scans. MATERIALS AND METHODS In this retrospective study, an AI algorithm was trained on 739 abdominal CT exams from 2016 to 2021, from 200 unique patients, covering 1545 axial series. We segmented five key anatomic structures (aorta, portal vein, inferior vena cava, renal parenchyma, and renal pelvis) using TotalSegmentator, a deep learning-based tool for multi-organ segmentation, together with a rule-based approach to extract the renal pelvis. Radiomics features were extracted from the segmented structures and fed to a gradient-boosting classifier to identify four contrast phases: non-contrast, arterial, venous, and delayed. Internal and external validation were performed using the F1 score and other classification metrics; external validation used the "VinDr-Multiphase CT" dataset. RESULTS The training dataset consisted of 172 patients (mean age, 70 years ± 8; 22% women), and the internal test set included 28 patients (mean age, 68 years ± 8; 14% women). In internal validation, the classifier achieved an accuracy of 92.3%, with an average F1 score of 90.7%. In external validation, the algorithm maintained an accuracy of 90.1%, with an average F1 score of 82.6%. Shapley feature attribution analysis indicated that renal and vascular radiodensity values were the most important features for phase classification. CONCLUSION An open-source and interpretable AI algorithm accurately detects contrast phases in abdominal CT scans, with high accuracy and F1 scores in both internal and external validation, confirming its generalization capability. CLINICAL RELEVANCE STATEMENT Contrast phase detection in abdominal CT scans is a critical step for downstream AI applications, for deploying algorithms in the clinical setting, and for quantifying imaging biomarkers, ultimately allowing for better diagnostics and increased access to diagnostic imaging.
KEY POINTS Digital Imaging and Communications in Medicine labels are inaccurate for determining the abdominal CT scan phase. AI can accurately discriminate the contrast phase. Accurate contrast phase determination aids downstream AI applications and biomarker quantification.
Collapse
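The pipeline described in this abstract reduces each series to organ-level radiodensity features before classification. As a toy illustration only: the sketch below replaces the study's TotalSegmentator segmentation, radiomics extraction, and gradient-boosting classifier with a nearest-centroid rule over per-structure mean attenuation, and every HU value in it is invented for illustration.

```python
# Toy stand-in for the abstract's phase classifier: one mean-attenuation (HU)
# value per segmented structure, assigned to the phase with the nearest
# centroid. The real pipeline uses TotalSegmentator masks, radiomics features,
# and gradient boosting; the centroids below are hypothetical values.

# Assumed feature order: aorta, portal vein, IVC, renal parenchyma, renal pelvis.
CENTROIDS = {
    "non-contrast": (40, 40, 40, 35, 10),
    "arterial":     (300, 80, 60, 120, 15),
    "venous":       (140, 160, 120, 180, 30),
    "delayed":      (90, 100, 90, 140, 250),
}

def classify_phase(features):
    """Return the phase whose centroid is nearest in squared Euclidean distance."""
    def sq_dist(centroid):
        return sum((f - c) ** 2 for f, c in zip(features, centroid))
    return min(CENTROIDS, key=lambda phase: sq_dist(CENTROIDS[phase]))

print(classify_phase((310, 85, 55, 125, 20)))  # high aortic enhancement → arterial
```

The centroid ordering mirrors the Shapley finding quoted above: vascular and renal radiodensities are the discriminative signal, with the renal pelvis enhancing last (delayed phase).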
Affiliation(s)
- Eduardo Pontes Reis
- Department of Radiology, Stanford University, Stanford, CA, USA.
- Center for Artificial Intelligence in Medicine & Imaging (AIMI), Stanford University, Stanford, CA, USA.
- Hospital Israelita Albert Einstein, Sao Paulo, Brazil.
| | - Louis Blankemeier
- Department of Electrical Engineering, Stanford University, Stanford, CA, USA
| | - Juan Manuel Zambrano Chaves
- Department of Radiology, Stanford University, Stanford, CA, USA
- Department of Biomedical Data Science, Stanford University, Stanford, CA, USA
| | | | - Sally Yao
- Department of Radiology, Stanford University, Stanford, CA, USA
| | | | - Marc H Willis
- Department of Radiology, Stanford University, Stanford, CA, USA
| | - Scott Adams
- Department of Radiology, Stanford University, Stanford, CA, USA
| | - Edson Amaro
- Hospital Israelita Albert Einstein, Sao Paulo, Brazil
| | - Robert D Boutin
- Department of Radiology, Stanford University, Stanford, CA, USA
| | - Akshay S Chaudhari
- Department of Radiology, Stanford University, Stanford, CA, USA
- Department of Biomedical Data Science, Stanford University, Stanford, CA, USA
| |
Collapse
|